A Review of Research on Design Theory and Engineering Practice of High-speed Railway Turnout
This paper systematically reviews the research progress, problems, specific countermeasures and development trends of the dynamic design theory as applied to high-speed railway turnouts. This includes wheel-rail contact solving, high-speed vehicle-turnout dynamic interaction simulation, analysis of long-term turnout performance deterioration, safety assessment of trains passing through the turnout, and the maintenance and management of turnout serviceability. High-speed turnouts still face severe technical challenges with regard to their adaptation to the future development of rail transit technology. Some of these challenges include the suitability of next-generation higher-speed turnouts to a complex environment, life-cycle design, optimization of wheel-rail matching and vehicle-turnout dynamic performance, real-time status capture and performance assessment, and health management and damage prediction. It is now necessary to deepen the basic theoretical study of high-speed railway turnouts and integrate cutting-edge techniques, such as advanced materials and manufacturing, intelligence and automation, and big data and cloud computing, in an effort to enhance China's capabilities for original innovation in high-speed railway turnout technology. By analysing the present situation in a problem-orientated manner, this paper aims to provide a new perspective, as well as some basic data, for academic research into technological innovations for railway engineering.
Introduction
The turnout lies at the intersection of railway lines, and is a piece of rail equipment essential for shunting a high-speed train. Because of its complex structure, changeable condition, and various potential defects, the turnout is a vulnerable spot in high-speed railway lines and also a key but difficult point of maintenance. It is also a piece of critical infrastructure that affects both running stability and overall safety. In order to ensure the safety and long-term serviceability of a high-speed railway system, high-speed railway turnouts should always be kept stationary, stable and reliable, so that trains can run safely, steadily and without interruption under the combined action of multiple factors such as trainload, temperature variation, foundation deformation and the local environment [1]. However, there is also a wheel load transition zone and single-wheel/multi-rail contact behaviour in the turnout area. A significant change in contact parameters takes place during the rolling contact between wheels and turnout rails [2]. This is a type of distinct transient rolling contact, directly affecting the wheel-rail contact and its damage, and results in a much more complex and changeable wheel-rail relationship in the turnout area than in railway lines [3]. Moreover, it is more susceptible to changes in structural operating modes and the external load environment. With improper design or maintenance, the train-turnout dynamic interaction will inevitably be aggravated. That is the root cause for the shortened service life of turnouts and the reduced running quality of trains [4]. It is therefore particularly urgent to improve and develop a dynamic theory and corresponding method for simulating the serviceability of high-speed railway turnouts, and to conduct an in-depth study on the evolution of wheel-rail contact in the turnout area. In this way, it will be possible to reveal the interaction mechanism between wheel-rail contact behaviour evolution and serviceability evolution in the turnout zone, and discover the damage mechanism of turnout parts to resolve the conflict between service behaviour and service life of turnouts. Only in this way can integrated control be assured over the long-term serviceability of high-speed railway turnouts. To achieve this, however, certain challenges must be overcome.
Under the long-term combined action of multiple factors, the wheel-rail relationship in a high-speed railway turnout area becomes extremely complex. Since it is used to shunt trains, the high-speed railway turnout involves a complicated arrangement of rail lines. A switch-and-lock mechanism can be used to draw the turnout rails closer to each other, forming a combined profile so that the wheels can roll from one rail to another. However, the combined profile of high-speed railway turnout rails is characterized by a dynamic spatial-temporal evolution. Owing to the different supporting stiffnesses and constraint modes between the switch rail and the stock rail (or between the point rail and the wing rail), as well as their dynamic interaction, a change takes place in the geometric shape of the combined profile during dynamic wheel-rail interaction. Furthermore, in the long-term service of turnouts, the two rails in the combined profile inevitably suffer varying degrees of wear, damage and cumulative deformation, which are in turn accompanied by deterioration of the turnout structure. As a result, the combined profile of the turnout rails evolves dynamically over time. The dynamic evolution of the combined profile will cause the wheel-rail contact and its corresponding relationship to seriously deviate from the original design goal, reducing the running stability and comfort of high-speed trains in the turnout area. Meanwhile, the vulnerable parts, including the switch rail and point rail, will bear greater force and suffer even more severe damage. In addition, the wheel-rail contact status also changes as wheels suffering from varying degrees of wear pass through the turnout area. All this requires the establishment of a wheel-rail contact model and a numerical method that considers the basic characteristics of the high-speed railway turnout structure and the actual serviceability of the trains. In this way, a more advanced simulation theory can be developed, along with techniques for studying the wheel-rail rolling contact behavior in the high-speed railway turnout area.
With continuously increasing train speeds, an increasing number of high-frequency vibration components appear in the various structures along a high-speed railway, exerting a more significant impact on dynamic behaviour and serviceability. However, accurately grasping the characteristics and evolution of wheel-rail system behaviour under high-frequency loads remains an open international scientific problem. At present, domestic and foreign research into rail vehicle systems and basic structural dynamics is limited to low-frequency excitation. There is, however, an interaction between material degradation/component damage and high-frequency wheel-rail vibrations at high speed that should also be taken into account, and its evolution mechanism is complex. Traditional multi-rigid-body wheel-rail system dynamics cannot accurately simulate the high-frequency vibration of a wheel-rail system, so it is necessary to develop rigid-flexible coupling dynamics that consider the high-frequency flexible deformation of the wheel-rail structure, or even multi-flexible-body or multi-elastic-body system dynamics. In a high-speed running environment, the vehicle and track structure undergo flexible high-frequency deformation.
On the one hand, the force-displacement relationship derived from the half-space hypothesis is no longer applicable; on the other, the deformation of the wheel-rail structure has a great influence on the relative motion or sliding of the wheel-rail contact surface, directly affecting the stick-slip distribution of the contact patches and the calculated wheel-rail creepages/forces. Therefore, vehicle-turnout system dynamics should also be developed gradually, so as to encompass multi-rigid-body, rigid-flexible coupling, multi-flexible-body and multi-elastic-body wheel-rail system dynamics. Starting from the challenges confronting high-speed railway turnouts, this paper first introduces the dynamic design theory of such turnouts, including wheel-rail rolling contact algorithms and vehicle-turnout coupling dynamic analysis models. Then, the practical application of the dynamic design theory is discussed, which includes a study on the mechanism of turnout performance deterioration under long-term service conditions and an assessment of the safety of trains passing through turnouts. Further to this, the importance of health management and condition monitoring is explained with regard to the safe operation of high-speed railway turnouts. Finally, the future direction of high-speed turnout research is discussed, aiming to provide researchers and engineers with a theoretical basis and ideas for further study.
Dynamic design theory and evaluation method of a turnout
In the early days of high-speed railway construction, China either focused on independently developing, or on importing, digesting, absorbing and re-innovating, various key techniques for CRH trains and high-speed railway turnouts. However, at the design stage, no consideration was given to the actual wheel-rail dynamic action generated when trains passed through turnouts, nor was any assessment made of the safety, stability and comfort of trains passing through turnouts. The wheel-rail dynamic interaction in the turnout area is the root cause of the shortened service life of turnouts and the reduced quality of train operation. Under the coupled action of inevitable structural irregularity and external excitation in the turnout area, medium- and high-frequency, large-amplitude wheel-rail vibrations may be excited, thus aggravating the damage to turnout parts, significantly affecting the performance of vehicles passing through turnouts, and shortening the maintenance cycle of wheels. Therefore, it is of great importance to study the dynamic design theory and evaluation method of high-speed railway turnouts.
Wheel-rail rolling contact theory of high-speed turnouts
In a railway system, wheel-rail rolling contact serves to support and shunt trains. Train traction and braking are achieved through the rolling friction force on the wheel-rail contact interface. Therefore, the wheel-rail relationship has always been one of the core issues of railway research, because it concerns the safety and quality of train operation, the generation of wheel-rail frictional noise, and the evolution of damage on the wheel-rail contact interface. The relationship between the wheel and the turnout is more complex than that between the wheel and the plain line. The wheel-rail contact model is the link that couples the vehicle system and the switch system. Researchers have carried out wheel-turnout studies based on wheel-rail interaction on plain line sections, combined with the characteristics of turnout structures. This research has progressed through the following stages: analysis of wheel-rail contact geometry, 3D asymmetric wheel-rail rolling contact calculations, establishment of fast algorithms for rolling contact in the turnout area to support vehicle-turnout dynamics simulation, and solution of complex rolling contact problems based on the finite element (FE) method.
Contact mechanics originated from Hertzian theory [5] in 1882 for solving the normal contact problem, which broke through the limitation of the elastic support method proposed by Winkler, which could only yield an approximate solution. The most widely accepted theory is Kalker's three-dimensional non-Hertzian rolling contact theory of 1979, also known as the exact theory or Kalker's variational approach [6]. In addition, between 1973 and 1982 Kalker developed the simplified theory FASTSIM [7] to deal with the tangential contact problem, which is still the most commonly used rolling contact model for vehicle-track dynamics analysis. The Shen-Hedrick-Elkins model, established in 1983, is also widely applied in vehicle-track dynamics to calculate creep forces [8]. In addition, a creepage/force table generated using CONTACT is a highly efficient method [9]. All the above-mentioned creep models used in vehicle-track dynamics are based on Hertzian contact patches. In order to analyse wheel-rail damage more accurately, non-Hertzian contact models, which are more advanced than those based on Hertzian contact theory, have since been introduced into simulation analysis. At present, virtual penetration methods are widely used to rapidly solve non-Hertzian wheel-rail rolling contact problems. The three most typical models are the Kik-Piotrowski model [10][11], the Linder model [12][13] and the Ayasse-Chollet model [14]. FASTSIM is used to solve the tangential contact problem in all three of these virtual-penetration-based non-Hertzian contact algorithms. Nonetheless, Sichani found that the calculation accuracy of FASTSIM failed to meet the requirements under certain non-Hertzian contact conditions; to solve this problem, Sichani developed the FaStrip algorithm [15].
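To make the role of these creep-force models concrete, the following is a minimal Python sketch of the cubic saturation law used in the Shen-Hedrick-Elkins approach, in which the resultant of Kalker's linear creep forces is limited towards the Coulomb bound μN. The creep coefficients and creepage values in the usage example are illustrative placeholders, not values taken from this paper.

```python
import numpy as np

def shen_hedrick_elkins(f11, f22, f23, xi_x, xi_y, phi, mu, N):
    """Cubic saturation of Kalker's linear creep forces (Shen-Hedrick-Elkins style).

    f11, f22, f23 : Kalker linear creep coefficients (longitudinal, lateral, lateral/spin)
    xi_x, xi_y    : longitudinal and lateral creepages [-]
    phi           : spin creepage [1/m]
    mu, N         : friction coefficient [-] and normal contact load [N]
    """
    # Unsaturated creep forces from Kalker's linear theory
    Fx_lin = -f11 * xi_x
    Fy_lin = -f22 * xi_y - f23 * phi
    F_lin = np.hypot(Fx_lin, Fy_lin)
    if F_lin == 0.0:
        return 0.0, 0.0

    # Cubic saturation of the resultant towards the Coulomb limit mu*N
    u = F_lin / (mu * N)
    F_sat = mu * N * (u - u**2 / 3.0 + u**3 / 27.0) if u < 3.0 else mu * N

    # Scale the linear forces so that their resultant equals the saturated value
    scale = F_sat / F_lin
    return scale * Fx_lin, scale * Fy_lin

# Illustrative call (placeholder values, not wheel-rail data from this paper)
Fx, Fy = shen_hedrick_elkins(f11=7.8e6, f22=6.9e6, f23=1.2e4,
                             xi_x=0.002, xi_y=0.001, phi=0.1, mu=0.3, N=8.0e4)
```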
When the vehicle passes through a small-radius curve or a turnout, the wheel flange may come into contact with the gauge corner or the switch rail. The contact surface is curved, forming "conformal" contact. Kalker's 3D non-Hertzian rolling contact theory applies only to planar contact patches; when the wheel flange touches a rail corner, this theory can therefore only yield an approximate solution. The authors created CURVE for surface contact using Kalker's variational approach, expanding the application of CONTACT [16][17]. At the same time, owing to influencing factors such as the sideslip and yawing of the wheelset and the asymmetric track arrangement in the turnout area, there is a very complex contact relationship between the wheels and the stock and switch rails (or wing and point rails), and two-point or even multi-point contact occurs in many cases. Hertzian rolling contact theory, which rests on many assumptions, cannot accurately reflect this complex wheel-rail contact relationship. Furthermore, the prediction of wheel and rail damage, such as wear and rolling contact fatigue, requires accurate determination of the wheel-rail contact patch and calculation of the normal and tangential stress distributions.
Therefore, it is necessary to adopt a non-Hertzian rolling contact method with higher calculation accuracy and efficiency. A rolling contact model based on the relevant influencing factors, together with its algorithm (INFCON), was proposed by the authors, with both calculation accuracy and efficiency taken into consideration [18].
Dynamic model of wheel-rail transient rolling contact in a high-speed railway turnout area
Classical contact mechanics theory has great difficulty meeting the needs of high-speed railway development, mainly because it cannot be used to solve certain specific problems arising in the wheel-rail rolling contact process. These include two-point contact, conformal contact and elastic-plastic contact, residual deformation accumulation, contact surface fatigue, rail corrugation, polygonal wheel wear, scratching, weld irregularity, excessive wheel load in the turnout area, and impact from a "third medium" or inertial force. As a result, the FE method has become the main approach for solving this type of complex wheel-rail contact problem. Such problems usually appear as rolling contact behaviour under vibrating conditions, i.e., transient rolling contact, which is a focus of concern in contact dynamics and lies beyond the scope of application of Kalker's steady-state rolling contact theories.
With on-going improvements in computer performance and the development of finite element algorithms in recent years, the FE method is now increasingly used to solve complex wheel-rail rolling contact problems. To introduce rolling behaviour into the FE model, researchers currently adopt two approaches: ALE (arbitrary Lagrangian-Eulerian) modelling and transient modelling. ALE modelling decomposes wheel rolling into rigid-body motion and deformation, described by Eulerian and Lagrangian formulations, respectively; rolling and sliding problems can thus be considered, but the approach remains suitable only for solving steady-state rolling behaviour.
At present, the most common solution for transient rolling contact problems is to use the central difference method to explicitly solve an FE model described in the Lagrangian formulation. A 3D wheel-rail transient rolling contact model built using the FE method can deal with the strain-rate-dependent constitutive relationship of materials, as well as with complex variable-friction models and geometric irregularities in any 3D contact status. The real geometric structural deformation of wheels and rails can be considered in a solid model. After interfacial rolling contact is coupled with structural vibrations, high-speed wheel-rail transient rolling contact behaviour can also be reconstructed numerically. In 2005, Zefeng Wen and Xuesong Jin took the lead in building a dynamic model of wheel-rail impact at an insulated rail joint using ANSYS/LS-DYNA. They then used the implicit-explicit sequential method to reveal the effects of axle load and rolling velocity on the wheel-rail impact force and the dynamic stress in the rail [19]. After that, researchers all over the world developed a variety of transient rolling contact models, using them to solve different transient rolling contact problems [20][21][22]. The best-known is a transient model built in ANSYS/LS-DYNA by Zili Li's team at the Delft University of Technology. This model is similar to Wen's, but considers the wheel-rail rolling contact status under traction conditions by applying torque. The research team then continued to explore ways of verifying the effectiveness of the model and extending it to engineering applications [23].
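As a purely illustrative sketch (not the cited ANSYS/LS-DYNA models), the explicit central-difference update used in such transient analyses can be written for a generic linear system M a + C v + K u = f(t); the matrices, load function and time step here are placeholders supplied by the user, and no contact nonlinearity is included.

```python
import numpy as np

def central_difference(M, C, K, f, u0, v0, dt, n_steps):
    """Explicit central-difference integration of M*a + C*v + K*u = f(t).

    M, C, K : (ndof, ndof) mass, damping and stiffness matrices
    f       : callable f(t) returning the external load vector
    u0, v0  : initial displacement and velocity vectors
    """
    u = u0.copy()
    # Start-up: u_{-1} = u_0 - dt*v_0 + dt^2/2 * a_0, with a_0 from the equation of motion
    a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ u0)
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0

    # Effective system matrix of the central-difference scheme
    A = M / dt**2 + C / (2.0 * dt)
    history = [u0.copy()]
    for n in range(n_steps):
        t = n * dt
        rhs = f(t) - (K - 2.0 * M / dt**2) @ u - (M / dt**2 - C / (2.0 * dt)) @ u_prev
        u_next = np.linalg.solve(A, rhs)   # u at step n+1
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)
```

With a lumped (diagonal) mass matrix and no damping, the solve reduces to a cheap element-wise division, which is why explicit FE codes favour this scheme despite its conditional stability (the time step must stay below a limit set by the highest element frequency).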
On the one hand, the movable rail components in the turnout area need to be pulled so that trains can be shunted, ensuring that the switch rail and the stock rail (or the point rail and the wing rail) are kept close to each other within a certain range. The high-speed turnout rail component with a variable cross-section is therefore very long, and the switch rail is not restrained by fasteners within this range; the switch-and-lock mechanism has a weak restraining action at the traction points only. The switch rail rests freely on the surface of the slide baseplate, maintaining rigid contact, while the outer and inner flanges of the stock rail are clamped by an elastic bar and a resilient clip, respectively, coming into elastic contact with the slide baseplate through the under-rail rubber pads of the fastener system. As a result, the constraint and support conditions differ between the stock rail and the switch rail. What is more, owing to wheelset sideslip and yawing caused by structural irregularities, as well as the complicated rail line arrangement, there is an extremely complex relationship between the wheels, the switch rail and the stock rail in the turnout area. Significant dynamic displacement is thus generated, leading to dynamic reconstruction of the combined profile. The spatial location of the wheel-rail contact point changes abruptly in the wheel load transition zone within the turnout area, causing two-point or multi-point contact. The steady-state hypothesis cannot accurately reflect the wheel-rail contact behaviour in a turnout switch, meaning an explicit FE method is required that considers both the macro-dynamics and the coupling characteristics of mesoscopic wheel-rail contact behaviour. Accordingly, a wheel-rail elastoplastic transient contact dynamics model considering real geometrical and rolling-sliding characteristics was built, in order to make up for the deficiency of traditional models, which fail to consider the effects of material and geometric nonlinearities [24][25]. This model therefore enables wheel-rail transient rolling contact behaviour in the turnout area to be described more accurately.
Rigid-flexible coupling dynamics model of a high-speed turnout
Wheel-rail system dynamics focuses on the complex wheel-rail contact mechanics problems arising from train operation. From the perspective of systems engineering, in order to solve specific problems, wheel-rail system dynamics models and efficient numerical analysis methods are applied to simulate the vibration of the wheel-rail system under various excitation conditions. In this way, the various dynamic responses of the train system, track system, and bridge and tunnel infrastructure in both the time and frequency domains can be revealed. Based on this, and according to the relevant evaluation criteria and standards, the safety, stability and comfort of train operation can then be assessed, along with the bearing capacity, deformation characteristics and load transfer characteristics of the track structure, as well as the fatigue characteristics and reliability of all wheel-rail system components. This can be used to guide and optimize the structural design, material selection and parameter configuration of the wheel-rail system, with the aim of devising optimal wheel-rail contact matching and controlling the wheel-rail excitation sources. The ultimate goal is to provide theoretical support to ensure the safe, stable and uninterrupted operation of trains at specified speeds.
The main task of wheel-rail system dynamics is to study the derailment safety, capsizing stability, straight-route running stability, riding comfort and curve passage capacity of the train, along with the fatigue characteristics of wheel-rail components following excitation due to track irregularities. When the train runs at low speeds, the excitation wavelength of track irregularities is long, while the vehicle and track system vibrate within the medium and low-frequency range. The vibration frequency of the wheelset, bogie frame, car body and track system is generally less than 50Hz, 5Hz, 2Hz and 500Hz, respectively. Therefore, the wheelset, bogie frame and car body can be regarded as rigid bodies, while components can be connected together through elastic and damping elements. The wheel and rail can be coupled together by means of the steady-state rolling contact theory. Therefore, the research of traditional wheel-rail system dynamics is primarily carried out on the basis of multi-rigid-body system dynamics, and so far, satisfactory research results have been achieved [26][27].
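The multi-rigid-body idealization described above can be illustrated with a deliberately simple vertical "quarter-vehicle" sketch: car body, bogie frame and wheelset masses connected by suspension springs and dampers, excited through a linearized wheel-rail contact spring by a sinusoidal track irregularity. All parameter values are illustrative placeholders, not parameters of any CRH vehicle or turnout, and motions are measured about static equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (placeholder) parameters
mc, mt, mw = 10000.0, 1200.0, 900.0   # quarter car body, bogie frame, wheelset masses [kg]
k2, c2 = 4.0e5, 3.0e4                 # secondary suspension stiffness [N/m] and damping [N s/m]
k1, c1 = 1.0e6, 4.0e4                 # primary suspension stiffness and damping
kh = 1.4e9                            # linearized wheel-rail contact stiffness [N/m]
v = 300.0 / 3.6                       # running speed [m/s]
L_irr, a_irr = 10.0, 0.002            # irregularity wavelength [m] and amplitude [m]

def rail_irregularity(t):
    """Vertical rail irregularity seen by the wheel at time t."""
    return a_irr * np.sin(2.0 * np.pi * v * t / L_irr)

def rhs(t, y):
    """State y = [zc, vc, zt, vt, zw, vw]: displacements/velocities about equilibrium."""
    zc, vc, zt, vt, zw, vw = y
    f2 = k2 * (zt - zc) + c2 * (vt - vc)      # secondary suspension force on the car body
    f1 = k1 * (zw - zt) + c1 * (vw - vt)      # primary suspension force on the bogie
    fc = kh * (rail_irregularity(t) - zw)     # linearized wheel-rail contact force
    return [vc, f2 / mc,
            vt, (f1 - f2) / mt,
            vw, (fc - f1) / mw]

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(6), max_step=1e-4)
```

Even this toy model reproduces the frequency separation quoted above: the car body responds at a few Hz, the bogie somewhat higher, and the wheelset and contact spring at tens to hundreds of Hz.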
According to reference [28], a vibration frequency range below 20Hz is sufficient when considering the curve passage capacity, riding comfort and running stability of the vehicle alone, so the track model can be simplified. For higher vibration frequencies, the mass inertia of track components is very important: below 250Hz, the vibration of ballast and subgrade is quite significant, while the rubber pad is of great importance for vibrations at frequencies of less than 700Hz; at frequencies above 700Hz, it is the rail itself that matters most. Schmid et al. built a bogie-turnout coupling dynamics model and studied the wheel-rail dynamic interaction when the bogie passed through the switch panel of the turnout [29]. Kaiser et al. analysed the effects of rigid wheelsets and elastic wheelsets with modes of different orders on the bogie transfer function, and pointed out that in the medium-frequency band (50~500Hz) the fourth-order vertical bending mode would meet the accuracy requirements, while for the vibration response in a higher frequency band (>500Hz) it would be necessary to include more vertical bending modes of the wheelsets [30]. Kassa and Nielsen established an FE model of a standard turnout, which could be used to study the dynamic vehicle-turnout interaction in a higher frequency domain; the effective calculation frequency can reach up to 300Hz when the first 500 eigenmodes of the turnout are included in the model [31]. Alfi et al. built a mathematical model and studied the vibration characteristics of the vehicle-turnout system at low and medium frequencies [32]. This model, which fully considers the simultaneous contact between a single wheel and multiple rails, can accurately predict the dynamic performance when the wheel load transition occurs in the switch and crossing panels. Reference [33] provides a vertical model considering the torsional vibration of the rail section. In this model, Timoshenko beams are used to represent the railhead, rail web and rail foot, connected by elastic elements. Not only can the model represent the flapping motion of the rail foot, but it can also consider vibrations up to 6,500Hz.
A traditional multi-rigid-body vehicle system dynamics model is usually used to study dynamics problems in the low frequency domain. The increased train speed and the existence of wheel-rail wear intensify the wheel-rail interaction in the medium and high frequency domains. The inherent medium and high-frequency vibration characteristics of the vehicle-track system are also very likely to be excited, thus endangering the safety of vehicle operation, accelerating the development of wheel-rail wear and causing wheel-rail noise. In order to solve the above problems, a rigid-flexible coupling dynamic model of the vehicle-turnout system that considers the rotation effect of wheelsets was built by the authors [34][35]. It was then used to accurately simulate the dynamic characteristics and rolling contact behaviour of flexible wheelsets in the high-speed turnout area, revealing the effects of wheel-rail evolution in the turnout area on riding quality and dynamic damage of turnouts, providing technical support for the wheel-rail design of high-speed turnouts.
Applications of vehicle-turnout dynamic design theory
This section discusses the practical applications of the vehicle-turnout dynamic design theory, including turnout rail damage analysis and dynamic behaviour evolution in long-term service conditions, as well as an assessment of turnout-crossing safety.
Analysis of long-term dynamic behaviour of high-speed railway turnouts
Like the rails on plain lines, turnout rails are susceptible to cumulative plastic deformation such as wear, crushing and plastic flow, as well as rolling contact fatigue (RCF) damage such as shelling, spalling and head checks. Depending on the service status of high-speed turnouts, certain types of rail damage can be clearly seen on turnout rails owing to the particularities of the turnout structure; these include side wear of curved switch rails, horizontal cracks on the non-working edge of straight switch rails, and head checks on rails. Since it was opened to traffic, the Chinese high-speed railway system has been beset with emerging problems of how to understand the occurrence and development of this special damage to turnout rails, its influencing factors, and its impact on the safety and stability of CRH trains passing through turnouts. This has led to the need to find suitable methods for improving the materials used and optimizing the profile, as well as for scientifically devising criteria for the maintenance and replacement of turnout rail components. In the early stages of high-speed railway turnout design and assessment, focus was primarily placed on the short-term dynamic behaviour of high-speed turnouts. As for their long-term dynamic behaviour, such as profile evolution and behaviour deterioration, an in-depth study needs to be carried out with reference to the theories of wheel-rail material wear and fracture mechanics.
A wheel-rail wear model, which is the most straightforward tool for the simulation of turnout rail wear, determines the distribution of wear over the rail profile. Usually, the amount of wear is calculated from the dynamic response of the wheel-rail system. There are two types of wheel-rail wear model: the friction work (or wear index) model, which is based on wheel-rail frictional energy dissipation, and the sliding friction and wear model. The above-mentioned wear models are derived from experimental studies in combination with theoretical research, and each model has its own conditions of applicability. Typical wear models include Archard's sliding friction and wear model, the wear work model and the wear index model [36][37][38]. With the development of vehicle-turnout system dynamics and wheel-rail rolling contact theory, it has become increasingly possible to simulate turnout rail wear. Because of the mutual interaction between rail wear and the dynamic performance of the vehicle-turnout system, it is necessary to constantly update the rail profile during the simulation to obtain the real-time dynamic response of the vehicle-turnout system as the input data for subsequent calculations.
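For reference, the two families of wear models named above can be written compactly; this is a hedged sketch of the generic forms (Archard's sliding-wear law and a friction-work/wear-index quantity), not the specific calibrations used in the cited studies.

```python
def archard_wear_volume(k, normal_load, sliding_distance, hardness):
    """Archard sliding-wear law: worn volume V = k * N * s / H.

    k                : dimensionless wear coefficient (material and contact dependent)
    normal_load      : contact normal force [N]
    sliding_distance : accumulated sliding distance in the contact [m]
    hardness         : hardness of the softer material [Pa]
    Returns the worn volume in m^3.
    """
    return k * normal_load * sliding_distance / hardness


def wear_index(Fx, Fy, xi_x, xi_y, Mz=0.0, phi=0.0):
    """Friction-work (wear index) quantity: creep forces times creepages, plus the
    spin moment times the spin creepage, i.e. frictional energy dissipated per unit
    rolling distance in the contact."""
    return abs(Fx * xi_x) + abs(Fy * xi_y) + abs(Mz * phi)
```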
Chudzikiewicz and Myslinski studied the thermo-mechanical coupling problems arising from wheel-rail contact, in which friction heat, thermal flow and material wear were all treated as quasi-static problems [39]. T. Jendel developed a prediction model for wheel profile changes. This model combined a vehicle dynamics model with the dynamic calculation time (simulating actual operating conditions), a discretized line model (simulating the curve distribution), track irregularity and a wheel-rail friction coefficient. The dynamic calculation results, such as contact force and location, were imported into the wear model to obtain the distribution and amount of wear; then data smoothing and profile updating were performed for subsequent dynamics calculations. These steps were repeated until the wheel reached its maximum allowable mileage [40][41]. A. Orvnas performed a simulation in GENSYS and used Archard's wear model to predict the wear of Stockholm's light rails; a relatively rational wear profile was obtained by simulation, but differences were observed between the simulation results and the test results [42]. I.Y. Shevtsov and V.L. Markine considered the effects of RCF and wheel-rail wear during wheel profile design, and determined the optimal equivalent conicity to ensure the running stability of wheelsets and reduce wear and contact stress [43]. B. Dirks et al. proposed a model for predicting both wheel-rail contact fatigue and wear; they extended two contact fatigue prediction models and analysed the effects of curve radius, wheel-rail profile, wheel-rail friction coefficient and track irregularity on wheel-rail service life [44]. B.A. Palsson and J.C.O. Nielsen used wear work to characterize the severity of damage to turnout rails, and studied the effects of wheel profile and wheel-rail friction coefficient on the damage of turnout rails [45]. E. Doulgerakis considered the effects of the dynamic wheel-turnout response on wheel wear, and built a wheel wear prediction model to predict the uneven wear behaviour of wheels [46]. A simulation model was built in this paper for high-speed turnout rail wear, revealing the distribution of rail wear in the turnout area. This model was also used to predict the evolution of the turnout rail profile under long-term service conditions and to study the effects of rail wear on the dynamic wheel-rail performance in the turnout area. In addition, a simulation method was built to analyse high-speed turnout rail RCF, and the penalty function method was used to characterize the coupled, competing relationship between rail wear and RCF, ascertaining the development of turnout rail RCF under long-term service conditions [47][48]. A method was proposed to optimize the switch rail profile so as to reduce the dynamic wheel-rail interaction and control rail damage, thereby offering guidance on the maintenance of high-speed turnouts currently in service [49].
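The iterative prediction procedure used in the studies above (run the dynamics, compute a wear increment, smooth and update the profile, repeat until the target mileage is reached) can be outlined as a skeleton; the three callables below are user-supplied placeholders, not an existing library API, and the profile is assumed to be a NumPy-like array of wear depths along the lateral coordinate.

```python
def predict_profile_evolution(initial_profile, total_mileage, step_mileage,
                              run_dynamics, compute_wear_depth, smooth):
    """Skeleton of an iterative wear-prediction loop.

    run_dynamics(profile)       -> contact forces, creepages and contact locations
    compute_wear_depth(results) -> wear-depth distribution for one mileage step
                                   (e.g. via an Archard-type law or a wear index)
    smooth(profile)             -> smoothed profile, keeping the geometry usable
    """
    profile = initial_profile
    mileage = 0.0
    history = [profile]
    while mileage < total_mileage:
        results = run_dynamics(profile)             # vehicle-turnout dynamic response
        wear_depth = compute_wear_depth(results)    # wear increment for this step
        profile = smooth(profile - wear_depth)      # update and smooth the worn profile
        history.append(profile)
        mileage += step_mileage
    return history
```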
Research on the derailment mechanism and criteria in the turnout area
Because of the open constraints between the train and the track, vehicle derailment can objectively occur, and it is one of the main factors affecting the safety of train operation. Vehicle derailment can be divided into five types: climbing onto the rail, sliding off the rail, jumping onto the rail, dropping off the rail, and derailing due to overturning. From the construction of the first railways, researchers all over the world began to study vehicle derailment. Their research has focused mainly on three aspects: derailment evaluation indexes, derailment tests and derailment simulations. The derailment coefficient and the wheel load reduction rate based on the Nadal criteria, two key safety indexes, are the most widely used, and have always been important parameters for vehicle and track structure design. Along with the development of heavy-haul and high-speed railways and urban rail transit in China, the track structure and train operating performance have been continuously strengthened, while train control and transportation organization are constantly being optimized. This has led to an on-going decrease in the number of derailment accidents. However, such accidents do still occur from time to time in the turnout area, especially in No. 6 symmetrical turnouts in marshalling yards and in small common turnouts made of common steel on tram lines, which account for a high proportion of vehicle derailments. This shows that the mechanism of derailment caused by the complex wheel-rail relationship in the turnout area has yet to be fully understood, and that derailment criteria based on plain-line track are not necessarily applicable to the turnout area. Severe consequences would arise if a high-speed train derailed in the turnout area, making it critically important to study the mechanism of such derailment by means of wheel-rail rolling contact theory and dynamic simulation. In this way, it will be possible to put forward proper derailment criteria and corresponding measures for optimizing the wheel-rail relationship.
Research on derailment dates back to the late 19th century. In 1896, the French engineer Nadal derived the critical derailment condition from the principle of static equilibrium, taking it as a criterion for single-wheel derailment; this became the famous Nadal formula [50]. Considering the complexity of derailment problems and the difficulty of the required research, the problem has never been fully solved, despite more than 100 years of continuous related study. What is more, there are many problems with the derailment criteria themselves. There is no unified, common criterion in the world for the accurate judgment of derailment, let alone a derailment criterion suitable for turnout rails with variable cross-sections and a combined dynamic profile. Despite the wide application of the Nadal derailment coefficient, it has been found to be conservative in long-term operation and testing. Yokose argued that the limit of the Nadal derailment coefficient is related to the duration of the lateral force [51]. By taking both steady-state derailment and derailment due to jumping into consideration, Japan Railway established derailment coefficient evaluation indexes based on the duration of the lateral force.
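For reference, the Nadal single-wheel criterion mentioned above limits the ratio of lateral to vertical wheel force, L/V, as a function of the flange contact angle δ and the friction coefficient μ; the values in the example below are illustrative only.

```python
import math

def nadal_limit(flange_angle_deg, mu):
    """Nadal limit on the derailment coefficient L/V for flange climbing:
    L/V <= (tan(delta) - mu) / (1 + mu * tan(delta))."""
    t = math.tan(math.radians(flange_angle_deg))
    return (t - mu) / (1.0 + mu * t)

# Example: a 70 degree flange contact angle with mu = 0.3 gives a limit of about 1.34
print(nadal_limit(70.0, 0.3))
```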
In the 1960s, to make the Nadal derailment theory less conservative in dealing with small and negative angles of attack, Yokose extended the Nadal derailment theory to 3D space in accordance with nonlinear creep theory and the 3D force equilibrium condition, conducting experimental research on 1:5 and 1:10 single-wheelset models [52]. TTCI carried out extensive research into steady-state derailment by testing and simulation using NUCARS, its multi-body dynamics simulation software, and a track-loading vehicle. Elkins and Shust argued that wheel climb onto the rail depends on the running distance over which the derailment coefficient is exceeded, rather than on its duration [53]. Barbosa simulated the derailing process of a wheelset under lateral load using a 6-DoF single-wheelset derailment model [54]. Jeong Seo Koo et al. built a single-wheelset derailment simulation model considering wheel-rail collision according to collision theory. It was found during the simulation that different types of derailment, including derailment due to sliding, climbing, jumping and overturning, were closely related to the external load acting on the wheelset [55][56]. O'Shea et al. put forward a three-point wheel-rail contact method, in place of the traditional two-point wheel-rail contact method based on multi-body dynamics theory, to simulate wheelset derailment, achieving accurate simulation results [57][58]. Cheli et al. built a multi-car metro vehicle simulation model, and used this to study the effects of track irregularity on vehicle derailment [59].
Although vehicle derailment can be divided into many types, there is only one result: the wheels become separated from the rail, so that the wheelsets cannot continue to run normally. Therefore, in order to clarify the mechanism of vehicle derailment in the turnout area, it is necessary to simulate the dynamic process of wheelset derailment and then analyse the dynamic wheel-rail contact relationship on every typical cross section of the turnout where the vehicle is climbing on the rails [60][61]. A dynamic derailment simulation model was built in this paper to simulate the dynamic process of derailment in the turnout area. The calculation results were much the same as those obtained in field investigations. On this basis, the effects of different factors such as wheel-rail wear, vehicle speed, friction force and wheelset axle load on train derailment were investigated. In addition, a technique was proposed to enhance the safety of trains in the turnout area [62][63][64].
Health management and condition monitoring of high-speed railway turnouts
With a large number of high-speed railways built and opened to traffic in China, there is an increasingly obvious conflict between traffic safety and equipment condition management. After establishing an effective monitoring system, long-term monitoring of fixed equipment can be conducted to ensure the safe, reliable and efficient operation of high-speed trains. In particular, monitoring must cover certain key vulnerable parts of the fixed equipment, such as turnouts, important bridges, important tunnels and weak subgrades. The turnout is one of the vulnerable parts affecting train operating safety, and more than 3000 turnout sets have been laid throughout the country. The high-speed railway turnout is a critical component for shunting trains, and consists of movable parts directly related to the wheel-rail relationship. It is therefore characterized by a complex wheel-rail contact status, changeable rail sections, complex rail support and constraints, a large number of parts in the turnout area, the assembly of many types of parts made of different materials, and combined mechanical and electrical linkage for shunting. All of these characteristics indicate that the turnout is a particularly vulnerable piece of fixed high-speed railway equipment, and disastrous consequences can result from failure to resolve turnout faults in time. Moreover, China's high-speed railway is under closed-off management, meaning that current management and inspection methods cannot quickly and comprehensively capture the condition of these key parts to ensure the safe operation of trains. At present, with turnouts in operation, the key to ensuring the safe and efficient operation of high-speed trains is ensuring turnout-crossing safety, enhancing the monitoring of turnout serviceability, and improving the ability to deal with faults efficiently.
With the advancement of modern signal processing and sensor technology, non-destructive testing techniques based on acoustic emission technology are drawing increasing attention and being further developed. Acoustic emission (AE) is an elastic wave phenomenon that accompanies energy release during material deformation or fracture. AE detection refers to collecting the AE signals generated by turnout rail damage using customized sensors. According to fracture mechanics, there are four factors influencing AE signals: the material itself, the material structure, the load, and the form of the crack. For a given monitoring object, the first three factors are constant, so a one-to-one mapping can be built between crack forms and AE signals; in other words, crack damage can be detected by monitoring AE signals. Big-data-based AE signal processing rests on modern signal processing technology and big data mining technology. It works by re-characterizing massive data sets using modern signal processing techniques. The selected tool is the Wigner-Ville fourth-order spectrum, which enables high-resolution time-frequency characterization of signals while suppressing noise. Based on this high-resolution characterization of AE signals and noise suppression, the Wigner-Ville fourth-order spectrum can be used as the AE signal feature required for big data mining and clustering. Existing massive data can then be clustered as prior information to classify newly collected AE signals, and the turnout monitoring system subsequently responds differently depending on the classification results.
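As a hedged illustration of the kind of time-frequency feature described here, the sketch below computes a basic discrete Wigner-Ville distribution of the analytic signal; the paper's feature is a fourth-order Wigner-Ville spectrum, which is not reproduced here, and normalization and lag-scaling conventions vary between implementations.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Simplified discrete Wigner-Ville distribution of a real signal.

    Returns an (N, N) array: rows are time samples, columns are frequency bins
    obtained from the FFT over the lag variable of the instantaneous
    autocorrelation z(t + tau) * conj(z(t - tau)) of the analytic signal z.
    """
    z = hilbert(np.asarray(x, dtype=float))   # analytic signal suppresses negative frequencies
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)            # largest admissible half-lag at this instant
        taus = np.arange(-tau_max, tau_max + 1)
        r = np.zeros(N, dtype=complex)
        r[taus % N] = z[n + taus] * np.conj(z[n - taus])
        W[n, :] = np.real(np.fft.fft(r))
    return W

# Example: time-frequency map of a short synthetic burst (placeholder signal)
t = np.linspace(0.0, 1e-3, 256)
burst = np.sin(2 * np.pi * 150e3 * t) * np.exp(-((t - 5e-4) / 1e-4) ** 2)
tfr = wigner_ville(burst)
```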
Elastic wave detection, which has the advantages of long range, wide coverage, full cross-section inspection and high convenience [65], has been a focal point for researchers throughout the world in recent years. Hayashi et al. [66] studied the propagation characteristics of elastic waves in rails, providing a basis for the potential application of elastic wave monitoring technology to rail breakage monitoring. Zhang et al. [67] studied the propagation characteristics of AE waves in turnout rails to identify and locate sub-rail defects, and investigated the Lamb wave dispersion characteristics of defect signals. Burger [68] used elastic waves to continuously monitor the integrity of a track, with elastic wave emission points set up at intervals of 1 km along the track so that rail integrity could be confirmed according to whether the receiving station received the elastic wave signals. Loveday et al. [69] added a pulse-echo operating mode to a broken rail detection system to locate railhead cracks and monitor their growth. Considering the effects of asymmetric cross-sections and longitudinal changes of cross-section on the propagation characteristics of guided waves, the present researchers proposed a method for analysing the propagation characteristics of guided waves in turnout rails, revealing the effects of cross-sectional changes on guided wave propagation in high-speed turnout rails [70]. Then, based on the principles of linear acoustics, they investigated the interaction mechanism between guided waves and typical crack damage in high-speed turnout rails, and built an open platform for monitoring turnout service status.
Conclusions
The high-speed turnout is one of the core devices in high-speed railway construction, operation and maintenance. During the continued design, development, operation and maintenance of high-speed turnouts, it is necessary to deepen and improve the dynamic analysis theory of turnouts in order to solve relevant dynamic problems such as the wheel-rail relationship, component strength and vehicle-turnout coupled vibration. A theory needs to be established that can analyse the fatigue strength of components and reveal the occurrence mechanisms of the various types of damage and their influence on running stability and safety, in an attempt to solve the problem of turnout performance deterioration after long-term service. A high-speed turnout structure condition monitoring system also needs to be devised to solve the problem of durability; this involves issues such as frequent turnout failure, time-consuming overhaul, high maintenance loads, and the like. In this paper, the dynamic design theory and key techniques for high-speed railway turnouts were summarized in a systematic manner. In terms of wheel-rail contact behaviour in the turnout area, vehicle-turnout coupling dynamics, turnout serviceability and safety assessment, and turnout structure condition monitoring, this paper comprehensively discussed the practical application of dynamic design theory to high-speed turnouts, in order to provide researchers and engineers with a theoretical basis and ideas for further studies.
Future research directions
The theory and key techniques of high-speed railway turnout design are a promising and engaging topic of research, and one which abounds with interesting and challenging tasks. Future research directions can be considered from four aspects: (1) Life cycle design of high-speed railway turnouts for LCC (life cycle cost) and RAMS (reliability, availability, maintainability and safety). LCC and RAMS-related indexes are decomposed and mapped onto the life cycle of high-speed railway turnouts to form corresponding demand constraints. These include DFC (design for cost), which involves high-speed turnout operating costs, maintenance costs, assembly and disassembly costs, process costs, design costs and scrap costs, and reliability design, which involves safety and reliability index allocation and constraints at each stage. Cost and reliability models are built at each stage to ensure reasonable cost allocation and a reliable stability margin for the required design parameters. After this, an LCC model and a reliability model are built for design optimization and cost control, in order to achieve satisfactory life cycle design results.
(2) Adaptability improvement for high-speed railway turnouts in a complex operating environment. In response to a series of major national strategic needs, including the "Belt and Road" Initiative, maritime power construction, the development of western China and high-speed railway expansion, rail transit infrastructure worldwide is now starting to cover areas with a more complex natural environment at higher speeds. Owing to the complexity of the operating environment, the diversity of structural materials, the spatial effects of structural facets, the time-dependence of the service process, and the coupled effect of various other factors, the high-speed railway turnout exhibits a very complex spatio-temporal evolution mechanism and associated laws for its dynamic performance. This requires that high-precision, high-speed railway turnouts and their substructures show excellent adaptability to the challenges presented by harsh natural environments and complex geological conditions such as severe cold, sandstorms, rain and snow, freezing and thawing, scouring and corrosion. Therefore, leading researchers are now studying the spatio-temporal evolution mechanism and associated laws for the dynamic performance of the high-speed turnout system in extreme climates or under adverse geological conditions.
(3) Research on basic scientific problems concerning higher-speed railway turnouts. When vehicles pass through a turnout at a speed of 400km/h or above, high-frequency dynamic or transient behaviour plays a very significant role in the process of high-speed wheel-rail interaction. The vibration wavelength is of almost the same order of magnitude as the diameter of the contact patch, making it easy to excite flutter or co-vibration among the wheel-rail coupling system components, thus disrupting passenger comfort, accelerating the damage to vehicle-track system components, and seriously affecting the safety and reliability of high-speed train operation. Therefore, for the development of the next generation of high-speed railway turnouts, the key basic scientific problem demanding a prompt solution is the creation of a minimum-time-step transient rolling contact model for three elastic or elastoplastic bodies, one that considers the material and geometric non-linearity of the system components. Contact irregularity and interfacial thermal effects must also be built in, so as to investigate the rules of high-frequency interaction between high-speed railway turnouts and the vehicle system.
(4) Innovative design for next-generation high-speed railway turnout structures. The next generation of high-speed railways will place increasingly high requirements on the safety, intelligence, durability and eco-friendliness of turnouts. With the development of structural dynamics, aerodynamics and acoustic theories, it has become necessary to press ahead with innovation in the materials, structures, technology and other aspects of high-speed turnouts. Key technologies are as follows: new supporting structures suitable for next-generation high-speed turnouts, advanced fibre-reinforced composites, adaptive materials, and high-performance reinforced concrete. A high-speed railway turnout design system needs to be built on the basis of reliability theory, the life cycle and sustainable engineering, so as to enhance the industrialization, intelligence and automation of high-speed railway turnout manufacturing and laying. Another matter of great urgency is the development of a high-speed railway turnout damage assessment method and an intelligent damage remediation technology based on the degree of structural damage. In this way, vibration attenuation and noise control techniques can be developed, creating a useful environmental evaluation system. In addition, a big-data-based high-speed railway turnout risk management system can be created, along with other information management and maintenance decision-making technologies.
Smoking Behavior and Healthcare Expenditure in the United States, 1992–2009: Panel Data Estimates
Background: Reductions in smoking in Arizona and California have been shown to be associated with reduced per capita healthcare expenditures in these states compared to control populations in the rest of the US. This paper extends that analysis to all states and estimates changes in healthcare expenditure attributable to changes in aggregate measures of smoking behavior in all states.
Methods and Findings: State per capita healthcare expenditure is modeled as a function of current smoking prevalence, mean cigarette consumption per smoker, other demographic and economic factors, and cross-sectional time trends using a fixed effects panel data regression on annual time series data for each of the 50 states and the District of Columbia for the years 1992 through 2009. We found that 1% relative reductions in current smoking prevalence and mean packs smoked per current smoker are associated with 0.118% (standard error [SE] 0.0259%, p < 0.001) and 0.108% (SE 0.0253%, p < 0.001) reductions in per capita healthcare expenditure (elasticities). The results of this study are subject to the limitations of analysis of aggregate observational data, particularly that a study of this nature, which uses aggregate data and a relatively small sample size, cannot by itself establish a causal connection between smoking behavior and healthcare costs. Historical regional variations in smoking behavior (including those due to the effects of state tobacco control programs, smoking restrictions, and differences in taxation) are associated with substantial differences in per capita healthcare expenditures across the United States. Those regions (and the states in them) that have lower smoking have substantially lower medical costs; likewise, those that have higher smoking have higher medical costs. Sensitivity analysis confirmed that these results are robust.
Conclusions: Changes in healthcare expenditure appear quickly after changes in smoking behavior. A 10% relative drop in smoking in every state is predicted to be followed by an expected $63 billion reduction (in 2012 US dollars) in healthcare expenditure the next year. State and national policies that reduce smoking should be part of short term healthcare cost containment.
What Did the Researchers Do and Find?
• This study examined the year-to-year relationship between changes in smoking and changes in medical costs for the entire United States, taking into account differences between different states and historical national trends in smoking behavior and healthcare expenditures.
• The study found that 1% relative reductions in current smoking prevalence and mean packs smoked per current smoker are associated with 0.118% and 0.108% reductions, respectively, in per capita healthcare expenditure (elasticities).
• Historical regional variations in smoking behavior (including those due to the effects of state tobacco control programs, smoking restrictions, and differences in cigarette taxation rates) are associated with substantial differences in per capita healthcare expenditures across the United States.
• A 10% relative drop in smoking in every state is predicted to be followed by a $63 billion reduction (in 2012 US dollars) in healthcare expenditure the next year.
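A rough back-of-envelope check of this figure (our own illustrative calculation, not taken from the paper): assuming the 10% relative drop applies to both smoking prevalence and packs per smoker, and taking total US healthcare expenditure in 2012 as roughly $2.8 trillion, the two reported elasticities combine as follows.

```python
# Illustrative consistency check of the $63 billion figure (assumptions stated above)
elasticity_prevalence = 0.118       # % expenditure change per 1% relative change in prevalence
elasticity_consumption = 0.108      # % expenditure change per 1% relative change in packs/smoker
relative_drop_pct = 10.0            # assumed relative drop in both smoking measures
national_expenditure_2012 = 2.8e12  # assumed total US healthcare spending, 2012 USD

pct_reduction = (elasticity_prevalence + elasticity_consumption) * relative_drop_pct / 100.0
print(f"predicted reduction: ${pct_reduction * national_expenditure_2012 / 1e9:.0f} billion")
# prints roughly $63 billion, consistent with the paper's headline estimate
```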
What Do These Findings Mean?
• Changes in healthcare costs appear quickly after changes in smoking behavior.
• State and national policies that reduce smoking should be part of short term healthcare cost containment.
Introduction
Smoking causes a wide range of diseases, including cardiovascular and pulmonary disease, complications of pregnancy, and cancers [1,2]. While the risks for some of these diseases, such as cancer, evolve over a period of years when people start and stop smoking, the risks for other diseases begin to change within days or months following changes in smoking behavior. For example, the risk of heart attack and stroke fall by about half in the first year after smoking cessation [3], and the risk of having a low birth weight infant due to smoking almost entirely disappears if a pregnant woman quits smoking during the first trimester [4]. There is a substantial literature showing that reductions in smoking behavior have substantial short and long run health benefits that reduce real per capita healthcare expenditures, beginning with reductions in cardiovascular disease, particularly heart attack and stroke [3], and respiratory disease [5].
Smoking cessation and reduction in secondhand smoke exposure in pregnant women, mothers, and children produce both very short run and long run reductions in healthcare expenditures [4,6]. The 2014 Surgeon General's report The Health Consequences of Smoking-50 Years of Progress ( [1], pp. 435-443) summarized 59 studies that reported immediate (often within 1 mo) 10%-20% drops in hospital admissions for acute myocardial infarction, other cardiac events, stroke, asthma, and other pulmonary events following implementation of smoke-free laws. These benefits extend to the elderly population [7], complications of pregnancy [8], and young children [8,9] and grow with time as the effects on slower-evolving diseases, such as cancer [10,11], emerge. Previous research found that increases in per capita funding for population-based tobacco control programs in California [12,13] and Arizona [14] were associated with reductions in cigarette consumption and, in turn, with reductions in per capita healthcare expenditure in those states compared to control populations in the rest of the United States. These studies reached similar conclusions using two different aggregate measures of population smoking behavior: (1) per capita cigarette consumption in California and Arizona [12,14] and (2) smoking prevalence and cigarette consumption per smoker in California [13]. This paper extends the second approach to estimate the link between smoking behavior and healthcare expenditure for the entire United States.
Methods
This paper estimates how much on average a 1% relative reduction in smoking prevalence in a US state reduces health costs in that state a year later. The analysis estimates this association (elasticity) while controlling for the effects of a variety of other differences between states that may produce a spurious association between reduction in smoking prevalence and reduced health expenditure, e.g., changes in population composition and other health behaviors that may also reduce health expenditure. To obtain this estimate for each state, we use a regression approach, with various refinements that take account of correlated time series. In the main and supplemental sensitivity analyses, we control, as much as possible when using state-aggregated data, for the effects of other variables that may influence healthcare expenditure at the state level in addition to smoking (e.g., demographic factors, such as population age composition and ethnic composition; other health risk behaviors in the population, such as alcohol use; and obesity). We also control for the possible effects of unmeasured variables (e.g., cross-state cigarette purchases) on the validity of the measure of cigarette consumption per smoker in each state.
The dependent variable in the regression model (Fig 1) is real (inflation-adjusted) annual per capita healthcare expenditure (including both public and private payers). The independent (explanatory) variables include two state-specific measures of smoking behavior (prevalence of current smoking and mean cigarette consumption per current smoker) as well as other state-specific factors that could affect healthcare expenditure (real per capita income, proportion of the population that is elderly, proportion of the population that is Hispanic, and proportion of the population that is African-American). Finally, state-specific intercepts were included in the regression to account for other factors that affect state healthcare expenditure that, while constant over time, could differ across states.

Fig 1. Real annual per capita state healthcare expenditure in each of the 50 states and the District of Columbia is modeled as a function of smoking behavior (current smoking prevalence and mean annual cigarette consumption per smoker). Because available data on mean consumption per smoker may be contaminated with measurement error that increases over the sample period due to increasing interstate tax differentials, the individual state cigarette tax rates are included to adjust for the effects of this possible measurement error. Other state-specific control variables that might affect per capita healthcare expenditure are included. To account for long run trends in healthcare expenditure that are correlated with the observed state-specific explanatory variables as well as other correlated but unobserved trends, the national averages of the dependent and explanatory variables are included in the regression. Finally, state-specific intercepts are included in the regression to model regional and state-specific factors that may affect expenditure.
Measures of smoking behavior, the other population factors we are considering, and healthcare costs change over time unpredictably because of changes in technology, access to care, and the nature of the population itself. From a statistical perspective, that means that the underlying process is nonstationary, and we need to account for this in the analysis. To do so, we also include the national cross-sectional averages of the dependent and independent variables as independent variables in the regression equation to account for their long run trends and trends in other correlated but unobservable variables associated with per capita healthcare expenditure that vary over the sample period [15][16][17]. Examples of overall national trends in per capita healthcare expenditure that are difficult or impossible to measure include developments in medical technology and the economic, regulatory, legal, or legislative environment that affect access to care and therefore utilization. Including the overall national trends as independent variables means that the regression coefficients for the state-specific explanatory variables are interpreted as the effects of the variation of the state-specific variables around the overall trends included in the model. For example, the coefficient of the prevalence of current smoking in each state can be interpreted as the effect of the departure of prevalence of smoking in that state from the overall national trend in prevalence of smoking on that state's per capita healthcare expenditure, after accounting for all the national trends included in the model.
There is also a possibility that the reported cigarette sales in a state (which we used to estimate annual per smoker cigarette consumption) might not be equal to the numbers of cigarettes smoked in a state. To adjust for possible measurement error in mean cigarette consumption per smoker, state-specific cigarette tax rates are also included in the regression model (Fig 1).
The independent variables are taken from the year before the healthcare expenditure data (i.e., lagged by 1 y), to allow for time for the independent variables to affect healthcare expenditure.
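To make the lag-and-log setup concrete, the following is a minimal sketch (not the authors' code) of how such a CCE-style fixed effects regression could be assembled in Python with pandas and statsmodels. The DataFrame `df` and all column names are hypothetical, and the sketch omits the regional tax-rate terms and robustness refinements described later.

```python
import numpy as np
import statsmodels.formula.api as smf

# Hypothetical panel DataFrame `df`: one row per state-year with columns
# state, year, exp_pc, prev, packs, income_pc, pct_65, pct_hisp, pct_black, tax.
df = df.sort_values(["state", "year"]).copy()

# Log-transform so that coefficients can be read as elasticities.
for c in ["exp_pc", "prev", "packs", "income_pc", "pct_65", "pct_hisp", "pct_black"]:
    df["ln_" + c] = np.log(df[c])

# Lag the explanatory variables by one year within each state.
rhs_vars = ["ln_prev", "ln_packs", "ln_income_pc", "ln_pct_65", "ln_pct_hisp", "ln_pct_black", "tax"]
for c in rhs_vars:
    df[c + "_l1"] = df.groupby("state")[c].shift(1)

# National cross-sectional averages of the lagged regressors and the outcome,
# entered as extra regressors to absorb common national trends (the CCE idea).
cce_vars = []
for c in [v + "_l1" for v in rhs_vars] + ["ln_exp_pc"]:
    df["xbar_" + c] = df.groupby("year")[c].transform("mean")
    cce_vars.append("xbar_" + c)

formula = "ln_exp_pc ~ " + " + ".join([v + "_l1" for v in rhs_vars] + cce_vars) + " + C(state)"
fit = smf.ols(formula, data=df.dropna()).fit()
print(fit.params[["ln_prev_l1", "ln_packs_l1"]])  # elasticity estimates
```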
Data
The estimated effects of smoking on healthcare costs are based on cross-sectional time series (panel) data on smoking, healthcare costs, and demographics for the 50 states and the District of Columbia (considered and referred to hereafter as 51 "states") for the years 1992 through 2009.
Healthcare expenditures. The main results use the Centers for Medicare and Medicaid Services (CMS) estimates of total (public and private payer) healthcare expenditure by state of residence [18]. We chose the CMS state of residence measure because it measures healthcare expenditures consumed by residents of each state, rather than the expenditure of healthcare providers located in each state regardless of the state of the recipient. Previous research [12][13][14] used aggregate state data for California or Arizona compared to an aggregate population from many control states, and there was no practical or statistically significant difference in regression results using the resident- and provider-based measures. State per capita healthcare expenditure was calculated by dividing total real state expenditure by the state resident population from the US Census Bureau.
Smoking behavior. Prevalence of current smoking and state and federal cigarette tax data were from the Behavioral Risk Factor Surveillance System (BRFSS) provided by the Centers for Disease Control and Prevention (CDC) State Tobacco Activities Tracking and Evaluation (STATE) System [19]. State-specific per capita cigarette consumption and cigarette tax rates were from The Tax Burden on Tobacco [20] provided by the CDC STATE System [19]. Cigarette consumption per smoker was calculated by dividing per capita cigarette consumption (based on each state's resident population from the US Census Bureau) by current smoking prevalence.
Demographic control variables. Total state resident population data and the proportion of the state resident population age 65 y or older were from the US Census Bureau [21][22][23]. The proportions of the population that are Hispanic and African-American were calculated from the BRFSS survey data [24]. The proportion of the population by race and ethnicity, used for sensitivity analysis, was calculated from the BRFSS data [24] rather than census data because complete data using consistent definitions were not available from the US Census Bureau over the whole sample period, and the effects of the adjustments following the decadal census on the annual census population estimates by race and ethnicity are so large that the estimates cannot be used in regression analysis without introducing spurious results due to breaks in the model-based trends across census years. State per capita personal income was taken from the US Bureau of Economic Analysis (BEA) regional economic accounts [25].
Adjusting for inflation. All monetary values are expressed in year 2010 US dollars using the regional medical care (for healthcare expenditures) and regional all-item (for cigarette taxes and personal income) Consumer Price Index for All Urban Consumers (CPI-U) [26].
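As a simple illustration of this deflation step, the helper below converts nominal dollars into constant 2010 dollars using a CPI series; the CPI values and the spending figure are hypothetical placeholders, not the published indices.

```python
import pandas as pd

# Hypothetical regional CPI-U series indexed by year (2010 is the reference year).
cpi = pd.Series({2008: 211.1, 2009: 214.5, 2010: 218.1, 2011: 224.9})

def to_2010_dollars(nominal: float, year: int) -> float:
    """Convert nominal dollars from `year` into constant 2010 dollars."""
    return nominal * cpi.loc[2010] / cpi.loc[year]

print(to_2010_dollars(6000.0, 2008))  # nominal 2008 spending expressed in 2010 dollars
```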
Missing data. There were up to 18 annual observations for the individual 51 states, making 918 data points. There are only 27 missing data points (2.9%) because of individual states not participating in the BRFSS in some years. All but three missing observations are due to delayed entry of 11 states into the BRFSS or a BRFSS component. Fisher's exact test and continuity-corrected Spearman's and Kendall's tau-a correlation coefficients were used to evaluate the association between the presence and length of lagged state entry into BRFSS and each state's smoking behavior and socio-demographics used in the analysis, state population, and geographic region. No statistically significant geographical or socio-demographic or economic relationships were found to explain the patterns of delayed entry among the states, so we consider the missing observations to be missing completely at random.
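For readers who want to reproduce this kind of missingness check, the sketch below runs the named tests with scipy on fabricated inputs; every array and table value is hypothetical, and scipy's kendalltau (tau-b) stands in for the continuity-corrected tau-a used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: delayed-entry indicator per state, length of the delay
# (years), and one state characteristic (e.g., per capita income in $1,000s).
rng = np.random.default_rng(0)
delayed = rng.integers(0, 2, size=51)
delay_years = delayed * rng.integers(1, 5, size=51)
income = rng.normal(40, 6, size=51)
in_region = rng.integers(0, 2, size=51)

# Fisher's exact test on a 2x2 table: delayed entry vs. membership in a region.
table = [[np.sum((delayed == 1) & (in_region == 1)), np.sum((delayed == 1) & (in_region == 0))],
         [np.sum((delayed == 0) & (in_region == 1)), np.sum((delayed == 0) & (in_region == 0))]]
odds_ratio, p_fisher = stats.fisher_exact(table)

# Rank correlations between delay length and the state characteristic.
rho, p_spearman = stats.spearmanr(delay_years, income)
tau, p_kendall = stats.kendalltau(delay_years, income)
print(p_fisher, p_spearman, p_kendall)
```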
Model
The regression model explains state per capita healthcare expenditure as a function of state per capita income, population age structure (proportion of the population that is elderly), proportion of the population that is African-American, proportion of the population that is Hispanic, and additional control variables that describe national trends in health care expenditure, such as changes in medical technology and the market for health care. Other variables that may affect the results were missing for some years and states, such as prevalence of insurance coverage and prevalence of other health risks (e.g., obesity and high blood pressure). A sensitivity analysis (detailed in S1 Text, Sensitivity Analyses) to determine whether inclusion of these variables would change the estimates substantially was conducted on the available observations.
Previous research compared smoking behaviors and per capita healthcare expenditures in California [12,13] and Arizona [14] to various control populations in the United States. Instead of selecting a distinct control population, this model uses the pooled common correlated effects (CCE) fixed effects estimator [15][16][17] on annual time series data for each of 51 cross-sectional units (the 51 states). The CCE fixed effects estimator uses the national cross-sectional averages (the arithmetic average of the 51 state-specific values for each year) of the dependent and explanatory variables to control for national trends in per capita healthcare expenditure, the other explanatory variables, and any correlated but unobservable common trends.
The model used for these national estimates has two parts (Fig 1). The details of the model appear in S1 Text (Detailed Description of the Model). The first part of the model is a first order autoregression (i.e., a regression that uses explanatory variables that are lagged one period) that models the effect of smoking behavior, adjusted for other explanatory variables, on state residential per capita healthcare expenditure. The first part of the model assumes that individual mean state cigarette consumption per smoker is observed without measurement error.
The natural logarithm of state per capita healthcare expenditure in each state is explained using the lagged natural logarithms of state smoking prevalence, mean cigarette consumption per smoker, per capita income, and several demographic variables and the lagged natural logarithms of their associated national averages across all the states. Using logarithms in this way yields regression coefficients that are interpreted as elasticities, which are dimensionless constants that give the percent change in the dependent variable associated with a 1% (relative) change in each explanatory variable. The logarithmic transformation produced better behaved residuals for individual state data than the linear specifications used in earlier work [12][13][14].
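Schematically (our notation, not an equation reproduced from the paper), the first part of the model is a log-log panel regression in which E is per capita healthcare expenditure, Prev and Packs are the two smoking variables, X collects the other state-specific controls, Z-bar the lagged national averages, and alpha_i the state-specific intercept, so each coefficient reads directly as an elasticity:

```latex
\[
\ln E_{i,t} = \alpha_i + \beta_1 \ln \mathrm{Prev}_{i,t-1} + \beta_2 \ln \mathrm{Packs}_{i,t-1}
            + \boldsymbol{\gamma}^{\prime} \ln \mathbf{X}_{i,t-1}
            + \boldsymbol{\delta}^{\prime} \bar{\mathbf{Z}}_{t-1} + \varepsilon_{i,t},
\qquad
\beta_1 = \frac{\partial \ln E}{\partial \ln \mathrm{Prev}} \approx \frac{\%\Delta E}{\%\Delta \mathrm{Prev}}.
\]
```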
The second part of the model adds an adjustment for possible measurement error in individual state observations of mean cigarette consumption per smoker due to untaxed cigarette consumption induced by differences in state cigarette taxes. A state-specific model for this type of measurement error (that would use different coefficients for each of the 51 states) led to severe multicollinearity and model specification problems, so the eight BEA economic regions were chosen as the most appropriate grouping for modeling variations in the effect of the individual state-specific cigarette tax rates over time. In particular, we retained information on individual state variation in cigarette tax rates while restricting the associated coefficients' values regionally. The BEA regions were chosen for the regional pattern of cigarette tax adjustment effects because the BEA regions reflect economically homogenous groups of states [27]. (The BEA regions are New England, Mideast, Great Lakes, Plains, Southeast, Southwest, Rocky Mountain, and Far West; the component states are listed in the first table in S1 Text.) Each individual state tax rate is assumed to have the same effect on unmeasured cigarette consumption within each BEA region, but this effect was allowed to vary across BEA regions. The implicit assumption used in choosing regional coefficients for the tax variables but not for other variables is that regional characteristics that affect unmeasured consumption (such as average size of state, distance from population centers to state borders, and cross-border commuting and other travel patterns) vary more by region than the relationship between the other explanatory variables and healthcare expenditure. This assumption was relaxed in one of the sensitivity analyses reported in S1 Text (Sensitivity Analyses).
Sensitivity Analysis
Several sensitivity analyses were conducted to check the possibility that the estimates that attribute changes in population health to smoking are related to other risk factors than smoking (and secondhand smoke exposure). The results of these sensitivity analyses are summarized below. Detailed results appear in S1 Text (Sensitivity Analyses).
Other health risk factors. The prevalence of other health risk factors was measured in the BRFSS surveys (prevalence of high blood pressure and high cholesterol among respondents who had those checked, and prevalence of abusive drinking, lack of insurance coverage, lack of regular exercise, diabetes, and obesity), and these prevalence estimates were all added to the final model (Table 1), both singly and simultaneously. Inclusion of other health risk factors produced elasticity estimates that were almost identical to those shown in the final model in Table 1. In keeping with the CCE modeling strategy, these factors were added to the model as state-specific and cross-sectional trend variables. None of the variables approached statistical significance when entered into the model together or one by one (S1 Text, Sensitivity Analyses). Many states did not have observations on the other health risk factors for all years, so including these variables caused instability in the residual diagnostics. Therefore, these variables were omitted from the final analysis.
Public policies that affect smoking behavior. Changes in smoking behavior may be correlated to other public health measures and general population awareness of healthy lifestyles, environmental health, and public policies that affect access to care. A sensitivity analysis of possible confounding by these factors was conducted by adding available time series variables that would be correlated with these factors, in the same way as was done for other health risks (S1 Text, Sensitivity Analyses). Variables describing the proportion of each state population that was covered by 100% smoke-free laws (i.e., complete smoking bans at specific venues, such as workplace, restaurants, etc.) and prevalence of lack of health insurance were added to the model in this sensitivity analysis.
Other factors. Consistent time series are not available for other factors that may be correlated with unmeasured changes in health risks or public health programs and policies. Perhaps the most prominent such variable is educational attainment in the population. A robustness check of the omission of this variable was conducted by studying the stability of relative state levels of educational attainment across time. Another robustness check was conducted by estimating the correlation over time between state educational attainment and a variable that should be highly correlated: state real per capita personal income.

Sensitivity to selection of estimation technique. Additional sensitivity analyses were conducted to evaluate the results of instrumental variable estimation for cigarette consumption per smoker by including instruments for the variables mean consumption per smoker, prevalence of cigarette smoking, per capita income, and proportion of the population age 65 y or older (S1 Text, Sensitivity Analysis). Sensitivity analyses were also conducted to account for possible correlation in healthcare expenditure between states due to unobserved factors and for other departures from standard assumptions on regression errors.
Estimated Change in Regional Healthcare Expenditures Attributable to Smoking
The estimated elasticities in Table 1 were used to estimate the net average annual BEA regional healthcare expenditure attributable to regional cigarette smoking behavior deviations from the national average over the sample period. The unit of observation and analysis is the individual state. Therefore, the estimated changes in state expenditures were aggregated to the regional level using equal weights to calculate the aggregate results for the eight BEA economic regions. Using equal weights gives the average experience of each state in the region, which is relevant for evaluation of policy at the state level. The estimates of population-weighted changes presented in S1 Text (Effect of Weighting Scheme on Regional Healthcare Expenditures Attributable to Smoking) were used as a measure of changes in expenditure for the regional populations. The national panel regression coefficients were used for this analysis (Table 1) because eight estimates of coefficients in the model (one for each BEA region) were more reliable than 51 estimates (one for each state, each of which would rest on a small sample of fewer than 20 observations per state).
Deviations in per capita healthcare expenditures from the average national level (savings below or excess expenditures above) were calculated for each state in four steps, and then aggregated to the BEA regional level. First, for each state, the arc elasticity estimates of the deviation in state healthcare expenditure attributable to the two smoking behavior variables were calculated by multiplying the estimated elasticities of per capita healthcare expenditure for prevalence of current smoking and measured mean cigarette consumption per smoker by the average percent difference between the respective individual state and national averages of the smoking behavior variables over the sample period. The elasticities estimated in the coefficients are valid for modeling the effect of infinitesimal changes in the explanatory variables; the arc elasticity is an adjustment to account for finite differences in the data. Second, the adjustments to per capita healthcare expenditures due to state tax differentials were calculated in the same way: arc elasticities for the tax rates were calculated by multiplying the estimated elasticities of healthcare expenditure by the average percent difference between the respective individual state and national averages of the state cigarette tax variables over the sample period. Third, the net regional healthcare expenditure attributable to smoking adjusted for mismeasurement was calculated for each state by subtracting the results of the second step from the results of the first step, by state. Fourth, the excess per capita expenditures for each BEA region were calculated by taking the simple arithmetic average across the states in each respective region. Total aggregate values for each state and region were calculated by multiplying the state or regional per capita estimates by the state or regional resident populations.
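The sketch below restates these four steps in code. It is a simplified illustration, not the authors' procedure: it ignores the arc-elasticity finite-difference correction and the interval estimates, and every input number is hypothetical.

```python
# Simplified sketch of the four-step attribution (hypothetical inputs).
ELASTICITY = {"prev": 0.118, "packs": 0.108}

def excess_per_capita(state, national, tax_elasticity):
    # Step 1: fractional deviation of expenditure attributable to the two smoking variables.
    smoking = sum(ELASTICITY[k] * (state[k] - national[k]) / national[k]
                  for k in ("prev", "packs"))
    # Step 2: adjustment for mismeasured consumption via the state tax differential.
    tax_adj = tax_elasticity * (state["tax"] - national["tax"]) / national["tax"]
    # Step 3: net attributable share, converted to dollars per capita.
    return (smoking - tax_adj) * state["exp_pc"]

national = {"prev": 0.212, "packs": 372.0, "tax": 1.00}
state = {"prev": 0.15, "packs": 320.0, "tax": 1.50, "exp_pc": 6426.0}
print(excess_per_capita(state, national, tax_elasticity=0.02))  # negative value = savings

# Step 4 (regional level): average the per capita values across the states in a
# BEA region; total dollars = per capita value times resident population.
```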
As a check on the reasonableness of the results, the proportion of measured cigarette consumption per smoker due to estimated untaxed consumption was calculated. The calculation was done by dividing the healthcare expenditure due to tax differentials (and therefore attributable to mismeasurement of cigarette consumption, found in step two above) by the average regional price of cigarettes to calculate the estimated unmeasured consumption in packs of cigarettes per capita. Estimated unmeasured consumption in packs of cigarettes per capita was then divided by the prevalence of current smokers to calculate the estimated unmeasured consumption in terms of packs per smoker. Then the estimated unmeasured consumption in terms of packs per smoker was divided by the measured mean cigarette consumption per current smoker to obtain the estimated unmeasured consumption as a proportion of measured consumption. This estimate gives the proportion of measured cigarette consumption in each region that is untaxed, which can be compared to survey estimates of the proportion of untaxed cigarettes consumed in the United States [28] and specific regions [29] to check the adequacy of our adjustment for measurement error in cigarette consumption and the plausibility of the resulting estimates of untaxed cigarette consumption.
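This chain of divisions is compact enough to write out directly; the function below mirrors it with hypothetical inputs.

```python
# Sketch of the plausibility check on untaxed consumption; all inputs are hypothetical.
def untaxed_share(exp_from_tax_diff, price_per_pack, prevalence, measured_packs_per_smoker):
    packs_per_capita = exp_from_tax_diff / price_per_pack   # unmeasured packs per capita
    packs_per_smoker = packs_per_capita / prevalence         # unmeasured packs per smoker
    return packs_per_smoker / measured_packs_per_smoker      # share of measured consumption

print(untaxed_share(exp_from_tax_diff=100.0, price_per_pack=5.0,
                    prevalence=0.20, measured_packs_per_smoker=372.0))  # ~0.27
```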
Interval estimates for the excess expenditures and the proportion of measured cigarette consumption that is untaxed were calculated using the covariance matrix of the elasticities (which, for the logarithmic transformation, is the same as the covariance matrix of the regression coefficients). The distributions of excess expenditures and of the proportion of unmeasured cigarette consumption were approximately normal, so formulas for the variances of functions of normally distributed variables were used to calculate standard errors (SEs).
Because we used the estimated elasticities to calculate the healthcare expenditure attributable to differences in smoking behavior, the estimates are independent of the sample distributions of the other variables in the model. The results can be thought of as quantifying the effects of changes in smoking behavior while holding all the other variables, such as per capita personal income and age distribution of the population, constant.
Results
The elasticities of healthcare expenditure with respect to smoking prevalence and measured mean cigarette consumption per smoker are 0.118 (SE 0.0259, p < 0.001) and 0.108 (SE 0.0253, p < 0.001), respectively (Table 1). What these elasticities mean is that 1% relative reductions in current smoking prevalence and in packs smoked per current smoker are associated with relative reductions of 0.118% and 0.108% of per capita healthcare expenditures, respectively. For example, the average prevalence of smoking, consumption per smoker, and per capita healthcare expenditure over the sample period were 21.2%, 372 packs per year, and $6,426, respectively. A 1% relative reduction in smoking prevalence from an absolute prevalence of 21.2% to 21.0% is associated with a $7.58 reduction in per capita healthcare expenditure. Likewise, a 5% relative drop in smoking prevalence (from 21.2% to 20.1% absolute prevalence) is associated with a reduction in per capita healthcare expenditure of $37.9. A 1% relative reduction in consumption per smoker from 372 packs per year to 368 packs per year is associated with a $6.94 reduction in per capita healthcare expenditure. A 5% relative drop in consumption per smoker (from 372 packs per smoker per year to 353 packs per year) is associated with a reduction in per capita healthcare expenditure of $34.7. The R2 statistics indicate that the regression has good explanatory power, particularly for describing variations in per capita healthcare expenditure within each state over time (Table 2).
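These illustrative figures follow directly from the elasticities and the sample-period average expenditure reported above; the short check below reproduces the arithmetic.

```python
# Quick arithmetic check of the illustrative figures quoted above.
exp_pc = 6426.0                       # sample-period average per capita expenditure ($)
elast_prev, elast_packs = 0.118, 0.108

print(round(elast_prev * 0.01 * exp_pc, 2))   # 7.58  ($ per capita, 1% drop in prevalence)
print(round(elast_packs * 0.01 * exp_pc, 2))  # 6.94  ($ per capita, 1% drop in packs/smoker)
print(round(elast_prev * 0.05 * exp_pc, 1))   # 37.9
print(round(elast_packs * 0.05 * exp_pc, 1))  # 34.7
```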
These estimates of decline in per capita healthcare expenditure associated with changes in smoking behavior are counterfactual predictions that assume that all other factors other than smoking behavior remain constant. The actual observed changes in healthcare expenditure in future years will also depend on additional state-specific variables such as per capita income and age structure of the population, in addition to their evolution via common trends across states.
Sensitivity Analyses
None of the sensitivity analyses for omitted variables produced a statistically significant or even barely noticeable change in the regression coefficients of the estimated model (S1 Text, Sensitivity Analyses). The other health risk factors and policy variables do not seem to be highly correlated with smoking behavior, at least at the population level. For some conditions, there are significant state and regional differences (and therefore significant correlation between the variables and smoking behavior) at any one point in time, but there is little variation between states over time. For example, in the case of obesity, at any one point in time, some states with high smoking prevalence have a higher than average prevalence of obesity. However, the prevalence of obesity in all states is increasing at approximately the same rate over time, albeit from different starting levels. For this reason, state-level variations in obesity in a particular year do not confound state-level variations in smoking behavior over time. The robustness analysis on education showed that the correlation between states in educational attainment over time was high, particularly for the prevalence of bachelor degrees in the population over time. However, state prevalence of both high school completion and bachelor degrees was highly correlated over time with state real per capita personal income; therefore, we believe the possible direct effects of education on healthcare expenditure, or indirect effects through correlation with smoking behavior, are accounted for in the per capita income variable.
The results of the sensitivity analysis on instrumental variables did not produce evidence of serious bias produced by problems with the instruments used for cigarette consumption per smoker, except for proportion of the population age 65 y or over (S1 Text, Sensitivity Analyses). When the proportion of the population that was elderly was instrumented, the coefficient of that variable was reduced by about half, but the change in the coefficient was not statistically significantly different from that presented in Table 1. There were no substantial changes in the coefficients of the other variables. There was no trend in the coefficient estimates as a function of factors that could produce bias, such as the strength of autocorrelation in the regression residuals, and the SEs of the estimates presented in Table 1 were consistent with the point coefficient estimates of the sensitivity analysis.
Estimated Change in Regional Healthcare Expenditures Attributable to Smoking
Without adjustment for mismeasurement of cigarette consumption per smoker, the Far West region has the largest estimated savings in annual per capita healthcare expenditure associated with departures of its smoking behavior from the national average: $210 (SE $45.5); the Southeast region has the largest excess expenditure: $154 (SE $30.7) (Table 3).
After adjustment for state tax differentials, the Far West still has the largest total estimated annual per capita savings, $182 (SE $51.7), but the New England region now has the largest excess per capita expenditure, $104 (SE $25.4); the Southeast has the next largest, $94.4 (SE $90.2) (Table 3). Total estimated annual expenditure due to the differences between regional and national smoking behavior ranges from a savings of $9,470 million (SE $2,690 million) in the Far West to a total excess expenditure of $7,330 million (SE $7,010 million) in the Southeast region (Table 3). The difference between measured and estimated true cigarette consumption per smoker was less than 20% for all BEA regions except the Southeast, where estimated true consumption was 23.6% (SE 29.2%) less than measured consumption, and New England, where estimated true consumption was 41.6% (SE 9.06%) higher than measured (Table 3). These estimates are similar to estimates from survey data collected by examining the source of cigarette packs in different states in 2009 and 2010 [28]. The model's statewide estimates of the proportion of cigarette consumption that is untaxed track survey estimates [29] for major urban centers in the Mideast and New England reasonably well (Table 4). The comparisons are complicated by two factors: the difference in the areas covered by the regions, and the fact that the survey estimates provide only ranges based on modeling assumptions. For example, untaxed consumption may be unusually high in New York City due to high local cigarette tax rates and may be higher there than on average in other areas of New York state. See S1 Text (State-Specific Healthcare Expenditures Attributable to Smoking) for population-weighted regional and individual state estimates of excess expenditure associated with smoking behavior.
Discussion
Our estimates provide strong evidence that reductions in smoking prevalence and in cigarette consumption per smoker are rapidly followed by lower healthcare expenditure. The model is dynamic and predicts per capita healthcare expenditures in the current year as a function of smoking behavior in the previous year. For example, 1% relative reductions in current smoking prevalence and mean cigarette consumption per smoker in one year are associated with a reduction in per capita healthcare expenditure in the next year of 0.118% + 0.108% = 0.226% (SE 0.0363%), with all other factors including common trends held equal. In 2012, total healthcare expenditures in the US were $2.8 trillion [30]; our results suggest that, holding other common trends and factors affecting healthcare expenditures constant, a 10% relative drop in smoking prevalence (about a 2.2% absolute drop) combined with a 10% relative drop in consumption per remaining smoker (about 37 fewer packs/year) would be followed in the next year by a $63 billion reduction in healthcare expenditure (in 2012 dollars).
These are short run 1- to 2-y predictions, and while they indicate that the effects of changes in smoking on healthcare expenditure begin to appear quickly, they do not imply that all changes in the costs and savings of smoking in the population are immediate. If all states reduce their prevalence of smoking and cigarette consumption per smoker, then the corresponding common trends will gradually change over time. The elasticity of the common trend for the prevalence of smoking (from the model estimated with all cross-sectional averages entered as separate variables, rather than using principal components) is relatively small and not statistically significant (−0.0545, SE 0.0581, p = 0.348), so it is unlikely to play a large role in longer run predictions. The elasticity of the common trend for cigarette consumption per smoker (−0.255, SE 0.0488, p < 0.001) is not small relative to the state-specific cigarette consumption per smoker variable. Over the longer run, changes in both smoking behavior variables will change the age structure of the population and trends in changes in healthcare expenditures related to the prevalence of elderly people in the population. Therefore, longer run predictions require a formal out-of-sample forecast study. The short run illustrative predictions presented here also assume the continuation of historical aggregate trends that have been associated with tobacco control policies, such as the declines in exposure to secondhand smoke and in prevalence of smoking during pregnancy.
These estimates are consistent with previous research on healthcare expenditures attributable to cigarette smoking in California [12,13] and Arizona [14]. The previous research used the aggregate population in control states to account for common trends in healthcare expenditure, while the present study used the cross-sectional average expenditure across states. The regression specifications also differ. In the previous research, specification searches were used to determine the best regression model to use to estimate the effects of smoking in California and Arizona versus the control states. Similar specification searches for each of the 51 cross-sectional units (i.e., states) in the present study were not feasible, and variables that are probably irrelevant for California and Arizona were left in the specification because they are required to be in the model for other states.
However, inclusion of irrelevant variables for a state will not bias the estimated elasticities and permits estimating an average effect across all states with a simple panel regression specification.
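As a quick check of the $63 billion illustration quoted above, the combined elasticity and its SE follow from the two reported elasticities (the SE calculation below ignores any covariance between them), and the dollar figure is the combined elasticity times the 10% relative drop times 2012 total expenditure.

```python
import math

# Check of the short run illustration in the Discussion (2012 dollars).
elast_prev, se_prev = 0.118, 0.0259
elast_packs, se_packs = 0.108, 0.0253

combined = elast_prev + elast_packs                  # 0.226
combined_se = math.sqrt(se_prev**2 + se_packs**2)    # ~0.036 (covariance term ignored)

total_exp_2012 = 2.8e12                              # total US healthcare expenditure, 2012
savings = combined * 0.10 * total_exp_2012           # 10% relative drop in both measures
print(round(combined, 3), round(combined_se, 4), f"${savings/1e9:.0f} billion")  # ~ $63 billion
```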
This analysis uses aggregate measures of population characteristics to estimate the relationships between smoking behavior variables and per capita healthcare expenditures. The elasticity estimates are not directly comparable to estimates of the economic burden of cigarette smoking using cross-sectional data on individuals in national health surveys [31]. Those estimates use data on individuals to calculate the healthcare expenditure attributable to cigarette consumption in individual current smokers or ever-smokers, contrasted to individual nonsmokers or never-smokers, respectively. Therefore, the expenditure estimates in the present study should not be interpreted as healthcare costs arising in, or due to, individual smokers or any specific individuals in the population. These estimates reflect all the healthcare expenditures associated with smoking that arise in a population, which include short and long term indirect effects on smokers and short and long term effects of second- and third-hand [32] smoke exposure in non-smokers. However, previously published aggregate estimates for California [13] that are similar to those presented here are somewhat larger than, but consistent with, cross-sectional estimates for that state using individual survey data [33], and the difference between these estimates is comparable to variation among different published cross-sectional estimates based on individual data [6,34,35].
Our estimates do avoid some problems of estimates based on cross-sectional data. An example is the "quitting sick" effect, which imputes expenditure savings to smokers who quit smoking after being diagnosed with a serious chronic tobacco-related disease, such as lung cancer or cardiovascular disease. The expected expenditure savings from quitting by a smoker who remains well will not be realized in those who quit sick because expensive and irreversible health effects of smoking have already occurred. The quitting sick effect is a consequence of incorrectly imputing missing information (the unobservable health status of the smoker at the time of cessation) that is not present in cross-sectional data. This study uses longitudinal data on measures of smoking behavior and healthcare expenditures on large populations and therefore is not subject to quitting sick effects because the excess health care costs of those who quit sick will be included in a state's total aggregate healthcare expenditure data along with the reduction in prevalence that occurs when the reduction in smoking of comparable people is recorded in surveys that represent the population of that state. It should be noted that some estimates of the health burden of cigarette smoking that account for quitting sick and other problems with estimates based on cross-sectional data find a higher burden of smoking-related disease and therefore higher smoking-attributable expenditures than most published cross-sectional estimates [36][37][38][39][40].
The estimates presented here cannot be used to reliably estimate the change in healthcare expenditure associated with complete elimination of cigarette consumption because the estimated elasticities apply only to modest variation around the status quo, but they do capture expenditures attributable to cigarette smoking in a large population that are difficult to measure from national health surveys (such as the effects of second-and third-hand smoke exposure, and long term effects of developmental problems from premature birth and low birth weight or asthma contracted during childhood, attributable to parental cigarette smoking).
Our methods may suffer from spurious regressions and attribute non-smoking public health factors that are correlated with smoking behavior to the smoking behavior. Specifically, this research does not estimate a smoking attributable fraction of healthcare costs for each state that corresponds to a measure that can be derived from individual survey data. Rather, it estimates the average national effect of variations in aggregate-level state-specific smoking behavior variables around the national trend in those variables on variations in state-specific real per capita healthcare expenditure around its national trend.
Limitations
The results of this study are subject to the limitations of analysis of aggregate observational data. A study of this nature that uses aggregate data and a relatively small sample size cannot, by itself, establish a causal connection between smoking behavior and healthcare costs, and that is not the goal of this study. Rather, this study should be evaluated in the context of the existing body of research that has already established that the relationship between smoking behavior and healthcare costs is causal using a variety of study designs [41][42][43][44][45].
These estimates do not address the issue of whether, over the whole life cycle, a population without any cigarette smoking would have higher healthcare expenditures due to longer lived non-smokers. Forecasting the very long run effects of reductions in smoking over the life cycle in a US population would require the construction of a model to forecast the eventual changes in the age structure of the population and resulting changes in per capita healthcare expenditures as a function of smoking behavior.
Conclusions
Lower smoking prevalence and cigarette consumption per smoker are associated with lower per capita healthcare expenditures. Historical regional variations in smoking behavior (including those due to the effects of state tobacco control programs, smoking restrictions, and differences in taxation) are associated with substantial differences in per capita healthcare expenditures across the United States. Those regions (and the states in them) that have implemented public policies to reduce smoking have substantially lower medical costs. Likewise, those that have failed to implement tobacco control policies have higher medical costs. Changes in healthcare costs begin to be observed quickly after changes in smoking behavior. State and national policies that reduce smoking should be part of short term healthcare cost containment.
Supporting Information S1 Text. Model estimation, additional detailed results, and sensitivity analysis. (PDF)
Author Contributions
Conceived and designed the experiments: JL SAG. Analyzed the data: JL SAG. Wrote the first draft of the manuscript: JL. Contributed to the writing of the manuscript: JL SAG. Agree with the manuscript's results and conclusions: JL SAG. All authors have read, and confirm that they meet, ICMJE criteria for authorship.
|
v3-fos-license
|
2020-06-25T09:10:05.846Z
|
2020-06-01T00:00:00.000
|
220045862
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4409/9/6/1506/pdf",
"pdf_hash": "c9e4bc730cd8943421c06aef24b9a80f0bf700bf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44093",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "2cc5ba051511db36bb827cd111896189cfecd8fc",
"year": 2020
}
|
pes2o/s2orc
|
Phosphorylation of PLK3 Is Controlled by Protein Phosphatase 6
Polo-like kinases play essential roles in cell cycle control and mitosis. In contrast to other members of this kinase family, PLK3 has been reported to be activated upon cellular stress including DNA damage, hypoxia and osmotic stress. Here we knocked out PLK3 in human non-transformed RPE cells using CRISPR/Cas9-mediated gene editing. Surprisingly, we find that loss of PLK3 does not impair stabilization of HIF1α after hypoxia, phosphorylation of c-Jun after osmotic stress, or the dynamics of the DNA damage response after exposure to ionizing radiation. Similarly, RNAi-mediated depletion of PLK3 did not impair the stress response in human transformed cell lines. Exposure of cells to various forms of stress also did not affect the kinase activity of purified EGFP-PLK3. We conclude that PLK3 is largely dispensable for stress response in human cells. Using mass spectrometry, we identify protein phosphatase 6 as a new interacting partner of PLK3. The polo-box domain of PLK3 mediates the interaction with the PP6 complex. Finally, we find that PLK3 is phosphorylated at Thr219 in the T-loop and that PP6 constantly dephosphorylates this residue. However, in contrast to PLK1, phosphorylation of Thr219 does not upregulate the enzymatic activity of PLK3, suggesting that activation of the two kinases is regulated by distinct mechanisms.
Introduction
Polo-like kinases (Plks) are evolutionarily conserved Ser/Thr protein kinases that play critical roles in progression through the cell cycle and mitosis [1]. All Plks share a similar structure, with an N-terminal catalytic domain and two or more C-terminal polo boxes that serve as a substrate-binding domain [2]. In vertebrates, the Polo-like kinase family comprises five members, including PLK1, which is essential for formation of a bipolar mitotic spindle and for cytokinesis [3,4]; PLK2 and PLK4, which are involved in centriole biogenesis and duplication [5]; and PLK5, which lacks kinase activity and plays a structural role in neurons [6]. Among the Plks, the function of PLK3 is the least explored. Originally, human PLK3 (previously also reported as FGF-inducible kinase (Fnk)) was found to localize to the plasma membrane, where it was shown to modulate cell adhesion in specialized cell types including macrophages [7,8]. PLK3 was also implicated in the cell cycle, in particular in control of the G1/S and G2/M transitions through promoting nuclear translocation of CDC25A and CDC25C, respectively [9,10]. However, the function of PLK3 in the cell cycle is likely not essential, as PLK3 knock-out mice are viable and fertile [11]. PLK3 was suggested to control the cell response to various forms of stress including osmotic stress, hypoxia, DNA damage, and Golgi stress [12]. Whereas PLK1 is rapidly inhibited after stress, the activity of PLK3 is believed to be stimulated by stress [10,12,13]. Following DNA damage, PLK3 is supposed to phosphorylate CHK2 at Ser62 and p53 at Ser20, leading to activation of the cell cycle checkpoint [13,14].
In addition, PLK3 was shown to promote DNA repair in G1 cells by phosphorylating a chromatin-bound C-terminal binding protein-interacting protein (CtIP) [15]. Hypoxia or hypoxia-mimicking treatment with CoCl2 was reported to activate PLK3 in the nucleus and to negatively regulate HIF1α levels in murine cells [16][17][18]. Upon hyperosmotic stress, PLK3 was reported to phosphorylate c-Jun and γH2AX in human corneal epithelia [19,20]. Finally, PLK3 was shown to be activated upon and contribute to Golgi fragmentation induced by nocodazole or brefeldin A treatment [21][22][23].
Here we used CRISPR/Cas9-mediated gene editing to inactivate PLK3 in human non-transformed cells and study the involvement of PLK3 in the cellular response to stress. Surprisingly, we find that PLK3 plays redundant roles in the cell response to DNA damage, osmotic stress and hypoxia. In agreement with these findings, we did not observe significant changes in the kinase activity of PLK3 purified from cells exposed to various forms of stress. To search for protein interactors that could modulate the function of PLK3, we used mass spectrometry and identified the PP6 holoenzyme in a stable complex with PLK3. We find that, similarly to PLK1, PLK3 is also phosphorylated in the T-loop and that inhibition of protein phosphatases increases the level of PLK3 modification. However, we found that mutation of Thr219 in the T-loop did not affect the activity of PLK3, suggesting that the mechanism of its regulation is distinct from that of PLK1.
Cells
Human hTERT-immortalized RPE1 cells (hereafter referred to as RPE) were obtained from ATCC and were grown in DMEM supplemented with 6% FBS (Gibco, Waltham, MA, USA), Penicillin and Streptomycin. Cells were regularly tested for mycoplasma infection using the MycoAlert kit (Lonza, Basel, Switzerland). To generate the PLK3 knock-out cell line, RPE cells grown in a 6-well plate were transfected with synthetic sgRNA (CRISPRevolution sgRNA EZ Kit; Synthego, Menlo Park, CA, USA) and recombinant EnGen Spy Cas9 NLS (New England Biolabs, Ipswich, MA, USA) using CRISPRMAX reagent (ThermoFisher Scientific). Two independent targeting sequences in exon 2 of human PLK3 were UGUCAGUGGCCUCGUAGCAG and GGGCUUCGCCCGCUGCUACG. Three days after transfection, single cells were seeded on 96-well plates and individual clones were expanded. Genomic DNA was isolated from individual clones, and the fragment corresponding to DNA from intron 1 to exon 3 was amplified by PCR, sequenced and analyzed by TIDE software (Desktop Genetics, Cambridge, MA, USA). For selected clones, PCR fragments were inserted into the pCR2.1-TOPO plasmid, and plasmid DNA from 10 bacterial colonies was sequenced to confirm individual alleles of PLK3. The following two independent clones were selected for further functional testing: RPE-PLK3-KO clone cr1.2 carries a single nucleotide insertion in the target site and a 54 bp deletion at the intron/exon 2 transition; RPE-PLK3-KO clone cr2.3 is a homozygote carrying a single nucleotide insertion within the target sequence in exon 2. Loss of PLK3 expression in the knock-out cells was further validated by immunoblotting. Silencer Select siRNA targeting PLK3 (PLK3 siRNA1 GGCUUUGGGUAUCAACUGU and PLK3 siRNA2 GCAUCAAGCAGGUUCACUA) was transfected at a final concentration of 5 nM using RNAiMAX (ThermoFisher Scientific), and cells were collected 48 h after transfection. HEK293 cells were transfected with the pcDNA4/TO/EGFP-PLK3-myc plasmid, and stable clones were selected by treating cells with zeocin for 3 weeks. Where indicated, cells were grown for 12 h in media supplemented with 150-300 µM CoCl2 to mimic hypoxia [16]. Hyperosmotic shock was induced by incubation of cells with media supplemented with 350 mM NaCl or 480 mM mannitol for 40 min [20]. Hypotonic shock was induced by incubation of cells with media diluted 1:1 with water.
Real-Time Quantitative Reverse Transcriptase PCR (qRT-PCR)
Total RNA was isolated 48 h after transfection of RPE cells with control or PLK3 siRNA using the RNeasy Mini Kit (Qiagen, Hilden, Germany). cDNA was synthesized from 3 µg total RNA using random hexamers and RevertAid H Minus Reverse Transcriptase (ThermoFisher Scientific). Real-time quantitative PCR was performed on a LightCycler 480 Instrument II (Roche, Basel, Switzerland) using LightCycler 480 SYBR Green I Master (Roche) and the following primers: PLK3-forward TGAGGACGCTGACAACATCTAC, PLK3-reverse CAGGTAGTAGCGCACTTCTGG, ATP5B-forward TGAAGAAGCTGTGGCAAAAGC, and ATP5B-reverse GAAGCTTTTTGGGTTAGGGGC. The relative amount of PLK3 mRNA is presented as the ratio to ATP5B mRNA.
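The paper reports the PLK3/ATP5B mRNA ratio but does not spell out the formula. A common way to express such a ratio from qPCR Ct values, assuming roughly 100% amplification efficiency for both amplicons, is the 2^-dCt form sketched below; the Ct values are illustrative only.

```python
# Hedged sketch: target/reference mRNA ratio from Ct values (2^-dCt), assuming
# ~100% amplification efficiency; this is an assumption, not the authors' stated method.
def relative_expression(ct_plk3: float, ct_atp5b: float) -> float:
    return 2.0 ** -(ct_plk3 - ct_atp5b)

print(relative_expression(ct_plk3=27.5, ct_atp5b=20.0))  # ~0.0055 (illustrative Ct values)
```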
Immunofluorescence
Cells grown on coverslips were fixed with 4% paraformaldehyde for 15 min at room temperature and permeabilized with 0.5% Triton X-100 for 10 min. Cells were further incubated with ice-cold methanol for 5 min and blocked with 3% BSA in PBS for 30 min. Coverslips were incubated with primary antibodies for 3 h, washed with PBS, and incubated with AlexaFluor-conjugated secondary antibodies for 1 h. Mounting was performed using Vectashield. Imaging was performed using a Leica SP8 confocal microscope equipped with a 63× oil objective (NA 1.40). Images were analyzed using LAS AF Lite software (Leica, Wetzlar, Germany). Induction of the DNA damage response was evaluated as described previously [32]. Briefly, cells were exposed to ionizing radiation (3 Gy) using an X-RAD 225XL instrument (Precision; Cu filter 0.5 mm), fixed with 4% PFA, permeabilized with 0.5% Triton X-100, and probed with an antibody against γH2AX (Cell Signaling Technology). Images were acquired using an Olympus ScanR system equipped with a 40×/NA 1.3 objective (Olympus, Tokyo, Japan). The number of γH2AX-positive foci per nucleus was determined using the spot detection module. More than 300 nuclei were quantified per condition.
Immunoprecipitation
HEK293 cells stably expressing EGFP or EGFP-PLK3 were extracted with IP buffer (20 mM HEPES pH 7.5, 10% glycerol, 150 mM NaCl, 0.5% NP40) supplemented with cOmplete protease and PhosSTOP phosphatase inhibitors (Sigma) and sonicated for 3 × 20 s on ice. Cell extracts were cleared by centrifugation at 15,000 rpm for 10 min at 4 °C and incubated with GFP-Trap beads (Chromotek, Planegg, Germany) for 2 h. After three washes in IP buffer, bound proteins were eluted from the beads with Laemmli buffer and analyzed by immunoblotting. Alternatively, bound proteins were analyzed by mass spectrometry using an Orbitrap Fusion (Thermo Scientific). Proteins bound to EGFP-PLK3 that were enriched compared to the empty EGFP control in at least two out of three independent experiments were considered as potential interactors and were validated by immunoprecipitation followed by immunoblotting. For the in vitro kinase assay, wild-type or mutant EGFP-PLK3 was immunoprecipitated using GFP-Trap, washed three times in IP buffer and incubated with casein in kinase buffer (10 mM HEPES pH 7.4, 5 mM MgCl2, 2 mM EGTA, 1 mM DTT, 2.5 mM β-glycerolphosphate, 100 µM ATP and 5 µCi 32P-γ-ATP) for 20 min at 30 °C. Proteins were separated using SDS-PAGE, and phosphorylation was visualized by autoradiography.
Cell Fractionation
RPE cells were fractionated as described before [33,34]. Briefly, the soluble cytosolic fraction was obtained by incubating cells in buffer A [10 mM HEPES pH 7.9, 10 mM KCl, 1.5 mM MgCl2, 0.34 M sucrose, 10% glycerol, 1 mM DTT, 0.05% Triton X-100 and protease inhibitor cocktail] at 4 °C for 10 min and spinning down at 1,500× g for 2 min. Pelleted nuclei were further extracted with an equal amount of buffer B [10 mM HEPES pH 7.9, 3 mM EDTA, 0.2 mM EGTA, 1 mM DTT] and spun down at 2,000× g for 2 min, yielding a soluble nuclear fraction. Insoluble chromatin was washed with buffer B and resuspended in SDS sample buffer.
Statistical Analysis
Signal intensity of the bands in Western blots was measured from biological replicates (n ≥ 3) using the gel analysis plug-in in ImageJ. After background subtraction, the signal was normalized to the corresponding loading control and to the non-treated condition. Statistical significance was evaluated using a two-tailed Student's t-test in Prism 5 software (GraphPad). Values of p < 0.05 were considered statistically significant.
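For readers unfamiliar with this normalization, the sketch below illustrates the same steps in Python with scipy instead of Prism; the band intensities are hypothetical example values, not data from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical background-subtracted band intensities from four biological replicates.
treated        = np.array([1200.0, 980.0, 1100.0, 1350.0])
treated_load   = np.array([900.0, 850.0, 880.0, 1000.0])
untreated      = np.array([700.0, 650.0, 720.0, 800.0])
untreated_load = np.array([880.0, 860.0, 900.0, 950.0])

# Normalize each band to its loading control, then to the mean untreated signal.
baseline       = (untreated / untreated_load).mean()
treated_norm   = (treated / treated_load) / baseline
untreated_norm = (untreated / untreated_load) / baseline

# Two-tailed unpaired t-test; p < 0.05 is treated as significant in the paper.
t_stat, p_value = stats.ttest_ind(treated_norm, untreated_norm)
print(t_stat, p_value)
```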
PLK3 Localizes to Plasma Membrane, Golgi and Centrosome
Subcellular localization of PLK3 has been reported controversially. Whereas some studies identified PLK3 at the plasma membrane and Golgi apparatus, others observed enrichment of PLK3 in the nucleus and nucleolus [7,9,16,19,23]. Here, we screened all commercially available PLK3 antibodies and tested them in immunoblotting and immunofluorescence. Using siRNA-mediated knock-down of PLK3, we found that most of the antibodies recognized major cross-reacting bands but failed to recognize endogenous PLK3 migrating on the electrophoretic gel in close proximity (Figure 1A,B). The only antibody that in our hands recognized endogenous PLK3 was the rabbit monoclonal antibody (clone D14F12) from Cell Signaling Technology. This antibody specifically recognized two bands migrating at 65-75 kDa (presumably corresponding to two isoforms of PLK3) and also showed cross-reactivity with a protein of approx. 80 kDa (Figure 1A). Both bands corresponding to PLK3 disappeared after depletion of PLK3 but not of its major homologue PLK1 (Figure 1B). As none of the tested PLK3 antibodies recognized endogenous PLK3 in immunofluorescence (data not shown), we generated HEK293 cells stably expressing EGFP-PLK3 or transiently expressed EGFP-PLK3 in RPE cells. In both cell lines, EGFP-PLK3 was strongly enriched at the plasma membrane (Figure 1C,D). In addition, EGFP-PLK3 colocalized with GM130 at the Golgi apparatus, and with γ-tubulin at the centrosome (Figure 1C,D). Staining with the rabbit monoclonal antibody from Cell Signaling showed a perfectly overlapping signal with EGFP-PLK3, suggesting that it can recognize PLK3 in immunofluorescence, but its titer may be too low for detection of the endogenous protein (Figure 1E). In contrast, most other PLK3 antibodies failed to recognize overexpressed EGFP-PLK3 in immunofluorescence and immunoblotting (Figure 1E,F and data not shown). A rabbit polyclonal PLK3 antibody from St. John laboratory stained centrosomes and cell nuclei; however, the nuclear signal did not co-localize with EGFP-PLK3 and was present also in cells lacking PLK3 (Figure 1E,F and data not shown). In contrast to previous reports of nuclear localization of PLK3, we did not observe accumulation of EGFP-PLK3 in the nucleus either in non-stressed conditions or after exposure of cells to genotoxic stress (Figure 1G) [9,19]. Similarly, we found that treatment of cells with leptomycin B did not result in accumulation of EGFP-PLK3 in the nucleus, suggesting that, contrary to the murine homologue Fnk, human PLK3 does not significantly shuttle between the nuclear and cytosolic compartments (Figure 1H) [35]. Fractionation of RPE cells revealed that PLK3 localizes mainly to the Triton X-100-soluble fraction containing cytosolic proteins, whereas we did not detect PLK3 in the chromatin fraction (Figure 1I). Our observations are thus in agreement with a newly described function of membrane-associated PLK3 in FasL-mediated cell death and with earlier reports implicating PLK3 in Golgi apparatus integrity [22,36].
Further, we explored the changes in EGFP-PLK3 distribution throughout the cell cycle. We found that PLK3 was expressed at comparable levels from G1 phase to mitosis (Figure 2A). Localization of PLK3 to the plasma membrane was preserved throughout mitosis (Figure 2B). In agreement with a previous report, we observed that PLK3 was also present at spindle poles during mitosis [37].
PLK3 Is Dispensable for the Cell Response to Genotoxic Stress and Osmotic Stress
Next, we wished to reevaluate the importance of PLK3 for various cellular functions. To this end, we knocked-out PLK3 in human diploid RPE cells using CRISPR/Cas9-mediated gene editing. We confirmed successful targeting of both alleles in the exon 2 of PLK3 by sequencing of genomic DNA and loss of the protein expression by immunoblotting ( Figure 3A-C). We exposed parental RPE and two independent clones of RPE-PLK3-KO cells to various forms of stress and probed their ability to trigger the downstream signaling. As expected, exposure of cells to UVC led to the induction of γH2AX and phosphorylation of c-Jun in parental RPE cells ( Figure 4A). Surprisingly, however, we did not observe any decrease in activation of these pathways in cells lacking PLK3 ( Figure 4A) [38]. Similarly, we did not observe a decreased level of the phosphorylated c-Jun after exposure of PLK3 knock-out cells to osmotic stress, suggesting that PLK3 is not involved in c-Jun phosphorylation ( Figure 4B) [20]. Importantly, RNAi-mediated knock-down of PLK3 in RPE and HeLa cells also did not reveal any impact of PLK3 on the ability to activate p38 and c-Jun pathways ( Figure 4C-E and data not shown). Treatment of cells with cobalt chloride mimics hypoxia and as expected, it increased levels of HIF1 alpha ( Figure 4F,G) [18]. Surprisingly, we found that RPE-PLK3-KO cells or HeLa cells transfected with PLK3 siRNA induced similar levels of HIF1 alpha, suggesting that PLK3 does not inhibit HIF1 alpha stabilization in human cells ( Figure 4F,G).
Finally, we evaluated the dynamics of DNA damage response by quantification of γH2AX nuclear foci formation after exposure of cells to ionizing radiation. As expected, we observed an increase in the number of nuclear foci in control cells at an early time-point after exposure to IR followed by a decrease at a later time point corresponding to DNA repair [32]. However, formation and disappearance of the γH2AX nuclear foci was comparable in parental and RPE-PLK3-KO cells ( Figure 4H). Similarly, RPE-PLK3-KO cells did not show any defect in the ability to phosphorylate CHK2 at Thr68 and KAP1 at Ser824 and Ser473, suggesting that ATM and CHK2 activation is not affected in the absence of PLK3 ( Figure 4I) [39,40]. Exposure of RPE-PLK3-KO cells to ionizing radiation also induced expression of p21, an established p53 target and mediator of the cell cycle checkpoint ( Figure 4I). In good agreement with the data in PLK3 knock-out cells, RNAi-mediated depletion of PLK3 also did not impair the activation of DNA damage response ( Figure 4J). We conclude that cells lacking PLK3 are able to activate ATM, arrest in the cell cycle checkpoint and repair DNA with comparable dynamics as parental cells.
In contrast to PLK1, which is rapidly inactivated upon genotoxic stress, PLK3 is believed to be strongly activated by various forms of stress [12,41]. To test this, we performed an in vitro kinase assay using casein as a substrate and EGFP-PLK3 purified from cells exposed to various treatments. We found that the wild-type but not the kinase-dead PLK3-K91R mutant efficiently phosphorylated casein in vitro (Figure 4K). Surprisingly, we did not observe any significant change in the activity of PLK3 upon exposure of cells to UVC, treatment with etoposide, CoCl2, or mannitol (Figure 4L,M). These data suggest that the enzymatic activity of PLK3 does not respond to stress and confirm our finding that PLK3 is dispensable for the cell response to genotoxic and osmotic stress.
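The quantification behind this conclusion (see the Figure 4 legend below) normalizes each kinase-assay signal to the untreated control and applies a two-tailed t-test at p < 0.05 over three replicates. The following is only a minimal sketch of that kind of analysis with invented densitometry values; it is not the authors' script, and the numbers carry no biological meaning.

```python
# Minimal, illustrative re-creation of the normalization + t-test workflow.
# All densitometry values below are invented (arbitrary units, n = 3).
import numpy as np
from scipy import stats

signals = {
    "untreated": np.array([1.00, 0.92, 1.08]),
    "UV":        np.array([0.95, 1.10, 0.99]),
    "etoposide": np.array([1.05, 0.90, 1.02]),
    "CoCl2":     np.array([1.12, 0.97, 1.01]),
}

control_mean = signals["untreated"].mean()
control = signals["untreated"] / control_mean            # normalized control values

for name, values in signals.items():
    if name == "untreated":
        continue
    normalized = values / control_mean                   # normalize to untreated control
    t_stat, p_value = stats.ttest_ind(normalized, control)  # two-tailed by default
    verdict = "significant" if p_value < 0.05 else "ns"
    print(f"{name}: median = {np.median(normalized):.2f}, p = {p_value:.3f} ({verdict})")
```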
PLK3 Interacts with PP6 Phosphatase Through the PBD Domain
To search for proteins that could modulate PLK3 function through protein-protein interaction, we immunoprecipitated EGFP-PLK3 from stable HEK293 cells and identified bound proteins using mass spectrometry. Proteins that were enriched in complex with EGFP-PLK3 compared to EGFP alone in at least two out of three independent experiments were considered as potential interactors (Figure 5A). Among other proteins, we identified four components of the serine/threonine-protein phosphatase 6 (PP6) that were highly enriched in the EGFP-PLK3 complex. These included three regulatory subunits, PPP6R1, PPP6R3 and ANKRD28, and the catalytic subunit PPP6C. Next, we performed immunoprecipitation from EGFP- or EGFP-PLK3-expressing cells and confirmed that PLK3 specifically interacted with endogenous PPP6C, as well as with the regulatory subunits PPP6R1 and PPP6R3 (Figure 5B). In contrast, we did not observe any interaction of EGFP-PLK3 with the PPP6R2 subunit.
Next we aimed to map the interaction between PLK3 and PP6. We found that the EGFP-PLK3-∆PBD mutant lacking the PBD domain showed impaired interaction with PP6 ( Figure 5C). Similarly, H590A-K592M mutation of the predicted interaction site in the PBD domain also impaired interaction with PP6, suggesting that the PBD domain is needed for mediating the binding with the PP6 holoenzyme ( Figure 5C) [2]. Interestingly, PP6 subunits were recently identified in a complex with PLK1, suggesting that interaction with PP6 might be conserved also for other polo-like kinase members [42]. Finally, we aimed to compare the subcellular distribution of PLK3 and PP6 complex. Immunofluorescence microscopy revealed colocalization of EGFP-PLK3 with endogenous PPP6C at the centrosome ( Figure 5D,F). In addition, antibody against PPP6R3 showed a strong nuclear staining but partially also colocalized with γ-tubulin and EGFP-PLK3 at the centrosome ( Figure 5E,F).
(Figure 4K–M legend, condensed: EGFP-PLK3, wild-type or kinase-dead K91R, was isolated by GFP Trap and incubated with casein in kinase buffer supplemented with radioactive ATP; phosphorylation of casein was detected by autoradiography. Cells were exposed to various forms of stress, including treatment with CoCl2 (300 µM, 12 h), exposure to UV (10 J/m2), etoposide (4 µM, 1 h) or hypertonic media (480 µM mannitol), or left untreated. Induction of stress pathways was analyzed by immunoblotting of cell extracts; arrowheads indicate identical positions on the electrophoretic gel. Kinase-assay signals were normalized to the non-treated control and plotted as median ± SD (n = 3); statistical significance was evaluated by a two-tailed t-test with significance set to p < 0.05; ns, non-significant.)
PLK3 Is Phosphorylated in the T-Loop but This Does Not Affect Its Kinase Activity
A common mechanism of PLK1 activation is phosphorylation in the T-loop at Thr210 by Aurora-A kinase during the G2/M transition and in mitosis [43][44][45]. As the T-loops of PLK1 and PLK3 are highly homologous, we asked if PLK3 could also be modified by phosphorylation (Figure 6A). We immunoprecipitated EGFP-PLK3 from cells and probed it with an antibody directed against pT210 of PLK1 (Figure 6B). As a control, we immunoprecipitated the EGFP-PLK3-T219A mutant carrying a single mutation in the T-loop and confirmed that the pT210-PLK1 antibody can specifically recognize PLK3 phosphorylated at Thr219 (Figure 6B). Further, we found that PLK3 was weakly phosphorylated at Thr219 in asynchronously growing cells. When we treated these cells with the broad-spectrum phosphatase inhibitor calyculin, we observed a dramatically increased level of PLK3 phosphorylation at Thr219, suggesting that modification at this site is constantly removed by an opposing phosphatase (Figure 6C) [46]. As PLK3 interacts with the PP6 phosphatase, we hypothesized that phosphorylation of PLK3 could be counteracted by the activity of PP6. To test this, we isolated PLK3 from cells transiently expressing FLAG-PP6 and indeed found a lower level of the T-loop phosphorylation (Figure 6D). Dephosphorylation of PLK3 was further pronounced when we co-expressed PP6 with the regulatory subunit PPP6R3 (Figure 6D).
(Figure 6B–G legend, condensed: EGFP-PLK3 variants were immunoprecipitated from transiently transfected cells using GFP-Trap and probed with the pT210-PLK1 or GFP antibody; where indicated, cells were treated with calyculin 20 min prior to harvesting, transfected with PP6C and/or PPP6R1 or with PP6C siRNA, or the isolated EGFP-PLK3 was incubated with lambda phosphatase for 20 min; in vitro kinase assays used casein and 32P-γ-ATP for 20 min at 30 °C (n = 2), with phosphorylation detected by autoradiography and the amount of precipitated PLK3 by GFP staining; arrowheads indicate identical positions on the gel.)
We also noted that the PLK3 mutant lacking the PBD domain, which is necessary for the interaction with PP6, showed a higher level of phosphorylation in the T-loop compared to the wild-type PLK3 (Figure 5C). Importantly, depletion of endogenous PP6 by RNAi increased phosphorylation of PLK3 at Thr219, indicating that PP6 phosphatase controls the level of PLK3 phosphorylation in cells (Figure 6E).
Finally, we used the in vitro kinase assay to test the contribution of the T-loop modification to PLK3 activity. We found that the wild-type PLK3 but not the kinase-dead mutant PLK3-K91R efficiently phosphorylated casein (Figure 6F). Surprisingly, however, the activities of the non-phosphorylatable mutant PLK3-T219A and the phosphomimicking mutant PLK3-T219D were comparable to the activity of the wild-type PLK3 (Figure 6F). Similarly, neither depletion of PP6 by siRNA nor the treatment of cells with calyculin affected the activity of PLK3 (Figure 6E). Finally, overexpression of PP6C together with its regulatory subunit PPP6R1 did not affect the activity of the immunoprecipitated EGFP-PLK3 (Figure 6G). We conclude that activation of PLK3 does not require phosphorylation of the T-loop at Thr219 and that PP6 phosphatase does not control the level of PLK3 activity in cells.
Discussion
In this study, we knocked-out PLK3 in human RPE cells using CRISPR/Cas9-mediated gene editing to study PLK3 function in stress response. Two independent clones of RPE-PLK3-KO showed no differences in stabilization of HIF1 alpha under hypoxic conditions, in phosphorylation of c-Jun after exposure of cells to UV irradiation, or in activation of the DNA damage response after exposure of cells to ionizing radiation. Importantly, we obtained similar results when we depleted PLK3 using RNAi in human diploid RPE cells or in transformed HeLa and U2OS cells. Our data suggest that PLK3 does not significantly contribute to the cell response to hypoxia or DNA damage in human cells. In agreement with this, kinase assays performed using EGFP-PLK3 purified from cells exposed or not to various forms of stress also did not show any significant increase in the kinase activity of PLK3. Specificity of the kinase assay used in this study was validated by a kinase-dead mutant EGFP-PLK3-K91R that did not show any enzymatic activity. We conclude that in human cells, PLK3 does not respond to cellular stress. Previous reports relied mostly on purification of endogenous PLK3 using a polyclonal antibody, the specificity of which was not validated [16,19,20]. Since this antibody is no longer available, we could not perform the kinase assays in parallel with our assay. We tested several other available antibodies but most of them did not show satisfactory specificity and sensitivity towards PLK3. The only antibody that in our hands reliably recognized endogenous PLK3 in immunoblotting was a rabbit monoclonal antibody from Cell Signaling Technology, but this antibody was not suitable for immunofluorescence microscopy. Other antibodies demonstrated poor affinity to exogenously expressed PLK3 and showed strong cross-reactivity in immunoblotting and non-specific nuclear staining in immunofluorescence. We believe that previous data implicating PLK3 in the stress response in human cells should be interpreted with caution as they could be affected by the low specificity of primary antibodies to PLK3. In this study, we observed enrichment of EGFP-PLK3 at the plasma membrane, which is in agreement with the recently described function of PLK3 in Fas ligand-induced apoptosis [36]. However, we were unable to validate this novel function of PLK3 because RPE cells are resistant to FasL treatment (data not shown).
Further, we show that PLK3 is post-translationally modified at Thr219 within the T-loop. The level of this phosphorylation is regulated by the PP6 holoenzyme that interacts with and continuously dephosphorylates PLK3. PP6 has recently been shown to regulate the activity of ASK3 kinase by controlling its phosphorylation upon osmotic stress conditions [47]. Therefore, we tested if PP6 could control PLK3 activity. However, we did not observe any changes in PLK3 interaction with PP6 upon osmotic stress (data not shown), and the activity of PLK3 was also not affected by depletion of PP6, overexpression of PP6 or treatment of cells with the phosphatase inhibitor calyculin. Phosphomimicking T219D and non-phosphorylatable T219A PLK3 mutants showed enzymatic activities comparable to that of the wild-type PLK3, suggesting that PLK3 is regulated by mechanisms distinct from those controlling PLK1. It is possible that a single modification of the T-loop of PLK3 is not sufficient to boost the kinase activity of PLK3 and additional modifications may exist that control its function in cells. Instead of being regulated by PP6 through the T-loop modification, PLK3 could also act upstream of the PP6 holoenzyme and control its function by targeting some of its subunits, which remains to be addressed by future research. We did not observe any changes in the level of phosphorylated Aurora-A at T288 in RPE-PLK3-KO cells, suggesting that PLK3 does not affect the PP6-Aurora-A axis during mitosis as has been reported for PLK1 (data not shown) [27,42]. Given the enrichment of PLK3 at the plasma membrane and Golgi apparatus, it will be interesting to test its potential impact on cell adhesion and intracellular trafficking, which are regulated by PP6 phosphatase [48,49]. Finally, there is emerging evidence that expression of PLK3 could affect the therapeutic response in melanoma, prostate cancer and colon carcinoma [50][51][52]. Further phosphoproteomic studies are needed to identify new substrates of PLK3 that could explain its role in cell physiology and sensitivity to chemotherapy.
Tailoring Ni and Sr 2 Mg 0.25 Ni 0.75 MoO 6 − δ Cermet Compositions for Designing the Fuel Electrodes of Solid Oxide Electrochemical Cells
The design of new electrode materials for solid oxide electrochemical cells, which are stable against redox processes as well as exhibiting carbon/sulphur tolerance and high electronic conductivity, is a matter of considerable current interest as a means of overcoming the disadvantages of traditional Ni-containing cermets. In the present work, composite materials having the general formula (1 − x)Sr2Mg0.25Ni0.75MoO6−δ + xNiO (where x = 0, 15, 30, 50, 70 and 85 mol.%) were successfully prepared to be utilised in solid oxide fuel cells. A detailed investigation of the thermal, electrical, and microstructural properties of these composites, along with their phase stability in oxidising and reducing atmospheres, was carried out. While possessing low thermal expansion coefficient (TEC) values, the composites having low Ni content (15 mol.%–70 mol.%) did not satisfy the requirement of high electronic conductivity. Conversely, the 15Sr2Mg0.25Ni0.75MoO6−δ + 85NiO samples demonstrated very high electrical conductivity (489 S cm−1 at 850 °C in wet H2) due to well-developed Ni-based networks, and no deterioration of thermal properties (TEC values of 15.4 × 10−6 K−1 in air and 14.5 × 10−6 K−1 in 50%H2/Ar; linear expansion behaviour in both atmospheres). Therefore, this material has potential for use as a component of a fuel cell electrode system.
Introduction
Solid oxide fuel cells (SOFC) are electrochemical devices capable of converting hydrogen and more readily available carbon-containing fuels into electricity with high efficiency and low emissions [1][2][3][4]. Traditional SOFC systems based on yttria-stabilised zirconia (YSZ) electrolytes operate at very high (more than 800 • C) temperatures required for reaching the sufficient performance [5,6]. However, such high temperatures impede the commercialisation of SOFCs due to the rapid component degradation associated with chemical (interdiffusion, chemical reactivity) and microstructural (electrolyte recrystallisation, electrode particle agglomeration, functional material delamination) factors [7][8][9][10]. While the degradation issue can be effectively tackled by designing low-and intermediate-temperature SOFCs, new challenges emerge in the course of developing the high-performance materials on which they are based.
Although typical Ni-based cermets are commonly used for SOFC anodes due to their excellent electrocatalytic properties [11][12][13], they have significant disadvantages associated with reduction-oxidation (redox) cycling instability and degradation due to the agglomeration of Ni particles occurring at high temperatures. Moreover, sulphur poisoning and carbon coking on the Ni-based anode surface are serious problems when SOFCs are used with hydrocarbon fuels [14]. In this regard, considerable efforts have been made for the development of alternative anode materials with good catalytic activity combined with high tolerance to sulphide(s) formation and carbon deposition [15][16][17][18][19].
It is well-known that the functional properties of the basic materials can be improved using the doping method. For example, when evaluated for use as SOFC anode materials, the complex oxides of the Sr 2 Ni 1−y Mg y MoO 6−δ (SNMM) system showed better stability in both oxidising and reducing atmospheres compared with the basic members of the SNMM system, i.e., Sr 2 MgMoO 6−δ and Sr 2 NiMoO 6−δ [31][32][33]. At the same time, the transport properties of the SNMM materials (0 < y < 1) remained unsatisfactory. A modification (composite preparation) method can be used simultaneously alongside a doping approach in order to improve the conductivity of such compounds. In our previous work, we proposed adding a SrMoO 4 impurity phase, passing into a well-conducting SrMoO 3 phase in a reducing atmosphere [34]. Such an addition underpinned the design of the new SNMM-SrMoO 4 (and SNMM-SrMoO 3 in reducing form) cer-cer composite materials exhibiting excellent chemical and redox stability as well as improved transport properties (>50 S cm −1 at 600 • C).
Another possible approach to optimising the properties of Mo-based oxides consists of the creation of cermets (ceramic-metal composite materials) [35,36]. For example, according to results of a study carried out by Niu et al., [35] Pd-impregnation of Sr 1.9 VMoO 6+δ resulted in a decrease in polarisation resistance at the electrode due to an improvement in the charge-transfer process. Xiao et al. [36] reported a similar effect for the Sr 2 Fe 1.5 Mo 0.5 O 6−δ fuel electrodes modified by a small amount of dispersed Ni phase. Despite the ostensive attractiveness of described impregnation/infiltration methods [37,38], the electrocatalytic activity of electrodes modified in this way tends to reduce over time due to a gradual dissolution of nanoparticles in the main backbone phase, leading to a decrease in the electrochemically active area.
Taking into account the mentioned drawbacks, we designed a new cermet composite system, (1−x)Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ + xNiO, with a wide variation in NiO concentration (15 ≤ x, mol.% ≤ 85). Particular attention was paid to studying the effect of second phase addition on the phase relation and microstructural features, as well as the thermomechanical and electrical characteristics depending on the oxidised and reduced form of the obtained composites.
Materials Preparation
To prepare the (1−x)Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ + xNiO composite materials, the Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ complex oxide was first synthesised using the glycine-nitrate synthesis method and then mechanically mixed with the NiO powder.
The details of the synthesis of the Sr2Mg0.25Ni0.75MoO6−δ material, selected on the basis of works [31,34], are as follows. The (NH4)6Mo7O24·4H2O, SrCO3, MgO and NiO powders used as starting components had a purity of not less than 99% (Sigma-Aldrich). SrCO3, MgO and NiO powders were weighed according to the strictly required ratio and then dissolved in dilute nitric acid. Following the complete dissolution of these powders, glycerin as a chelating agent was added in a mole ratio of 1:2 with respect to the total metal cations of the target composition; then an aqueous solution of ammonium molybdate with the known Mo-content (determined by thermogravimetric analysis) was also added. The obtained transparent solution was treated at 250 °C to provide pyrolysis. During this procedure, water evaporation, gelatinous mass formation, self-ignition, and the production of a highly dispersed powder were consistently observed. This powder was then calcined at 800 °C (2 h) in order to remove organic or carbon compounds, pre-synthesised at 1100 °C (5 h) to reach phase crystallisation and finally synthesised at 1100 °C (5 h) to ensure excellent chemical homogeneity. The powder was thoroughly milled (using an agate pestle and mortar) after each temperature treatment. The obtained Sr2Mg0.25Ni0.75MoO6−δ material was mixed with the NiO powder (Pulverisette 7 planetary mill, 400 rpm, 30 min); the concentration of NiO was varied from 15 to 85 mol.%. The composite materials were pressed at 250 MPa to form pellets (3 × 5 × 15 cm), which were then sintered at 1350 °C for 2 h.
Materials Characterization
The (1−x)Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ + xNiO composite materials were characterised by X-ray diffraction (XRD) analysis using a Rigaku D/MAX-2200VL/PC diffractometer [39]. The analysis was performed using Cu-K α radiation in an angle range of 20-75 • with a step of 0.02 • and a scan rate of 3 min −1 . The XRD analysis was also performed for the samples of (1−x)Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ + xNi reduced in pure H 2 at 800 • C for 5 h.
The morphology of the sintered and reduced ceramic materials was studied by scanning electron microscopy (SEM, Merlin, Carl Zeiss [40]) equipped with an X-Max Extreme (Oxford Instruments) detector for energy-dispersive X-ray (EDX) spectroscopy.
The thermal behaviour and thermal expansion coefficients (TECs) of the materials were evaluated using a DIL 402 C dilatometer (Netzsch GmbH). The experiments were carried out within a temperature range of 100-800 • C in both air and 50%H 2 /Ar gas media.
The electrical conductivity characterisation for the reduced samples was carried out using a four-point DC technique in wet hydrogen atmospheres. The temperature and conductivity were automatically controlled using a microprocessor system Zirconia-318 [41].
Phase Relation
In order to investigate a chemical stability and compatibility of the Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ double perovskite with NiO, an XRD study was carried out for both as-sintered and reduced samples ( Figure 1 and Figure S1). As can be seen, the XRD patterns contain reflections of the main double perovskite structure, NiO and trace amounts of a SrMoO 4 phase ( Figure 1a) for all the materials obtained following the sintering procedure. It should be noted that the existence of the latter is a characteristic feature for compounds with a general A 2 BMoO 6 formula prepared under oxidising conditions [42][43][44].
Following exposure in H2, no SrMoO4 phase (or reduced SrMoO3 product) was found: almost all the samples represented two-phase systems consisting of the double perovskite and Ni compounds (Figure 1b). The most likely explanation for the disappearance of the SrMoO3 impurity is its dissolution in the basic phase. Interestingly, the reduced material of 85% Sr2Mg0.25Ni0.75MoO6−δ + 15% Ni nominal composition was found to be single-phase. This can be attributed either to a complete co-dissolution of SrMoO3 and Ni or to insufficient diffractometer resolution, which only permits detection of phases in concentrations greater than 3 wt.%. In this reduced composite material, the weight fraction of Ni is equal to ~2.4 wt.%.
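As a quick plausibility check of the ~2.4 wt.% figure quoted above, the nominal 85 mol.% Sr2Mg0.25Ni0.75MoO6−δ + 15 mol.% Ni composition can be converted into a weight fraction from standard atomic masses. The short sketch below neglects the oxygen non-stoichiometry δ and is therefore an approximation, not a reproduction of the authors' calculation.

```python
# Approximate Ni weight fraction in the reduced 85% Sr2Mg0.25Ni0.75MoO6 + 15% Ni
# composite, neglecting oxygen non-stoichiometry (delta) in the perovskite phase.
M = {"Sr": 87.62, "Mg": 24.305, "Ni": 58.693, "Mo": 95.95, "O": 15.999}

# Molar mass of the double-perovskite phase Sr2Mg0.25Ni0.75MoO6
m_smnm = 2 * M["Sr"] + 0.25 * M["Mg"] + 0.75 * M["Ni"] + M["Mo"] + 6 * M["O"]

x_ni = 0.15                                   # mole fraction of the Ni phase
mass_ni = x_ni * M["Ni"]
mass_total = (1 - x_ni) * m_smnm + mass_ni

print(f"Ni weight fraction = {100 * mass_ni / mass_total:.1f} wt.%")  # ~2.4 wt.%
```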
Thermal Behaviour
In order to satisfy thermo-mechanical criteria as well as suppress the strain and stress during operation of the electrochemical SOFC devices at elevated temperatures, the thermal expansion behaviour of the oxides needs to be evaluated. In the case of new anode materials, their thermal behaviour was verified not only for the oxidising but also for the reducing conditions in which they operate. Figure 2 and Figure S2 show the dilatometry curves of the oxidised (1−x)Sr2Mg0.25Ni0.75MoO6−δ + xNiO ceramic composites and their reduced products. Moreover, the pure NiO sample was also prepared and included in the general system of the composites. As can be seen, the curves for pure NiO and Ni show slope changes in their linear trend in air and in the 50% H2/Ar mixture, respectively, indicating the presence of undesirable phase transitions. Conversely, all composites exhibit a linear behaviour of thermal expansion in the whole studied temperature range without any detectable curvature.
From the dilatometry dependencies, the average thermal expansion coefficient (TEC) values were calculated as α = (1/L0)·(ΔL/ΔT), where L0 is the initial length of the sample and ΔL is the length variation over the temperature change ΔT. According to Table 1, the average TEC values changed insignificantly when varying the NiO concentration in the oxidised samples and the Ni concentration in the reduced samples; they lie in the ranges of (15.3 ± 0.3)·10−6 K−1 and (14.2 ± 0.4)·10−6 K−1, respectively. With regard to the type of atmosphere, it can be revealed that the calculated TEC values for the composite materials were slightly lower in 50% H2/Ar than those obtained in air. The difference in the observed TECs is caused by those elements capable of changing their oxidation state. Therefore, the following factors occur for the studied system:
Together with a minor strain in the cationic sublattice, the dimensional change (contraction) of the anionic sublattice is estimated to be more pronounced due to oxygen desorption (r(O2−) = 1.40 Å [46,47]) occurring as compensation for the Mo-ion reduction process. Here, the ionic radii values are provided using Shannon's system [48].
3. NiO undergoes a complete reduction in a hydrogen atmosphere until the formation of a Ni metallic phase. The volume change during this reduction amounts to ~40% [49].
A comparison of the abovementioned factors allows two conclusions to be drawn. The first is that the differences between α_ox and α_red are predominantly caused by the contraction of the anionic sublattice. Such a contraction, along with the Mo-ion reduction, results in a more closely packed lattice, for which the vibration amplitude can be lowered due to strengthening of the M–O (M = Fe, Mo) ionic bonds. This is in accordance with the shift of the characteristic XRD reflections of the reduced materials to higher angles in comparison with the oxidised materials (Figure S1). The second conclusion is that the thermal behaviour of the materials is not determined by the NiO or Ni phase, with the exception of the composite having x = 85, which had a higher TEC value compared with the other composites. The second conclusion is also confirmed by the fact that pure NiO and Ni phases exhibit non-monotonic expansion and very high TEC values (Table 1) due to phase transitions [50].
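To make the averaging used for Table 1 concrete, the sketch below applies the average-TEC definition α = (1/L0)·(ΔL/ΔT) to a synthetic dilatometry trace. The L(T) points are invented and only illustrate the procedure, not the measured curves of Figure 2.

```python
# Average TEC from a synthetic dilatometry trace via a linear fit of dL/L0 vs T.
import numpy as np

L0 = 10.000                                    # initial sample length, mm (assumed)
T  = np.array([100, 300, 500, 800], float)     # temperature, °C
dL = np.array([0.000, 0.031, 0.062, 0.108])    # elongation relative to 100 °C, mm (assumed)

slope, _ = np.polyfit(T, dL / L0, 1)           # slope of dL/L0 vs T equals alpha
print(f"average TEC = {slope:.2e} K^-1")       # ~1.5e-05 K^-1 for these numbers
```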
Conductivity Behaviour
The total conductivity of the (1−x)Sr 2 Mg 0.25 Ni 0.75 MoO 6−δ + xNi ceramic materials in wet hydrogen atmosphere is shown in Figure 3. The composites having a low Ni concentration (x = 15, 30 and 50) displayed virtually the same conductivity level. As mentioned above, these composites are comprised of a Mo-based framework in which the Ni-based phase is statistically distributed. Therefore, no continuous metallic phase is formed for these objects, causing their fairly low conductivity levels in accordance with the transport properties of some double molybdates (Table S1, [27,[51][52][53]). When the Ni concentration was increased, the conductivity tended to increase considerably, up to~2.7 S cm −1 at 800 • C ( Table 2) and then to more than 450 S cm −1 at the same temperature. Moreover, the conducting behaviour of the composites was also quite varied, explained in terms of a change in the slope of conductivity dependencies in Arrhenius coordinates. This again indicates that the percolation effect is invoked when the nickel content varies between 70 mol.% and 85 mol.%.
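Since the text above interprets the change in slope of the conductivity curves in Arrhenius coordinates, the sketch below shows one common way of extracting an apparent activation energy from such data, using a linear fit of ln σ versus 1/T. The conductivity values are assumed (roughly at the few S cm−1 level of the low-Ni composites), and the simple ln σ form is used rather than ln(σT); it is an illustration, not the analysis performed in the paper.

```python
# Apparent activation energy from an Arrhenius fit: ln(sigma) = ln(sigma0) - Ea/(kB*T).
import numpy as np

k_B = 8.617e-5                                     # Boltzmann constant, eV K^-1
T = np.array([600, 650, 700, 750, 800]) + 273.15   # temperature, K
sigma = np.array([0.9, 1.2, 1.6, 2.1, 2.7])        # conductivity, S cm^-1 (assumed values)

slope, intercept = np.polyfit(1 / T, np.log(sigma), 1)
E_a = -slope * k_B
print(f"apparent activation energy = {E_a:.2f} eV")   # ~0.44 eV for these numbers
```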
Microstructural Features
In order to understand the thermal and electrical behaviours of the materials developed, they were characterised by SEM analysis. The corresponding images for the as-sintered and reduced composite samples are presented in Figures 4 and 5, respectively. Analysing the data obtained for the oxidised (1−x)Sr2Mg0.25Ni0.75MoO6−δ + xNiO materials (Figure 4), it can be noted that they were rather porous (10-20 vol.%) and consisted of a grain-based structure with well distinguished grain boundaries at low x values, while more dense samples with a lower porosity (5 vol.%-10 vol.%) and solid structure were formed at high x values. Since the composite materials were multi-phase (Figure 1a), different micro- and sub-micro sediments were detected along with the grains (Figure S3).
When the composites were reduced, their ceramic parameters were changed (Figure 5). In detail, all the samples exhibited a crystallite structure composed of grains of two (Ni- and molybdate-based) phases and large amounts of pores (20 vol.%-30 vol.%). The latter was mostly caused by the mentioned volume changes during the NiO → Ni reduction. The results of the EDX spectroscopy showed that the Ni metallic phase was initially located as individual particles and then formed a continuous network with a gradual increase of nickel concentration. Only in the case of 85 mol.% Ni in the composite system does the volume fraction of this metal exceed the percolation threshold, resulting in the sharp changes in TECs (Table 1) and a dramatic increase in electronic conductivity (Figure 3).
From the results obtained, the following conclusions can be made:
1. All the materials were stable in both oxidising and reducing atmospheres. The reduced samples were found to comprise dual-phase materials, while an impurity SrMoO4 phase was detected along with the two target phases for the oxidised samples.
2. Thermal expansion of the studied composite materials was linear over the entire temperature range (200-800 °C); the calculated TEC values remained more or less consistent with a variation in composition, decreasing from the oxidised to the reduced samples.
3. The total conductivity of the reduced composites did not exceed 3 S cm−1 at 800 °C at 15 ≤ x, mol.% ≤ 70, whereas it amounted to 450 S cm−1 for x = 85 mol.% at the same temperature.
The 15Sr2Mg0.25Ni0.75MoO6-δ + 85NiO composite material and its reduced product have potential for use in a fuel electrode system due to their high conductivity and tolerance to meaningful dimensional changes. It should be noted that such a composite is characterised by the high amount of nickel, the presence of which might lead to sulfidation and carbonization [3]; nevertheless, the co-presence of the double molybdate phase is assumed to promote S-desorption and inhibit coke formation [54,55]. Moreover, its electrochemical behaviour should be verified, for example, using electrochemical impedance spectroscopy, which will be addressed in future research.
Copper-based metal-organic frameworks (BDC-Cu MOFs) as supporters for α-amylase: Stability, reusability, and antioxidant potential
Copper-based metal-organic frameworks (BDC-Cu MOFs) were synthesized via a casting approach using 1,4-benzenedicarboxylic acid (BDC) as the organic ligand, and their properties were characterized. The obtained materials were then utilized to immobilize the α-amylase enzyme. The chemical composition and functional components of the synthesized support (BDC-Cu MOFs) were investigated with Fourier transform infrared spectroscopy (FTIR), the surface morphology was determined with scanning electron microscopy (SEM), and the elemental composition was established with energy dispersive X-ray (EDX) analyses. X-ray diffraction (XRD) was employed to analyze the crystallinity of the synthesized BDC-Cu MOFs. The zeta potentials of BDC-Cu MOFs and BDC-Cu MOFs@α-amylase were determined. The immobilized α-amylase demonstrated improved catalytic activity and reusability compared to the free form. Covalent attachment of the α-amylase to BDC-Cu provided an immobilization yield (IY%) of 81% and an activity yield (AY%) of 89%. The immobilized α-amylase showed high catalytic activity and 81% retention even after ten cycles. Storage at 4 °C for eight weeks resulted in a 78% activity retention rate for BDC-Cu MOFs@α-amylase and 49% retention for the free α-amylase. The optimum activity occurred at 60 °C for the immobilized form, whereas the free form showed optimal activity at 50 °C. The free and immobilized α-amylase demonstrated peak catalytic activities at pH 6.0. The maximum reaction velocity (Vmax) values were 0.61 U/mg of protein for free α-amylase and 0.37 U/mg of protein for BDC-Cu MOFs@α-amylase, while the Michaelis–Menten affinity constant (Km) was lower for the immobilized form (5.46 mM) than for the free form (11.67 mM). Treatments of maize flour and finger millet samples with free and immobilized α-amylase resulted in increased total phenolic contents. The enhanced antioxidant activities of the treated samples were demonstrated with decreased IC50 values in ABTS and DPPH assays. Overall, immobilization of α-amylase on BDC-Cu MOFs provided improved stability and catalytic activity and enhanced the antioxidant potentials of maize flour and finger millet.
Introduction
Enzymes are biomolecules that serve as catalysts for biochemical reactions, and they find widespread use across various fields. The broad utilization of these molecules can be attributed to their gentle reaction conditions, specific substrate preferences, and ecofriendliness [1,2]. Enzymes are significant catalysts that play crucial roles in biosensors and other biotechnological processes [3-5]. α-Amylase is one of the most commonly used amylolytic enzymes in the food, textile, detergent, pharmaceutical, paper, and leather industries [6][7][8]. To release short-chain oligosaccharides from polysaccharides, a glycoside hydrolase (GH) must first break their α-1-4 glucosidic linkages. However, enzymes have limitations, as they tend to lose their original structures when exposed to harsh environmental conditions. To provide an appropriate environment for effective enzyme activity, companies are exploring various alternatives [9]. Immobilization may overcome these limitations. Immobilized enzymes are biocatalysts that are fixed or confined to a support matrix or surface, which provides a stable and reusable enzyme system. Immobilization enhances the stabilities, activities, and selectivities of enzymes while also facilitating separation and recovery of the enzyme after it has catalyzed the substrate reaction. Immobilized enzymes have found extensive use in various industrial applications, such as in the production of food and beverages, pharmaceuticals, and biofuels. Despite the numerous advantages of using immobilized enzymes, there are also certain limitations to this approach. Immobilization sometimes leads to a reduction in enzyme activity or a decrease in the reaction rate due to alterations in the enzyme conformation or its accessibility to the substrate. Furthermore, the immobilization process can be costly and time-consuming, which may restrict usage in certain applications [10][11][12]. For a material to be suitable as a support, it must retain the maximum possible level of enzyme activity while also protecting the enzyme and permitting reuse in practical applications. A variety of support materials have been developed to protect enzymes, including metal-organic frameworks (MOFs) [13][14][15][16]. Self-assembly of rigid organic bridging ligands and metal ions leads to the creation of metal-organic frameworks (MOFs), which are novel organic-inorganic hybrid nanomaterials [17,18]. These materials are very versatile and can be utilized in a broad range of applications due to their well-organized porous crystal structures, adjustable pore sizes, and ease of chemical modification [19]. They have gained significant attention in recent years due to their unique properties and potential for application in various fields, including drug delivery [20], dye and metal adsorption [21], chromatographic separation [22], catalysis [23], and sensing [24]. MOFs have recently emerged as promising materials for enzyme immobilization [25][26][27].
MOFs have several unique properties that make them ideal for enzyme immobilization. Their high surface areas and adjustable pore sizes enable effective enzyme loading and provide precise control over enzyme immobilization. Additionally, MOFs can safeguard the enzymes from harsh conditions, such as high temperatures, extreme pHs, and organic solvents, which often lead to enzyme denaturation and activity loss. This is accomplished by confining the enzymes within the MOF pores, which shields them from the external environment. Furthermore, the functional groups within the MOFs facilitate enzyme immobilization through covalent bonding or electrostatic interactions. In recent years, there has been growing interest in using MOFs as support materials for enzyme immobilization [28][29][30]. Zeyadi et al. [31] reported the immobilization of horseradish peroxidase onto an NH2-MOF-Zr through covalent bonding. The resulting biocatalyst exhibited reusability and higher efficacy in removing phenol than the free enzyme. Atiroglu et al. [32] also reported the immobilization of α-amylase onto OLB/BSA@ZIF-8 MOF with covalent bonding and adsorption. The resulting biocatalyst showed high activities and stabilities at different pHs and temperatures. Acet et al. reported the immobilization of α-amylase onto carbon felt modified with Ni2+ ions. The resultant biocatalyst demonstrated superior reusability and increased efficiency in the degradation of starch compared to the free enzyme [33].
The objective of this study was to immobilize α-amylase on copper integrated with 1,4-benzenedicarboxylic acid (BDC-Cu MOFs). The support material was characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and zeta potential measurements. The immobilization parameters were optimized for high efficiency. Kinetic parameters, temperature, pH, and storage stabilities were evaluated to compare the properties of the immobilized enzyme with soluble α-amylase.
This study has several novel contributions. Firstly, it introduces the synthesis of copper-based metal-organic frameworks (BDC-Cu MOFs) via a casting approach, which is a less explored method. Characterization techniques, including FTIR spectral analysis, SEM, EDX analysis, and XRD, provided a comprehensive understanding of the synthesized BDC-Cu MOFs. Additionally, immobilizing α-amylase on BDC-Cu MOFs represents a novel application of this specific MOF. The evaluation of the stability, reusability, and catalytic activity of the immobilized α-amylase, along with its enhanced performance compared to the free form, adds to the novelty of this research. Lastly, the study demonstrates the potential of BDC-Cu MOF-supported α-amylase in enhancing the antioxidant properties of maize flour and finger millet, which is a unique contribution of this specific MOF.
Synthesis of the material support (BDC-Cu)
The synthetic strategies employed in nanoparticle fabrication are designed to optimize various physicochemical properties, morphologies, and crystallite sizes to achieve improved stabilization, monodispersity, and biocompatibility. In this particular study, a casting approach was chosen for the synthesis of BDC-Cu MOFs due to its effectiveness in controlling the morphologies, uniformities, surface charges of particles, crystallite sizes and agglomeration [34]. Trimethylamine (TMA), an organic base, played a crucial role in the MOF synthesis by deprotonating the organic linker molecules or ligands. This deprotonation process facilitated coordination of the metal ions with the metal-ligand bonds that provide the stability of the MOF structure [35]. In this study, copper-based metal-organic frameworks (MOFs) were synthesized by using 1,4-benzenedicarboxylic acid (BDC) as the organic linker in the presence of N,N-dimethylformamide, as illustrated in Scheme 1. The resulting coordination complex contains 1,4-benzenedicarboxylate ligands that are coordinated to copper ions (Cu2+) in a bidentate bridging fashion. Each Cu2+ atom is also coordinated by a molecule of DMF, resulting in a square-pyramidal coordination geometry for the Cu2+ atoms. This arrangement provided a specific structure in which the Cu2+ atoms were coordinated to the BDC linkers in the (201) planes. These planes or sheets were then connected through weak stacking interactions, which are similar to those observed in a material called MOF-2 [36]. Clausen et al. reported the structure of a polymorph of MOF-2 that shared the same space group and exhibited unit-cell parameters similar to those of the described complex [37]. The resulting MOFs were obtained as solid crystalline materials. The synthesized MOF crystals were subjected to characterization techniques to analyze their properties. Furthermore, the obtained MOF crystals were utilized for immobilizing the enzyme α-amylase. The covalent binding of α-amylase to BDC-Cu resulted in an immobilization yield (IY%) of 81% and an activity yield (AY%) of 89%.
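The immobilization yield (IY%) and activity yield (AY%) quoted above can be calculated as in the sketch below, which assumes the commonly used definitions (protein depleted from the coupling solution for IY%, activity of the immobilized preparation relative to an equivalent amount of free enzyme for AY%). The input numbers are illustrative and chosen only so that the outputs match the reported 81% and 89%; the authors' exact definitions and raw data may differ.

```python
# Illustrative IY%/AY% calculation under commonly used (assumed) definitions.
protein_offered  = 10.0   # mg protein brought into contact with BDC-Cu MOFs (assumed)
protein_unbound  = 1.9    # mg protein recovered in supernatant and washes (assumed)
activity_free    = 100.0  # U of an equivalent amount of free alpha-amylase (assumed)
activity_immobil = 89.0   # U measured for the immobilized preparation (assumed)

IY = 100 * (protein_offered - protein_unbound) / protein_offered
AY = 100 * activity_immobil / activity_free
print(f"IY = {IY:.0f} %, AY = {AY:.0f} %")   # 81 %, 89 % with these inputs
```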
These findings were compared to those in previous research, in which immobilization of the HRP enzyme on NH2-MOF-Zr exhibited an immobilization yield of 76% and an activity yield of 82% [31]. HRP immobilization on CuONS-PMMA yielded an immobilization yield of 73% [38].
Scheme 1. Synthesis of the BDC-Cu MOFs and enzyme immobilization.
In another study by Cao et al. [39], soybean epoxide hydrolase was immobilized on UIO-66-NH2 with an enzyme activity of 88.0%. Salgaonkar et al. [40] reported that glucoamylase and α-amylase enzymes were successfully immobilized in a metal-organic framework (MOF) structure, resulting in a yield of 68% and an enzyme activity of 73.3 units per milligram. Immobilization of the α-amylase enzyme was attributed to the formation of permanent crosslinks between the enzyme and the carrier via EDC/NHS. This hypothesis was supported by strong covalent binding of the enzyme to BDC-Cu, which also exhibited efficient loading and α-amylase performance.
The effectiveness of the enzyme was enhanced by reduction of the steric barriers surrounding its active sites, which was facilitated by the large surface area of the carrier and the presumed even distribution of the enzyme [41]. The establishment of robust and distinct chemical linkages within the MOFs and enzymes was primarily influenced by covalent bonding [42]. The presence of multiple covalent bonds on the surfaces of MOFs and enzymes minimizes their structural flexibility and provides stability, preventing protein leakage, unfolding, collapse, or denaturation [43].
FTIR analysis
The chemical composition and functional components of the synthesized supporter and BDC-Cu MOFs@α-amylase were investigated with FTIR spectral analyses over the spectral range up to 4000 cm−1 (Fig. 1). The FTIR analysis revealed the presence of O-H asymmetric stretching vibrations at 3449 cm−1. The bands observed at 1665 cm−1 and 1569 cm−1 corresponded to the C=O stretching vibrations of the carbonyl group in BDC, as well as the C-C skeletal vibrations of the aromatic ring. The strong band at 1398 cm−1 was assigned to C-O stretching vibrations. Moreover, the absorption bands at 831 cm−1 and 1156 cm−1 were attributed to symmetric and asymmetric stretching vibrations of O-C=O. The presence of the immobilized α-amylase enzyme was confirmed by the changes seen in the substrate spectrum following enzyme immobilization. Additionally, a glycosidic C-O-C band appeared at 1048 cm−1, together with a broad band at 1644 cm−1 corresponding to the C=O stretching vibration (amide I) and a weak band at 1530 cm−1 attributed to C-N stretching and N-H bending vibrations (amide II).
Morphological characterization of the support
SEM and EDX analyses were used to study the surface morphologies and elemental compositions of BDC-Cu MOFs and BDC-Cu MOFs@α-amylase, as shown in Fig. 2. The SEM micrograph in Fig. 2a revealed the distribution of particles in the copper MOF, which appeared as irregularly shaped flakes arranged in wave clusters. The image clearly demonstrated that the particles were highly agglomerated. In Fig. 2b, the SEM image displays the changes occurring in the surface morphology of the BDC-Cu MOFs after the immobilization of α-amylase. Compared to the morphology of the BDC-Cu MOFs alone, there were observable changes in the appearance of the surface.
Zeta potentials and the average hydrodynamic sizes of particles
The zeta potentials of the BDC-Cu MOFs and BDC-Cu MOFs@α-amylase are presented in Table 1 and Figs. 1S and 2S (supplementary file). The BDC-Cu MOFs showed a zeta potential of −8.29 mV, which indicated that the BDC-Cu MOF particles had negative surface charges in the liquid medium. This could have arisen from several factors, including the presence of charged functional groups on the surface of the MOFs, dissociation of ionizable groups, or adsorption of charged species from the surrounding medium. After enzyme immobilization, the surface charge of the material support may undergo changes due to the attachment of enzymes or other biomolecules, which can alter its zeta potential. The change in the zeta potential from −8.29 to −10.5 mV after the immobilization of α-amylase indicated an increased negative surface charge for the BDC-Cu MOF particles. Immobilization of α-amylase onto the BDC-Cu MOFs may have introduced additional charged functional groups or altered the surface properties of the MOFs, resulting in the more negative zeta potential. The α-amylase molecules themselves contain charged residues or groups that contributed to the overall surface charge of the immobilized system [45].
Overall, the change in zeta potential to a more negative value (−10.5 mV) after the immobilization of α-amylase suggested an alteration in the surface charge of the BDC-Cu MOFs due to the presence and interactions of the immobilized α-amylase. The intensity-weighted mean hydrodynamic size, also known as the Z-average, is used to describe the hydrodynamic size distribution of particles in a sample. It is important to note that the Z-average is a measure of the hydrodynamic size, which includes the size of the particle or molecule as well as the surrounding solvent or hydration layer. In this study, the Z-average of the material support before immobilization of the α-amylase was 2145 d.nm, the average diameter of the particles or molecules comprising the material support. After the immobilization of α-amylase, the Z-average increased to 4913 d.nm. This increase suggested that the hydrodynamic diameters of the particles on the material support increased, which was attributed to the presence of the immobilized α-amylase and its contribution to the larger overall size of the system. The increased Z-average could be due to the binding of α-amylase molecules onto the material support, leading to the formation of larger aggregates or complexes [46].
Reusability and stability
Enhanced usability, maximized catalyst cycling, and reduced costs were the key objectives for the immobilization of α-amylase. The reusability of the immobilized α-amylase is demonstrated in Fig. 5a, revealing a significant improvement in both catalytic activity and reusability compared to free α-amylase. Unlike free α-amylase, which can only be used once, the immobilized form retained approximately 81% of its catalytic activity even after the 10th cycle. This confirmed the benefit of immobilizing α-amylase, which led to cost reductions and improved economic benefits. Consequently, the immobilized α-amylase exhibited a substantial increase in reusability, laying a solid foundation for future use in biocatalysis. These results are in line with reports that other carrier supports, such as a modified acrylic fabric [1] and an amidoximated acrylic fabric [47], preserved the activity of α-amylase during multiple repetitions. Specifically, the modified acrylic fabric retained 72% of its original activity after 10 repetitions, while the amidoximated acrylic fabric maintained 50% of its original activity after 15 repetitions. The loss of activity seen after repeated cycling could have resulted from alterations in the enzyme structure or adsorption of substrates and products at the reactive sites [13].
To evaluate the storage stabilities of the free and immobilized α-amylases, their relative activities were assessed following incubation in sodium acetate buffer (50 mM, pH 5.5) at 4 °C for 8 weeks. The results demonstrated that BDC-Cu MOFs@α-amylase and free α-amylase retained 78% and 49% of their initial activities, respectively, after an 8-week storage period at 4 °C (as depicted in Fig. 5b). The immobilized α-amylase exhibited significantly improved storage stability compared to the free enzyme. BDC-Cu MOFs@α-amylase, in particular, demonstrated superior retention of enzymatic activity, suggesting its potential for prolonged storage and subsequent use in various biocatalytic applications. These findings were consistent with previous studies conducted by other researchers, who also found that immobilized enzymes retained their enzymatic activity more effectively than free enzymes during storage. For instance, Dhavale et al. [48] reported that free α-amylase retained only 18% of its activity after 20 days, whereas amylase immobilized on chitosan-coated MNPs preserved 66% of its activity during the same period. Similarly, Sohrabi et al. [49] found that α-amylase immobilized on silica-coated Fe3O4 nanoparticles maintained up to 79% of its activity after 12 days of storage. These results highlight the increased storage stability conferred by enzyme immobilization. The enhanced storage stability was attributed to the robust and stable structure of the immobilized enzyme on the surface of the MOFs, since the immobilization process provided a rigid support for the enzyme and protected it from external factors that may cause degradation or denaturation [50].
Effects of temperature and pH
The optimal temperatures and pH values were determined for the free and immobilized α-amylases, since these are crucial in assessing their suitability for biotechnological processes. α-Amylases that function effectively at high temperatures hold significant potential for various industrial applications. In this study, the temperature supporting maximum activity of the free α-amylase was 50 °C, while the immobilized form exhibited an optimum temperature of 60 °C (Fig. 6a). The activity of the free enzyme was 81% at 60 °C; however, as the temperature increased further, the activity gradually declined, reaching 39% at 80 °C. On the other hand, after immobilization, α-amylase demonstrated improved activity at elevated temperatures, retaining 71% at 80 °C. These results provided further evidence for the effectiveness of immobilization in preserving enzymatic activity at elevated temperatures and highlight the reduced sensitivity of immobilized α-amylases to temperature fluctuations. Similar findings have been reported in previous studies. For instance, when α-amylase from Bacillus subtilis was immobilized on a hydroxyapatite-decorated ZrO2 nanocomposite, it retained 80% of its activity after incubation at 80 °C [7]. In another study, α-amylase from Aspergillus oryzae was immobilized on a novel hybrid support and exhibited optimal activity at 60 °C [51]. Compared with these reports, the immobilized α-amylase in this study demonstrated higher enzymatic activity at 80 °C than α-amylase from Arabian balsam immobilized on calcium alginate/Fe2O3 nanocomposite beads, which exhibited less than 55% activity at 80 °C [52]. The improved activity at elevated temperatures was attributed to alterations in the microenvironment surrounding the immobilized enzyme. This modified microenvironment shielded the enzyme from temperature-related fluctuations and protected it from thermal denaturation. As a result, the immobilized enzyme exhibited improved resistance to heat and maintained its activity at elevated temperatures [53]. These findings underscore the advantages of immobilization in increasing the stability and activity of α-amylase, particularly at elevated temperatures. The immobilization process provides a means to overcome the limitations associated with free enzymes and makes the immobilized α-amylase a valuable tool for various biotechnological applications that require efficient enzymatic activity at high temperatures.
The environmental pH influences the activity of α-amylase, so it was investigated in this study.The effects of pH on the catalytic properties of both the free and immobilized α-amylases were examined (Fig. 6b), which revealed that the catalytic activity of free α-amylase increased as the pH was increased from 4.5 to 6.0 and showed maximum activity at pH 6.0.However, as the pH increased from 6.0 to 9.0, the catalytic activity gradually decreased.These findings indicated that the optimal pH for free α-amylase is 6.0.In comparison, DBC-Cu MOFs@α-amylase resisted degradation by acidic or basic conditions and showed an expanded pH range.The catalytic activity of DBC-Cu MOFs@α-amylase increased as the pH was increased from 4.5 to 7.5.The maximum catalytic activity was observed within the pH range 6.0-7.5.However, above pH 7.5, the catalytic activity gradually decreased with further increases in the pH.These findings indicated that the DBC-Cu MOFs@α-amylase had a broader pH activity range, specifically within the pH range 6.0-7.5.The improved stability and pH resistance of the DBC-Cu MOFs@α-amylase was attributed to changes in the surface charges of the DBC-Cu MOFs and their impact on the ionic environment surrounding the active center of α-amylase.The immobilization of α-amylase within the DBC-Cu MOF structure provided protection and increased the stability of the enzyme.By modulating the surface charge, the DBC-Cu MOFs created a favorable environment for the immobilized α-amylase, which enabled it to withstand acidic and alkaline conditions more effectively.This modification of the enzyme microenvironment improved the resistance to pH changes, ultimately expanding its functional range and its applicability in diverse biotechnological processes.Previous studies investigated the immobilization of α-amylase with different materials and reported their pH profiles.For example, when α-amylase was immobilized on ultrafine polyvinyl alcohol, the optimal pH was 6.0, and the enzyme retained approximately 80% of its activity at pH 8.0 [49].In another study, the α-amylase from Bacillus subtilis was immobilized on a metal-organic framework nanocomposite, and the optimal pH was 6.5, with the enzyme maintaining over 70% of its activity at pH 8.0 [32].Acet et al. documented the immobilization of α-amylase onto poly(2-hydroxyethyl methacrylate) that attached to copper.The results of the study revealed that the immobilized enzyme exhibited enhanced stability within a pH range of 6-7.5 [54].
Kinetic behavior of a-amylases
The reaction rates were measured at different substrate concentrations in sodium acetate buffer (pH 5.5) with both free α-amylase and BDC-Cu MOFs@α-amylase. The double-reciprocal method was used to plot the data and determine the values of Km and Vmax (Table 2). For free α-amylase, the linear regression equation was y = 1.644x + 19.18, with a Vmax of 0.61 U/mg of protein. In contrast, for BDC-Cu MOFs@α-amylase, the linear regression equation was y = 2.689x + 14.69, with a Vmax of 0.37 U/mg of protein. These Vmax values indicated that the structure of the enzyme was altered by covalent interactions with the cross-linker, which increased the stability of BDC-Cu MOFs@α-amylase. The Km value, which reflects the affinity between the enzyme and substrate, was significantly lower for BDC-Cu MOFs@α-amylase (5.46 mM) than for free α-amylase (11.67 mM). This suggested that the enzyme immobilized on the surface of the BDC-Cu MOFs had more accessible active sites and an increased affinity for the starch substrate. The covalent interactions and immobilization of α-amylase onto the BDC-Cu MOF structure enhanced the stability and substrate affinity of the enzyme, making it a promising system for improved enzymatic activity in starch degradation. Atiroglu et al. observed similar trends for the immobilization of α-amylase on a metal-organic framework: they found decreases in both the Km and Vmax values compared to those of the free α-amylase enzyme [32]. The decrease in Km indicated an increased affinity between the immobilized α-amylase and the substrate, while the reduction in Vmax suggested a decrease in the maximum reaction rate. These findings were consistent with the notion that enzyme immobilization can alter the enzyme structure and microenvironment and change the reaction kinetics. The study by Atiroglu et al. provided further evidence that immobilization, such as with metal-organic frameworks, can impact the enzymatic activity and substrate affinity of α-amylase [32].
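As a rough illustration of the double-reciprocal analysis described above, the following sketch fits 1/v against 1/[S] by linear regression and recovers Km and Vmax from the slope and intercept of the Lineweaver-Burk line (1/v = (Km/Vmax)(1/[S]) + 1/Vmax). The substrate concentrations and rates in the script are hypothetical placeholders, not the measured data of this study.

```python
import numpy as np

# Hypothetical (substrate concentration, reaction rate) pairs -- not the measured data.
S = np.array([1.5, 2.0, 2.5, 3.0, 4.0])       # substrate concentration
v = np.array([0.18, 0.22, 0.26, 0.29, 0.34])  # reaction rate (U/mg)

# Double-reciprocal (Lineweaver-Burk) fit: 1/v = (Km/Vmax)*(1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)

Vmax = 1.0 / intercept   # intercept on the 1/v axis equals 1/Vmax
Km = slope * Vmax        # slope equals Km/Vmax

print(f"Km = {Km:.2f}, Vmax = {Vmax:.2f}")
```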
Enhancement of the antioxidant capacity of certain foods
Polyphenols are natural compounds found in various plant-based sources, such as cereals, vegetables, fruits, flowers, and tea [55]. They are a significant class of secondary plant metabolites [56]. Numerous studies have highlighted the positive health benefits of polyphenolic substances, including antioxidant, antiviral, anti-inflammatory, antithrombogenic, antiallergenic, antiasthma, antidiabetic, and anticancer properties [57][58][59]. One notable aspect of polyphenols is their association with dietary fiber. Prior research demonstrated that treating cereals with hemicellulose, a dietary fiber, increased the antioxidant activities of the cereals [60]. This suggested that the presence of hemicellulose, which is rich in polyphenols, could enhance the overall antioxidant potential of cereals. Phenolic compounds are potent antioxidants found in various plant-based foods. Some of these compounds are bound to starch molecules, limiting their antioxidant activity. By breaking down the starch, α-amylase increased the accessibility of these antioxidants. In this study, both maize flour and finger millet samples were treated with free and immobilized α-amylase, and the polyphenol contents were measured before and after the enzyme treatment. The results are presented in Table 3. Compared to the control samples, the total phenolic content of the maize flour treated with free α-amylase was increased by a factor of 1.28, while the immobilized α-amylase treatment resulted in a 1.39-fold increase. Similarly, in the case of finger millet, the free α-amylase treatment led to a 1.35-fold increase in total phenolic content, and the immobilized α-amylase treatment resulted in a 1.64-fold increase. Interestingly, the use of a protease enzyme in a separate experiment also demonstrated increased antioxidant activity, possibly due to reduced interactions between the proteins and phenolic compounds [61]. The results highlighted a significant correlation between the total phenolic contents of maize flour and finger millet and the IC50 values obtained from the antioxidant activity assays. As the phenolic content increased, the IC50 values for the ABTS and DPPH assays decreased, indicating enhanced antioxidant activity. Accordingly, when maize flour was treated with the immobilized enzyme, the IC50 values showed 1.44-fold (DPPH) and 1.53-fold (ABTS) decreases. Similarly, the treatment of finger millet with immobilized α-amylase resulted in 1.33-fold (DPPH) and 1.26-fold (ABTS) decreases in the IC50 values. These results demonstrated that treatment with α-amylase enhanced the antioxidant capacities of the maize flour and finger millet extracts, as evidenced by the decreased IC50 values obtained from the ABTS and DPPH assays.
Treating certain foods with α-amylase increased their antioxidant capacities. α-Amylase specifically acts on starch and breaks it down into smaller molecules. Many antioxidant compounds in foods are bound to starch molecules, which limits their antioxidant activities. By breaking down the starch, α-amylase increases the accessibility of these antioxidants [1]. In a related study conducted by Dey and Banerjee, α-amylase was used to increase the antioxidant capacity of wheat [62]. Similarly, Yu et al. utilized a combination of cellulase and α-amylase to release phenolic acids from barley, thereby enhancing its antioxidant characteristics [63].
Table 2
The kinetic behavior of free and immobilized α-amylase.
Conclusion
In conclusion, copper-based metal-organic frameworks (MOFs) were synthesized with a casting approach and used to immobilize α-amylase. The immobilized α-amylase exhibited superior properties compared to the free form. Covalent binding of α-amylase to the BDC-Cu MOFs resulted in a high immobilization yield (81%) and activity yield (89%). Characterization techniques, including FTIR, SEM, EDX, and XRD, confirmed the compositions, morphologies, and crystalline characteristics of the synthesized materials. The zeta potential analyses provided insights into the surface charges of the BDC-Cu MOFs and BDC-Cu MOFs@α-amylase. The immobilized α-amylase maintained significant catalytic activity even after ten cycles, retaining 81% of its initial activity. Storage at 4 °C for eight weeks resulted in a higher activity retention rate for BDC-Cu MOFs@α-amylase (78%) compared to free α-amylase (49%). The immobilized form exhibited an optimum temperature of 60 °C, while the free form showed optimal activity at 50 °C. Both the free and immobilized α-amylases displayed peak catalytic activity at pH 6.0, with the immobilized form remaining highly active up to pH 7.5. The immobilized enzyme had a lower Km value (5.46 mM) and a lower Vmax value (0.37 U/mg of protein) than the free enzyme (11.67 mM and 0.61 U/mg of protein, respectively).
Furthermore, treatment of maize flour and finger millet samples with the free and immobilized α-amylases resulted in increased total phenolic contents and enhanced antioxidant activities. These findings highlight the potential of BDC-Cu MOFs as effective supports for enzyme immobilization, offering improved enzymatic performance. The utilization of BDC-Cu MOFs as a support material for enzyme immobilization opens up various applications in the food and biotechnology industries.
Materials and methodology
All chemicals and reagents were procured from Sigma-Aldrich Chemical Co. Maize flour and finger millet samples were procured from a local market in Jeddah, Saudi Arabia.
Carrier preparation
To create the material support, a solution was prepared by dissolving 15 mM Cu(NO3)2 and 15 mM 1,4-benzenedicarboxylic acid in 80 mL of DMF. The solution was stirred gently, and 3 mL of trimethylamine was slowly added dropwise. The mixture was sealed and stirred at 80 °C for 2 h. The resulting BDC-Cu MOF crystals were separated by centrifugation, washed with DMF and dried.
Immobilization process
To immobilize α-amylase on the BDC-Cu MOFs, 200 mg of BDC-Cu MOFs was added to 10 mL of phosphate-buffered saline (50 mM PBS, pH 7.4), followed by the addition of 30 mg of EDC. The mixture was continuously stirred for 1 h at room temperature before adding 30 mg of NHS and stirring for an additional 1.5 h at room temperature. Next, a Falcon tube containing 80 units of α-amylase in 10 mL of PBS was used to immobilize the enzyme end-over-end for 12 h at room temperature. The resulting product (BDC-Cu MOFs@α-amylase) was separated by centrifugation and washed with phosphate-buffered saline, and the protein content was determined with the Bradford technique with bovine serum albumin as the reference standard [64]. The immobilization yield (IY%) and activity yield (AY%) were calculated with the following formulas:
Immobilization yield (IY%) = [(amount of protein introduced − protein in the supernatant) / amount of protein introduced] × 100
Activity yield (AY%) = (immobilized enzyme activity / initial activity) × 100
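The two yield expressions above reduce to simple ratios; the short sketch below evaluates them with placeholder numbers that are not the measured values of this study.

```python
# Placeholder measurements -- illustrative only, not the values reported in this study.
protein_introduced = 10.0    # mg of protein offered for immobilization
protein_supernatant = 1.9    # mg of protein left unbound in the supernatant
initial_activity = 80.0      # U, activity of the enzyme before immobilization
immobilized_activity = 71.0  # U, activity measured on the carrier

# Immobilization yield: fraction of the offered protein that ended up bound to the carrier.
IY = (protein_introduced - protein_supernatant) / protein_introduced * 100

# Activity yield: activity retained by the immobilized enzyme relative to the initial activity.
AY = immobilized_activity / initial_activity * 100

print(f"IY% = {IY:.1f}, AY% = {AY:.1f}")
```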
α-Amylase activity measurements
The Miller method was employed to determine the activities of both the immobilized and soluble forms of α-amylase [65]. To determine the activity of the immobilized enzyme, a standard protocol was followed with 10 mg of BDC-Cu MOFs@α-amylase. The immobilized and soluble forms of the enzyme were separately mixed with 1 mL of a 1% starch solution prepared in sodium acetate buffer (50 mM, pH 5.5) and incubated at 37 °C for 30 min. To develop the color, 1 mL of DNS reagent was added; for the immobilized enzyme, the carrier was first separated from the reaction mixture and washed with distilled water before the DNS reagent was added. The reaction mixture was then incubated at 37 °C for 30 min, and the absorbance was measured at 560 nm.
Characterization of the material support
The morphologies of both the BDC-Cu MOFs and BDC-Cu MOFs@α-amylase were analyzed with scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (FEG-SEM: Quattro S FEG, SEM-Thermo Fisher, NL). Fourier transform infrared spectroscopy (FTIR, PerkinElmer Spectrum 100) was utilized to characterize the functional groups present in the BDC-Cu MOFs and BDC-Cu MOFs@α-amylase. An XRD system (XMD-300, UNISANTIS, Germany, XQ Suite software) was used to study the crystallite sizes and structural phases of the nanostructured BDC-Cu MOF samples. The zeta potential of the support was measured with a Malvern laser particle size analyzer (Zetasizer Ver. 7.12, UK).
Reusability and storage stability
Immobilized enzymes offer significant benefits over their free counterparts in terms of reusability. To investigate the reusability of the immobilized enzyme under optimal conditions, BDC-Cu MOFs@α-amylase was removed from the reaction mixture after the initial use by centrifugation, and any remaining substrate or product was washed away with sodium acetate buffer (50 mM, pH 5.5). These steps were repeated after each cycle, and a new substrate solution was added. The residual activity was then calculated as a percentage of that of the initial use (100%).
To evaluate the storage stability of free α-amylase and BDC-Cu MOFs@α-amylase, both were kept at 4 °C, and their residual activities were measured on a regular basis over a period of eight weeks to determine whether enzyme denaturation had occurred and affected their activities. The reported values represent the average of three measurements.
The activities of free α-amylase and BDC-Cu MOFs@α-amylase were evaluated in sodium acetate buffer (50 mM, pH 5.5) at various temperatures (30-80 °C) for a period of 15 min. The control value (100%) used in calculating the remaining percent activities of the free and immobilized enzymes was based on the activity at the optimal temperature.
Kinetic parameters
The maximum reaction velocities (Vmax) and Michaelis-Menten affinity constants (Km) for free α-amylase and BDC-Cu MOFs@α-amylase were determined by measuring their activities with starch as the substrate according to the method of Choi et al. [66]. The Michaelis-Menten equation was fitted to the experimental data with nonlinear regression analysis. The activity assay was conducted at pH 5.5 and 37 °C, with substrate concentrations ranging from 1.5 to 4 mg. The experiments were repeated three times, and the reported Vmax and Km values are the averages with standard errors.
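For the nonlinear regression mentioned here, a minimal sketch using scipy.optimize.curve_fit to fit the Michaelis-Menten equation v = Vmax·S/(Km + S) is given below; the substrate concentrations and rates are placeholders rather than experimental values.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

# Placeholder data in the 1.5-4 substrate range used in the assay -- not measured values.
S = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
v = np.array([0.12, 0.15, 0.17, 0.19, 0.20, 0.21])

# Nonlinear least-squares fit; p0 gives rough starting guesses for Vmax and Km.
(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=(0.3, 2.0))
print(f"Vmax = {Vmax:.3f} U/mg, Km = {Km:.3f}")
```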
Increased antioxidant capacities of certain foods treated with α-amylase
To conduct these experiments, maize flour and finger millet samples were ground into fine powders. Prior to testing, the powders were sieved through a 1 mm sieve. Each sample (0.5 g) was then combined with 4 mL of 0.1 M acetate buffer (pH 5.5) and autoclaved.
The mixtures were incubated with the enzyme at 50 °C for 2 h [67]. To terminate the enzymatic reaction, the temperature of the mixture was raised to 95 °C for 3 min. For the control samples, 1 mL of acetate buffer was used instead of the enzyme. The reaction mixtures were then refluxed with 10 mL of distilled water at 50 °C for 1 h before analysis. To evaluate the degree of hydrolysis, the total phenolic contents (TPCs) of the control samples and the enzyme-treated samples were determined with the methods described by Velioglu et al. [68]. The ABTS•+ and DPPH• scavenging activities of the samples were assessed with the techniques outlined by Ao et al. [69] and Re et al. [70], respectively.
Ethics
Not applicable.
After immobilizing the enzyme, the XRD pattern showed that the diffraction peaks at 17.5° and 26.7° remained unchanged, indicating that these specific crystalline structures were unaffected by the immobilization process. However, other diffraction peaks in the XRD pattern were shifted, suggesting alterations in the crystallographic properties of the material due to the interactions between α-amylase and the support material. These interactions included: 1) bonding of the enzyme facilitated by the presence of carboxylic groups in BDC, and 2) strong immobilization by the BDC-Cu MOFs attributed to the ability of Cu to facilitate cross-linking of the regular open frame structure of BDC [44].
Table 1
Zeta potentials and intensity-weighted mean hydrodynamic sizes of BDC-Cu MOFs and BDC-Cu MOFs@α-amylase.
Table 3
Improved antioxidant capacities of certain foods with free and immobilized α-amylases.
|
v3-fos-license
|
2020-09-03T09:04:05.961Z
|
2020-08-30T00:00:00.000
|
221659411
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/complexity/2020/2413564.pdf",
"pdf_hash": "48089a5a14c45acbfba951ec639f6c30bf035229",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44098",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"sha1": "72e7994c1aee287acd55334e2549a2d7959b3ea3",
"year": 2020
}
|
pes2o/s2orc
|
Numerical Calibration Method for Vehicle Velocity Data from Electronic Registration Identification of Motor Vehicles Based on Mobile Edge Computing and Particle Swarm Optimization Neural Network
Shenyang Institute of Automation (Guangzhou) Chinese Academy of Sciences, Guangzhou 511458, China Shenyang Institute of Automation Chinese Academy of Sciences, Shenyang 110016, China School of Electronics and Communication Engineering, Sun Yat-Sen University, Guangzhou 510006, China Technical Center of Huangpu Customs District China, Guangzhou 510730, China School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710061, China South China Agricultural University, Guangzhou 510642, China Yaz Technology Co., Ltd., Guangzhou 510630, China
Introduction
As urban development expands, the rapid increase in the number of vehicles on roads has caused various urban management problems and has led to the construction of large-scale, traffic-related infrastructure and the deployment of extensive vehicle-related facilities and equipment. These are specifically manifested in the use of various traffic sensing and road monitoring techniques and equipment, including video cameras, GPS, geomagnetic sensors, and radar. These devices are deployed on the road by local public security departments to ensure the safety and smooth operation of road traffic, strengthen the enforcement of traffic regulations, and ensure the intelligent control of city-level traffic. Intelligent control of traffic has become an important component of smart cities. However, video-based vehicle information recognition technology is susceptible to interference from environmental factors, such as weather, lighting, and distance from the vehicle. As a result, the effective recognition rate of video technology cannot meet the requirements of intelligent traffic management and cannot automatically identify false license plates, overlaid plates, and intentionally obscured plates. Electronic registration identification of motor vehicles based on a single video image cannot satisfy the needs of smart traffic and the need to maintain a general social order [1]. To meet these needs, the application of electronic registration identification (ID) is proposed and promoted on a national level by establishing a national standard. Electronic vehicle registration identification has been promoted in a limited number of cities and is used for intelligent parking [2], smart signal control and application [3,4], web-linked smart vehicles [5], special vehicle passing management and control [6], and traffic operation supervision and environmental driving restrictions [7]. However, due to various technical problems, the vehicle velocity data collected using electronic vehicle registration identification still cannot serve for traffic law enforcement; this has restricted the scope of application of electronic vehicle registration identification. In terms of increasing the accuracy of collected data with spatial attributes, many researchers have conducted studies on multiobjective optimization algorithms in recent years. Grid-based methods have found good applications [8][9][10]. For example, Wu [11], Luo [12], and Kong [13] located robot arm movement by constructing a grid; Leong [14] maintained the diversity of solutions by constructing a grid; Knowles [8] used adaptive grids to store the obtained nondominated vectors; Yang [15] used grid technology to apply evolutionary algorithms to solve high-dimensional multiobjective optimization problems; Li [16] used grids to simultaneously characterize convergence and distribution characteristics and proposed a multitarget particle swarm optimization algorithm based on grid sorting. The algorithm uses coordinate mapping, ordering of element optimization in the grid coordinate system, and the Euclidean distance between the element and the boundary of the approximate optimum.
To improve the accuracy of the velocity data collected using electronic registration identification of motor vehicles, we combine the results of previous researchers and propose a numerical calibration method for vehicle velocity data collected by electronic registration identification of motor vehicles based on a particle swarm optimization neural network. By comparing the optimized vehicle calibration velocity and the data collected by the OBD port of the test vehicle, the reliability of the acquired velocity data is further improved.
Installation and Connection Method for Reader/Writer of Electronic Vehicle Registration Identification
The collection of vehicle velocity data based on electronic registration identification of motor vehicles involves installing a comprehensive sensing base station at each key intersection or public security checkpoint, expressway ramp, and checkpoint at city entries. A detection coil should be installed 15 meters away from the law enforcement monitoring station or the key intersection on the main highway. To inspect the vehicles passing through the road, a gantry should be installed at a distance of 15 meters from the detection coil, and an electronic registration identification device should be installed on the gantry to automatically identify the license plate of the vehicle. The reader/writer antenna installed on the gantry acquires the data from passing vehicles equipped with an electronic registration identification of motor vehicles. When a vehicle comes into contact with the coil, the detection coil emits two signals; one signal goes to the high-definition video camera and the other goes to the reader/writer antenna. The vehicle license plate and the electronic registration identification signal of the vehicle are captured simultaneously. After the signal is read, it is uploaded to the comprehensive sensing base station for data processing.
The system at the comprehensive sensing base station consists of a license plate recognition system and a vehicle electronic registration identification decal read/write system. The system as a whole performs data collection, verification, transmission, and processing in practical applications. The overall layout is shown in Figures 1 and 2.
An appropriate location at the traffic checkpoint or the road junction is chosen to install the gantry. The induction coil is installed 15 meters from the gantry. Installed on the gantry are the electronic registration identification antenna, the high-definition video camera, and the fill-in light. Two trigger signals are sent out through the induction coil, one to the video camera and one to the vehicle electronic antenna. The antenna of the electronic registration identification of motor vehicles is connected to the reader/writer controller. The video camera is connected to a front-end computer through a network or to an exchanger.
Vehicle Velocity Detection Based on Reader/Writer of Electronic Registration Identification of Motor Vehicles.
The main method of using electronic registration identification of motor vehicles and readers/writers to acquire vehicle velocity data is ultrahigh-frequency radio frequency identification technology. The radio frequency identification reader, using ultrahigh-frequency identification technology, interacts with the radio frequency electronic registration identification tag installed on the windshield of the vehicle through the ultrahigh-frequency horizontally polarized directional antenna. The velocity of the vehicle is calculated by noting the time difference for the vehicle to pass through a fixed-length identification zone. Under normal circumstances, the transmission power of the RFID reader is 30-33 dBm, and the antenna gain is 10-12 dBi. Combined with comprehensive factors such as the sensitivity of the reader and the tag, the reading and writing coverage of the RFID reader is usually between 0 and 35 meters. The steps to calculate the velocity of a vehicle using an electronic registration identification tag and a reader are as follows: Step 1. For an arbitrary vehicle shown in Figure 1, the identification record and time data are recorded as the vehicle enters the identification section of the electronic reader/writer; the same records are also recorded as the vehicle leaves the identification section.
Step 2. Interruption in the recognition record is checked to determine that the vehicle has left the identification section.
The velocity is calculated using the formula V = S/(To − Ti), where V is the velocity with which the vehicle passes through the recognition section, S is the direct distance covered according to the electronic registration identification reader, that is, the length of the recognition section, Ti is the time when the vehicle enters the recognition section, and To is the time when the vehicle leaves the recognition section.
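A minimal sketch of this calculation is shown below; the section length and the timestamps are placeholder values, not readings from the deployed system.

```python
from datetime import datetime

def section_speed_kmh(entry_time: datetime, exit_time: datetime, section_length_m: float) -> float:
    """Speed through the recognition section: V = S / (To - Ti), converted to km/h."""
    elapsed_s = (exit_time - entry_time).total_seconds()
    if elapsed_s <= 0:
        raise ValueError("exit time must be later than entry time")
    return section_length_m / elapsed_s * 3.6

# Placeholder read records from the reader/writer (hypothetical values).
t_in = datetime(2020, 8, 30, 10, 15, 2, 200000)   # first read: vehicle enters the section
t_out = datetime(2020, 8, 30, 10, 15, 4, 700000)  # last read: vehicle leaves the section
print(f"{section_speed_kmh(t_in, t_out, 35.0):.1f} km/h")  # 35 m recognition section assumed
```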
However, constrained by factors such as the material and angle of the vehicle's windshield, the presence of a film, the location of the decal, or other interference, different electronic registration identification readers/writers often have different maximum reading distances. These differences in the maximum reading distance prevent the electronic reader from determining where the corresponding vehicle entered the recognition section, that is, where it was first read. As a result, the reader cannot accurately calculate the vehicle velocity based on the distance and time it read.
Given the various interferences affecting the maximum read distance of the electronic registration identification reader/writer, the calculation of the vehicle velocity in actual practice is often combined with the received signal strength indication (RSSI), that is, the strength of the signal returned by the electronic registration identification tag. The higher the RSSI value, the closer the vehicle is to the antenna, and vice versa. Specifically, the position of the vehicle at different times may be calculated by using the RSSI value returned from the electronic registration identification tag of the vehicle at different times, and the velocity of the vehicle may then be calculated according to the time differences of the corresponding positions. That is, by analyzing the RSSI value of the feedback signal that is first recognized when the corresponding vehicle enters the recognition section, it can be determined whether the vehicle position at the first recognition is at the boundary between the direct-shot area and the blind area or at the farthest point in the reflection area. The formula V = S/(To − Ti) can then be used to calculate the velocity of the vehicle. In actual implementation, however, the presence of blind spots and changing environmental conditions may affect the results of the velocity calculation. Analysis shows that although a blind spot may disappear, the radio frequency identification effect in the cross section of the blind area will still be different from that in the direct area. This is judged by the fluctuation of the number of recognitions per unit time from when the vehicle enters the recognition section to when it leaves the recognition section. If the fluctuation is not large, the vehicle has been recognized in the direct-shot area the whole time. If the number of recognitions per unit time varies greatly, so that one period shows a very high number of recognitions per unit time and another period shows a relatively low number, or if a vehicle in another lane is recognized and calculated by the electronic registration identification reader, then it is possible that a blind spot had disappeared. In a real environment, it is necessary to set up different calculation methods for calculating vehicle velocity in different situations. Since it is often difficult to accurately select the correct calculation method, the calculated vehicle velocity often deviates from the actual values.
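The decision logic outlined in this paragraph can be illustrated roughly as follows. The RSSI boundary value, the fluctuation threshold, and the idea of switching to a longer effective section length are assumptions made for illustration only, not parameters of the actual reader/writer.

```python
import statistics

def estimate_speed(reads, direct_length_m=25.0, extended_length_m=35.0,
                   rssi_boundary=-60.0, cv_threshold=0.5):
    """Pick an effective section length from RSSI and read-rate fluctuation, then apply V = S/(To - Ti).

    reads: list of (timestamp_s, rssi_dbm) tuples for one tag, ordered in time.
    All thresholds and lengths here are illustrative assumptions.
    """
    t_in, rssi_in = reads[0]
    t_out, _ = reads[-1]

    # Reads per second over 1 s windows, to judge fluctuation of the recognition rate.
    duration = max(t_out - t_in, 1e-6)
    n_windows = max(int(duration) + 1, 1)
    counts = [0] * n_windows
    for t, _ in reads:
        counts[min(int(t - t_in), n_windows - 1)] += 1
    cv = statistics.pstdev(counts) / statistics.mean(counts)  # coefficient of variation

    # A strong first RSSI and a steady read rate suggest the first read was at the
    # direct-shot boundary; otherwise assume the longer effective section.
    length = direct_length_m if (rssi_in >= rssi_boundary and cv <= cv_threshold) else extended_length_m
    return length / duration * 3.6  # km/h
```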
Velocity Calibration Method Based on Particle Swarm Optimization Neural Network Algorithm
Since vehicle velocity data are affected by various environmental and equipment factors, we use the velocity data read by the OBD interface on the vehicle as the reference standard. We establish a velocity calibration model based on a particle swarm optimization neural network algorithm and incorporate various types of data influenced by environmental factors. In order to preserve the diversity of velocity data acquisition methods using electronic registration identification of motor vehicles, the velocity data come from data collected by the electronic registration identification readers/writers on vehicles in the lane of interest and in other lanes. On the two-way, eight-lane road section, the velocity of vehicles passing through the recognition section can theoretically be calculated from the data read by four electronic registration identification readers/writers. This completes the detection network of electronic registration identification readers/writers set up to improve the accuracy of velocity calibration. In addition, to improve the calculation efficiency of the neural network, attention is given to the distribution efficiency of sensing tasks and the collection efficiency of sensing data in the edge network in order to improve the scope of coverage and sensing efficiency of the reader application. The readers that have deployed the task distribution service serve as the source of task distribution and continuously distribute vehicle-borne velocity detection tasks to mobile devices. During the data collection process, the vehicle-borne devices serve as the source of the velocity data detection, and the data are transmitted back to the readers involved in data collection [17,18].
RBF Neural Network.
The RBF neural network is an artificial neural network that uses local adjustment to perform function mapping. It has strong input-output mapping capability and is an optimal network for performing mapping functions among feed-forward networks. It has a strong nonlinear approximation capability, a simple network architecture, and a fast learning speed. The output matrix of the hidden layer after iterative convergence has a linear relationship with the output. It is an ideal algorithm for calculating the degree of influence [19,20].
Using RBF as the "base" of the hidden unit constitutes the hidden layer space, so that the input vector can be directly mapped to the hidden space, without the need for a weighted connection. After the center point of the RBF is determined, this mapping relationship is also determined.
The mapping of the hidden layer space to the output space is linear; that is, the output of the network is a linear weighted sum of the outputs of the hidden units, and the weights here are the tunable parameters of the network. Here, the role of the hidden layer is to map the input vector from a low dimension to a high dimension, so that a case that is linearly inseparable in the low dimension can become linearly separable in the high dimension. Thus, the network's mapping from the input to the output is nonlinear, while the network's output is linear with respect to the adjustable parameters. The weights of the network can be solved directly from linear equations, which greatly speeds up learning and avoids local minima.
In the two-stage learning process of the RBF neural network, unsupervised learning mainly determines the radial basis vector and normalized parameters of the Gaussian basis function of each node of the hidden layer based on the input samples; that is, it determines the center and variance of the hidden layer basis functions. In the supervised learning phase, the weights between the hidden layer and the output layer are calculated using the least squares method after the hidden layer parameters have been determined. The output of the i-th hidden unit of the RBF neural network is [20][21][22]
h_i(t) = exp(−‖x(t) − c_i‖^2 / s_i^2),
where x(t) is the input vector of the network at time t, c_i is the central vector of the i-th unit of the hidden layer, s_i is the shape parameter of the Gaussian function, with s_i > 0, 1 ≤ i ≤ L, and L is the number of nodes of the hidden layer. The overall output of the RBF neural network is the linear combination
ŷ_j(t) = Σ_{i=1}^{L} w_ij h_i(t), j = 1, 2, . . . , m,
where w_ij is the output-layer weight connecting the i-th hidden node to the j-th output node. For an RBF network with k input nodes, m output nodes, and n learning samples, the error objective function is
E(n) = Σ_{t=1}^{n} λ^(n−t) Σ_{i=1}^{m} δ_i(t)^2, with δ_i(t) = y_i(t) − ŷ_i(t),
where λ is the forgetting factor and δ_i(n), y_i(n), and ŷ_i(n) are, respectively, the error of the output node, the expected output, and the actual network output. The operational performance of the RBF neural network mainly depends on the centers and radial basis widths of the hidden layer functions and the weights of the output layer. Additionally, the number and the centers of the hidden nodes in the RBF network are difficult to determine, which affects the accuracy of the entire network. When the selection of the training samples contains a high degree of randomness, the number of errors will increase rapidly, and it is difficult to meet requirements; these errors will contribute directly to the calculated predictions. In addition, when determining these parameters, the learning strategy of the traditional RBF neural network has a great disadvantage, namely that it can only find the optimal solution in a local region of the space. If these parameters are not set properly, the accuracy of the approximation will be reduced or the network will diverge. Furthermore, since the input samples contain various types of data, including discrete values, continuous values, and missing values, the training samples are usually obtained by random extraction. The centers of the RBF neural network hidden layer basis functions are selected from the input sample set, which creates a large dependence on the training samples. In many cases, it is difficult to reflect the true input-output relationship of the system, and ill-conditioned data can easily occur when there are too many initial center points.
This sample data selection dilemma is a key problem to be solved when an RBF neural network is used for nonlinear system modeling. To this end, we introduce in this paper a particle swarm optimization algorithm to optimize the parameters of the RBF neural network, and we apply it to the numerical calibration of vehicle velocity detection based on electronic registration identification readings.
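A minimal sketch of the RBF mapping described above is given below, assuming the common Gaussian basis exp(−‖x − c_i‖²/s_i²); the centers, widths, and output weights are random placeholders, and the 4-12-1 layout mirrors the network structure reported later in the experiments.

```python
import numpy as np

def rbf_forward(X, centers, widths, W):
    """RBF network forward pass.

    X:       (N, n) input vectors
    centers: (L, n) hidden-unit centers c_i
    widths:  (L,)   Gaussian shape parameters s_i > 0
    W:       (L, m) output-layer weights
    Returns the (N, m) linear combination of the hidden-unit outputs.
    """
    # Squared distances between every input and every center: shape (N, L)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / widths ** 2)   # Gaussian hidden-layer outputs
    return H @ W                    # linear output layer

rng = np.random.default_rng(0)
X = rng.random((5, 4))              # e.g. 4 input features per velocity record
centers = rng.random((12, 4))       # 12 hidden nodes, matching the 4-12-1 structure
widths = np.full(12, 0.5)
W = rng.random((12, 1))
print(rbf_forward(X, centers, widths, W).shape)   # (5, 1)
```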
Particle Swarm Optimization Algorithm.
Particle swarm optimization (PSO) is an efficient heuristic parallel search algorithm. Since it converges rapidly during optimization and does not require gradient information of the objective function, it enjoys the advantage of being simple and easy to implement, and thus has been applied widely in the field of engineering technology [23,24]. In the PSO algorithm, the solution of the optimization problem is abstracted as an entity, the "particle," without weight or volume. Each particle has a fitness value determined by the optimized function, and the search space is explored at a certain velocity through cooperation and competition among the particles. The specific process is to first initialize a group of random particles; these particles then update their velocities and positions by tracking two optimal particles. The two optimal particles are the individual optimal particle (the optimal solution found by the particle itself, known as the individual optimal position) and the global optimal particle (the optimal solution found by the entire population up to now, known as the global best position).
Let the search space be n-dimensional and let the particle swarm consist of N_s particles; X_i(t) = (x_i1, x_i2, . . . , x_in) and V_i(t) = (v_i1, v_i2, . . . , v_in) are the position and velocity, respectively, of the i-th particle at time t in the search space, P_i(t) is the personal best position (referred to as pbest) of the i-th particle at time t, and G(t) is the global best position (referred to as gbest) at time t. According to the PSO algorithm, the update formulas for particle velocity and position are as follows [23][24][25]:
v_ij(t + 1) = w v_ij(t) + c_1 r_1 [p_ij(t) − x_ij(t)] + c_2 r_2 [g_j(t) − x_ij(t)],
x_ij(t + 1) = x_ij(t) + v_ij(t + 1),
where w is the inertia weight and c_1 and c_2 are the learning factors or acceleration constants, respectively, for adjusting the step size of the particle flying toward the personal best position and the global best position. In order to reduce the probability of the particle flying out of the search space, the velocity of the particle in each dimension is usually limited to a certain range. Also, r_1 and r_2 are random numbers obeying a uniform distribution in the interval [0, 1], i = 1, 2, . . . , N_s, N_s is the scale of the particle swarm, and j = 1, 2, . . . , n.
Traditional PSO algorithms generally follow the following steps to complete the iteration [26]: step one, initializing the velocity and position of the particle swarm; step two, calculating the fitness value of each particle in the particle swarm; step three, updating the personal best position of each particle; step four, updating the global best position of each particle; step five, updating the velocity and position of the particles and taking certain measures to ensure that the particles remain in the search space; step six, determining whether the termination conditions are met (terminate the algorithm if yes, or return to step two).
However, in the updating process of the traditional PSO algorithm, the inertia weight, learning factors, and maximum velocity jointly determine the balance between the global exploration and local development capabilities of the algorithm, which has a direct impact on the search performance of the algorithm.
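A compact sketch of the velocity and position update described above, with an inertia weight w, learning factors c1 and c2, and a velocity clamp, is shown below; the objective function and all parameter values are placeholders rather than the settings used in the experiments.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=46, iters=200, w=0.7, c1=2.0, c2=2.0,
                 x_bounds=(0.0, 1.0), v_max=0.2, seed=0):
    """Basic PSO loop: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v."""
    rng = np.random.default_rng(seed)
    lo, hi = x_bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-v_max, v_max, (n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -v_max, v_max)          # keep velocities within the allowed range
        x = np.clip(x + v, lo, hi)             # keep particles inside the search space
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                  # update the personal best positions
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()     # update the global best position
    return g

# Toy usage: minimize a simple quadratic (stand-in for the network's error function).
print(pso_minimize(lambda p: ((p - 0.3) ** 2).sum(), dim=5))
```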
RBF Neural Network Based on Particle Swarm Optimization.
The key point of the optimization process is that the parameters of the RBF neural network are encoded into a particle vector; after initialization, the individual optimal value and the global optimal value of the particles are decoded back into the corresponding parameters of the RBF neural network. The fitness value of each particle is calculated, and the individual optimal value and the global optimal value are updated after comparison with the current individual optimal value and the global optimal value. Finally, the global optimal solution is decoded and restored to the parameters of the network, and the RBF network prediction is carried out. Before neural network training, the convergence of the RBF neural network can be ensured by normalizing all sample data.
By combining the calibration requirements for vehicle velocity detection data acquired using electronic registration identification of motor vehicles with the characteristics of PSO and the RBF neural network, we arrived at the following procedure for optimizing the kernel function parameters of the RBF neural network with the PSO algorithm (a minimal sketch of the normalization and fitness computation appears after Step 7 below): Step 1. Data Normalization Processing. In order to improve the generalization ability, the convergence speed, the handling of correlation between influencing factors, and the fitting effect of the neural network, we use the maximum-minimum value method to normalize the sample data to the [0, 1] range: y(k) = (x(k) − min(x(n))) / (max(x(n)) − min(x(n))), k = 1, 2, . . . , N, where min(x(n)) is the minimum value of the sample data and max(x(n)) is the maximum value of the sample data.
Step 2. The particle swarm and the neural network are initialized, the scale, i.e., the dimension of the particle swarm, is determined, and the mapping is established between the dimensional space of the PSO particles and the connection weights of the neural network. The population size is set as N_s and the maximum number of iterations as T_max. The population location matrix Rand(x) and the velocity matrix Rand(v), both of size N_s × M, are randomly generated, where M = k(n + 1), k is the number of nodes in the hidden layer, n is the dimension of the input feature vector, x_i,j (i = 1, 2, . . . , N_s; j = 1, 2, . . . , M) represents the j-th dimension component of the i-th particle position vector, (x_i,1, x_i,2, . . . , x_i,M) corresponds to the radial basis parameters of the RBF neural network, v_i,j represents the j-th dimension component of the velocity vector of the i-th particle, and the value of v_max is related to x_max and x_min.
Step 3. The fitness of each particle is calculated; the mean square error of the neural network is used to construct the fitness function of the PSO. The fitness function is a measure of the quality of a particle's position in space: the better the position of the particle, the better its fitness value. It is defined in terms of D(x_i,1, x_i,2, . . . , x_i,M), the sum of the averaged squared errors of the neural network when the kernel function parameters are set equal to (x_i,1, x_i,2, . . . , x_i,M); FitnessFunc_i denotes the resulting fitness function value of the i-th particle.
Step 4. For each particle, its fitness is compared with the fitness of the best location it has experienced, and if the former is better, pbest is updated.
Step 5. For each particle, its fitness is compared with the fitness of the best location experienced by the swarm, and if the former is better, gbest is updated.
Step 6. The velocity and position of the particles are updated, and dynamic adaptive adjustment is performed using the inertia weights. Steps 3 through 5 are repeated until the calculation requirements are met. When the algorithm iteration stops, the weights and thresholds of the neural network correspond to the global optimum, which is the optimal solution to the problem.
Step 7. The vehicle velocity data are calibrated using the trained RBF neural network while simultaneously assessing the accuracy of the velocity data.
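A minimal sketch of Step 1 and of the fitness evaluation in Steps 2-3 is given below; the particle layout (M = L(n + 1) components per particle) follows the description above, while the least-squares output-layer solution and the toy data are assumptions made for illustration.

```python
import numpy as np

def min_max_normalize(x):
    """Step 1: map each column of the samples to [0, 1] via (x - min(x)) / (max(x) - min(x))."""
    x = np.asarray(x, dtype=float)
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def rbf_mse_fitness(particle, X, y, L):
    """Decode one particle into L centers and L widths (M = L * (n + 1) components),
    solve the linear output weights by least squares, and return the mean squared
    error of the resulting RBF network -- the quantity used to score the particle."""
    n = X.shape[1]
    centers = particle[: L * n].reshape(L, n)
    widths = np.abs(particle[L * n:]) + 1e-6           # keep the shape parameters s_i > 0
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / widths ** 2)                      # hidden-layer output matrix
    w, *_ = np.linalg.lstsq(H, y, rcond=None)          # supervised stage: least squares
    return float(np.mean((H @ w - y) ** 2))

# Toy usage (placeholders): 4 features, 12 hidden nodes -> M = 12 * 5 = 60 components.
rng = np.random.default_rng(1)
X = min_max_normalize(rng.random((50, 4)))
y = rng.random(50)
print(rbf_mse_fitness(rng.random(60), X, y, L=12))
```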
Numerical Optimization of Multireader Velocity Data Using Mobile Edge Computing.
The shortcomings of the RBF neural network, including local optima and the effects of the inertia weight, learning factors, and maximum velocity on PSO, have a direct impact on the calibration results in the probabilistic sense. In order to further improve the accuracy of velocity calibration, the capability of having multiple readers simultaneously perform velocity detection on the same electronic registration identification tag had already been achieved before our study was conducted. After each reader/writer calibrates its velocity data using the RBF neural network based on the particle swarm optimization algorithm, the k-means method is used to find the cluster center of the values from the different readers/writers as the final velocity calibration value. The calculation method is as follows.
Assuming that the cluster partitioning is (C_1, C_2, . . . , C_k), the goal is to minimize the squared error
E = Σ_{i=1}^{k} Σ_{x∈C_i} ‖x − μ_i‖^2,
where μ_i is the mean vector of cluster C_i, i.e., μ_i = (1/|C_i|) Σ_{x∈C_i} x. In order to improve the computing efficiency of the readers/writers, the emphasis of the optimization algorithm is placed on the distribution efficiency of the sensing tasks and the collection efficiency of the sensing data in the edge network of the readers/writers. A certain reader/writer is designated as the main computing reader/writer, and readers/writers already deployed for task distribution services serve as sources for task distribution in the task distribution computation process. The main computing reader/writer continually distributes the vehicle-terminal velocity detection computing tasks to different readers. During the data collection and optimization process, all the readers/writers serve as sources for acquiring data from the vehicle-borne devices and return the data of completed computing tasks to the main computing reader/writer for the final calculation.
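The fusion step described here can be sketched roughly as follows: a one-dimensional k-means is run over the per-reader calibrated velocities, and the center of the most populated cluster is returned as the final calibration value. The cluster count and the sample readings are assumptions for illustration.

```python
import numpy as np

def fuse_reader_speeds(speeds, k=2, iters=50, seed=0):
    """Fuse calibrated speeds from several readers: run 1-D k-means and return
    the center of the largest cluster as the final calibration value (illustrative)."""
    v = np.asarray(speeds, dtype=float)
    rng = np.random.default_rng(seed)
    centres = rng.choice(v, size=min(k, len(v)), replace=False)
    for _ in range(iters):
        # Assign each reading to its nearest cluster center.
        labels = np.argmin(np.abs(v[:, None] - centres[None, :]), axis=1)
        new = np.array([v[labels == j].mean() if np.any(labels == j) else centres[j]
                        for j in range(len(centres))])
        if np.allclose(new, centres):
            break
        centres = new
    largest = np.bincount(labels, minlength=len(centres)).argmax()
    return float(centres[largest])

# Hypothetical calibrated values from four readers covering the same passage.
print(fuse_reader_speeds([58.2, 57.9, 58.4, 63.1]))
```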
Test Method and Data Description.
The tests were conducted in a road section where the electronic registration identification readers had already been installed for testing. The road was a two-way, eight-lane urban expressway, with a gantry equipped with internet-connected testing instruments.
There were eight sets of electronic registration identification readers, with each traffic lane aligned with at least one reader for direct acquisition of the electronic registration identification data in the vertical direction.
Since the automobile electronic registration identification reader has the capability of group acquisition, data can be collected simultaneously when multiple electronic registration identifications pass through at the same time; we therefore placed 1000 electronic registration identification tags simultaneously on the test vehicles and scattered them at different locations inside the vehicles. In this manner, we demonstrated the interference immunity between the electronic registration identification tags. The data read by the OBD interface on the vehicle are taken as the true velocity of the moving vehicle. With the temporal data of the OBD interface synchronized with the temporal data of the readers, we selected three sedans as test vehicles, each carrying 1000 automotive electronic registration identification cards, to pass under the gantry's readers at different velocities. The sample data set was formed with the data read from the 1000 vehicle-borne electronic registration identifications as the input and the actual velocity data read by the OBD interface as the output, all matched with the time data. It should be noted that when the vehicles passed under the gantry, the velocity data of the electronic registration identifications in the vehicles were read by multiple readers, and each electronic registration identification can yield velocity data from different readers. The final complete sample data set was obtained by testing the three sedans in different lanes at different velocities, with the data from repeated tests matched. A comparison of the true values of the velocity and the measured values of the velocity is shown in Figure 3. The abscissa is the measured value of the velocity and the ordinate is the true value of the velocity. If the two values are the same, the point falls on the straight line y = x. If the true value is less than the measured value, the point falls below the straight line, and vice versa. Based on the data distribution shown in Figure 3, the differences between the true velocity and the measured velocity are shown in Figure 4.
Test Results and Analysis.
The experimental results show that the best effect is obtained when the number of hidden layer centers of the RBF neural network is 9 and the network structure is 4-12-1. The PSO algorithm parameters are set to c_1 = c_2 = 2.0, the particle population size is 46, and the maximum number of iterations is set to 500. Before neural network training, the convergence of the RBF neural network is ensured by normalizing all sample data. After many experiments, when the control accuracy is 0.005, the average number of iterations using only the RBF neural network is about 320, while the average number of iterations using the PSO-RBF neural network is about 95. The errors in the collected detection data reflect the difference between the data collected by the traditional velocity detection method and the true values. Most of the data had an error rate within 5%, which can fully meet the needs of practical applications; to a certain extent, this is better than the error rate of video, geomagnetic, and other methods for collecting velocity data and is closer to the needs of practical applications. However, in some industries or specific applications, detected data need to be closer to the actual velocity, and higher requirements for controlling the data error rate are imposed. In this paper, we use the particle swarm optimization neural network method to calibrate the velocity detected with electronic registration identification of motor vehicles so as to obtain a detected velocity that is closer to the actual data.
Test data for different velocities and different lanes were screened for exact time matches, yielding a total of 11086 complete data sets. The data acquired for the eight lanes were approximately uniformly distributed. It should be noted that, since a driver can only hold the velocity within a certain interval when passing through the gantry and the velocity on urban expressways is not less than 20 km/h, all data values below 20 km/h were discarded. In addition, safe-driving requirements forbid drivers from exceeding 100 km/h under normal circumstances (although the electronic registration identification of motor vehicle readers/writers can accurately detect velocities up to 180 km/h), so no tests were carried out above 100 km/h.
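A minimal sketch of this screening step, continuing the matching sketch above and using the same assumed column names, might look as follows.

```python
# Illustrative screening: keep only time-matched samples in the 20-100 km/h range.
valid = samples[(samples["velocity_true"] >= 20) & (samples["velocity_true"] <= 100)]
print(len(valid), "complete data sets")      # 11086 in the tests reported above
print(valid["lane"].value_counts())          # roughly uniform across the 8 lanes
```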
Using the same display method as in Figure 3, the actual velocity and the detected velocity are displayed for the four lanes on the selected side of the road. It can be seen from Figure 5 that, once the detection position of the electronic registration identification of motor vehicles reader is fixed on the gantry, the detection results are also relatively fixed, and detection in different lanes does not cause a large difference. However, since all the velocity values are detected by hardware, the velocity calculation method is a generic algorithm: as long as its error is within a preset range, it meets the general user requirement, so the error rate is still too high for certain specific applications. In addition, the distribution of the detected velocities shows that the errors in the two lanes near the center line and the emergency lane are slightly higher than those in the middle two lanes. It can also be seen that the higher the velocity, the lower the error, with errors slightly larger at lower velocities. The tests were conducted on a selected middle lane, and the velocity detection results were calibrated using the following methods.
The RBF neural network is optimized by particle swarm optimization, and the multireader/writer velocity values are then numerically optimized with the PSO-based RBF neural network.
The calibration results are shown in Figures 6-9.
As can be seen from Figures 6-9, the best calibration output is obtained when the multireader velocity values are numerically optimized with the particle swarm-optimized RBF neural network. Especially in the low-velocity stage, the detected velocity values are better calibrated and the velocity error is further reduced. The four scatter plots above also show that more detected and actual velocity values fall in the 25-40 km/h range than in other regions. The mean square error over all data is 0.0965, and the Pearson coefficient, a measure of correlation, is calculated to be 0.9301.
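The two summary statistics quoted above could be computed as in the following sketch; the variable names are illustrative.

```python
# Sketch of the two reported summary statistics; variable names are assumed.
import numpy as np
from scipy.stats import pearsonr

def summary_stats(v_true, v_calibrated):
    v_true = np.asarray(v_true, dtype=float)
    v_calibrated = np.asarray(v_calibrated, dtype=float)
    mse = np.mean((v_true - v_calibrated) ** 2)   # mean square error
    r, _ = pearsonr(v_true, v_calibrated)         # Pearson correlation coefficient
    return mse, r

# Example: mse, r = summary_stats(y_test, y_pred)
# The values reported above for the full data set are 0.0965 and 0.9301.
```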
In order to further compare the differences in the calibration of velocity detection across lanes, we statistically analyzed all the test data; the results are shown in Table 1. In the second column of the table, labeled "optimization algorithm," "uncalibrated" denotes data obtained without calibration, "Algorithm 1" the detected velocity output calibrated by the RBF neural network, "Algorithm 2" the detected velocity output calibrated by the RBF neural network based on particle swarm optimization, and "Algorithm 3" the detected velocity output calibrated by the RBF neural network based on particle swarm optimization together with multireader velocity numerical optimization. Table 1 lists the calibration results of the detected velocity values for different lanes under the different algorithms. As far as the trend of calibration accuracy is concerned, the accuracy of the data from the one-way middle lanes is relatively high. For detected velocity values calibrated with multireader numerical optimization based on the PSO-optimized RBF neural network, the percentages of data with errors less than or equal to 5% are 92.39%, 92.10%, 91.41%, and 91.11% for lane nos. 2, 3, 6, and 7, respectively. The average percentage of data with errors less than or equal to 5% is 91.76% for the middle lanes, 82.13% for the edge lanes, and 87.12% for all lanes combined. For practical purposes, results with an error rate below 10% already have application value, and the calibrated velocity detection data meet this requirement; a lower error rate not only adds to the method's application value but also reflects real traffic conditions more accurately. In the overall statistics of the error rate of the detected velocity values, the rate for the middle lanes is lower than that for the edge lanes, and after calibration the error values for different test velocities are statistically consistent. To further analyze the test results, we plot the error values of the uncalibrated detected velocities together with, as red circles, the errors of the detected velocities calibrated by the particle swarm-optimized RBF neural network after multireader numerical optimization. This scatter plot of the error distribution for the uncalibrated and calibrated detected velocity values shows that the algorithm proposed here can accurately calibrate the velocity detected by the electronic registration identification of motor vehicles and keep the error within an acceptable range. In actual applications, if the various influencing factors are fully considered, including the external environment, electromagnetic interference, windshield material and angle, different vehicle types, the presence of film, decal placement location, and other sources of interference, the test results can accurately reflect the real velocity, and the algorithm can provide data even closer to real application scenarios for improving traffic management efficiency, for example in traffic rule enforcement, traffic status detection, traffic congestion index calculation, and navigation and parking guidance.
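Per-lane error-rate statistics of the kind reported in Table 1 could be tabulated roughly as follows; the column names and grouping keys are assumptions for this sketch, not the layout of the actual data set.

```python
# Hedged sketch of the Table 1-style statistics: share of samples whose relative
# velocity error is within 5% (and 10%), grouped by lane and calibration algorithm.
import pandas as pd

def error_rate_table(df):
    rel_err = (df["v_detected"] - df["v_true"]).abs() / df["v_true"]
    grouped = df.assign(rel_err=rel_err).groupby(["lane", "algorithm"])["rel_err"]
    return pd.DataFrame({
        "pct_within_5pct": grouped.apply(lambda e: 100.0 * (e <= 0.05).mean()),
        "pct_within_10pct": grouped.apply(lambda e: 100.0 * (e <= 0.10).mean()),
    })

# table1 = error_rate_table(results_df)  # results_df: one row per matched sample
```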
Conclusion
Based on vehicle velocity values collected by the electronic registration identification of motor vehicles and by on-board (OBD) equipment, and combined with actual application needs, we calibrated the detected velocity output of the electronic registration identification of motor vehicles using an RBF neural network, a particle swarm optimization-based RBF neural network, and multireader numerical optimization based on the particle swarm-optimized RBF neural network, compared the results with the data collected by OBD, and analyzed the errors. The method proposed here can greatly improve the accuracy of the velocity detected by the electronic registration identification of motor vehicles and, through the algorithm, bring it closer to the real value. The test results show that the values predicted by the method more accurately reflect the actual driving velocity. The proposed method can serve as an important reference for improving the manufacture and application of electronic registration identification of motor vehicle software and hardware. In addition, since the two data collection methods, electronic registration identification of motor vehicles and on-board (OBD) devices, both have limitations, such as the data acquisition cycle and time matching, the method proposed here is built on velocity values selected from the same data by time matching. As a result, part of the time-series data is incomplete, and less than 30% of all the data collected could be used as test samples. With only 30% of the data, it is difficult to fit or predict velocity values using time-series methods, and thus the data analyzed in this paper are somewhat lacking in the time dimension. This would be one area to strengthen in future studies, with a focus on fitting velocity data with different time stamps. The fitted velocity detection data, together with the algorithm described here, would then be able to completely describe the vehicle trajectory and provide important guidance for algorithm selection and model optimization.
Data Availability
The original data used to support the findings of this study are restricted by the relevant law enforcement departments in order to protect vehicle information privacy and the basis of law enforcement. Data are available from the relevant law enforcement departments for researchers who meet the criteria for access to confidential data.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
v3-fos-license
|
2019-02-06T18:32:29.544Z
|
2019-01-31T00:00:00.000
|
59612099
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s12909-019-1469-2",
"pdf_hash": "751a84f736723c590b08db6beb527695885d39e8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44099",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"sha1": "e9fe8a3ff65ceb4c3167becc24617421bd38e058",
"year": 2019
}
|
pes2o/s2orc
|
Randomized study showing the benefit of medical students writing multiple choice questions on their learning
Background Writing multiple choice questions may be a valuable tool for medical education. We asked medical students to generate multiple choice questions and studied the effect of doing so on their exams. We hypothesized that students generating questions would improve their learning. Methods We randomized students in their second and third years at the School of Medicine to write four multiple choice questions on two different sections of General Pathology (Immunopathology and Electrolyte and acid-base status; second year) and Pathophysiology (Blood and Respiratory system; third year). We analyzed whether students writing questions on a section had better results in that section of the exam than the rest of the students. Results Seventy-five (38.2%) students wrote questions for General Pathology and 109 (47.6%) for Pathophysiology. Students who wrote questions obtained significantly better results in the exam than those who did not. In General Pathology, students who wrote questions about Immunopathology obtained better results in that section than those who wrote questions about the other section (5.13 versus 3.86 over 10; P = 0.03). In Pathophysiology, the differences between the two groups were not significant, but students who wrote good questions about the Respiratory system obtained better results in that section than those who wrote good questions about Blood (6.07 versus 4.28 over 10; P = 0.015). Male students wrote good questions in Pathophysiology more frequently than female students (28.1% versus 10.4%; P = 0.02). Conclusions The writing of multiple choice questions by medical students may improve their learning. A gender effect may also influence this intervention. Future investigations should refine its potential role in teaching.
Background
The construction of questions by the students has been used as a learning tool for medical education. This tool increases the students' participation in learning and helps them to identify the relevant topics in the lesson content [1,2]. As designing a good multiple choice question (MCQ) requires a deep knowledge of the material being assessed, it has been suggested that the formulation of MCQ may contribute to a deeper understanding of the topic than other methods [3].
Previous studies have shown that designing questions improves students' achievement and promotes their motivation [4][5][6]. On the other hand, other researchers have not found such a beneficial effect [7] or have found a positive learning effect only in certain groups of students [8].
The aim of this study was to investigate whether writing MCQ could improve learning of a topic. The effect of the quality of the questions and of student gender was also evaluated. This was done through a prospective randomized study.
Participants and setting
The study was conducted in the School of Medicine of the Universidad de Navarra, Spain. In our university, the Medicine curriculum is completed in six years. Two groups of potential participants were selected: students in their second year studying General Pathology, and students of Pathophysiology (third year). Both subjects are obligatory in the Medicine curriculum and are taught by the investigators and by other members of the Department of Internal Medicine. The program of General Pathology is organized over 3 months (January to April) in 48 master classes, including two blocks of 8 classes (Immunopathology and Disturbances of electrolyte and acid-base status) that are taught by two of the investigators (JIH and FL, respectively). Pathophysiology is organized in 8 blocks of 11 master classes. The first four blocks are given in the first three months (September to November). The students take an exam in December that covers these four blocks, and those who reach a qualification of 6 out of 10 do not need to include these four blocks in the final exam in May. One of the investigators of the study (JIH) teaches two of these four blocks (Blood and Respiratory system pathophysiology).
Intervention and procedure
One of the investigators (JIH) invited the students to participate in the study in the first lecture of the subject. The students had the opportunity to write four MCQ with four potential choices each. The topic of their questions (Immunopathology or Disturbances of electrolyte and acid-base status for second-year students, and Blood or Respiratory system pathophysiology for third-year students) was randomly assigned according to the number of their university identity card (even or odd). To stimulate participation in the study, they could receive an extra qualification of up to 0.25 points (out of 10), according to the quality of their questions. The following characteristics were evaluated in each MCQ: importance of the topic, adequate wording, unambiguity (only one valid answer), medium difficulty, and originality. A question was considered good if it reached adequate quality in most of these characteristics. Students who wrote at least two (of a maximum of four) good questions were analyzed separately. All the students had access to all the questions and their answers (uncorrected by the teacher), independently of whether they had written any question. Two questions on each topic were selected for the exam (after changes made by the teacher). The exam of General Pathology included 14 questions on Immunopathology, 14 questions on Disturbances of electrolyte and acid-base status and 62 questions on other topics. The Pathophysiology exam included 25 questions on each of the topics taught in the first term (Blood, Respiratory system, Circulatory system and Renal pathophysiology).
Outcome measure
The outcome measure was the performance of the students in each part of the exam. The effects of writing questions about a topic (and of writing good questions) and of gender were studied.
Statistical analysis
Continuous variables are expressed as median (interquartile range), and categorical variables as number (percentage). Differences between groups were compared with the Mann-Whitney test (continuous variables) and the Chi-square test (categorical variables). Differences were considered significant if the P value was below 0.05. Statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS). As this was a pilot study and no previous data were available on the proportion of students participating or on differences between the groups, the sample size was not estimated in advance.
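For illustration, the comparisons described above could be run as in the following sketch; the score and count values shown are invented placeholders used only to make the snippet runnable, not data from the study.

```python
# Hedged sketch of the statistical comparisons: Mann-Whitney U for continuous
# scores, chi-square for proportions. All values below are hypothetical.
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical exam scores (out of 10) for question writers vs. non-writers.
scores_writers = [5.1, 6.3, 4.8, 7.0, 5.9]
scores_nonwriters = [3.9, 4.2, 5.0, 3.5, 4.6]
u_stat, p_scores = mannwhitneyu(scores_writers, scores_nonwriters,
                                alternative="two-sided")

# Hypothetical 2x2 table: "wrote at least two good questions" (yes/no) by gender.
contingency = [[16, 41],   # male: yes, no
               [5, 47]]    # female: yes, no
chi2, p_gender, dof, expected = chi2_contingency(contingency)

# Differences are considered significant when P < 0.05, as in the study.
print(f"Mann-Whitney P = {p_scores:.3f}, chi-square P = {p_gender:.3f}")
```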
Ethical considerations
The study was approved by the Universidad de Navarra School of Medicine and by the Committee for Research Ethics of the Universidad de Navarra. Participation in the study was voluntary, without the need for written informed consent. Assignment of the students to each arm was random. The ethical principles of the World Medical Association Declaration of Helsinki were observed [9].
Participation in the study
Seventy-five (38.2%) of the second year students and 109 (47.6%) of the third year students participated in the study. There were no gender differences between participants and non-participants, but participants obtained better results than non-participants, not only in the topics included in the study, but also in other topics of the subject. Third-year students who wrote MCQ also had a better score in General Pathology in the previous year than those who did not (Tables 1 and 2).
Evaluation of the effect of writing questions about a topic on performance in the exam
Thirty-eight students wrote questions about Immunopathology and 37 about disorders of electrolyte and acid-base status. There were no gender differences between them. The performance in Immunopathology of the students who designed MCQ on Immunopathology was significantly better (median qualification: 5.13 versus 3.86 over 10; P = 0.03). Other differences between the two groups of students were not significant (Table 3). There were no significant differences between male and female students in their performance (data not shown).
Fifty-five students wrote questions about blood pathophysiology and 54 about respiratory pathophysiology. There were no gender differences between them. The performance on the four topics included in the exam was not significantly different between the two groups (Table 4).
Evaluation of the quality of the questions
According to the previously mentioned criteria, 34/75 (45.3%) students of General Pathology wrote at least two questions that could be considered good on most of the criteria: 18/38 (47.3%) in Immunopathology and 16/37 (43.2%) in Disturbances of electrolyte and acid-base status. The proportion of students with two or more good questions in Pathophysiology was 15.6% (17/109): 6/55 (10.9%) in Blood and 11/54 (20.4%) in Respiratory system. There were no gender differences in General Pathology, but the proportion of male students with good questions in Pathophysiology (28.1%) was significantly higher than that of female students (10.2%) (Fig. 1).
The students who wrote good MCQ in Respiratory pathophysiology obtained better results in the exam questions on Respiratory pathophysiology than the students with good questions in Blood pathophysiology (median qualification: 6.07 versus 4.28 over 10; P = 0.015). The rest of the comparisons concerning the quality of MCQ were non-significant (Table 5).
Discussion
The present randomized study shows that the generation of written MCQ by medical students seems to exert a positive learning effect. Second-year students who wrote questions about Immunopathology had better results on this topic than students who wrote questions on the other topic. Furthermore, students who wrote good MCQ about Respiratory Pathophysiology also obtained better results on this topic than those who wrote good questions on the other topic. The positive effect of question designing was not evident in all the comparisons that we made. A possible explanation is the poor quality of the questions designed by our students (less than 50% of second-year and less than 20% of third-year students wrote at least two good questions). It is likely that the beneficial learning effect of this intervention is evident only if the questions are good enough. Generating questions stimulates students' critical thinking and academic performance [10]. The formulation of questions stimulates students to reflect on their learning progress and to start developing metacognitive capacity [11], but this effect may require a minimal effort.
Comparison with the literature
Most of the previous studies about the potential effect of question designing are observational. Some studies have shown that formulating questions increased the understanding of the topic [8,12,13]. The present study reinforces this view: we have found that question design increases the acquisition of knowledge. However, this is not a universal finding. Other authors have not found that MCQ writing has positive effects on learning [3,14]. Furthermore, other factors may influence it.
The quality of the questions is one of these factors [15]. Chin et al. found that basic questions do not promote deep learning of a subject. Our results are in agreement with these findings. On the contrary, Palmer and Devitt [3] did not find a positive effect on exam results, even though their students wrote high-quality questions. In our study, the students wrote their MCQ shortly before the exams. This last-minute work was probably accompanied by little effort. Future studies should explore whether the inclusion of MCQ design in daily work may increase this effort and improve its learning effect.
Another interesting finding is the difference between genders. Male and female students have different style preferences [16]. Female students usually have a higher degree of genuine motivation (genuine interest in the topic) [17] and males are possibly more stimulated in a competitive environment. Olde Bekknink et al. found that formulating an extra written question had a positive effect on male students [8]. Our study has also found a gender difference. Male students wrote better questions in the Pathophysiology course. Probably, this type of challenge is more motivating for males than for females.
Strengths and limitations
This was a large, prospective, randomized study that analyzed the potential effect of MCQ design on learning.
The study was done in two different scenarios (second and third year of Medicine) and with different teachers (two different teachers in the second year and the same teacher in the third year). This intervention is not time-consuming and is thus easy to apply in large groups. A major limitation was the poor quality of the MCQ formulated by the majority of the students. Probably, the students' effort in generating questions was small, and the objective of deep learning was not achieved in many of them. Furthermore, the classification of an MCQ as good depended on the subjective judgment of the investigators (according to pre-specified criteria); the use of more objective criteria would have been desirable. Another limitation is the absence of a universal demonstration of the beneficial effect of MCQ generation. The only significant differences found suggested that students who designed MCQ on a topic obtained a better score on that topic, but this finding was not confirmed in all the comparisons.
Conclusions
Formulating MCQ by students seems to exert a positive learning effect. This effect seems to be greater in male students and may be restricted to students who make a significant effort that allows them to formulate good questions. Future research may refine this strategy of involving students in their own learning.
|
v3-fos-license
|
2020-04-10T14:02:25.046Z
|
2020-04-10T00:00:00.000
|
215572128
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41392-020-0132-z.pdf",
"pdf_hash": "873f932a8c22d8071b6202fbf23a9662a2c2e0c3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44100",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "873f932a8c22d8071b6202fbf23a9662a2c2e0c3",
"year": 2020
}
|
pes2o/s2orc
|
The HIF-1A/miR-17-5p/PDCD4 axis contributes to the tumor growth and metastasis of gastric cancer
Dear Editor,
Gastric cancer (GC) is one of the most common malignant tumors of the digestive system and the second leading cause of cancer death worldwide 1 . Despite the gradually declining morbidity and mortality, GC still burdens many countries in East Asia 1 . Understanding the GC pathological process is vital for the successful diagnosis, treatment and prevention of this disease.
Programmed cell death 4 (PDCD4), a nucleocytoplasmic shuttling protein, binds to and exerts an inhibitory effect on the helicase activity of eukaryotic translation initiation factor 4A (EIF4A), which is an RNA helicase catalyzing the unwinding of mRNA secondary structure 2 . By combining these two mechanisms, PDCD4 suppresses the translation of specific mRNAs. PDCD4 has become a hotspot of cancer research as a newly identified tumor suppressor gene in recent years 3 . The roles of PDCD4 in GC mainly include the inhibition of cell proliferation and metastasis and the promotion of cell apoptosis 4,5 . Despite a growing number of investigations, the underlying mechanism of PDCD4 in GC remains to be fully clarified. microRNAs (miRNAs) play an extensive role in various physiological and pathological processes. These single-stranded noncoding RNAs of 18-25 nucleotides can block mRNA translation by targeting the 3' untranslated region (3'UTR) 6 . In a previous study by our lab 7 , ten miRNAs, including miR-17-5p, were identified as the most highly upregulated miRNAs in gastric tumor tissues. Among these miRNAs, miR-16-5p, miR-23b-3p, let-7a-5p, miR-15a-5p, miR-17-5p and miR-93 were identified as candidate regulators of PDCD4, indicating that PDCD4 could be the key downstream protein during GC development. Specifically, miR-17-5p and miR-93 were shown to most likely regulate PDCD4. However, it remains unclear whether miR-17-5p could regulate PDCD4 during GC development.
In this study, we found that decreased PDCD4 mRNA expression was negatively correlated with increased miR-17-5p levels in GC tumor tissues according to TCGA datasets (Supplementary Fig. S1a-c). This negative relationship between PDCD4 and miR-17-5p levels was also confirmed in 16 pairs of GC tumor tissues and their adjacent normal tissues (Supplementary Fig. S1d-i). Both low PDCD4 expression and high miR-17-5p levels led to worse overall survival (OS) and recurrence-free survival (RFS) outcomes for GC patients according to the TCGA dataset (Supplementary Fig. S1j). Mechanistic investigation indicated a potential regulatory site of miR-17-5p in the PDCD4 3'UTR (Fig. 1a). After overexpressing or silencing miR-17-5p using transfection of pre-miR-17-5p or anti-miR-17-5p (Supplementary Fig. S2a), we determined that miR-17-5p could inhibit PDCD4 expression (Fig. 1b, Supplementary Fig. S2b, c). Transfection of the PDCD4 plasmid rescued the miR-17-5p-silenced PDCD4 levels (Supplementary Fig. S2d-h). The luciferase reporter assay demonstrated direct binding between miR-17-5p and the PDCD4 3'UTR (Fig. 1c). We also demonstrated that miR-17-5p promoted MKN-45 cell proliferation and migration and prevented MKN-45 apoptosis by suppressing PDCD4 (Supplementary Fig. S3). In the orthotopic mouse model, PDCD4 effectively inhibited tumor growth and liver metastasis, whereas miR-17-5p resulted in faster tumor growth and worse liver metastasis; PDCD4 overexpression restored the cancer-promoting effect of miR-17-5p (Fig. 1d). Moreover, the xenograft mouse model proved that miR-17-5p plays an oncogenic role by repressing PDCD4 expression (Supplementary Fig. S4).
To date, less is known about why miR-17-5p is overexpressed during GC progression. In this study, we identified 4 transcription factors that could potentially target the miR-17-5p promoter region ( Supplementary Fig. S5a). The Oncomine dataset showed that TFAP2A (transcription factor AP-2 alpha) and HIF-1A (hypoxia inducible factor 1 subunit alpha) were overexpressed in GC tumors ( Supplementary Fig. S5b-e). Bioinformatic analysis indicated that overexpression of TFAP2A or HIF-1A was negatively associated with the survival outcomes of GC patients (Supplementary Fig. S5f) and positively correlated with miR-17-5p levels in GC tumors ( Supplementary Fig. S5g, h). However, we found that HIF-1A was highly expressed (Supplementary Fig. S6a-c) in the tumor tissue of 16 GC tissue pairs, while TFAP2A was decreased ( Supplementary Fig. S6e, f). Given that only HIF-1A levels in GC tumor tissues were consistent with the previously predicted results, we focused on HIF-1A in our further study. The subsequent results showed that HIF-1A could potentially target the promoter region of miR-17-5p (Fig. 1e). By overexpressing or silencing HIF-1A ( Supplementary Fig. S2j-l), we confirmed the direct target of HIF-1A to the miR-17-5p promoter region using luciferase reporter and ChIP assays (Fig. 1f, g). Consequently, HIF-1A activated the transcription of pre-miR-17-5p and miR-17-5p (Fig. 1h, i, Supplementary Figs. S1g, S6d, g, h). Our results suggest that HIF-1A-activated miR-17-5p overexpression promotes GC development and metastasis by repressing PDCD4.
When analyzing the data of cell transfections and the xenograft mouse model, we found that PDCD4 overexpression resulted in low miR-17-5p levels (Supplementary Figs. S2i and S4c), the underlying mechanism of which deserves special attention and investigation. We performed PPI, GO and KEGG enrichment analyses to map a PPI network of PDCD4 involving genes such as the EIF family, STAT4, RPS6KB1 and mTOR (Supplementary Fig. S7a-c). Interestingly, the KEGG analysis revealed that PDCD4 might participate in the HIF-1 signaling pathway through potential interactions with EIF4EBP1, MTOR, RPS6 and RPS6KB1 (Supplementary Fig. S7d). By further experiments, we found that PDCD4 could positively regulate the expression of HIF-1A, EIF4EBP1, MTOR, RPS6 and RPS6KB1 (Fig. 1j). In particular, PDCD4 could directly bind to RPS6 (Fig. 1k). This was an unexpected finding. The mTOR/RPS6KB1/RPS6/EIF4EBP1 signaling pathway is well known to enhance HIF-1A transcription levels 8. Our result might imply that PDCD4 enhances HIF-1A expression by directly activating RPS6. However, it remains unclear how PDCD4 could negatively regulate the miR-17-5p level in gastric tumors, which deserves more attention and further investigation in future study.
Fig. 1 caption: a Potential binding site of miR-17-5p at the PDCD4 3'UTR. b PDCD4 was negatively regulated by miR-17-5p in GC cells. c Relative luciferase activities in MKN-45 and 293T cells treated with pre-miR-17-5p or anti-miR-17-5p. d miR-17-5p promotes GC tumor growth and metastasis in vivo by targeting PDCD4. e Potential targeting of HIF-1A at the promoter region of miR-17-5p. f, g Luciferase reporter and ChIP assays: HIF-1A targets the promoter region of miR-17-5p. h, i HIF-1A increases the levels of pre-miR-17-5p and miR-17-5p. j PDCD4 enhances the expression of RPS6KB1, RPS6, MTOR, EIF4EBP1 and HIF-1A. k Co-IP assay: RPS6 binds with PDCD4. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001
In conclusion, we demonstrated that the HIF-1A/miR-17-5p/ PDCD4 axis contributes to the carcinogenesis of gastric cancer. Furthermore, we determined that PDCD4 could enhance the HIF-1A signaling pathway and thereby form a negative feedback loop of PDCD4/HIF-1A/miR-17-5p/PDCD4. We inferred that the biological function of this regulation might be a self-rescue system for the decrease in HIF-1A expression through PDCD4 downregulation in the tumorigenesis of gastric cancer. Our study provides new insights into gastric tumor etiology and potential targets for GC treatment.
|
v3-fos-license
|
2021-09-01T15:09:57.085Z
|
2021-06-24T00:00:00.000
|
237873305
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://academic.oup.com/isr/article-pdf/23/4/1595/41766052/viab027.pdf",
"pdf_hash": "6a5df5dd1edce0aaf1c9c1d45e3a40790e46c73b",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44101",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "52bba0134cc410953d08a5b07a9c93439553150b",
"year": 2021
}
|
pes2o/s2orc
|
ANALYTICAL ESSAY Practice Approaches to the Digital Transformations of Diplomacy: Toward a New Research Agenda
As a growing number of diplomatic practices take new digital forms, research on digital diplomacy is rapidly expanding. Many of the changes linked to digitalization transform or challenge traditional ways of doing diplomacy. Analyses of new forms of “digital diplomacy” are therefore valuable for the advancement of practice approaches in international relations theory. That said, digital diplomacy poses a number of challenges for international relations scholarship that are only beginning to be addressed. Digitalization is both a process and a result, and provokes key questions regarding continuity, change, agency, space, and materiality in diplomacy. The overarching aim of this article is to advance a research agenda that seeks to address key questions in the study of digital diplomacy on the basis of various practice approaches. In particular, the article highlights three dimensions of change as being central to the research agenda and investigates how these can be explored in future analyses of digital diplomacy.
Introduction
The emergence and global spread of Internet technologies have fundamentally reshaped societies in just a few decades. In international politics, this development is forcing diplomats to rethink core issues of governance, order, and international hierarchy (Seib 2016; Bjola and Manor 2018; Riordan 2019). The intersection of diplomacy and information technology has led to the emergence of new practices of "digital diplomacy." An intern posting a photograph on an embassy's social media account, high-level diplomats networking with tech companies in Silicon Valley, and state leaders using Twitter to comment on international negotiations are now examples of everyday diplomatic life. Digital diplomacy is a broad term that refers to how the Internet, digital tools, digital media, and the technology sector have influenced or even transformed diplomacy. Conceptually, digital diplomacy is seen as both a driver and a result of digitalization, and thus encompasses all the various ways in which digitalization interacts with diplomacy (Bjola and Holmes 2015). However, changes in processes and practices amount to more than adaptations of the taken-for-granted ways of doing diplomacy. New technology is bringing new actors into the field of diplomacy. It is also challenging established actors to change their ways of doing things and how they present and perceive themselves. Digital diplomacy can be said to have disrupted traditional diplomacy because it is in many ways a self-ascribed experimental practice. Diplomatic actors are often aware that digitalization involves taking risks and engaging with the unknown, which in turn is at odds with the perception that diplomacy should display foresight and be risk-averse.
As research on digital diplomacy expands rapidly, "practice theory" in international relations (IR) has become a point of departure for studies because of its supposed ability to account for both continuity and change in international politics (Holmes 2015;Adler-Nissen and Drieschova 2019;Cooper and Cornut 2019; see also Adler and Pouliot 2011). The practice idiom can be used to describe a range of concrete phenomena from mundane aspects of local e-mail protocol to ceremonial use of social media in state representation or increasingly structured activities of teleconferenced negotiations in international organizations. The swift shift to "zoom diplomacy" in early 2020 as a result of the COVID-19 pandemic demonstrates how these practices can, at least temporarily, replace face-to-face diplomacy, alas not without its own attendant difficulties (Naylor 2020;Eggeling and Adler-Nissen 2021).
The study of the social practices of digital diplomacy is therefore assumed to offer opportunities to explore how processes, interactions, and habits are influenced by new technology, and how the interconnectedness of the Internet is unfolding on the ground. However, practice theory is not in itself a coherent set of theoretical propositions to be applied off-the-shelf to the study of digital diplomacy or any other field of social inquiry. Practice theory in IR is rather a pluralistic set of approaches that more or less coherently draws on insights from, inter alia, pragmatism, phenomenology, and critical theory (e.g., see Bicchi and Bremberg 2016;Kustermans 2016;Bueger and Gadinger 2018). As a pluralistic endeavor, it is open to different theorizations of social change. This plurality of approaches is only partly reflected in the research front on digital diplomacy; its character as a "moving target" tends to hamper theoretical advancements. Some research on digital diplomacy has already shown well-known symptoms of diplomacy being "particularly resistant to theory" (Der Derian 1987, 91), by focusing on evaluations or recommendations related to supposedly valuable skills in digital diplomacy rather than analyses of the social and political implications of digitalization in the field of diplomacy. Insights from different practice approaches in IR are sometimes used in an ad-hoc way to advance arguments that, for example, suggest that new technology in digital diplomacy can lead actors to the discovery of new ways of doing things, new goals, and new meanings. In his detailed account of how the emergence of digital communication and social media has affected the practice of public diplomacy, Manor (2019), for example, makes reference to "front stage" and "backstage" social interaction (using Erving Goffman's terminology), but does not engage with microsociology in the analysis of interactionist change or insights from practice approaches on the performative aspects of diplomacy.
We believe that there is much more to be gained in the field of digital diplomacy from engagement with practice approaches. The changes derived from digitalization that influence the long-standing culture and practice of diplomacy are instructive in probing some of the big questions in IR, not least those centered on the relationship between agents and their environment, in which practices can be analyzed as both signifiers of continuity and carriers of social and political change. We would therefore like to invite scholars in IR and related fields of inquiry to help advance a more reflective research agenda that can address key questions in digital diplomacy drawing on insights from various practice approaches. We propose an initial focus on three sets of interconnected questions that seem particularly fruitful. The first set of questions addresses diplomatic agency: How do encounters with digitalization reshape the diplomatic profession? How do digital diplomats challenge traditional diplomats? A second set of questions probes the spatial and material aspects of the "digital" in diplomacy: What is the relationship between online and offline practices of diplomacy? What practical difference does the absence of face-to-face interactions make? Finally, the third set of questions addresses the extent to which transparency in digital diplomacy creates and connects new kinds of audiences: How do online audiences contribute to enact diplomacy? What are the constitutive effects of online visibility?
This article has a three-fold aim as it seeks to (1) take stock of the research front in practice theory and digital diplomacy; (2) identify where more research is needed to understand the theoretical potential of this intersection; and (3) advance a new research agenda on practice approaches to digital diplomacy. We seek to achieve these aims through a conceptualizing and synthesizing discussion that addresses the risks and opportunities of practice approaches and digital diplomacy in two parts. The first part discusses the pitfalls and promises of practice approaches to digital diplomacy and how these are reflected in recent studies of the field. The second part addresses the methodological opportunities provided by digital observations and discusses the analytical tools offered by practice approaches to IR. We then take steps toward outlining a research agenda, which mainly draws on the pragmatist tradition of practice theory, which can inform analyses of digital diplomacy. We suggest three key dimensions of digital diplomacy in order to address the questions outlined above. We conclude by calling for more systematic interaction between practice approaches to map these practices and how they are transforming diplomacy.
Pitfalls of Using the Practice Idiom without Theorizing Practices
Thus far, research on digital diplomacy has been dominated by studies of the digitalization of public diplomacy, which is sometimes included in the definition of the term "new public diplomacy" (Melissen 2005; Seib 2010; Hayden 2012; Pamment 2013, 2016; Manor 2019). In this body of literature, approaches that use concepts such as "soft power", "strategic communication", and "nation branding" have led to an instrumental understanding of how such influence is best projected and how it can be measured. From this perspective, digital diplomacy is often reduced to a tool of soft power aimed at attracting and persuading foreign publics through the promotion of a country's cultural attributes and values (Nye 1990, 2004). These studies have been well received by practitioners in the field, that is, the diplomats and civil servants who benefit from studies of digital diplomacy practice understood as practical knowledge that can easily be transferred and adopted in and through guidelines. This perspective is not problematic per se and such studies have contributed to an exploration of sites of digital diplomacy, but they have also created some confusion over the promises of a practice approach to digital diplomacy where practice is simply understood as "practical" in direct contrast to "theoretical" (cf. Kustermans 2016, 178).
We argue that this instrumental view of practice is not totally irrelevant as long as it is connected to social theory insights that can provide an analytical lens on this specific kind of rationality. Best practices on digital diplomacy can have explanatory significance through engagement with logics of action that can isolate the "doing" from its actors and environment. That said, this is where the strictly instrumental and rationalist approach falls short, since the art of diplomacy is not a skillset that can be acquired from a textbook: it requires tacit knowledge. Part of the problem here is that the international spread of digital diplomacy has become a top-down process. The diplomatic corps was not dominated by digital natives but instructed to change and adopt by their governments. In order to capture the influence of change and continuity in international politics, research should focus on the agency of diplomats doing digital diplomacy rather than evaluating policy or communication campaigns. Essentially, we should not assume that anyone can do digital diplomacy after having read a manual of best practice. Moreover, it is also relevant to consider how these new patterns of digital diplomacy output may reproduce or disrupt patriarchal social structures (Standfield 2020).
The interesting practical knowledge of digital diplomacy is rather related to agency and the preconditions of logics of action. The COVID-19 pandemic has exposed many challenges related to the constraints on converting diplomatic practices that depend on tacit knowledge to the use of digital tools. In April 2020, the EU High Representative for Foreign Affairs and Security Policy, Josep Borrell, expressed the difficulty of building trust and finding compromise in EU diplomacy when all social interaction takes place by video conference (Borrell 2020). To distinguish them from policy briefs or think tank reports, studies into these processes should strive to better capture how diplomats balance their online and offline environment in practice, and the potential constraints or lack of them. The balancing of online and offline practices can reveal how diplomats overcome constraints in the absence of face-to-face diplomacy and more generally contribute to a mapping of the socialization of diplomatic conduct online and the emergence of digital norms in new habits.
The entanglement with strategic communication, public relations, and marketing in digital diplomacy is also problematic in relation to how practices are often understood as relying on unreflective, automatic, unconscious, and habitual actions. This could be reason enough to push digital diplomacy into strictly rationalist models of diplomacy that view diplomacy as strategic interactions. This, however, would be a lost opportunity for studying an area of diplomacy with patterns of meaningful action that are gradually being naturalized into contemporary diplomacy. In terms of communication strategies, more research is needed to understand why certain strategies are becoming commonsensical while others are being abandoned in favor of stable routines. This is an area where practice approaches could contribute by situating digital diplomacy practice as a process of knowledge construction in which its practitioners are both leading and shaping their practice. The results of practices are therefore found not only in the impact of digital diplomacy, but also in the way that these practices themselves evolve. These patterns of output should be distinguished from strict understandings of the effects or impact of strategic communication on diplomacy. For instance, diplomats' use of social media can be considered a practice that leads not only to effects of increased visibility and transparency, but also to more contention in diplomacy. To leave the analysis here would in our view be a major pitfall because the aim of practice approaches is to move away from the view of practices only as outcomes in need of an explanation. The relationship between visibility, transparency, and contention in diplomacy is related to the interplay of different levels of diplomatic interaction, different material aspects of communication, and different levels of participation by spectators. The added value of studying practices as they unfold on the ground is therefore that they can and should be studied as both processes and the outcomes of these processes (e.g., see Adler and Pouliot 2011).
The materiality of digital diplomacy practice deserves more attention as it poses additional challenges to the value of practice approaches in this field. While one promise of the "practice turn" in IR was to open up new ways of studying the interplay between discourse and behavior, the materiality of practices is often studied in ways that favor one over the other. Digital diplomacy covers the visual practices of diplomacy not only because they are visible as they take place in the open, but also because social media are intrinsically visual (Manor and Crilley 2018). Practice approaches to digital diplomacy have therefore also drawn attention to visual and affective power in IR. The assumption that we live in a "visual age" suggests that visual elements such as images are shaping politics (Hansen 2011, 2017; Bleiker 2018; Adler-Nissen, Andersen and Hansen 2019). This is a promising route by which to study the interplay between behavior and discourse in analytically alike routines and ceremonial uses of social media. Once again, however, we foresee some troubling entanglement with strategic communication and practical knowledge when visuality is studied in digital diplomacy. For instance, are we speaking of strategies to generate emotion through visual representation or can we speak of visual or emotional practices of digital diplomacy? Visual representation and emotive storytelling are undoubtedly central to the opportunities presented by the digitalization of public diplomacy (Manor 2019). Social media favors intimacy and personalized communication over information, often through the use of images as cognitive shortcuts to emotions. In addition, Duncombe (2019) has highlighted the emotional dynamics of Twitter as a social media platform that can play a role in the escalation or de-escalation of international conflict. Nonetheless, we still need to further explore what it means when diplomatic actors are producing emotional content. What is the role of the social media platforms in such actors' ability to communicate emotions and how do their material conditions differ? What practices are visual media reflecting and how are they changing diplomacy?
While highly relevant, attention to the visual elements of digital diplomacy is a potential pitfall for practice approaches when they risk being treated only as techniques or outcomes of digitalization. Framing theory, for instance, tends to overstate the reach and speed of social media instead of paying more attention to how the format itself transforms the practice beyond the traditional frames of mass media. When considered as a trending practice of strategic communication, visual representations are often bundled together with textual communication, but this fails to recognize that their nonverbal character is often ambiguous. Moreover, visual political communication plays a central role in populist rhetoric. The way that visuals are engaged in digital diplomacy should therefore also be contextualized within a broader societal setting, both as the means and an outcome of adaptation to populist challenges (see Cooper 2019; Duncombe 2019). Visual analysis should be engaged with the promise of contextualizing the role of visuals to show that videos or images themselves as material objects are not necessarily contentious but may become so when considered as contextual practices (Hansen 2011, 2017). To treat visuals as merely an effective format of communication is to ignore the fact that the material and symbolic aspects and affordances of social media may also perform authority and knowledge. Finally, digital diplomacy is simultaneously a front stage for diplomacy as a result of the digitalization of diplomatic practice, a window into a backstage area previously out of public reach, a journal or cumulative record of everyday practices of diplomacy, and a set of tools that facilitate ways of doing diplomacy. All of these forms and functions in which digital diplomacy serves to manifest diplomatic practice depend on interactions between leaders, between diplomats, with civil servants, with news media or with the public. While interaction between the first three groups of actors is a continuation of traditional forms and functions of diplomacy, the visibility, visuality, and interactivity grant a more prominent role to audiences. Thus far, attempts to theorize the role of audiences in IR have often fallen short and analyses tend to stop at elite perceptions of public and emotional engagement. Practice approaches in IR offer no easy remedy for this problem but the digitalization of everyday activities in combination with the accessibility (and normalization) of statements by international leaders such as former US President Donald J. Trump are key aspects of the understanding of audiences in diplomacy. These audiences matter greatly to our understanding of how Twitter might facilitate daily interaction and produce routines, specialized language and ceremonial uses of social media, for instance by creating expectations in which loaded silence and the absence of expected tweets become equally important. Thus, practice approaches offer many, albeit hitherto mainly unexplored, ways to theorize the role of audiences in digital diplomacy.
The Promises of Practice Theory
It has been argued that the main aim of practice theory in IR is to bridge dualist positions primarily within constructivist scholarship on ideational versus material, agency versus structure, and continuity versus change (McCourt 2016). Even though the ongoing revindication of the "practice turn in social theory" (cf. Schatzki, Knorr-Cetina and von Savigny 2001) seems to have resonated most strongly with constructivist scholars thus far, we adhere to the notion that practice theory in IR is, and needs to be, a pluralist endeavor. Bueger and Gadinger (2018) suggest that "international practice theory" currently covers a wide range of theoretical approaches, including Bourdieusian praxeology (e.g., see Kuus 2014), Foucauldian governmentality (e.g., see Merlingen 2006), Wenger's notion of communities of practice (e.g., see Bicchi 2011), Schatzki's view on situated practical understandings (e.g., see Bremberg 2016), and varieties of actor-network theory (ANT) (e.g., see Best and Walters 2013). This allows for quite different understandings of social practice, and it is hard to argue that certain understandings are necessarily more useful or valuable than others. This also means that "practice theory" cannot easily be applied to any field of inquiry (such as digital diplomacy) without specifying in some detail what notion(s) of practice the researcher wants to engage with and how to do so.
In general, we think that it is useful to define practices as patterns of meaningful action stemming from emerging nexuses of saying and doing. Practices are both agential and structural, since they are performed through agency but upheld by structure, which in turn ranges from standards of competence to technology. Moreover, some practice approaches emphasize the struggle for recognition as a key driver of political change (Pouliot 2016). Others stress that change is instead an outcome of collective learning processes (Adler 2019). These theoretical positions need not be mutually exclusive but can be combined in different ways (Adler and Pouliot 2011). For example, Adler-Nissen (2016) suggests that we should distinguish between "ordering" and "disordering" practices as a means of understanding how social practices relate to change as well as continuity. Others suggest that in order to analytically capture social and political change, we need to theorize in much more detail how improvisation and creativity can work to reshape practices, and thus specify the conditions under which habitual action is replaced by conscious reflection (Cornut 2018;Hopf 2018).
We agree with Pouliot (2014, 237) that insights from different practice approaches suggest that social causality is limited to specific contexts. At the same time, however, if we assume that practices are patterned meaningful actions, it seems possible that certain practices might travel to other social contexts within the same interpretive boundaries. Practice approaches in IR have for instance proved valuable for better understanding the dynamics of international security, where security practices tend to privilege stability over change but are disrupted and evolving in and through social relations and material conditions in various settings.
We, like many others, argue that diplomacy, as a social field and an object of academic inquiry, is especially well-suited to be explored by practice approaches because it combines path-dependent rituals of communication and representation with adaptive responses to societal change, not least linked to technological developments (Pouliot and Cornut 2015; Bicchi and Bremberg 2016). Diplomacy is traditionally defined as the "tactful" conduct of official relations among independent states (e.g., see Satow 1979 [1917]), although contemporary conceptualizations tend to emphasize that diplomacy is not necessarily only performed by accredited diplomatic agents and that it needs to be understood as an "evolving configuration of social relations" (Sending, Pouliot, and Neumann 2011, 528; see also Barston 1997; Constantinou and Der Derian 2010). Advancing on this understanding, we suggest that diplomacy involves a set of practices that are concerned with both upholding the political status quo and managing social change in IR. For example, Neumann (2012, 307) suggests that the modern diplomatic practice of permanent representation spread across Europe from the Italian city-states partly as a result of the further weakening of the myth of Christian unity in the wake of the Protestant Reformation, i.e., a process of social transformation facilitated by new technology in the shape of the printing press. Moreover, Guzzini (2013, 524) argues that modern diplomacy has been heavily influenced by the behavioral repertoire of Court Aristocracy because even as new social groups entered European diplomatic corps by the early twentieth century, they essentially adopted the pre-revolutionary diplomatic habitus, albeit with "some adaptations due to the 'nationalization' of politics." Among the practices most commonly studied in diplomacy studies, those which result from digitalization and increased interaction with digital media are new in comparison to more established ways of doing diplomacy through bilateral negotiations, multilateral meetings, cultural exchanges, peace mediation, and so on. Digital diplomacy practice, however, consists of transformed traditional practices (digitalization as structural change) and new practices that are emerging as a result of new opportunities and improvisation on the ground (digitalization through participatory culture). This dual process of change seems to resonate with transformations of diplomatic practice in the past and thus makes digital diplomacy a fertile ground on which to explore and develop practice approaches to IR (for a similar suggestion, see Cooper and Cornut 2019).
While it is becoming more common to adopt a macro understanding of the digitalization of politics (and its opportunities and challenges), the changes brought about by the Internet, and social media in particular, were noticeable first "on the ground". The field of digital politics emerged rapidly but IR scholars were relative latecomers, in part because of the challenges of bringing the structural aspects of new media into theory (Jackson 2018). Holmes (2015) was one of the first proponents of a practice approach to digital diplomacy. He argued that the potential lay in the ability to explain the role of digital diplomacy in the management of international change. Rather than departing from digitalization as a process of change, he considered the role that digital diplomacy could play in two types of changes in the international system: top-down exogenous shocks and bottom-up incremental endogenous shifts. Diplomacy constitutes the international practices of managing these two types of change through monitoring or responses such as adaptation or reaction. In his view, digital diplomacy resulted from the bottom-up incremental endogenous shift, where practices such as gathering and analyzing information online, negotiating using video-conference tools or listening to the public discourse on the ground constituted types of diplomatic response.
This view of digital diplomacy as a set of international practices that develop on the ground represented the early understanding of the Internet as a facilitator of transparency, visibility, and connectedness in international politics. In fact, a decade ago, digital diplomacy was first and foremost synonymous with public diplomacy and understood as practices of listening and conversing online with foreign publics or the domestic public on the subject of foreign policy (Melissen 2005;Seib 2010). These practices were thought to facilitate the management of international change through the opportunities for connectedness and for speedy access to information brought about by the Internet. This bottom-up view of digitalization was associated with a notion of democratization of diplomacy, influenced by the increased inclusion of non-state actors, the rise of new virtual communities, and the growing relevance of freedom of information legislation brought about in the domain of IR by the Internet (Archetti 2012, 183). Holmes (2015) sought to expand the understanding of practices influenced by digitalization to include negotiations and changes to face-to-face diplomacy, predicting that exogenous shocks such as international crises would increasingly be managed using digital tools. These predictions appear to reflect current developments. While face-to-face diplomacy remains the cornerstone of international politics, digital tools are increasingly being engaged to complement, assist or even substitute face-to-face diplomacy during unexpected events. Digital diplomacy was engaged during the nuclear negotiations with Iran in 2013-2015 (Seib 2016;Duncombe 2017) and in the aftermath of Russia's annexation of Crimea in 2014 (Bjola and Pamment 2016), and later, during the COVID-19 pandemic (Bramsen and Hagemann 2021). In the spring of 2020, a number of virtual high-level meetings were conducted using videoconferencing tools by the leaders of the G7, the G20, the United Nations, and the European Union (Perrett 2020). By April 2020, European diplomats were speaking out about the constraints of operating online, such as the inability to "read a room" or engage in corridor diplomacy in order to reach consensus on sensitive issues (Barigazzi, de la Baume, and Herszenhorn 2020; Heath 2020). These comments seem to reflect what have previously been identified as the problems of fostering relationships and signaling intentions when substituting digital tools for personal social interaction (Holmes 2015).
Today, ministries of foreign affairs and embassies have guidelines on how to use social media for crisis communication and public outreach in unforeseen circumstances, but these routines have evolved gradually and proved effective to varying degrees. In addition, social media outlets, mainly Twitter, are now commonly used in communications between states and have become at least to some extent accepted channels of representation. They might even facilitate interpersonal contact that would otherwise not be possible. Digitalization has thus greatly influenced and even transformed diplomatic practice in ways that often challenge traditional protocol. Duncombe (2017, 555-60) for instance has shown how social media are employed in the practice of interstate dialogue. Twitter is a new technological platform for dialogue but its structure and formatting logic constrains and transforms practices such as the digital form of diplomatic signaling. When interstate dialogue is practiced on Twitter, the presence of an international audience changes the expectations of the performances of state actors. Using the case of Iran-US relations during the negotiations on the Iran nuclear agreement, Duncombe showed how Twitter provided Iran with new ways to signal support for the negotiations. Her study demonstrates how Twitter can shape, carry, and reflect states' struggles for recognition, and thereby legitimize political opportunities for change. Digital diplomacy practice, in this case interstate dialogue on Twitter, can thus lead to new conditions for, and new means and forms of, interaction and outcomes in diplomacy, which are to a large extent visible to the public. As Iran-US relations became tense again under the Trump administration, it is therefore understandable that analysts turned to Twitter to observe signs of change, in particular following the Soleimani strike and the subsequent accidental shooting down of Ukraine International Airlines Flight 752 in Teheran in January 2020. This time, social media was used to reduce tensions between the United States and Iran. While the hashtag #worldwar3 was trending in public discourse on Twitter, state officials and world leaders primarily used the channel to signal de-escalation. 1 Another similar example is provided in Cooper and Cornut's analysis of US Ambassador Michael McFaul's use of Twitter as a means to reach out to Russian citizens in the midst of deteriorating Russia-US relations (Cooper and Cornut 2019, 314). Thus far, some of these actions appear to have been sporadic, but they can be said to constitute emerging practices in the sense that they illustrate what can be done (Pouliot 2010a).
Exogenous shock in the diplomatic community has also led to an increased need to understand the structural aspects of new media; how digitalization has not just led to new tools and political artifacts, but also influenced power relations in international politics. The rise of digital disinformation and cybersecurity threats has forced states and international organizations to rethink the role of digital diplomacy (Bjola and Pamment 2016;Duncombe 2018;Ördén 2018;Hedling 2021). Disinformation, or the deliberate use of false information to deceive, mislead, and confuse, is now a well-known aspect of planning and executing a state's digital communication strategy. While the digitalization of public diplomacy has meant a greater emphasis on diplomatic relationships with the public, both foreign and domestic, digital disinformation has emerged as the "dark side of digital diplomacy" (Bjola and Pamment 2018). The threat of digital disinformation in addition to other areas of cybersecurity threat effectively ended the "age of innocence" in digital diplomacy debates. In addition to sophisticated and creative communication strategies, diplomacy is now being increasingly transformed by its adaptation to technological advances, using algorithms and machine learning to balance the positive aspects of digitalization with its vulnerabilities (Riordan 2019). The realization that digital tools are not just available for promotional purposes has increased levels of competition in digital diplomacy. States and even individual diplomats that fare poorly in the online world of diplomacy can suffer consequences in the offline world. In addition, global crises such as the COVID-19 pandemic enhance the practicality of using digital tools in diplomacy. While online negotiations may be less than optimal, a crisis of such magnitude can often spur the emergence of alternative practices. For all these reasons, digital diplomacy can no longer be seen as optional in the diplomatic repertoire.
Consequently, digitalization has both enabled a more participatory culture of diplomacy and restructured patterns of communication and representation. The state of digital diplomacy practices, such as Twitter messaging or information gathering online, generally reflects both these characteristics. They are adaptations of ways of doing things but have to some degree also developed in their local context, depending on the actors that engage with digital media and the opportunities and challenges they encounter. Despite the fact that digital diplomacy is becoming a set of recognizable practices, it is still to a large extent an explorative and experimental area of diplomatic practice. Thus, its normative and behavioral underpinnings are shaped by the interplay between the host diplomatic institution and the opportunities and constraints offered by digital society in a specific political context. The new norms and behaviors that have emerged through the establishment of digital diplomacy must therefore be understood in the light of how political practices converge with digital society. This is why we argue that practice approaches are well-suited to furthering the conceptual understanding of digital diplomacy. The challenge to this field of inquiry is therefore to carefully and systematically map the practices of digital diplomacy alongside traditional practices of face-to-face diplomacy while acknowledging that both dimensions will continue to evolve.
The Methodological Opportunities of Digital Observations
Diplomacy studies that draw on practice approaches often seek inspiration from the methodological tradition in social science of using inductive insights into lived experiences. Such insights also carry weight for studies of digital diplomacy. When departing from insights from various practice approaches, it is often assumed that the analytical process involves tracing the background knowledge and tacit understandings of those who are "doing diplomacy". This includes the intersubjective rules and resources that are considered imperative for the performance of diplomatic practices such as negotiation and representation (e.g., see Pouliot 2008;Adler-Nissen 2014;Bueger 2014). Practice approaches favor publicly accessible performances over private mental states, which in effect treats practices as "raw data" (Andersen and Neumann 2012). In many ways, digital diplomacy is therefore an area of diplomatic practice that is especially well-suited to practice approaches, because it departs from the notion of subtle change (digital transformation) where practices are to some extent visible and observable. When diplomats use social media to signal or report during negotiations, the interactions that follow, or at least those which take place online, can be observed. Other practices in this group that do not take place in public, such as WhatsApp conversations, videoconferencing or using word processing to negotiate agreements, may still be observable because they are often assumed to be less sensitive than habits of mediation or decision-making, and are therefore more likely to be studied through practice-favored methodologies.
The view of digital diplomacy as experimental and thus characterized by high degrees of risk-taking has mostly been explored in relation to the specific logic of social media, where speed and reach are sometimes favored over accuracy, and mistakes are increasingly perceived as short-lived (Manor 2019, 33). However, digital diplomacy is also an experimental practice in broader terms. When, in 2017, the Danish Ministry of Foreign Affairs announced the world's first "tech ambassador" with a global mandate and physical presence in three time zones (Silicon Valley, Beijing, and Copenhagen), they said that they did not know exactly what the goal was or what they were going to do. The Danish Ministry of Foreign Affairs stated that it was facing the future and was openly prepared to experiment in the development of diplomacy (Jacobsen 2017). Since then, the Danish ambassador and his team have been learning by doing, very much in line with the pragmatist view of a "hands on" approach to learning through creative experiment (cf. Dewey 1929). Diplomatic representation to the tech industry also illustrates the shifting power relations in a digital society where tech companies rather than state actors are acknowledged partners in diplomatic crises. In addition, this illustrates an acceptance of the need to learn by doing in order to keep up with societal developments. The fact that the Danish ambassador's presence in Silicon Valley can be and has been used in unanticipated situations reflects how improvisation in specific cases is constitutive of the "big picture" in international politics (Cornut 2018). 2 While diplomatic representation to the tech industry appears to be spreading (Australia and France now also have "cyber ambassadors"), we do not know to what extent this practice reflects a fundamental shift in diplomacy. Adding this capacity to diplomatic institutions could, eventually, enhance the role of technology in diplomatic practice. However, tech ambassadors could also remain a rare breed of tech-savvy individuals operating on the margins of conventional diplomacy. Here, studying practices of digital diplomacy holds the promise of helping us better understand if and how new practices go from being self-ascribed as experimental to becoming self-evident, normal ways of doing things (cf. Hopf 2018).
Thus, practices can and should be studied through multiple methods of data collection. They can be seen, talked about or read, which in turn encourages a combination or mix of methods of collecting empirical material. The favored method thus far for IR researchers who adhere to practice approaches has been qualitative interviewing (Pouliot and Cornut 2015;Adler-Nissen 2016;Bicchi and Bremberg 2016). Such interviews are often unstructured or semi-structured to account for the informants' descriptions of how they go about their business. While elite interviews appear to be the most common approach, Pouliot (2014, 245) considers ethnographic participant observation to be the best method for embedding practices in their social context. Indeed, ethnography is conceived as the holy grail of studying practices of diplomacy (Neumann 2012;Kuus 2014;Marsden, Ibañez-Tirado, and Henig 2016). The assumption is that observations of how practitioners do politics, preferably in combination with direct and unfiltered accounts, ideally enable the researcher to understand how and why agents act, behave, think, and feel (see Pouliot 2010a). In reality, however, such access is rarely possible.
We argue that digital diplomacy is particularly interesting to practice approaches in IR because it involves practices that can be observed to a better extent than many other diplomatic practices. This is linked to the transparency of the Internet, where for instance communication practices are highly visible, as well as the relative or perceived neutrality of technology compared to other elements of diplomacy. It is arguably less likely that a researcher would be allowed to observe a high-stakes negotiation than the less secretive ways of updating a social media account. While the content of online group discussions may be subject to secrecy regulations, it is not impossible for researchers to gain access to the ways in which such communication technologies are being used. For instance, in their empirically rich account of track-change diplomacy, Adler-Nissen and Drieschova (2019) draw on participant observations made through access to a large number of documents on draft legislation circulated for the Committee of Permanent Representatives (Coreper), which negotiates in preparation for the meetings of the EU Council of Ministers. Access to these documents enabled the researchers to see how word processing software (specifically the track-change function) plays an instrumental part in the negotiation of political agreements. While these practices were contentious and reflected struggles for power, the fact that they could be "seen" rather than "talked about" was a probable factor in the successful methodology of the study.
The visibility of digital diplomacy also opens up avenues for promising research designs where interviews can be combined with observation. In addition to gaining access to otherwise sensitive negotiation documents, some digital diplomacy practice can be seen online and therefore scraped to enable large-N investigations and network analyses. The field also offers numerous opportunities for visual analysis that can reveal new dimensions of negotiations and signaling in diplomacy (Duncombe 2017). To date, there have been no known studies of digital diplomacy using netnography, in the strict sense of the term. Netnography or "virtual ethnography" is the adaptation of ethnography to the digital world. It has been argued that netnography offers unobtrusive and non-influencing monitoring of the communication and interaction of online usage behavior, which is to some extent a contradiction of its ethnographic roots (Kozinets 2010). However, social movement and youth studies have produced interesting results on online/offline relationships using netnographic approaches (Wilson 2016;Barisione, Michailidou, and Airoldi 2019). While diplomacy is a formal practice and therefore less inclined to the type of mobilizing behavior that might be expected from social movements, netnography holds the promise of reaching otherwise elusive audiences for digital diplomacy. The increased role of the public in international politics is a common point of departure in studies of new media in IR, but most studies have been limited to including the perceptions of audiences. Social media is believed to emotionally engage and its impact is measured in terms of likes, reposts, and viewing time. This perceived audience is a source of legitimacy that has increased the stakes when it comes to digitalizing diplomacy. Here, practice approaches have a central role to play in theorizing the role of the audience as both spectator and participant in the practice of digital diplomacy. Studying audiences is methodologically challenging but methods such as netnography offer opportunities to overcome the focus on perception or the quantification of audiences' engagement to understand their relational role in new practices of international politics.
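Because some of this material is publicly visible, the kind of large-N or network analysis mentioned above can be illustrated with a minimal sketch. The example below is only an illustration and is not drawn from the studies cited here: it assumes that posts have already been collected and exported to a hypothetical file posts.csv with author and text columns, and it uses Python with the networkx library as one possible tool; any real study would need to substitute the actual export format of the platform or archive in question.
# Minimal, illustrative sketch: turning exported posts into a directed mention network.
# The file name "posts.csv" and its "author"/"text" columns are assumptions for illustration.
import csv
import re
from collections import Counter
import networkx as nx

MENTION = re.compile(r"@(\w+)")  # crude pattern for @handles in post text

def build_mention_network(path: str) -> nx.DiGraph:
    """Add a weighted edge author -> mentioned account for every @mention in the posts."""
    graph = nx.DiGraph()
    edge_counts = Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            author = row["author"].lower()
            for mentioned in MENTION.findall(row["text"]):
                edge_counts[(author, mentioned.lower())] += 1
    for (source, target), weight in edge_counts.items():
        graph.add_edge(source, target, weight=weight)
    return graph

if __name__ == "__main__":
    network = build_mention_network("posts.csv")
    # Degree centrality gives a first, crude indication of which accounts sit at the
    # centre of the observed interactions (ministries, embassies, journalists, publics).
    ranked = sorted(nx.degree_centrality(network).items(), key=lambda item: item[1], reverse=True)
    for account, score in ranked[:10]:
        print(f"{account}\t{score:.3f}")
Such a network can only describe visible patterns of interaction; as the discussion above makes clear, interpreting what those patterns mean for diplomatic practice still requires the contextual, qualitative work (interviews, observation or netnography) that practice approaches favor.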
While we think that the opportunities outweigh the methodological problems, there are many questions to keep in mind in taking a practice approach to digital diplomacy. These questions invoke the urgency of including theorizations of the microdynamics of social life (Goffman 1959). For instance, technology enables stage management, in the sense that diplomatic actors can project a persona online. At the same time, managing a role is more difficult in a real-time drama where there is no equivalent to the backstage sphere and there is still an offline persona to manage at the same time. While at the outset new media opportunities might be assumed to foster impression management, a role must resonate with the expectations of an audience. It is therefore possible that the online/offline dimension might lead to more unfiltered accounts. The risk of not resonating with the offline persona endangers the accumulation of support from the audience of followers. Furthermore, a "tech ambassador" is a persona and a diplomatic signal of engagement that challenges previous role conceptions in diplomatic practice. For one, the fact that the ambassador has a global mandate differs from the traditional role of ambassadors as local envoys (even though it follows the state practice of appointing ambassadors-at-large or special envoys). This signals that the acknowledgement of co-presence with tech companies has led to changes in both the role and the script that future ambassadors will perform. To think of sector ambassadors with global mandates only in terms of change, however, would be to miss how this development also signals continuity. For example, corporate diplomacy is a phenomenon that has its roots in the early modern world (van Meersbergen 2017). There are plenty of opportunities to explore how change and continuity in the social interactions of diplomacy are currently unfolding, and practice approaches offer ways to think about relevant methods for doing so.
Toward a Research Agenda for Digital Diplomacy Practice
Against the backdrop of the promises and opportunities of practice approaches to digital diplomacy, we suggest ways to develop a research agenda. Efforts to understand digital diplomacy have sometimes emphasized change at the expense of continuity. However, we would like to stress the need to consider digital diplomatic practice as an interplay between continuity and change in this field. This is particularly important if digital diplomacy is viewed as more than a subset of diplomacy. If anything, previous research demonstrates that digital diplomacy practices are increasingly emerging alongside other practices in multiple sites of diplomacy. The breadth of digitalization highlights how digital diplomacy contains more than changes in diplomatic communication. It has led to transformations in both the structural conditions for diplomacy and the agency and working routines of diplomacy on the ground. Therefore, digital diplomacy should be understood as an emergent political practice in increasingly digitalizing societies.
In our view, a particularly useful way of theorizing change and continuity in digital diplomatic practice draws on pragmatist notions of human action (Whitford 2002; Kratochwil 2011; Frankel Pratt 2016). The pragmatist view on practice is well in line with the practice approaches developed by Wenger and Schatzki, but that does not mean that we do not find insights from, for instance, Bourdieu to be useful as well (see above). The key insight here, however, is that there is an alternation between habitual and creative actions because social practices do not completely specify the appropriate or "natural" code of conduct. There is always some "room for manoeuvre", meaning that there might be more than one course of action that is perceived to be naturally appropriate in a given situation, and that in situations that do not correspond to what actors are normally faced with, they are often forced to come up with new ways of doing things (Gross 2009). In line with this pragmatist-inspired understanding of social practices, political change can be thought of in both incremental and more radical terms.
It is in a local context that we can observe the interaction between elements of change and the practices that they reproduce. Since practices are both general and contextually embedded, conceptualizations that do not take account of the social context and the political prerequisites of diplomacy will fall short in analytical terms and remain centered on instrumental migration to the digital sphere. A contextual understanding of diplomacy as political practice, however, does not mean that more general conceptualizations are ruled out. Rather, it is through social causality in a local context that we can hope to generate analytically general insights (Pouliot 2014). In our view, the aim of research that draws on practice approaches must therefore be to strike a balance between thick description and conceptual abstraction.
In the table below, we suggest three areas where practice approaches can collectively contribute to furthering our understanding of digital diplomacy and where we see opportunities for theoretical advancement. First, we identify the questions of diplomatic agency at stake in digital diplomacy and discuss the evolving "habitus" of diplomats in the digital age. Second, we discuss the issue of space and the materiality of new technology where the interrelations between saying and doing become visible online. Finally, we consider the new role of audiences for digital diplomacy and how to theorize their role in the practices they observe, expect, react to, engage with or ignore. The practice approaches that we engage with here are not meant to be understood as exhaustive, and the main purpose of this agenda is to encourage more collective reflection as we imagine that the categories in the table could be complemented with more avenues for future research (Table 1).
Table 1. Areas where practice approaches can further the understanding of digital diplomacy
Space and materiality of new technology. Aim: to uncover, understand, and explain how the digital transformation of diplomatic practices has been possible and how these practices reproduce or challenge offline modes of diplomacy.
Audiences. Questions: How do online audiences contribute to enacting diplomacy? What are the constitutive effects of online visibility? Aim: to explore and assess the empowered role of audiences, spectators, and publics in diplomacy, and to uncover how acts of "seeing", increased visibility, and emotional engagement in diplomacy reproduce or alter the trajectory of diplomatic processes. Practice approaches: symbolic interactionism (Goffman); the public as a political actor (Dewey).
Diplomatic Agency
Digital diplomacy depends on new communication and technical skills. While communication is a cornerstone of diplomacy, the codes, habits, and norms of communication online differ from both the formal and the informal diplomatic communications that take place behind closed doors. Mastering the formatting logic of software, the navigation of big data and the management of relationships with tech companies have become new tasks of diplomacy (Riordan 2019). This has led to a need for diplomatic organizations not only to learn new skills, but also to recruit new competences. In general terms, diplomatic organizations have internalized strategic communication to a greater extent than before, leading to an increased number of professional communicators in diplomatic organizations, a different professional role to diplomats as communicators. This is partly a result of the embrace of social media, but it also reflects the shift toward more proactive news media relations that began before the emergence of social networks online (Pamment 2016). Yet, a majority of these practices reflect a mere migration of the conventional broadcast mode of communication. For example, a Ministry of Foreign Affairs' use of social media may not be indicative of substantial change. Thus far, the need for communicators and training resources does not appear to have had an impact on the selection of new diplomats, but it is likely that this development will eventually challenge longstanding criteria for "good candidates" for diplomatic training. These demands have so far been met mainly in the form of increased digital training, the allocation of resources and the recruitment of communicators (Pamment 2016; Manor and Crilley 2020). As technology gradually takes over the role of rational analysis, future diplomats may be shaped by the dependence on machines and artificial intelligence. Embassies are now expected to perform local online data analysis, and digital disinformation is both a domestic and an international problem for the public diplomacy of states and organizations. Diplomatic organizations today are undergoing a process of professionalization of these skills, and digital diplomacy is therefore concerned with changes in diplomatic agency. In Bourdieu's (1990) practice approach to agency, habitus corresponds to agents' dispositions as a result of lived experiences and socialization. According to Bourdieu, practices change because of improvisations that come naturally to the actors that perform them. Hence, digitalization changes the habitus of diplomats because they naturally adapt to new conditions and new tools. Bourdieu's approach to practice has been criticized because it tends to exclude the role of reflection and learning (e.g., see Adler 2019). While the absence of reflection has sometimes been notable in practices of digital diplomacy (see below), we argue that learning (or lack thereof) is central to understanding the changes in diplomatic agency brought about by digitalization. Indeed, other practice approaches, such as those building on pragmatism, have developed the role of reflection and learning and connected these dimensions to the process through which actions become patterned (Kustermans 2016; Bueger and Gadinger 2018). In the context of digital diplomacy, the fact that habitus is changed not only through new practices, but also through the influx of new agents and new situations stresses the need to pay more attention to diplomatic agency (cf. Bicchi and Bremberg 2016). Changes in agency therefore also refer to the diversification of social background in diplomacy, for instance through the increase of women, and digitalization may intersect with gradual change in gendered practices (cf. Standfield 2020).
We imagine that these encounters with digitalization shape and will continue to reshape the diplomatic profession, and we encourage studies that can map and offer analyses across different sites of transformation. The influx of communicators is only one aspect of how new demands for digital skills are changing diplomacy from within (Hedling 2021). Other fertile grounds for exploration can, for instance, be located in new attempts to shape digital strategies, practices of cybersecurity, and experimentation with artificial intelligence. These sites involve a multitude of actors that engage in processes of shared learning by gradually establishing ways of doing things through their everyday interactions. Practice approaches inspired by the work of Wenger could offer insights into how digital transformation shapes and reshapes communities of practice around these sites (cf. Bremberg 2016).
Even though diplomacy is commonly understood as first and foremost about negotiation and representation among state officials (e.g., Satow 1979 [1917]; Barston 1997; Berridge 2010), the scope of diplomatic practices cannot be limited to actions that are performed by national diplomats. To study digital diplomacy is also to consider diplomatic practices conducted by agents from outside the field of accredited diplomatic organizations. Conveying a diplomatic message through visibility and reach is increasingly considered an act of diplomacy. An abundance of new media opportunities has allowed famous and highly visible individuals to gain access to large international audiences in order to conduct celebrity diplomacy (Wheeler 2013; Bergman Rosamond 2016). As noted above, corporate diplomacy is a longstanding practice, but in the digital age it has come to include tech companies whose concentration of power in international politics is still relatively under-researched and poorly understood. Diplomatic agency is also expanded through the new role of audiences as both spectators of and participants in diplomacy. For instance, Golovschenko, Hartmann, and Adler-Nissen (2018) have studied citizens as curators of digital disinformation. Digital disinformation and efforts aimed at countering it are now commonly considered practices of digital diplomacy. In some ways, when citizens become interlocutors and curators of digital information online, they challenge conceptions of diplomatic agency by participating in and shaping social exchanges rather than merely acting as audiences of communication. Diplomacy is then involved in a process that links performers and audiences in ways that should be of interest to interventions from both the sociology of networks, through the formation of "actants", the relational source of action in ANT (Latour 1996 [1990]), and the sociology of action, through the linking of agency and structural conditions.
Discussions on media diplomacy have debated the role of the news media. News media actors are rarely understood as diplomatic actors precisely because they are still often seen as a medium of communication. Social media instead enables diplomatic actors to bypass the news media and engage directly with audiences. When these audiences actively participate in the activities that define a practice of digital diplomacy, such as digital disinformation, it could be argued that they do so with agency, that is, they actively participate in the making of diplomatic practices. We envision that engagement with ANT could produce innovative ways of approaching the general expansion of agency in diplomacy through in-depth analysis of the local unfolding of actants, for instance, in relation to the role of algorithms and networks in diplomatic use of social media.
The value of practice approaches in relation to digital diplomacy is to keep role conceptions as open questions and seek to understand how "traditional" and "nontraditional" diplomatic agents become part of an evolving configuration of social relations (Constantinou and Der Derian 2010;Sending, Pouliot, and Neumann 2011). It is in this interplay between traditional and non-traditional diplomatic agents that digital diplomacy has emerged as a practice that can be distinguished from online behavior or digital action through the material aspect of doing diplomacy. The struggle for recognition as competent diplomatic performers is a key element in this process, and as such the emergence of digital diplomacy can be seen as part of the larger processes of reconfiguring diplomacy as a social institution currently being explored by IR scholars (e.g., see Benson-Rea and Shore 2012; Cooper, Heine, and Thakur 2013;Kuus 2014;Pouliot 2016). This research agenda also stresses the need to explore agency as an empirical phenomenon in IR (Braun, Schindler, and Wille 2018). Digitalization also highlights the difference and possible tensions between the analytical focus on organizational agency and individual agency in the field of diplomacy. In order to make insightful contributions to this agenda, we argue that studies of digital diplomacy should further the understanding of different types of diplomatic agency at stake and strive to pinpoint the transformations of diplomatic agency in practices of digital diplomacy.
Space and Materiality of New Technology
The "digital" in digital diplomacy refers to both space (virtual space online or sectorial space such as the tech industry) and materiality, as conditions or objects of communication. In addition, the social science research on digital media now often ascribes agency through performativity to new technology, or at least to algorithms and their ability to shape social, political, and economic life (Kitchin 2017; Wilcox 2017).
This presents conceptual challenges, as studies often engage several of these dimensions at the same time. A common approach is to consider the digital as a structural process and digitalization as enabling and constraining diplomatic practice. However, the materiality of technology may also confine the digital to "materials of practice", tools of automation or dissemination (Pouliot 2010b), affordances (Adler-Nissen and Drieschova 2019) or even props used to perform or enhance the presentation of the self (Goffman 1959; Aggestam and Hedling 2020). This multitude of dimensions and levels, however, also offers opportunities for studying practices of digital diplomacy, where the spatial and material aspects always interact to some degree. For instance, studies following the tradition of symbolic interactionism can explore social media as a stage on which interesting performances of diplomacy take place but still maintain the materiality of Twitter in such a performance, by emphasizing the personal or intimate tone it allows. In particular, the processes in which social media reshapes expectations of diplomatic rituals are instrumental for grasping changes to "interaction orders" (Goffman 1959). Such processes of connection and engagement also highlight the ways in which technology assists the embodiment of diplomatic roles and practices, such as ambassadorship or negotiation, through emotions such as trust or esteem. However, social media platforms lend themselves to emotional engagement in different ways through their specific affordances, that is, what they allow their users to do (Bucher and Helmond 2018). Twitter, Weibo, Facebook, and Instagram are, for instance, different socio-technological environments and may therefore afford their users different kinds of practice. The way that these affordances allow new practices to transgress the boundaries between the private and the public may also change expectations of (gendered) intimacy in diplomacy (Standfield 2020). Digital media therefore have both embodied and embodying effects on everyday practices of diplomacy, and we suggest that more research be aimed toward capturing how spatial and material aspects condition new possibilities for practical change in diplomacy.
Furthermore, the Internet enables both material connections (new communication channels) and the connection of materials (technologies as political artifacts) that can be circulated with increasing ease, speed, and reach. In our view, this multidimensionality belongs at the center of practice approaches to digital diplomacy because it relates to how saying and doing interrelate in these practices; that is, how verbal, scripted and told, and non-verbal, shown and performed acts are enmeshed within each other. In addition, using social media for the purpose of diplomatic signaling or a word processing program for the purpose of negotiating agreements are patterns of meaningful action that use technologies (both as space of communication and as material artifacts) to produce material outcomes while at the same time leading to changes in behavior and practical dispositions. It is therefore relevant to maintain an analytical distinction between space and materiality of new technology in order to explore their functions in practices of digital diplomacy.
For these reasons, the process of digitalization requires careful contextual understanding that accounts not only for how the process unfolds locally, but also for the hierarchical order of different levels of entanglement with new technology. The hierarchy of dimensions of change matters because it reflects directions of power. For instance, the way that the Internet facilitates communication through speed, reach, and representation often leads to a top-down view of how digitalization structures diplomatic communication through its opportunities for cognitive shortcuts or visual elements that reproduce power relations and hegemonic norms (e.g., United States' soft power diffusion). The way in which technological affordances through algorithms, software or applications change the ways in which information is shared, exchanged or negotiated, however, instead suggests a bottom-up direction of how practices produce and reproduce power. Adaptation to these practices can be a result of exogenous shocks such as pandemics, cyberattacks or digital disinformation campaigns. The digitalization of diplomacy means that these processes of change in diplomatic contexts take place simultaneously. Attention to local context might tell us whether adaptation from above or exploration on the ground is driving digitalization processes at different moments in time, which is often a reflection of the offline dimensions of local diplomatic practice. For instance, in a recent contribution, Bramsen and Hagemann (2021) offer a micro-sociological analysis of the effects of virtual peace mediation during the COVID-19 pandemic. The changing conditions for face-to-face diplomacy during the pandemic offer ample opportunity to conduct similar studies across diplomatic contexts.
This discussion becomes even more relevant when studies on digital diplomacy bring in assumptions from ANT (e.g., Archetti 2012; Adler-Nissen and Drieschova 2019). The added value of ANT appears to be the ability to extend the relational approach of practice approaches to non-human entities such as technology, while opening up the possibility of symmetry between human and non-human actors in social practices. Hence, agency is conceived as a relational effect (Braun, Schindler, and Wille 2018). It is therefore central to connect agency to practices in ways that show how the relationship with technology leads to practices that would otherwise not exist in any meaningful way. This does not necessarily imply that we need to adhere to post-humanist ideas, because in our view it is not the agency of technology per se that is of interest but rather how technology embeds, conditions, and embodies diplomatic agency. However, we believe that this is a debate to which studies of digital diplomacy, explicitly drawing on insights from ANT and other practice approaches, might be able to make useful contributions.
For these reasons, we encourage studies that engage with key questions of online and offline practices of diplomacy and confront the practical differences between spaces and materials of diplomacy. While we imagine that many directions of inquiry can result from these questions, we suggest that a common objective will be to offer analyses of how digital transformations become possible in their local contexts and how they reproduce or challenge traditional (offline) modes of diplomacy. As we have noted, several practice approaches could offer pathways for such analyses; we point to Schatzki's understanding of situated practical understandings as a point of departure for capturing local changes in "good practice". Furthermore, Schatzki's engagement with materiality may offer pathways to consider how and to what effect the material properties of the digital world are implicated in the digitalization of diplomacy (e.g., see Schatzki 2019).
Audiences
Audiences have thus far not been studied to any large extent in research on digital diplomacy. This is probably a reflection of both theoretical and methodological challenges. IR scholars are not used to conceptualizing the audiences for international politics because their empowered role is a relatively new development. In the broader research field of new media in IR, audiences are increasingly being included in the theorization of, for instance, the role of media in war and conflict (Der Derian 2009;Hoskins and O'Loughlin 2015;Pantti 2016;Jackson 2018;Merrin 2018;Miskimmon, O'Loughlin, and Roselle 2018). The point of departure here is that digital media have both expanded and diversified the audiences for war and conflict. There is a similar assumption in studies on digital diplomacy that diplomats and diplomatic organizations do digital diplomacy to a large extent because of the expansion and diversification of international audiences. Audiences have expanded in terms of both reach and speed, which has led to new opportunities for and constraints on diplomacy, not least when it comes to generating and upholding public legitimacy. Audiences have become more fragmented and must therefore be engaged with differently, depending on the diplomatic goals at stake. At the same time, the traditional boundaries between national audiences have eroded in the online sphere, making it more difficult to appropriate messages. In addition, there is growing competition for audiences online as a growing number of actors look to shape online discussions. The trending topic of digital disinformation further illustrates how audiences are susceptible to such influences.
The relative neglect of the role of audiences in studies on digital diplomacy calls for relational approaches because traditional communication theories fail to grasp the "newness" of digital media due to their inability to conceptualize the changing role of audiences. The common view of the communication process as unidirectional leads to difficulties in accounting for the agency of audiences beyond two-way communication attempts at "listening." Audiences for digital diplomacy do more than produce public discourse for diplomats to listen to, however; they are both objects and subjects in the practices of digital diplomacy. In less obvious ways, audiences also condition the use of emotional cues and images in social media through their perceived and actual engagement. The affordances offered by social media platforms condition the ways in which audiences can be reached and engaged by such elements. The power and authority that these elements perform depend on their resonance with an audience. Therefore, knowing whether, how, or why audiences respond to images (and whether or not the response was intended by its disseminator) is an essential practical understanding for grasping the influence of visual and affective artifacts. Furthermore, the focus on the perceived roles of the audience has seemingly led to an overstatement of public interest in digital diplomacy. While live videos and curated content are increasingly valued practices of digital diplomacy among states' ministries and embassies as well as international organizations, the number of viewers and their level of engagement is often modest, at best (Hedling 2020). It would therefore also be valuable to include the influence of lack of interest or even ignorance among audiences in the understanding of how these practices evolve.
At first glance, practice approaches may seem ill-suited to address this challenge, but all social performances, relationships or processes of emancipation depend on receivers, listeners, spectators, publics or "others", all audiences in some sense. This is perhaps most explicit not only in analyses of social interaction that use stage-related metaphors (Goffman 1959), but also in attempts to theorize democracy, for instance in Dewey's seminal work on the political public (Dewey 1927). The tendency of practice approaches to disregard, underestimate or neglect audiences is challenged by the digital sphere, in which social resonance is expanded in terms of scale and speed. Audiences are at the same time both closer (e.g., the intimacy of social media) and more distant (e.g., big data as raw material) (Couldry and Yu 2018). Apart from early anticipation of digital diplomacy as a process that might lead to increased democratization of diplomacy, the importance of studying public participation in the construction of both knowledge and everyday habits remains central to the challenges facing global governance today. We envision that both Goffman's symbolic interactionism and Dewey's work on the political role of the public are instructive for analyzing key changes in the information environment. For instance, how do digital audiences contribute to negotiating the success or failure of diplomacy? More attention to the active role of spectatorship in both of these traditions could contribute to enhancing the theorizing of audiences in IR.
The methodological challenge for students of IR is of course how to study audiences, how to collect meaningful samples, and how to analyze digital behavior. We think that much would be gained if scholars drawing on insights from practice approaches in the study of digital diplomacy further explored the opportunities for conducting observations of audiences online. In order to do so successfully, however, audiences will need to be brought into the understanding of what constitutes digital diplomacy practice more systematically. In this mission, we believe scholars drawing on practice approaches to study digital diplomacy can push their insights in other directions and explore new ways of conceptualizing audiences in comparison to what has been accomplished up until this point.
Conclusions
In this article, we have suggested that practice approaches offer opportunities for theoretical and methodological advancement in the field of digital diplomacy. We argue that digital diplomacy provides opportunities to study the interplay between continuity and change in international politics, and that recent studies in this field have demonstrated the promise of using practice-oriented approaches. As digital diplomacy becomes an established international practice, we also argue that it is important to resist conceptualizing it merely as a subfield of diplomacy and instead favor integrating its premises with theories of IR. Digital diplomacy today is much more than world leaders' use of Twitter. It is a fundamental dimension of contemporary international politics. The article has sought to demonstrate the opportunities that digital diplomacy opens up for the further development of practice approaches in IR. The visibility, transparency, and visuality of digital media provide ways to observe new practices as they unfold. In order to make meaningful contributions to the intersection of digital diplomacy and practice theory in IR, we call for a more systematic research agenda. By taking stock of the promises, opportunities, and pitfalls of existing digital diplomacy research, we highlight three central areas for fruitful cross-fertilization with different versions of practice approaches that are already being explored by IR scholars.
First, digitalization has already led to changes in diplomatic agency in the sense of changing expectations of both what counts as diplomatic action and who counts as a diplomatic actor. These changes highlight the evolving interaction between "traditional" and "non-traditional" diplomatic agents and that digital diplomacy has emerged as a practice that is distinguished from online behavior or digital action through the material aspect of doing diplomacy. Digital diplomacy can thus be seen as part of the larger process of reconfiguring diplomacy as a social institution. Several recent studies draw on insights from practice theory in IR to make important contributions to our understanding of how digitalization changes diplomatic agency. In order to advance the research agenda, however, we stress the need to further specify how different types of agency are made possible in and through the emerging practice of digital diplomacy, and in so doing to get a better grasp of the stakes involved for those who are actually doing digital diplomacy. The different understandings of what constitutes agency and the relationship to structural conditions can assist in exploring the multiplicity of changes in agency.
Second, there is complexity in the spatial and material aspects of "the digital" that requires careful distinction and more research in order to fully grasp how digitalization influences diplomatic practices. Technologies assist the embodiment and enacting of diplomacy in different ways. They may also constrain its effectiveness. Attempts to carry on diplomacy "as usual" during the COVID-19 pandemic have illustrated how digital tools can overcome spatial obstacles but still fall short of delivering the expected outcomes in the absence of physical, interpersonal, and situated rituals. Lessons from digital adaptation at moments of disruption are therefore valuable to advance our understanding of which digital diplomatic practices eventually become commonsensical while others are gradually abandoned. In addition, the different affordances of social media platforms and the varying outcomes produced by digital diplomacy practice suggest that we have more to learn from the sociotechnological environments in which diplomacy now also takes place. We therefore call for the careful treatment of the digital as a space, material resource, and means of agency.
Finally, practice approaches are challenged by the empowered role of audiences in international politics and how they increasingly affect and constitute aspects of the everyday practice of diplomacy. While practice approaches may not offer sufficient explanatory grounds on which to further our understanding of audiences in IR, we have argued that theorizing the role of audiences matters to our understanding of digital diplomacy practices. The way in which interactions with and among audiences have intensified the public nature of diplomatic practices must be taken into account. This development has changed the role of the public in diplomatic social interaction, and audiences may therefore have more influence on the logics of action in diplomatic practice than before the rise of social media.
In addition to these three central areas, we imagine that other developments in the wider field of IR can reinvigorate this research agenda further. More engagement with feminist and post-Western theories, or with the micro-sociology of emotions and affect, could, for instance, open up new avenues for exploring relationships between the institutional legacies of overarching power relations and digital change in diplomacy.
We believe that there is great potential for theoretical, methodological, and empirical advances to be made through further study of the digital transformation of diplomacy, building on various insights from practice approaches. We invite scholars interested in diplomatic practices and the processes of digitalization to think in terms of how to contribute to such a research agenda, even though they might not think of themselves as primarily involved in practice-based research. We are aware that this article is only a first step and we welcome fellow scholars in IR and beyond to challenge our proposals in the spirit of critical engagement.
Funding
Elsa Hedling gratefully acknowledges funding from the Marianne and Marcus Wallenberg Foundation (project number 2018.0090).
Do extravillous trophoblasts isolated from maternal blood and cervical canal express the same markers?
Christine Kongstad (Department of Obstetrics and Gynecology, Aarhus University Hospital, Aarhus), Ripudaman Singh (rs@arcedi.com; ARCEDI Biotech ApS), Katarina Ravn (ARCEDI Biotech ApS), Lotte Hatt (ARCEDI Biotech ApS), Pinar Bor (Department of Obstetrics and Gynecology, Randers Regional Hospital), Ida Vogel (Center for Fetal Diagnostics, Department of Clinical Medicine, Aarhus University), Niels Uldbjerg (Department of Obstetrics and Gynecology, Aarhus University Hospital, Aarhus)
Introduction
Cell-based non-invasive prenatal testing (cbNIPT) may represent a superior alternative to cell-free fetal non-invasive prenatal testing (cffNIPT). The major argument in favor of cbNIPT is that the nucleated fetal cells contain the entire fetal genome, making it possible to perform a wider range of and more detailed genetic analyses [1][2][3][4][5][6]. Furthermore, unlike cffNIPT, cbNIPT is not significantly affected by increased maternal BMI [5,7]. cbNIPT can be based on fetal leucocytes [8,9] or nucleated red blood cells [10] found in maternal blood, but more promising and relatively well-established methods are based on extravillous trophoblasts (EVTs) circulating in maternal blood [1,3,4,6,11,12] or retrieved from the cervical canal of pregnant women [13][14][15][16][17]. Interestingly, both methods are based on enrichment of EVTs but target different cell surface markers.
As regards EVTs in maternal blood, ARCEDI Biotech ApS holds proprietary technology for immunomagnetic enrichment of EVTs using antibodies against the mesodermal cell surface markers CD105 and CD141, and staining of EVTs with a combination of fluorescent antibodies targeting ectodermal cytoskeletal markers [1,3,6]. In contrast, Trophoblast Retrieval and Isolation from the Cervix (TRIC) is based on EVTs retrieved from the cervical canal of pregnant women using cervical swabs and immunomagnetic enrichment targeting the EVT surface marker, human leukocyte antigen G (HLA-G) [13][14][15][16][17].
As illustrated in Fig. 1, trophoblasts originate from chorionic villi [18,19]. Villous cytotrophoblasts proliferate in cell columns at the tip of the villi and invade the decidual stroma, where they start expressing HLA-G and mesodermal markers [18,20]. These trophoblasts are called invasive interstitial EVTs, which may reach the maternal circulation and potentially the cervical canal by different routes. HLA-G positive interstitial EVTs have been demonstrated in decidua basalis, uterine veins, arteries, and glands [13][14][15][16][17]. Two different routes have been proposed for EVTs reaching the cervical canal. At the margin of the placenta, invasive interstitial EVTs may migrate directly into the uterine cavity, or alternatively invade the uterine glands in decidua basalis and thereby reach the cervical canal during the first trimester [20].
Furthermore, invasive EVTs migrate from decidual stroma to the uterine veins [21] and spiral arteries displacing the vascular endothelium, and lining the vessel wall to establish feto-maternal circulation [19,22]. This invasion is complex and involves transition from an epithelial to mesenchymal phenotype, which allows the EVTs to invade maternal vessels [23,24] (Fig. 1).
It is unknown whether EVTs found in the cervical canal and in maternal blood express similar cell surface markers. Hence, the aim of this study was to investigate whether the antibodies used in a cbNIPT protocol enriching EVTs from maternal blood can also enrich EVTs from maternal cervical swabs.
Participants
Inclusion criteria were nulliparous or parous pregnant women carrying male fetuses in gestational age (GA) 7 + 0 to 11 + 6 referred for surgical termination of pregnancy at Randers Regional Hospital, Denmark.

Blood processing

Thirty mL of maternal blood was obtained in Cell-Free DNA BCT® tubes (Streck, USA) prior to termination of pregnancy. The maximum time from sample collection to sample processing was six hours. The gender of the fetus was determined by Y-chromosome specific real-time PCR from plasma, and only samples from male fetuses were processed further. Gender PCR was performed using a protocol as previously described [1,4,6]. The blood samples underwent cbNIPT blood processing, which includes fixation with paraformaldehyde, followed by red blood cell lysis, washing with phosphate-buffered saline (PBS) (Gibco, pH 7.4 w/o Ca2+ and Mg2+) and magnetic-activated cell sorting (MACS) [1,4]. Figure 2 illustrates the research method.
Swab processing
A cervical swab was obtained by a cytobrush, EndoCervex CytoBrush® (Rovers Medical Devices, The Netherlands), which was inserted 1.5-2.0 cm into the cervical canal and rotated 360 degrees clockwise.
All samples were collected by the same researcher while women were under general anesthesia prior to surgical termination of pregnancy. Nulliparous women were given vaginal misoprostol two hours before the procedure. The swabs were kept for a maximum of 6 hours at 4°C in 10 ml sterile PBS until processing. Dissolution of cervical mucus was performed in accordance with the TRIC protocol as described earlier [14][15][16]. This included adding 250 µl concentrated glacial acetic acid to the ice-cold PBS containing the cytobrush, followed by 5 min incubation at room temperature. The sample was centrifuged at 4°C and the cell pellet washed with ice-cold PBS three times, followed by centrifugation and removal of the supernatant [14][15][16]. Instead of using ethanol for fixation, we used paraformaldehyde-based fixation (shown in Fig. 2).
Enrichment of extravillous trophoblasts
Enrichment and staining of EVTs from blood samples and cervical swabs were performed identically by using Magnetic Activated Cell Sorting (MACS) (Miltenyi Biotech, Germany) in accordance with the cbNIPT protocol, as previously described [1,4]. The cells were incubated with antibodies against CD105 and CD141 conjugated to magnetic beads before immunomagnetic enrichment on MACS columns, and stained with a cocktail of cytokeratin antibodies, before being smeared on FLEX IHC microscope slides (DAKO, USA). After air-drying, the glass slides were fixed in 2% paraformaldehyde, and mounted with Vectashield containing DAPI (4′,6-diamidino-2-phenylindole) (Vector Laboratories, USA). The slides with cell smears were scanned using a fluorescence microscope with an integrated scanner from MetaSystems. Potential EVTs were identified by their morphological characteristics, such as nuclear and cytokeratin staining.
XY chromosome fluorescence in situ hybridization (FISH) was performed to allow identification of cells with a Y chromosome, which indicated fetal origin (shown in Fig. 1).

XY fluorescence in situ hybridization

XY chromosome FISH was performed as previously described [25]. Briefly, chromosome specific repeat probes for CEP X alpha satellite (spectrum green, spectrum aqua) and CEP Y satellite III (spectrum orange, spectrum aqua) (Abbott Molecular, USA) were used for primary hybridization to the X and Y chromosome, respectively. Following hybridization, the slides from blood and cervix samples were scanned for Y chromosome signals indicating fetal origin (shown in Fig. 1). For final validation of EVTs, re-hybridization with reverse probe colors was performed using chromosome specific repeat probes. Positive controls of male cells from epithelial mouth mucosa were processed concurrently to confirm successful hybridization to the Y chromosome.
Results
Blood samples and cervical swabs were obtained from ten pregnant women carrying male fetuses (GA average 9 + 3, range GA 7 + 2 to 11 + 4). Replicating the TRIC protocol for mucus dissolution and subsequently the cbNIPT protocol for enrichment of EVTs, we did not detect any XY cells in the cervical samples; thus no EVTs were identified by this method. We isolated EVTs from all blood samples (average 4.1, range 1 to 9). Figure 3A shows maternal cells with XX FISH signals enriched from a cervical swab due to unspecific binding in the enrichment process. Figure 3B shows a positive control smear with XY FISH signals. Figure 4 shows an EVT isolated from a maternal blood sample with fluorescent cytokeratin staining (A) and subsequent XY chromosome FISH validation (B). Among the EVTs identified in maternal blood by cytokeratin staining, 68% were confirmed by XY chromosome FISH. The remaining cells were either lost or gave no signals due to small and condensed nuclei.
Discussion
We were not able to enrich EVTs from cervical swabs by using a well-established cbNIPT protocol for the enrichment of EVTs from maternal blood using CD105 and CD141. To our knowledge, this is the first study to test a cbNIPT protocol developed for blood samples on cervical swabs.
A strength of this research is that it is based on two well-established methods, TRIC and cbNIPT. The cervical swab enrichment and blood sample enrichment steps were performed identically and simultaneously in the laboratory, and only the first step of swab processing and blood processing differed.
Our negative results may be explained by at least two hypotheses. First, the EVTs in the cervical canal may lack CD105 and CD141. It is likely that the endocervical EVTs and the circulating EVTs differ in antigen expression. The circulating EVTs may well be unique in their expression of both mesodermal and ectodermal markers, since they adapt to the maternal circulation by expressing CD105 and CD141, allowing them to invade maternal vessels (Fig. 1) [1,4]. A second reason for the lack of EVTs in the cervical swabs could be technical. Despite adhering to the well-established TRIC protocol, we cannot exclude that we failed to reproduce the protocols and failed to recover fetal cells from the cervical canal due to simple technical reasons such as sample collection, sample medium or inadequate mucus dissolution.
In conclusion, CD105 and CD141 do not constitute an alternative to HLA-G when it comes to EVT isolation from the cervical canal of first trimester pregnant women. Future research should be directed towards the comparison of endovascular and endocervical EVTs, which could shed light on their unique protein expression patterns.
Competing interests
The author(s) declare no competing interests.
Statement of ethics
All protocols, patient information and consent forms were approved by the Danish Research Ethics Committee (project ID: M-20110305). This research was conducted ethically in accordance with the guidelines for human studies and in accordance with the World Medical Association Declaration of Helsinki.
Consent to participate
All participants have given their written informed consent prior to inclusion in the research study.
Availability of data and material (data transparency)

All data and material gathered can be made available upon request.
Funding information
The experiments in this study were funded by ARCEDI Biotech ApS.
Author Contributions

CK conducted the study. KR, LH, RS and IV supervised laboratory work and helped finalize the manuscript. PB assisted in sample collection and in finalizing the manuscript. NU initiated and supervised the study. CK wrote the manuscript in collaboration with NU and RS. All authors read and approved the final manuscript.

Figure 2: Diagram showing the research method design. The processing steps of blood samples and cervical swabs were identical except for the first step, allowing release of EVTs by red blood cell lysis and mucus dissolution, respectively. Blood processing and swab processing were followed by incubation with antibodies against CD105 and CD141 conjugated with magnetic beads, and subsequent magnetic-activated cell sorting and staining with fluorescent cytokeratin antibodies. Slides containing the enriched cell population were scanned in a fluorescence microscope and potential EVTs were identified. As a final step, fluorescence in situ hybridization (FISH) was performed and potential EVTs were manually validated for Y chromosome signals before being scanned for cells containing Y chromosome signals representing EVTs.

Figure 3: Cervical swab, enriched (A) and positive control (B). Maternal cervical cells of epithelial origin in an immunofluorescence microscope after fluorescence in situ hybridization for the X chromosome (green) and Y chromosome (red) (A). Positive control from male mouth mucosa after fluorescence in situ hybridization (FISH) for the X chromosome (green) and Y chromosome (red) (B).
Model-based intelligent user interface adaptation: challenges and future directions
Adapting the user interface of a software system to the requirements of the context of use continues to be a major challenge, particularly when users become more demanding in terms of adaptation quality. A considerable number of methods have, over the past three decades, provided some form of modelling with which to support user interface adaptation. There is, however, a crucial issue in analysing the concepts, the underlying knowledge, and the user experience afforded by these methods in order to compare their benefits and shortcomings. These methods are so numerous that positioning a new method in the state of the art is challenging. This paper, therefore, defines a conceptual reference framework for intelligent user interface adaptation containing a set of conceptual adaptation properties that are useful for model-based user interface adaptation. The objective of this set of properties is to understand any method, to compare various methods and to generate new ideas for adaptation. We also analyse the opportunities that machine learning techniques could provide for data processing and analysis in this context, and identify some open challenges in order to guarantee an appropriate user experience for end-users. The relevant literature and our experience in research and industrial collaboration have been used as the basis on which to propose future directions in which these challenges can be addressed.
Introduction
User interface (UI) adaptation consists of modifying a software system's UI in order to satisfy requirements, such as the needs, wishes, and preferences of a particular user or a group of users. Adaptation falls into two categories depending on whether the system or the end-user is responsible for making the adaptation [15]: adaptability refers to the end-user's ability to adapt the UI, whereas adaptivity or self-adaptation refers to the system's ability to perform UI adaptation. Personalisation is a particular form of adaptivity, usually for the UI contents, that is based on data originating solely from the end-user, such as personal traits [15]. When the data originate from sources that are external to the end-user, such as other user groups, recommendation occurs instead. Mixed-initiative adaptation [17] occurs when both the end-user and the system collaborate in order to make the adaptation.
UI adaptation should ultimately serve the end-user's benefit, by optimising factors contributing to the end-user's experience. For example, the objective of UI adaptation could be to increase efficiency (by reducing task completion time and error rate or by improving the learning curve), to ensure effectiveness (by guaranteeing full task completion) or to improve the subjective user's satisfaction, but could also be related to other factors, such as hedonic value or user disruption [18].
The challenge is to suggest the right adaptation at the right time in the right place in order to make it valuable for the end-user [4]. Otherwise, adaptation will be prone to limitations that could impede the expected benefits [21], if not thwart them: risk of misfit (the end-user's needs are incorrectly captured or interpreted), user cognitive disruption (the end-user is disrupted by the adaptation), lack of prediction (the end-user does not know when and how the adaptation will take place), lack of explanation (the end-user is not informed of the reasons for adaptation), lack of user involvement (the end-user does not have the opportunity to participate actively in the adaptation process), and risks as regards privacy (the system maintains personal information that the user wishes to keep private).
A number of model-based approaches with which to address these challenges have been proposed to support UI adaptation by the human-computer interaction (HCI) and software engineering (SE) communities. However, no study that summarises the current knowledge, reasoning, and experience gained by these approaches, along with their benefits and limitations, currently exists. These aspects are so numerous that positioning any new approach with respect to the prior work is difficult to achieve. Surveys of UI adaptation [2,13,20] synthesise adaptation concepts, methods, and tools. Most of them are, however, technology driven, limited in scope, or largely surpassed by recent technical progress, which makes them incomplete as regards covering the most recent adaptation approaches or exploring alternatives in a structured manner.
In this paper, we, therefore, present a conceptual reference framework for model-based intelligent UI adaptation that contains a set of conceptual adaptation properties. These properties are structured around the Quintilian questions (what, why, how, to what, who, when, and where) posed for model-based UI adaptation. The objective of these conceptual properties is to facilitate the understanding and comparison of adaptation capabilities, in addition to their integration into the model-based or model-driven engineering of user interfaces of software systems, such as interactive applications, websites and desktop applications. These properties also help to identify open challenges and generate new ideas. In particular, progress in artificial intelligence (AI) and, more specifically, machine learning (ML), provides useful ways in which to support adaptation more effectively. We, therefore, analyse some opportunities that these fields may bring to model-based UI adaptation.
In Sect. 2, we present the current state of UI adaptation, while in Sect. 3, we define the conceptual framework for UI adaptation in order to locate the conceptual properties that support adaptation with respect to the Quintilian questions. These properties target the needs of two major stakeholder groups: they help system engineers to incorporate suitable UI adaptation mechanisms into model-based development more systematically, and they help practitioners to understand and compare UI adaptation methods. We conclude this paper in Sect. 4, with a call for action that includes a discussion of open challenges and future directions.
Current state of model-based UI adaptation
Pioneering work on UI adaptation started with Browne et al. [8], who used Moran's Command Language Grammar (CLG) to structure UI specifications into distinct aspects, ranging from tasks and abstract concepts to syntactic and physical components. These authors concluded that the major strength of CLG as regards UI adaptation is the principle of separation of concerns. Although this principle is enforced in CLG, it is not obvious how to easily propagate all specification aspects into the final code. These authors additionally state that CLG has very limited facilities with which to express UI presentation and behaviour. Dieterich et al.'s taxonomy [13] has long been considered a seminal reference when classifying different types of adaptation configurations and methods. This taxonomy was obtained after analysing more than 200 papers, after which the UI adaptation methods found were structured in four stages: (1) the initiative, which specifies the entity, end-user or system that expresses the intention to perform adaptation; (2) the proposal, which suggests those proposals that could be applied to adaptation, given the current context of use; (3) the decision, which specifies those adaptation proposals that best fit the requirements imposed by the context of use, and (4) the execution, which is responsible for enacting the adaptation method previously decided on.
However, López-Jaquero et al. [23] identified some shortcomings of this taxonomy: it does not support an explicit collaboration between entities (i.e. the user and the system, or even a third party) and it is restricted to the execution only. These authors specialised Norman's theory of action in the Isatine framework, which structures the UI adaptation into seven stages describing how the adaptation is carried out and by whom, thus addressing some of the Quintilian questions. The UI adaptation is understood to be a sequence of seven stages (Fig. 1, in which the user's parts are depicted in blue, while the system parts are depicted in green): (1) an entity obtained from UI adaptation goals that is formally expressed in the system or informally maintained in the enduser's head is established; (2) this entity takes the initiative in order to start a UI adaptation; (3) based on this input, some UI adaptation is subject to a specification so as to enable it to express how the adaptation will be conducted; (4) the UI adaptation selected is then applied; (5) a transition from an initial state before adaptation to a final state after adaptation is subsequently ensured in order to preserve continuity; (6) the results of this output are then subjected to interpretation by an entity based on the feedback provided by the system, and (7) the interpretation eventually leads to an evaluation of whether the initial goals established for the adaptation are (partially or totally) met. Depending on this evaluation, a new cycle could be initiated until the final goals are achieved. Paramythis et al. [32] proposed a framework that can be used to guide a layered evaluation of adaptive interactive systems. This approach decomposes the system into layers (i.e. collect input data, interpret the collected data, model the current state of the "world", decide upon adaptation and apply adaptation) that can be evaluated independently using a set of formative methods. The authors then addressed the aforementioned Quintilian questions for web sites and hypermedia systems, but not for any type of UI.
The taxonomy proposed by McKinley et al. [25] addresses how software could be adapted by employing a composition in which algorithmic or structural parts of the system are exchanged for others in order to improve the system's fit to its current context of use. This adaptation is based on the separation of concerns into the functional behaviour of the system and cross-cutting concerns, and on computational reflections expressing different aspects of a system, component-based design practices that enable the development of the different parts of a system separately, and a middleware that usually provides the compositional capabilities. The taxonomy is structured in three dimensions: "how to adapt", which corresponds roughly to Dieterich's proposal stage, "where to adapt", which is implicitly included in the execution stage, and "when to adapt", which was not originally covered, signifying that McKinley's taxonomy complements Dieterich's taxonomy. Several surveys on UI adaptation have been published in order to synthesise adaptation concepts, methods and tools. For example, Van Velsen et al. [41] presented a systematic literature review on the user-centred evaluation of adaptive and adaptable systems. Akiki et al. [2] presented a qualitative study analysing adaptive UI development systems that employed a model-driven engineering approach for UI adaptation. Motti et al. [28] conducted a survey investigating whether and how practices concerning model-based contextaware adaptation are perceived and adopted by practitioners. While most stakeholders recognise the relevance and benefits of the adaptation methods, they are still not considered or partially adopted during software development.
Finally, Fig. 2 depicts the PDA-LDA cycle employed to structure the UI adaptation according to the theory of control perspective [7]: each entity, the end-user (depicted in blue) or the system (depicted in green) enters a cycle of three stages: the perception (P) of the context before adaptation, the decision (D) to adapt and the action (A) taken in order to adapt. Unlike other frameworks that emphasise the adaptation steps, this cycle acknowledges that both the end-user and the system act symmetrically with these stages, which should be covered to some extent.
UI adaptation is, therefore, treated independently of any implementation method, which is desirable. However, in the context of model-based/driven engineering as a particular method for adaptation, there are few or no explicit recommendations on how to structure software components in order to support intelligent UI adaptation. The existence of some models, such as the user model, is frequently mentioned, but their structure and usage are not made sufficiently explicit to help modellers and developers to implement adaptation [1].
On the one hand, UI adaptation methods have been investigated in human-computer interaction (HCI), but without making the means required to practically implement them sufficiently explicit.
On the other, the software engineering (SE) community has advanced as regards principles and technologies with which to support the PDA-LDA cycle on the system side (e.g. the MAPE-K adaptation loop, models at runtime), but often relegates the perception, decision and action stages on the end-user side, typically addressed in HCI, to a secondary role.
Conceptual framework and properties for UI adaptation
In this section, we introduce a conceptual reference framework for intelligent model-based UI adaptation. The purpose of the framework is twofold: (1) to help software engineers to properly decompose the application into layers and modules that are suitable for supporting model-based UI adaptation, and (2) to provide a property-based classification of existing approaches in order to identify some trends and research directions.

The conceptual reference framework

Figure 3 depicts our conceptual framework with which to support intelligent UI adaptation based on MDE principles (e.g. abstraction, separation of concerns, automation) and technologies (e.g. modelling and metamodelling, model transformation). This framework is decomposed into four parts:
- The context of use, which represents the actor(s) that interact with their platform or device in any physical environment [9]. For example, a business person interacting with a smartphone in a busy airport represents a context of use that is radically different from a tourist browsing a laptop in a hotel. In order to support context-aware UI adaptation, a context probe (e.g. a camera, a sensor) senses it and abstracts relevant data into useful model fragments corresponding to the context of use: a user model captures all data pertaining to the end-user (e.g. gender, age, interaction history, abilities, preferences, emotional state, past experience), a platform model captures data that are useful as regards characterising the target platform (e.g. screen resolution, sizes, interaction capabilities, CPU availability), and an environment model captures any environmental data that could influence the UI (e.g. location, stationary vs. mobile conditions, light, noise level, physical configuration, organisational and psycho-social constraints). For example, Figs. 4 and 5 reproduce two UIs of a trip planner in two different contexts of use.
- The software system, which is usually an interactive application, consists of the semantic core component that contains the business logic functions appertaining to the application domain. These functions are executed from the intelligent UI of this software, which could exploit up to four models [9]: a "task and domain" model captures domain abstractions, usually in the form of an object-oriented diagram or a UML class diagram or any kind of domain model, while the "task" model captures how the end-user sees the interaction with the semantic core, independently of any implementation or technology. The task and domain models are transformed into an "abstract UI" model for a given context of use, but still without making any assumptions about the interaction modality and target platform. The "abstract UI" becomes a "concrete UI" when design options are decided for a particular modality, e.g. graphical, vocal, tactile, haptic or tangible, and for a particular platform, e.g. a smartphone, a tablet, a wall screen. A final UI is obtained from this concrete UI model by means of either model interpretation or model-to-code transformation. Any transformation between two levels can exploit the contents of the context model. Examples of these are: a forward engineering transformation (depicted as a downwards arrow in the software system), a reverse engineering transformation (depicted as an upwards arrow), or a self-modification (depicted as a loop), all of which can generate a subsequent model based on the previous one by exploiting the context model.
- The Intelligent UI adaptor, which consists of six components, two of which are mandatory. At the core is the adaptation manager, which is responsible for performing any complete adaptation process from its initiation to its completion, such as according to the Isatine framework. This manager, therefore, stores and maintains adaptation parameters, which regulate the adaptation process with variable parameters, such as the level of automation, the frequency, the priority of adaptation rules or the preferred adaptation strategies. Adaptation parameters can be application-independent, such as the level of automation, or application-dependent, such as those shown in Figs. 6 and 7.
The adaptation manager has its own UI, denominated as an adaptation manager UI, which is sometimes referred to as a meta-UI (the UI above the original UI [11]) or extra-UI (the UI external to the UI of the application [26]). This UI enables the end-user to access and update the adaptation parameters and to conduct the whole adaptation process interactively so as to specifically perform adaptation operations, review them, accept them or reject them. In order to clearly differentiate the UI of the adaptation manager from that of the software system, it should be located in a separate location. Depending on the parameters, the adaptation manager executes the adaptation logic contained in the adaptation engine, which is usually implemented in the form of adaptation rules. The adaptation manager can call the adaptation transitioner in order to convey to the end-user the transition between the status before and after adaptation. For example, animated transitions [12] apply morphing techniques to show this step and to preserve continuity (Fig. 8). If necessary, the transitioner provides the end-user with information on why, how and when the adaptation is performed by requesting the adaptation explainer, which is responsible for explaining and justifying why any adaptation proposal or step will be executed [16]. Finally, an adaptation machine learning system can monitor the whole process over time, learn what the good adaptations are or which are preferred by the end-user, and recommend them in the future [7]. For example, TADAP [27] suggests adaptation operations based on the user's interaction history that the end-user can accept, reject, or re-parameterise by employing Hidden Markov Chains.
- The external sources contain any form of information that can be exploited in order to support and improve the adaptation process: data concerning individual items, information for semantically related data, knowledge gained from exploiting the information within a certain domain, and wisdom when knowledge can be reproduced in different domains. These sources are typically held by agents that are external to the software system, such as experts, brokers, recommenders, or any third party source. For example, when no adaptation proposal can be obtained, an external source may be required in order to attain one.
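To make this four-part decomposition more tangible, the following minimal Python sketch wires together the mandatory components named above (context models abstracted by a probe, an adaptation manager holding parameters, and a rule-based adaptation engine, with transitioner and explainer hooks reduced to simple notifications). It is our own illustration rather than any published implementation; every class, attribute and rule name in it is assumed for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Context-of-use fragments abstracted by a context probe (illustrative attributes only).
@dataclass
class UserModel:
    age: int = 30
    prefers_selection_over_input: bool = True

@dataclass
class PlatformModel:
    screen_width: int = 1920
    is_touch: bool = False

@dataclass
class EnvironmentModel:
    ambient_light: str = "normal"   # e.g. "dark", "normal", "bright"

@dataclass
class ContextOfUse:
    user: UserModel
    platform: PlatformModel
    environment: EnvironmentModel

# An adaptation rule maps the context of use to an optional adaptation operation.
Rule = Callable[[ContextOfUse], Optional[str]]

@dataclass
class AdaptationParameters:
    automation_level: int = 5        # cf. the automation degrees discussed later in the paper
    rule_priority_threshold: int = 0

@dataclass
class AdaptationManager:
    """Core of the intelligent UI adaptor: runs the engine (a list of rules) and
    notifies the optional transitioner/explainer components, here reduced to prints."""
    parameters: AdaptationParameters
    rules: List[Rule] = field(default_factory=list)

    def adapt(self, context: ContextOfUse) -> List[str]:
        operations = [op for rule in self.rules if (op := rule(context)) is not None]
        for op in operations:
            print(f"[transitioner] animating transition for: {op}")
            print(f"[explainer] {op} was applied because of the current context of use")
        return operations

# Example rule: switch to a dark theme when the environment model reports low light.
def dark_theme_rule(ctx: ContextOfUse) -> Optional[str]:
    return "apply-dark-theme" if ctx.environment.ambient_light == "dark" else None

manager = AdaptationManager(AdaptationParameters(), rules=[dark_theme_rule])
context = ContextOfUse(UserModel(), PlatformModel(is_touch=True), EnvironmentModel("dark"))
print(manager.adapt(context))        # ['apply-dark-theme']
```

Running the sketch applies the single dark-theme rule because the environment model reports low light; a real adaptation engine would of course hold many more rules and consult the adaptation parameters before firing them.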
The conceptual properties for UI adaptation
The framework enables the location and definition of conceptual properties for model-based UI adaptation. Some properties were taken from existing frameworks, and others are based on our own experience of the topic. These properties can be structured according to the Quintilian questions [28,32]:
Who
This dimension refers to the actor that is responsible for carrying out the adaptation process from beginning to end. This actor could be the end-user, the software system itself (and by metonymy, its UI), or a third party, such as an external broker, the designer, the developer, any external stakeholder, or the crowd [29]. This dimension is defined by the following property: 1 Adaptation responsibility, which can be shared between different actors and may vary depending on the adaptation steps that are carried out. According to the Isatine framework [23], the adaptation steps followed by a particular approach can be grouped according to their impact on the execution and evaluation degree of the approach. The execution degree can be assessed by analysing who performs the following three steps:
- Initiative, which refers to who detects that there is a need to adapt the user interface. The adaptation process can usually be initiated by the user (U), the system (S) or a third party (T). For instance, the user can trigger an adaptation by selecting an element in the user interface or the system can decide that an adaptation is needed by inferring it from a change in the context of use.
- Decision, which refers to who makes the decision to adapt the UI (i.e. the user, the system, or third party). The decision is concerned with the identification of what adaptation proposals best fit the need for the adaptation detected.
- Application, which refers to who is responsible for applying the adaptation, i.e. U, S, or T, or any combination.

Fig. 8 Animated transition from the initial state before adaptation to the final state after adaptation [12]

The evaluation degree can be assessed by analysing who performs the following three steps:
- Transition, which refers to how the transition is performed from the original UI to that which is adapted. This criterion indicates whether the end-user is able to perceive how the adaptation is conducted, i.e. whether the user is aware of the intermediate steps taken when adapting the user interface.
- Interpretation, which refers to the user's ability to understand both the adaptation results and the adaptation execution itself.
- User feedback, which refers to the ability of the approach to provide feedback about the quality of the adaptation.
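As a hedged illustration of how the adaptation responsibility property could be recorded when classifying an approach, the sketch below assigns U, S, T, or a combination, to each Isatine stage. The enum, the stage names as Python identifiers, and the example classification are ours, not part of the original framework tooling.

```python
from enum import Flag, auto

class Actor(Flag):
    USER = auto()         # U
    SYSTEM = auto()       # S
    THIRD_PARTY = auto()  # T

# Responsibility per Isatine stage; flag combinations model mixed initiative.
STAGES = ("goal", "initiative", "specification", "application",
          "transition", "interpretation", "evaluation")

def describe(responsibility: dict[str, Actor]) -> None:
    for stage in STAGES:
        who = responsibility.get(stage, Actor.USER)
        print(f"{stage:>14}: {who}")

# Hypothetical classification of a mixed-initiative approach: the system specifies and
# applies adaptations, while the user keeps the goal, interpretation and evaluation.
example = {
    "goal": Actor.USER,
    "initiative": Actor.USER | Actor.SYSTEM,
    "specification": Actor.SYSTEM,
    "application": Actor.SYSTEM,
    "transition": Actor.SYSTEM,
    "interpretation": Actor.USER,
    "evaluation": Actor.USER,
}
describe(example)
```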
What
This dimension refers to what is adapted, which is further characterised through the use of five conceptual properties: 2 Adaptation target, which refers to which UI part is subject to adaptation: its presentation (layout), its dynamic behaviour, its navigation, its contents (e.g. text, images, videos), or any combination. For example, after identifying the end-user, Diffie [37], highlights parts of a website that have changed since the last visit. 3 Adaptation granularity, which refers to the smallest UI unit that is subject to adaptation. The adaptation unit could cover the UI presentation, the dialogue, the navigational flow or the contents. The following units may be subject to adaptation: -Within widgets: the adaptation is applicable within an interaction object. For example, a list box is replaced with a drop-down list (Fig. 8). -Across widgets, within a group: the adaptation is applicable to widgets within the same group. -Across groups within a container: the adaptation is applicable to groups of widgets within the same container. -Across containers, within a software system: the adaptation is applicable to all groups within a software system, e.g. a single application. -Across software systems: the adaptation is applicable to software systems. For example, a particular adaptation is always applied to all applications used by a person.
A model-based approach may support the adaptation of one or more UI units. For example, Sottet et al. [36] support adaptations across different interaction objects (widgets), across groups within a container, and across containers, within a software system. 4 UI Type, which refers to the UI type that is subject to adaptation depending on its interaction modality: -Graphical: concerns only the graphical part. For example, a rich internet UI is rendered as a vectorial graphical interface [24]. Nomadic gestures [40] adapt command gestures for a particular user that are transferable from one software system to another. 5 UI Modality, which expresses how many modalities are incorporated into the UI adaptation, as follows: -Monomodal: the approach supports only the adaptation of a single UI type. -Bimodal: the approach supports only the adaptation of two UI types together. -Multimodal: the approach supports the adaptation of several UI types combined. 6 Context Coverage, which expresses which part of the context model is exploited for UI adaptation: the user model (e.g. user profile, preferences, goals, tasks, emotional state, physical state), the platform model (e.g. screen resolution, browser, battery) and/or the environment model (e.g. location, noise, light). For example, a business traveller who rents a car via a smartphone in a noisy airport is considered as one context of use, and a tourist who books a car on a laptop while sitting on a sofa at home is considered as another context of use. Figure 4 covers the three models: the user who is a tourist, the platform detected as a tablet and the environment detecting dark conditions. Any variation in the models involved in the context model can initiate a contextual variation that will or will not be reflected via a UI adaptation. A small contextual variation could be considered not significant enough to trigger a UI adaptation, and it is not always desirable or advisable to perform such an adaptation for every slight contextual perturbation.
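The remark that not every slight contextual perturbation should trigger an adaptation can be sketched as a simple significance filter over the environment model. The attribute names and the thresholds below are invented for illustration; in a real system they would themselves be adaptation parameters.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentSnapshot:
    lux: float          # ambient light level
    noise_db: float     # ambient noise level

def significant_change(before: EnvironmentSnapshot,
                       after: EnvironmentSnapshot,
                       lux_threshold: float = 50.0,
                       noise_threshold: float = 10.0) -> bool:
    """Return True only when the contextual variation is large enough to justify
    disturbing the end-user with a UI adaptation (thresholds are illustrative)."""
    return (abs(after.lux - before.lux) >= lux_threshold
            or abs(after.noise_db - before.noise_db) >= noise_threshold)

previous = EnvironmentSnapshot(lux=300, noise_db=45)
slight = EnvironmentSnapshot(lux=290, noise_db=47)   # small perturbation: ignored
dark = EnvironmentSnapshot(lux=20, noise_db=46)      # large drop in light: adapt

print(significant_change(previous, slight))  # False
print(significant_change(previous, dark))    # True
```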
Why
This dimension is concerned with justifying the reasons why a UI adaptation is carried out. This depends on the user's goals and is defined by two properties: 7 Adaptation rationale, which refers to the reason why the adaptation is required and, more specifically, what new requirements need to be satisfied through the adaptation. For example, an end-user expressing a preference for data selection rather than data input will see some sort of UI adaptation based on this preference. 8 Adaptation QAs, which refer to the quality attribute(s) that should be impacted by the UI adaptation process. For example, the ISO/IEC 25010 standard for software product quality [19] can be used as a reference to specify the quality attributes to be guaranteed or improved by any UI adaptation, such as usability, UI aesthetics, flexibility and portability. Since the UI adaptation is, in principle, performed for the ultimate benefit of the end-user, and not necessarily the software system, quality attributes such as accessibility and continuity are often oriented towards the end-user. UI plasticity [10] also represents a frequent quality attribute, as it expresses the ability of a UI to adapt itself depending on contextual variations while preserving usability.
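One hedged way to operationalise the adaptation QAs is to score candidate adaptation proposals against a weighted set of quality attributes. The attribute names below follow ISO/IEC 25010 only loosely, and the weights and per-proposal scores are fabricated purely for the example.

```python
# Score candidate adaptation proposals against weighted quality attributes.
# Weights and per-proposal scores are illustrative, not measured values.
weights = {"usability": 0.5, "ui_aesthetics": 0.2, "accessibility": 0.3}

proposals = {
    "enlarge-touch-targets": {"usability": 0.8, "ui_aesthetics": 0.4, "accessibility": 0.9},
    "collapse-navigation":   {"usability": 0.6, "ui_aesthetics": 0.7, "accessibility": 0.5},
}

def score(quality: dict[str, float]) -> float:
    return sum(weights[qa] * quality.get(qa, 0.0) for qa in weights)

best = max(proposals, key=lambda name: score(proposals[name]))
for name, quality in proposals.items():
    print(f"{name}: {score(quality):.2f}")
print("selected:", best)   # the proposal that best serves the targeted quality attributes
```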
Where
This dimension is concerned with where the adaptation takes place and is defined by four properties: 9 Adaptation location, which refers to the physical location of the intelligent UI adaptor (Fig. 3) in the overall architecture of the software system, as follows:
- Client-side: when located inside the software.
- Server-side: when located outside the software system, which is typically the case in cloud computing.
- Proxy-side: when encapsulated in a proxy component to ensure some independence. For example, Fig. 5 depicts a UI in which weather forecasts were retrieved from a web service in XML and fed back into a proxy to decide how to present these data based on the adaptation parameters specified in Fig. 7. Locating this adaptation strategy inside the software would create a certain dependence between the UI and the web service.
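The proxy-side arrangement just described can be sketched as follows. The XML shape, the parameter names and the presentation choices are hypothetical; the point is only to show how the presentation decision can live in a proxy component rather than in the UI or in the web service.

```python
import xml.etree.ElementTree as ET

# Hypothetical payload returned by a weather web service.
WEATHER_XML = """
<forecast city="Louvain-la-Neuve">
  <day name="Mon" temp="21" condition="sunny"/>
  <day name="Tue" temp="12" condition="rain"/>
</forecast>
"""

# Hypothetical adaptation parameters (in the spirit of Fig. 7): they drive presentation.
ADAPTATION_PARAMETERS = {"presentation": "table", "max_days": 1}

def proxy_adapt(xml_payload: str, params: dict) -> str:
    """Proxy component: turns raw service data into a presentation decided by the
    adaptation parameters, keeping the UI independent of the web service format."""
    root = ET.fromstring(xml_payload)
    days = root.findall("day")[: params["max_days"]]
    if params["presentation"] == "table":
        rows = [f"| {d.get('name')} | {d.get('temp')}°C | {d.get('condition')} |" for d in days]
        return "\n".join(["| Day | Temp | Condition |"] + rows)
    # Fallback: plain sentences, e.g. for a vocal rendering.
    return " ".join(f"{d.get('name')}: {d.get('condition')}, {d.get('temp')} degrees." for d in days)

print(proxy_adapt(WEATHER_XML, ADAPTATION_PARAMETERS))
```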
The adaptation location directly influences how feedback loops are introduced into the components (Fig. 9). A software system devoid of adaptation (Fig. 9a) benefits from retroactive feedback only between the software system and the end-user. A software system with an adaptation engine (Fig. 9b) has two feedback loops: between the user and the adaptation engine and between the user and the system. Finally, three feedback loops are possible for an intelligent UI (Fig. 9c), which require the system to be decoupled from its intelligent UI. Current research efforts in the SE community [42,3] are focused on providing strategies and facilities with which to support UI adaptation based on one control loop with an adaptation engine, and there is a shortage of approaches that support more intelligent strategies based on two control loops with an adaptation manager. 10 Adaptation scope level, which refers to the level at which the adaptation process occurs, which is based on the three levels proposed by Nierstrasz and Meijler [30]:
- Framework level, when the process occurs at the level of a generic software architecture together with a set of generic software components that may be used to create specific software architectures. For example, Nivethika et al. [31] developed a UI adaptive framework by exploiting an inference engine with the purpose of adapting a UI based on user actions in one application to be propagated to other applications.
- Class level, when the process occurs at the level of the components belonging to specific software systems or provided by frameworks for this purpose. For example, Yigitbas et al. [42] presented a model-driven approach for self-adaptive UIs that is applied at the model level of a particular UI. It supports the specification of an adaptation model that contains abstract UI adaptation rules in alignment with the IFML abstract UI modelling language.
- Instance level, when the process occurs at the level of a running software system. For example, Akiki et al. [3] presented a model-driven UI adaptation approach that is applied only at the instance level of a particular UI.

Fig. 10 An adaptation manager with which to distribute tasks to platforms [36]

11 UI Adaptation level, which refers to the abstraction level defined in the Cameleon Reference Framework (CRF) [9] at which the adaptation occurs as represented in the intelligent UI (Fig. 3): "task and concepts", "abstract UI", "concrete UI", or "final UI", or several levels simultaneously. CRF is a unified UI reference framework with which to develop multitarget UIs that is particularly suitable for an MDE approach [14]:
- Task/domain model: when a task model and/or a domain model are exploited in order to perform UI adaptation. For example, TADAP [27] maintains a task model of the end-user's activity and suggests a final UI adaptation depending on its parameters (Fig. 11). UbiDraw [39] consists of a vectorial drawing application that adapts its UI by displaying, undisplaying, resizing, and relocating tool bars and icons according to the current user's task, task frequency, criticality, importance, or the user's preference for a particular task. The domain model denotes the application's universe of discourse and can typically be represented using a UML class diagram. Figure 10 depicts a UI of the adaptation manager enabling the end-user to distribute tasks (represented in a task model) to various platforms (represented in platform models), such as an HTML UI to one browser and another XUL UI to another browser with another rendering.
- Abstract user interface model: this specifies the user's interactions with the UI without making any reference to any specific technology (i.e. modality). This model is typically represented with a User Interface Description Language (UIDL).
- Concrete user interface model: this specifies the user's interactions with the UI with explicit reference to a specific technology, e.g. a graphical UI for a website or a vocal UI on a smartphone. For example, ReversiXML [6] reverse engineers the HTML code of a web page into a concrete UI model that is then derived for another platform.
-Final user interface model: this represents the actual UI produced by any rendering engine, i.e. by interpretation or by model-to-code generation. 12 Adaptation Domain, which refers to the domain of human activity in which the adaptation takes place. A model-based UI adaptation approach can be general purpose (independent of the application domain) or devised for a specific domain (e.g. smart home, Internet-of-things, ambient assisted living, smart cities, ERP system). We believe that the application domain may influence the adaptation rationale or the adaptation QAs that should be ensured by a particular approach. For example, the objective of Akiki et al.'s approach [3] is to improve the UI usability of enterprise applications, such as ERP systems, by providing end-users with a minimal feature-set and an optimal layout.
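To make the CRF levels tangible, the following sketch derives a concrete widget from an abstract interaction unit using a platform model. The element names and the mapping rules are ours, standing in for what a UIDL-based model-to-model transformation would express; they are not the CRF-defined transformation itself.

```python
from dataclasses import dataclass

@dataclass
class PlatformModel:
    is_touch: bool
    screen_width: int

@dataclass
class AbstractInteractionUnit:
    """Abstract UI level: says *what* the user does, not *how* (no modality or platform)."""
    name: str
    kind: str            # e.g. "select-one", "input-text"
    options: int = 0

def to_concrete(unit: AbstractInteractionUnit, platform: PlatformModel) -> str:
    """Concrete UI level: commits to a graphical widget for the given platform.
    The mapping below is a toy rule set for illustration only."""
    if unit.kind == "select-one":
        if platform.is_touch and unit.options <= 5:
            return f"radio-group({unit.name})"        # few options, easy to tap
        if unit.options > 20:
            return f"searchable-combobox({unit.name})"
        return f"drop-down-list({unit.name})"
    if unit.kind == "input-text":
        return f"text-field({unit.name})"
    return f"label({unit.name})"

unit = AbstractInteractionUnit(name="car-category", kind="select-one", options=4)
print(to_concrete(unit, PlatformModel(is_touch=True, screen_width=390)))    # radio-group
print(to_concrete(unit, PlatformModel(is_touch=False, screen_width=1920)))  # drop-down-list
```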
When
This dimension is concerned with when the adaptation takes place. This decision is not trivial, since the frequency of adaptation affects the system usability. It is defined by two properties: 13 Adaptation type. The UI adaptation type is said to be static when its process takes place during design (e.g. prototyping, sketching), development (e.g. compile), link or load time, dynamic, when its process takes place during runtime, or hybrid when both are combined. For example, in the Yigitbas et al. approach [42] a rule-based execution environment supports the UI adaptation at runtime. 14 Adaptation time, which refers to the exact moment of time at which the UI adaptation occurs, which could be at one specific moment (single-step) or distributed throughout several moments of time (multi-step). In order to further characterise this conceptual property, we rely on the adaptation dimensions proposed by McKinley et al. [25], which result from a survey of adaptive systems. It is said to be hardwired (when the UI adaptation is embedded in the code of the software application, typically the UI code), customisable (when the UI adaptation enables some degree of pre-computed freedom), configurable (when the UI adaptation technique could be configured before executing it), tunable (when the UI adaptation technique could fine-tune the UI at run-time without modifying its code), or mutable (when the UI adaptation technique subsumes the run-time code modification of the software system, namely the UI code). McKinley et al. [25] mention that hardwired, customisable, and configurable cases are static, while tunable and mutable cases are, by definition, dynamic. For example, MiniAba [34] uses generative programming to automatically regenerate a new C++ project from dynamic specifications, which are thus dynamic and mutable.
How
This dimension is concerned with how the UI adaptation is performed. One critical issue is to what extent the software system can access the various aforementioned models to optimise the UI adaptation and to exploit them. It is characterised by five properties: 15 Adaptation method, which refers to the software engineering method used to adapt the UI. An adaptation method can be model-based/driven or it can be combined with other methods, such as aspect-oriented modelling, componentbased design, computation reflection (i.e. a programme's ability to reason about, and possibly alter, its own behaviour), dynamic interconnection, higher-order functional composition, higher-order modelling, macro-command expansion, mashup, modelling or programming by example, syntactical expansion of parameterised component. This paves the way towards investigating the effectiveness of combining these techniques with the purpose of improving the existing model-based UI adaptation approaches. For example, Blouin et al. [5] presented an approach that combines aspect-oriented modelling with property-based reasoning to control complex and dynamic user interface adaptations. The encapsulation of variable parts of interactive systems into aspects permits the dynamic adaptation of user interfaces, and the tagging of UI components and context models with QoS properties allows the reasoner to select the aspects best suited to the current context. 16 Adaptation automation degree, which refers to the level to which the UI adaptation is automated. There is a wide range of possible adaptation levels between adaptability (when UI adaptation is performed entirely manually by the end-user) and adaptivity (when UI adaptation is performed entirely by the system), which we defined as follows based on [33] (see Fig. 12): -Level 1. Adaptability (fully manual): the UI adaptation is performed entirely by the end-user. -Level 2. Proposability: the intelligent UI manager proposes certain decisions that should be made in order to execute actions towards UI adaptation to be performed by the system and the end-user decides. -Level 3. Narrowing: the intelligent UI manager sorts the proposed decisions according to certain criteria to facilitate the end-users' decision. For example, Fig. 11 proposes a suite of six new layouts in decreasing order of performance based on past user actions and parameters. -Level 4. Identification: the intelligent UI manager identifies the best decision for the user to make from among all the proposals. -Level 5. Execution: the intelligent UI manager executes the decision made by the end-user. For example, the enduser selects one of the new layouts presented in Fig. 11 to replace the existing one. -Level 6. Restriction: the intelligent UI manager postpones the UI adaptation for a certain amount of time. If the enduser does not react, the UI adaptation will be processed as suggested. Otherwise, the end-user should use the adaptation manager UI to specify which actions to take. -Level 7. Information: the intelligent UI manager performs the UI adaptation and triggers the adaptation transitioner and/or explainer in order to inform the end-user of this decision. -Level 8. On-demand: the intelligent UI manager performs the UI adaptation and triggers the adaptation transitioner and/or explainer only if the end-user demands it. -Level 9. Self-explanation: the intelligent UI manager performs the UI adaptation and triggers the adaptation transitioner and/or explainer when it decides to do so. -Level 10. 
Adaptivity/self-adaptation: the intelligent UI manager performs the UI adaptation entirely automatically without any user intervention. Levels 2 to 9 represent various cases of mixed-initiative adaptation. While these levels cover a wide range of automation levels, they mainly relegate the end-user to a secondary role of decision maker. These levels should, therefore, be accompanied by appropriate actions that the end-user should take within the adaptation manager UI, which should offer more high-level actions to support UI adaptation. These levels are cumulative, thus requiring a sophisticated adaptation manager.
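A hedged sketch of how an adaptation manager might act on the automation degree is given below. The level semantics follow the 1-10 scale above only loosely, and the proposal strings, the ranking criterion and the user-choice parameter are placeholders introduced for the example.

```python
def handle_proposals(level: int, proposals: list[str], user_choice=None) -> list[str]:
    """Decide what to do with candidate adaptations for a given automation degree.
    The levels are a simplified reading of the 1-10 scale discussed above."""
    if level <= 1:                   # adaptability: the end-user adapts the UI manually
        return []
    if level <= 4:                   # proposability / narrowing / identification
        ranked = sorted(proposals)   # stand-in for ranking by expected performance
        shown = ranked if level <= 3 else ranked[:1]
        print("suggesting to the end-user:", shown)
        return []                    # nothing applied yet; the end-user decides
    if level <= 6:                   # execution (and restriction): apply the user's decision
        return [user_choice] if user_choice else []
    print("applying automatically, with explanation" if level < 10 else "applying silently")
    return list(proposals)           # levels 7-10: the manager applies adaptations itself

print(handle_proposals(3, ["dark-theme", "larger-font"]))
print(handle_proposals(5, ["dark-theme", "larger-font"], user_choice="larger-font"))
print(handle_proposals(10, ["dark-theme"]))
```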
To be more practical, we suggest distributing the mixed initiative between the end-user, the system, and any third party according to the seven stages of adaptation ( Fig. 1): goal, initiative, specification, application, transition, interpretation, and evaluation. For example, AB-HCI [22] supports a mixed initiative for the three steps belonging to the gulf of execution, i.e. from initiative to application, but not the subsequent stages belonging to the gulf of evaluation. Each stage is managed through a particular agent in a multi-agent architecture which adequately distributes responsibilities.
An alternate characterisation of the adaptation automation degree could balance the UI adaptation with equal responsibility (when the UI adaptation is performed equally by the end-user and the adaptation manager), with more user involvement (when the UI adaptation is mostly performed by the user) or less involvement (when the UI adaptation is mostly performed by the adaptation manager). These cases should cover various degrees of user involvement depending on her willingness to drive the process and the knowledge required for this purpose. Most existing model-based/driven UI adaptation approaches do not properly involve the end-user during the adaptation process. Moreover, most of them use only data or information as external sources. There is consequently a shortage of approaches that use knowledge and wisdom to drive the adaptation process. 17 (Fig. 13).

Fig. 13 Dual monitor UI adaptation [35]

Opportunities for the modelling community

The majority of the challenges related to the adaptation of software systems addressed by the modelling community in the last two decades have been mostly of a technical nature. However, in order to attain the full potential of the conceptual reference framework and its properties, the community should also address challenges that arise from the intersection of SE and HCI related to the engineering of human-centred aspects, along with other challenges that may appear as a result of the combined use of MDE and AI.
In the following, we explain some key properties of the framework and discuss the major challenges that need to be addressed in order to take advantage of the opportunities provided by model-based intelligent UI adaptation: 1. Take advantage of user models. Many aspects can be captured in a user model such as gender, age, emotions, personality, language, culture, and physical and mental impairments, all of which play an essential role in intelligent UI adaptation. How can we effectively and precisely capture these aspects and use them to propose appropriate UI adaptation? There is no shortage of means to probe the user as more sensors become affordable. For example, wearable devices capture biometric data and external sensors acquire data on the user's behaviour in order to discover adaptation patterns. The question is not so much how to probe the user, but how to make real use of the information obtained while respecting privacy. Another challenge is related to the integration of the user model with other models (e.g. user interface model, context model) in order to ensure the traceability and consistency of these models during the adaptation process.
2. Towards "grey models". The MDE community has long sought the ultimately expressive models and adaptation engines that would optimise UI adaptation in most contexts of use, thus producing "white boxes" that can be parsed, analysed, and reasoned about. On the other side, the AI community investigates ML techniques that are based solely on users' data, thus producing "black boxes" that cannot be scrutinised in order to understand them. Why not mix the best of both fields by feeding classical or new models with users' data abstracted by means of ML techniques [27], thus obtaining "grey models"? A representative example is a model-based reinforcement learning approach proposed by Todi et al. [38], which plans a sequence of adaptation steps (instead of a one-shot adaptation) and exploits a model to assess the cost/benefit ratio. 3. Towards a systematic exploration of adaptation automation degrees. While Fig. 1 decomposes the UI adaptation into stages to be explicitly supported by tools, Fig. 12 suggests the application of various degrees of automation. Very few of these degrees have been investigated to date when performing model-based/driven UI adaptation and there is a lack of knowledge on how and when to apply them. These challenges open up new research directions to be explored in the future, including the definition of strategies with which to progressively increase the level of automation and 'intelligence' of the Adaptation Engine by exploiting the data, information, knowledge and wisdom captured from specific domains. 4. Keeping the Human-in-the-loop paradigm. The aforementioned challenges will never be properly addressed if the end-user is not actively involved, and not just passively grazing between adaptation steps. The application of ML techniques could be structured on the basis of a "Perception-Decision-Action (PDA)" cycle ( Fig. 2): the UI adaptation manager uses all available means to perceive/sense the user and probe her context of use in order to suggest and make an appropriate decision for UI adaptation that could be undertaken in various mixedinitiative configurations. Similarly, the end-user should also enter a second PDA scheme in which the UI adaptation is adequately perceived, thus triggering certain human decisions and executing corresponding actions, and a new cycle then starts over. Adaptive systems and models at runtime are core enabling techniques behind an intelligent UI. Models at runtime has been successfully used to automatically reflect changes from a system into changes in models, and vice versa. However, as human cognition is involved, the traditional adaptation in which all the variabilities are pre-defined is not sufficient. We should turn to intelligent UIs that can learn how to adapt to different users based on a user model that captures the user preferences, style of interaction, expertise, emo-tions, etc. However, models at runtime is rarely applied to such models, and this may be challenging. 5. Relying on software co-evolution. Changes in software resulting from UI adaptation go far beyond merely modifying the UI and could potentially impact on any component of the software system or the others represented in Fig. 3. For example, how can we align user interface changes with changes in the software architecture and vice versa? There is a need to formalise these changes in order to reason about them for purposes such as maintainability, traceability, etc. 
The field of software evolution has an established tradition as regards formalising these aspects, but rarely as regards UI aspects. When the UI comes into play, software evolution should upgrade to software co-evolution, in which changes on both sides, the user interface and the software system, are formalised.

6. Considering adaptation as a multi-factorial problem. Since many contextual aspects could influence the quality of UI adaptation, multiple quality factors (e.g. adaptation QAs, adaptation automation degree, the user's characteristics) should be considered together in the same multi-factorial problem. Improving user performance could, for instance, come at the expense of cognitive destabilisation. Another challenge is related to the analysis and resolution of conflicting UI adaptation alternatives. In this context, ML techniques could be used to support decision making when selecting the UI adaptation that is closest to the end-user's intention; a minimal sketch of such a multi-factorial selection is given after this list.
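To make the multi-factorial selection in challenge 6 (and the cost/benefit reasoning in challenge 2) more concrete, the following sketch scores a set of candidate UI adaptations against weighted quality factors derived from a simple user model. It is an illustrative toy only: the class names, quality factors, weights and numbers are our own assumptions and do not correspond to any concrete adaptation engine discussed in this article.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserModel:
    # Hypothetical user-model attributes (challenge 1); a real engine would
    # populate these from sensors, interaction logs or questionnaires.
    age: int
    visual_impairment: bool = False
    expertise: float = 0.5          # 0 = novice, 1 = expert

@dataclass
class Adaptation:
    name: str
    # Predicted effect of the adaptation on each quality factor, in [0, 1].
    scores: Dict[str, float] = field(default_factory=dict)

def select_adaptation(user: UserModel, candidates: List[Adaptation]) -> Adaptation:
    """Pick the candidate with the best weighted score (multi-factorial problem)."""
    # The weights encode the trade-off between quality factors; here they are
    # adjusted by two user-model attributes purely as an illustration.
    weights = {"performance": 0.4, "learnability": 0.3, "cognitive_stability": 0.3}
    if user.visual_impairment:
        weights["cognitive_stability"] += 0.2   # favour conservative changes
    if user.expertise > 0.7:
        weights["performance"] += 0.2           # experts tolerate bigger changes

    def utility(a: Adaptation) -> float:
        return sum(weights[f] * a.scores.get(f, 0.0) for f in weights)

    return max(candidates, key=utility)

if __name__ == "__main__":
    user = UserModel(age=67, visual_impairment=True, expertise=0.3)
    candidates = [
        Adaptation("enlarge fonts", {"performance": 0.6, "learnability": 0.9,
                                     "cognitive_stability": 0.9}),
        Adaptation("restructure menu", {"performance": 0.9, "learnability": 0.4,
                                        "cognitive_stability": 0.3}),
    ]
    print(select_adaptation(user, candidates).name)   # -> "enlarge fonts"
```

In a "grey model" setting, the per-candidate scores would not be hand-coded but predicted by ML components trained on interaction data, while the weighting scheme would remain an inspectable, model-based artefact that an adaptation explainer can reason about.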
The aforementioned suggestions represent opportunities for the modelling community to leverage UI adaptation of software systems by investigating new avenues. A limitation of this approach is that the conceptual reference framework neither prioritises its key features nor indicates how they should be explored. Moreover, although the features are intertwined, it is impossible to consider them all together. We therefore need to investigate trade-offs and dependencies among the different properties and their levels in order to better understand the potential of the proposed framework. Software co-evolution, as suggested, is one way of keeping to the Human-in-the-loop paradigm: the UI should be adapted, as far as possible, as a collaboration between the end-user, the system and any third party, especially when no consensus is reached between the end-user and the system. Some properties of the reference framework are assessed level-wise, each level representing a particular capability of an approach to support intelligent UI adaptation, with higher levels denoting greater capability. More effort is needed to validate the property levels more thoroughly against a wider set of existing adaptation approaches. In addition, instantiating the framework for specific model-based adaptation scenarios and building prototypes of its main components (e.g. the adaptation engine, adaptation transitioner, adaptation machine learning and adaptation explainer) would provide further insights.
The authors of this expert voice trust that the proposed framework for model-based intelligent user interface adaptation will serve as a call for action that could lead to research initiatives by the modelling community.
Silvia Abrahão is an Associate Professor at Universitat Politécnica de Valéncia, Spain. Her research interests include quality assurance in model-driven engineering, empirical assessment of software modeling approaches, model-driven cloud services development and monitoring, and the integration of usability into software development. Contact her at sabrahao@dsic.upv.es

Emilio Insfran is an Associate Professor at Universitat Politécnica de Valéncia, Spain. His research interests include requirements engineering, model-driven engineering, DevOps, and cloud services development and evaluation. Contact him at einsfran@dsic.upv.es

Arthur Sluÿters is a PhD student in Computer Science at Université catholique de Louvain, Belgium, where he is an "aspirant FNRS" under contract no. 1.A434.21. His research interests include intelligent user interfaces (IUI), gesture recognition, gestural user interfaces, and radar-based interaction. Contact him at arthur.sluyters@uclouvain.be

Jean Vanderdonckt is a Full Professor at Université catholique de Louvain, Belgium, where he leads the Louvain Interaction Lab. His research interests include engineering of interactive systems (EICS), intelligent user interfaces (IUI), multimodal systems such as gesture-based, information systems, and model-based/driven engineering of user interfaces. Contact him at jean.vanderdonckt@uclouvain.be
|
v3-fos-license
|
2022-06-24T15:12:21.462Z
|
2022-06-22T00:00:00.000
|
249973368
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00035-022-00285-y.pdf",
"pdf_hash": "91c060c23385d86a049b8e1fd79ad333f19f9bd1",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44107",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "94e5382bbbd142bba9da6461f574a718bece126a",
"year": 2022
}
|
pes2o/s2orc
|
Scale-dependent patterns and drivers of vascular plant, bryophyte and lichen diversity in dry grasslands of the Swiss inneralpine valleys
The inner-alpine dry valleys of the Swiss Alps are characterized by subcontinental climate, leading to many peculiarities in dry grassland species composition. Despite their well-known uniqueness, comprehensive studies on biodiversity patterns of the dry grasslands in these valleys were still missing. To close this gap, we sampled 161 10-m² vegetation plots in the Rhône, Rhine and Inn valleys, recording vascular plants, terricolous bryophyte and lichen species, as well as environmental data. Additionally, we tested the scale-dependence of environmental drivers using 34 nested-plot series with seven grain sizes (0.0001–100 m²). We analysed the effects of environmental drivers related to productivity/stress, disturbance and within-plot heterogeneity on species richness. Mean species richness ranged from 2.3 species in 0.0001 m² to 58.8 species in 100 m². For all taxa combined, the most relevant drivers at the grain size of 10 m² were southing (negative), litter (negative), mean annual precipitation (unimodal), gravel cover (negative), inclination (unimodal) and maximum microrelief (positive). For vascular plants the pattern was similar, while bryophyte and lichen richness differed in their opposite relationship to mean annual precipitation, as well as in the negative influences of mean herb layer height, grazing and mowing. The explained variance of the multiple regression model increased with grain size, with very low values for the smallest two grain sizes. While southing and litter had high importance for the five larger grain sizes, pH and gravel cover were particularly important at the intermediate grain sizes, and inclination and mean annual precipitation for the two largest grain sizes. The findings emphasize the importance of taxonomic group and grain size for patterns and drivers of species richness in vegetation, consistent with ecological theory. Differences in the diversity–environment relationships among the three taxonomic groups can partly be explained by asymmetric competition that leads to low bryophyte and lichen diversity where vascular plants do well and vice versa. The relatively low alpha diversity of vascular plants in dry grasslands in Swiss inner-alpine valleys compared to similar communities in other parts of the Palaearctic remains puzzling, especially because Swiss stands are often large and well-preserved.
Introduction
The xerothermic vegetation of the central valleys of the Alps has long attracted the interest of botanists (Christ 1879;Braun-Blanquet and Richard 1950;Braun-Blanquet 1961;Schwabe and Kratochwil 2004;Dengler et al. 2019b). The macroclimate of these valleys strongly deviates from their surroundings, as they are situated between the barriers of high mountain systems which largely block rain-carrying clouds. These valleys, therefore, constitute dry islands within the generally precipitation-rich European Alps. Together with the partly very low valley bottoms (sometimes only a few hundred metres above sea level), this leads to a relatively warm and dry climate, in strong contrast to the surrounding mountains (Braun-Blanquet 1961;Dengler et al. 2020c).
The climate, topography and isolated position lead to many peculiarities in flora and vegetation, partly resembling the Eastern European steppe vegetation. Apart from xerothermic forests and shrublands, different types of dry grasslands are a dominating feature of these landscapes (Braun-Blanquet 1961). These dry grasslands comprise diverse vegetation types, some growing on rocky outcrops with very shallow soils, others on slightly deeper soils. Below the alpine zone, most of these dry grasslands belong to seminatural grasslands originating from centuries of low-intensity agriculture. Natural grasslands are rare and restricted to sites where forest growth is limited by particular soil or topographic conditions, e.g. rocky slopes with skeletal soils (Braun-Blanquet 1961;Boch et al. 2020;Dengler et al. 2020c). In the central valleys of the Alps, widespread Central European dry grassland species meet with steppic species whose main distribution is in the central part of Eurasia. In addition, dealpine, submediterranean and Southwest European xerothermic species occur together with a few narrow endemics of the Alps (Becherer 1972;Wohlgemuth 1996;Dengler et al. 2019a).
While dry grasslands in most other parts of Europe have been intensively studied during recent decades with respect to biodiversity, ecology and conservation (see reviews by Dengler et al. 2014, 2020c; Dengler and Tischew 2018; Boch et al. 2020), comprehensive studies on diversity patterns in dry grasslands along environmental gradients from the Swiss Rhône, Rhine and Inn valleys are missing (but see Boch et al. 2019a, 2021; Dengler et al. 2019b). Older literature mainly focussed on vegetation classification and the floristic and ecological characterization of dry grasslands (e.g. Christ 1879; Frey 1934; Braun-Blanquet 1961; Schwabe and Kratochwil 2004). However, knowledge on the distribution, ecology and biodiversity status of these dry grasslands is critical for developing management and conservation strategies. In Switzerland, this is of particular importance, as dry grasslands are among the most threatened vegetation types (Delarze et al. 2016). It has been estimated that about 95% of the dry grassland area in Switzerland has been lost since 1900, mainly due to land-use intensification or abandonment (Lachat et al. 2010), and habitat quality is still decreasing (Boch et al. 2019b). Consequently, 35% of about 350 dry grassland vascular plant species in Switzerland are currently considered threatened (Bornand et al. 2016).
Species richness in plant communities is controlled by a multitude of drivers (Grace 1999) which act simultaneously. Applying the concepts of Grime (1973), potential drivers can be grouped into those related to the stress-productivity axis and those related to the disturbance axis (Huston 2014), both of which are strongly modified by changing land-use systems and intensity (Allan et al. 2014). Generally, maximum fine-grain richness should be expected at intermediate levels of productivity and disturbance (Grime 1973;Huston 2014). However, what can be considered intermediate productivity depends on the level of disturbance and what can be considered intermediate disturbance depends on the level of productivity; thus, looking at the variables individually might yield positive, unimodal or negative relationships (Huston 2014). More recently, the heterogeneity of either productivity-or disturbance-related factors became a focus of diversity theory as a variable that nearly universally increases species richness (Tamme et al. 2010;Stein et al. 2014). Previous studies in Palaearctic grasslands found that specific drivers might be particularly relevant in one region but not in others (reviewed by Dengler et al. 2014), thus demonstrating the need to study such relationships in multiple regions and consider a large set of different drivers to achieve a more systematic understanding.
Most of the studies dealing with drivers of species diversity in grasslands focused on vascular plants (see review in Dengler et al. 2014), while the knowledge for bryophytes and lichens is still fragmentary. However, these two taxonomic groups often constitute a large fraction of the overall species diversity in dry grassland vegetation. Particularly in rocky and sandy dry grasslands, the fraction of non-vascular taxa can be substantial, sometimes exceeding the number of vascular plant species (Löbel and Dengler 2008; Dengler et al. 2020b). Some studies found that bryophyte diversity is positively related to vascular plant diversity (Löbel et al. 2006; Müller et al. 2019) and to that of several invertebrate taxa, and even has the strongest relation with the diversity of a wide range of other taxa ("multidiversity"; Manning et al. 2015). However, studies investigating vascular plants, bryophytes and lichens simultaneously are rare (Löbel et al. 2006; Turtureanu et al. 2014; Zulka et al. 2014). This lack of multi-taxon studies in vegetation is problematic, as bryophytes and lichens are known to react sensitively to different environmental conditions, such as soil pH (Löbel et al. 2006), nutrient supply (Boch et al. 2018b), vascular plant cover (Löbel et al. 2006; Boch et al. 2016, 2018a) and biomass (van Klink et al. 2017; Boch et al. 2018b). Therefore, it is important to understand how bryophyte and lichen richness in vegetation types is related to a wide array of potential drivers.
The inconclusive results about the effects of different drivers on species richness might partly be due to different grain sizes that have been studied. It has been proposed theoretically (Shmida and Wilson 1985) and later shown in several meta-analyses that spatial scale can influence the relative importance (Field et al. 2009; Siefert et al. 2012) or even the direction of the impact of certain drivers (Tamme et al. 2010). Studies with nested-plot data of dry grasslands in different Palaearctic regions have confirmed strong scale effects (Turtureanu et al. 2014; Kuzemko et al. 2016; Polyakova et al. 2016; Dembicz et al. 2021a). In agreement with theory and the mentioned meta-analyses, these studies found the strongest effects of soil variables mostly at the smallest grain size and a prevalence of climate variables at larger grain sizes, but there were also many regional differences. Multi-scale sampling does not only allow an analysis of the drivers of α-diversity at different grain sizes, but also the application of species-area relationships (SARs) to study fine-grain β-diversity. SARs have been modelled with many different functions, with the power law (S = c·A^z, with S = species richness, A = area, and c and z modelled parameters) prevailing (Arrhenius 1921; Martín and Goldenfeld 2006; Dengler 2009). Recently it has been shown with an extensive dataset of nested-plot data from open vegetation types across the Palaearctic biogeographic realm that this function is generally the best SAR model also at fine grains in continuous habitats (Dengler et al. 2020a). The exponent z can then be used as a valid measure of β-diversity (Koleff et al. 2003; Jurasinski et al. 2009; Polyakova et al. 2016; Dembicz et al. 2021b). However, previous studies did not yield consistent results on the drivers of z values or the question whether z values are completely scale-invariant. Some studies found a slight peak of local z values for grain sizes around 0.01-0.1 m² (Turtureanu et al. 2014; Polyakova et al. 2016), while others did not find any scale dependence (Kuzemko et al. 2016; Dembicz et al. 2021a).
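Written out, the power law and the double-log form in which z is estimated as a regression slope are (a standard identity, restated here only for clarity):

\[
S = c\,A^{z} \quad\Longleftrightarrow\quad \log_{10} S = \log_{10} c + z\,\log_{10} A ,
\]

where S is species richness, A is the plot area, c is the modelled richness at unit area and z, the slope in double-log space, is the quantity used as a measure of fine-grain β-diversity.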
To shed light on these open points, the Eurasian Dry Grassland Group conducted an international research expedition to the dry grasslands of the three valley systems of the Swiss Alps with pronounced continental climate (Dengler et al. 2020b). Combined with some other datasets using the same methodology, these data were then used to address the following questions: (i) How does species richness across scales in these grasslands compare to that of other grassland habitats in the region and elsewhere? (ii) Which structural and environmental drivers are most important for the biodiversity patterns across taxonomic groups and spatial scales? (iii) Which factors drive fine-grain β-diversity (z values), and is there a scale-dependency of local z values?
Study system
We selected 27 study sites along the inneralpine valley systems of Switzerland within the river catchments of the Rhône (14), Rhine (5) and Inn (8), the numbers reflecting the different spatial extents of dry grasslands in these catchments (Online Resource 1). The sites extended from 6.98° E to 10.38° E and from 46.12° N to 46.98° N and covered an elevational gradient from 511 to 1574 m a.s.l. (Fig. 1). Bedrock composition was diverse, including limestone, granite, metamorphic rocks (gneiss, amphibolite), flysch, moraine and alluvial deposits, with base-rich substrata prevailing overall. Regarding climate, the Rhône valley is the driest and most continental, followed by the Inn valley, while the Rhine valley is the least continental. Mean annual precipitation varies considerably, from 670 to 1,345 mm (DaymetCH database, see below). Most sites are legally protected grasslands of national importance (Eggenberg et al. 2001; see Online Resource 1). Within a site, plots were selected to capture the existing diversity of ecologically and physiognomically varying dry grassland types (e.g. meso-xeric vs. xeric, skeleton-rich vs. fine-soil rich, north-facing vs. south-facing slopes). The studied communities belong to the vegetation classes Festuco-Brometea and Sedo-Scleranthetea (Dengler et al. 2020b).
Field sampling and lab measurements
We primarily used so-called "EDGG Biodiversity Plots" (n = 34; see Online Resource 1) to address the scale-dependence of plant diversity (Dengler et al. 2016). These are square plots of 100 m², with two nested subplot series of 0.0001, 0.001, 0.01, 0.1, 1 and 10-m² plots in two opposite corners of the largest plot. In addition, we sampled 93 10-m² plots, resulting in a total of 161 10-m² plots. Of these, 107 were located in the Rhône catchment, 23 in the Rhine catchment and 31 in the Inn catchment (Online Resource 1). Most of the plots (n = 139) were sampled during the 12th Field Workshop of the Eurasian Dry Grassland Group (EDGG; www.edgg.org) in 2019 (Dengler et al. 2020b), while a smaller proportion stems from sampling in the village Ausserberg in Valais in 2018 (published in Dengler et al. 2019b) and from a few other occasions in the years 2019-2020. Sampling was conducted in the months of May (n = 137), June (n = 8) and September (n = 16), i.e. at phenological stages at which experienced botanists could recognize more or less the complete species composition. To test whether sampling time could have introduced a bias, we compared total species richness (all taxonomic groups) of the 10-m² plots sampled in May with those sampled later in the year, but the means were nearly identical (35.0 in May vs. 34.4 in later months).
In each plot and subplot, all vascular plants, terricolous bryophytes and lichens with shoot-presence in the plots were recorded. Most bryophyte and lichen species were collected and identified in the laboratory. At the grain size of 10 m², the percent cover of each species was estimated, and we recorded the following structural and environmental parameters (Online Resource 2): mean annual temperature, mean annual precipitation, inclination, aspect, microrelief, herb layer height, soil pH, conductivity, soil depth, stone cover, litter cover, gravel cover, grazing and mowing. The coordinates and the elevation were determined using a handheld GPS. Aspect and inclination were measured in degrees using a compass and an inclinometer or a smartphone, respectively. For the analyses, we took the southing component of aspect, i.e. -cos(aspect), ranging from −1 on northern to +1 on southern slopes. The percent cover of the tree, shrub, herb and cryptogam layer was estimated, as well as the percent cover of all layers together (hereafter: 'total vegetation'). In addition, the percent cover of abiotic layers like litter, dead wood, stone, gravel and fine soil was estimated. The vegetation height and soil depth were measured at five random points within each plot using a plastic disc and an iron pole, respectively. We then used mean vegetation height as a proxy for standing biomass and mean soil depth for the analysis. Further, we measured maximum microrelief perpendicular to a pole of 80 cm length placed on the soil surface where it showed the greatest difference in relief (Dengler et al. 2016). For soil property measures, soil samples of the uppermost 15 cm were taken at five random locations and then mixed. The soil samples were air-dried to measure soil pH and electrical conductivity with a multi-parameter probe (HANNA instruments HI 12,883, Woonsocket, Rhode Island, USA) in a suspension of 10 g soil and 25 ml distilled water. We assessed land use as the presence of grazing and mowing, respectively, both as binary variables, by evaluating traces like faeces, signs of grazing or the presence/absence of pasture weeds. Mean annual temperature and mean annual precipitation were derived from the DaymetCH dataset (D. Schmatz, WSL, Birmensdorf, unpublished, see https://www.wsl.ch/de/projekte/climate-data-portal.html#tabelement1-tab1, version of September 2021). This dataset of 100-m resolution for the period 1981-2010 was created with the interpolation software Daymet (Thornton et al. 1997), based on the daily measured values of all weather stations of MeteoSwiss and the digital elevation model DHM25 of SwissTopo. We decided against the widely used CHELSA climate dataset (Karger et al. 2017) because it underestimates the annual precipitation at our study sites by on average nearly 200 mm, as a comparison of DaymetCH and CHELSA data showed, while the DaymetCH data were consistent with the climatic normals for nearby climate stations. For the comprehensive list of variables, see Online Resource 1. The data are available from the GrassPlot database (dataset CH_D; Dengler and Tischew 2018, Dengler et al. 2018b; https://edgg.org/databases/GrassPlot).
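As a small illustration of the aspect transformation described above, the following snippet derives the southing component from aspect measured in degrees (the degree-to-radian conversion is the step most easily overlooked). The column names are hypothetical; the transformation itself follows the definition given in the text.

```python
import numpy as np
import pandas as pd

def southing(aspect_deg):
    """Southing component of aspect: -cos(aspect), with aspect in degrees.
    Returns -1 for due north (0 deg) and +1 for due south (180 deg)."""
    return -np.cos(np.radians(aspect_deg))

# Hypothetical plot records with aspect in degrees.
plots = pd.DataFrame({"plot_id": ["p1", "p2", "p3"],
                      "aspect_deg": [0, 180, 90]})
plots["southing"] = southing(plots["aspect_deg"])
print(plots)   # southing is -1.0, 1.0 and ~0.0 (tiny floating-point error at 90 deg)
```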
Statistical analyses of species richness-environment relationships
In order to avoid multicollinearity, we first prepared a correlation matrix of all available metric predictor variables (environmental variables or proxies thereof) (Online Resource 3). Following the recommendation by Dormann et al. (2013), if two variables were highly correlated (Pearson's |r| ≥ 0.6), we always retained the ecologically more meaningful variable (Online Resource 3). The selection ended up with the following 14 predictors: southing, inclination, maximum microrelief, mean soil depth, mean herb layer height, litter cover, stone cover, gravel cover, grazing, mowing, soil pH, electrical conductivity, mean annual temperature and mean annual precipitation.
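A minimal sketch of this screening step, assuming the plot-level predictors sit in a pandas DataFrame with hypothetical column names; the "ecologically more meaningful" choice is necessarily a manual decision and is mimicked here by a simple priority order:

```python
import pandas as pd

def screen_predictors(df: pd.DataFrame, priority: list, r_max: float = 0.6) -> list:
    """Drop one variable of every pair with Pearson |r| >= r_max,
    keeping the variable that comes first in the priority list."""
    corr = df[priority].corr(method="pearson").abs()
    kept = []
    for var in priority:                  # priority encodes 'ecological meaning'
        if all(corr.loc[var, k] < r_max for k in kept):
            kept.append(var)
    return kept

# Example: elevation would typically be dropped in favour of mean annual
# temperature, with which it is almost perfectly correlated in this study.
# predictors = screen_predictors(plots, ["mean_annual_temperature", "elevation",
#                                        "southing", "inclination", "soil_pH"])
```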
First, we modelled species richness at the 10-m 2 grain size for vascular plants, bryophytes, lichens and all three groups combined. We checked if the models were improved by the addition of a quadratic term. The quadratic terms of inclination and stone cover were added to the full models as these strongly improved model performance (ΔAICc > 5 in the model with quadratic term compared to the one without). This led to a final selection of 17 predictor variables. We started by calculating generalised linear models (GLMs) with negative binomial distribution for each of the four richness variables using the MASS package (Venables and Ripley 2002). Spatial autocorrelation in the model residuals was tested using the Moran's I test (Paradis and Schliep 2018). Only in the case of bryophyte richness did significant spatial autocorrelation occur. Thus, for this taxonomic group, we applied a generalised linear mixed-effect model (GLMM) with scaled values and plot ID nested in site ID as random factors. This random factor combination successfully removed spatial autocorrelation from the residuals. We then compared the GLMM to the corresponding negative binomial GLM using AICc values. We calculated the effect of the predictor variables on species richness by conducting a multimodel inference using the MuMIn package (Bartoń 2019). Using the dredge function, we generated a set of models with combinations of fixed effect terms. The relative importance value of the variables was derived as the sum of Akaike weights over all possible models containing the variable.
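The analysis itself was done in R with the MASS and MuMIn packages named above (see also the Code availability statement). Purely to illustrate the logic, the following Python sketch fits negative binomial GLMs to all predictor subsets and derives a relative importance value as the sum of Akaike weights of the models containing each term. Two simplifications should be kept in mind: the dispersion parameter is held fixed here (whereas the R routine estimates it), and exhaustive enumeration is only feasible for a moderate number of predictors. Column names are hypothetical.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

def negbin_aicc(y, X):
    """AICc of a negative binomial GLM with fixed dispersion (alpha = 1)."""
    fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    n, k = len(y), X.shape[1]
    return fit.aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weight_importance(df, response, predictors):
    """Relative importance = sum of Akaike weights over all models containing a term."""
    models = []
    for r in range(1, len(predictors) + 1):
        for subset in itertools.combinations(predictors, r):
            X = sm.add_constant(df[list(subset)])
            models.append((set(subset), negbin_aicc(df[response], X)))
    aicc = np.array([a for _, a in models])
    w = np.exp(-0.5 * (aicc - aicc.min()))
    w /= w.sum()                                   # Akaike weights
    return pd.Series(
        {p: sum(wi for (terms, _), wi in zip(models, w) if p in terms)
         for p in predictors}).sort_values(ascending=False)

# importance = akaike_weight_importance(plots, "richness_total",
#                                       ["southing", "litter_cover", "gravel_cover",
#                                        "inclination", "mean_annual_precipitation"])
```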
Additionally, we modelled total species richness for each of the seven grain sizes. Due to the smaller dataset size of the nested-plot series, in these models we dropped the four predictors with the lowest variable importance in the previous model (electrical conductivity, stone cover squared, grazing and mowing) and calculated a model for each grain size. No significant overdispersion or spatial autocorrelation occurred in these models, except for total species richness at the three smallest grain sizes, where we found overdispersion (thus the results of these models should be treated with caution). The effects of important predictors on the species richness of the taxonomic groups were visualised as predicted values based on the averaged models.
Analyses of species-area relationships and β-diversity
For each nested-plot series, we fitted a power-law species-area relationship (SAR) to the data of total species richness after averaging the richness values of the two subseries, using linear regression in double-log space. The slope parameter of the SAR (z value) can be used as a measure of fine-grain β-diversity (Jurasinski et al. 2009;Polyakova et al. 2016). When species richness equalled zero in smaller grain sizes, these grain sizes were not used in the regression of that nested-plot series. The dependence of z values on environmental predictors was then modelled similarly to the grain size models, but with a Gaussian error distribution.
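As a sketch of this fitting step (illustrative only; the intermediate richness values below are invented, while the end points follow the reported means), the overall z value of one nested-plot series is simply the slope of a linear regression in double-log space:

```python
import numpy as np

# Grain sizes (m²) and total richness of one nested-plot series.
areas    = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10, 100])
richness = np.array([2.3, 4.0, 7.5, 13.0, 22.0, 36.0, 58.8])

# Drop grain sizes with zero richness before log-transforming, as in the paper.
mask = richness > 0
z, log10_c = np.polyfit(np.log10(areas[mask]), np.log10(richness[mask]), 1)
print(f"z = {z:.3f}, c = {10 ** log10_c:.1f} species at 1 m²")
```

With these numbers the fitted z is close to 0.24, the mean value reported for the 34 series.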
To test for potential scale-dependence of z (Crawley and Harral 2001; Turtureanu et al. 2014), we calculated "local z values" (Williamson 2003), i.e. the slopes of the SAR in double-log representation between two subsequent grain sizes (provided each had a richness > 0) for each of the 34 nested-plot series. These local z values of the six grain-size transitions were then compared with an ANOVA, taking series ID as error term, followed by Tukey's post-hoc test.
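The "local z values" can be computed in the same way, as pairwise slopes in double-log space between consecutive grain sizes (continuing the illustrative example above; the handling of zero richness mirrors the rule described in the text):

```python
import numpy as np

def local_z(areas, richness):
    """Slopes of the SAR in double-log space between consecutive grain sizes.
    Transitions that involve a zero richness value are returned as NaN."""
    a = np.asarray(areas, dtype=float)
    s = np.asarray(richness, dtype=float)
    z = np.full(len(a) - 1, np.nan)
    ok = (s[:-1] > 0) & (s[1:] > 0)
    z[ok] = (np.log10(s[1:][ok]) - np.log10(s[:-1][ok])) / \
            (np.log10(a[1:][ok]) - np.log10(a[:-1][ok]))
    return z

print(local_z([0.0001, 0.001, 0.01, 0.1, 1, 10, 100],
              [2.3, 4.0, 7.5, 13.0, 22.0, 36.0, 58.8]))
```

Comparing the six transition-wise values across all series, for instance with an ANOVA followed by Tukey's post-hoc test as described above, then shows whether z is scale-invariant.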
Species richness across scales
In total, we recorded 818 taxa, of which 629 were vascular plants (77%), 109 bryophytes (13%) and 80 lichens (10%). Mean total species richness ranged from 2.3 species in 0.0001 m² to 58.8 species in 100 m² (Table 1). Across all grain sizes, vascular plants had the highest mean species richness, followed by bryophytes and lichens. The maximum total species richness at the 100-m² scale was 113 species in a meso-xeric grassland in the Rhône catchment above the village of Ausserberg, of which 110 were vascular plants and three bryophytes. The maximum richness at 100 m² for bryophytes and lichens was 19 and 24 species, respectively (Table 1).
The three river catchments in comparison
The topographic, soil and land use conditions were rather similar in the three river catchments (Online Resource 4). Major differences only occurred for elevation and the elevation-related climatic variable mean annual temperature and mean annual precipitation (Online Resource 4). Mean annual temperature was highest for the plots in the Rhône catchment (8.3 °C), intermediate in the Rhine catchment (6.8 °C) and lowest in the Inn catchment (5.1 °C). With around 800 mm, mean annual precipitation was similar in the Rhône and Inn catchments, but less variable in the latter, while it was much higher in the Rhine catchment (1055 mm). At the 10-m 2 scale, mean species richness values were very similar across the three catchments, e.g. for total species richness 34.0 species in the Rhône, 38.8 species in the Rhine and 36.0 species in the Inn catchment (Online Resource 4). There was no problematic autocorrelation for any taxonomic group that could be removed by adding catchment as a random factor to a GLMM (see Methods section), indicating that the small differences in richness patterns among the catchments were fully explained by the environmental predictors used in the regressions.
Diversity-environment relationships of different taxonomic groups
The variance of species richness explained by our multiple regression models varied from only 15% for lichens via 34% for vascular plants and 36% for all taxa to 46% (66% including the random factors) for bryophytes (Table 2). The most important predictors for the richness of all taxa were southing (negative), litter cover (negative), mean annual precipitation (unimodal with a peak around 1100 mm), gravel cover (negative), inclination (unimodal with a peak around 40°), maximum microrelief (positive) and stone cover (unimodal with a peak around 55%) (Table 2, Fig. 2). When comparing the three taxonomic groups, only a few of the tested predictors had a consistent direction of effect across all three, namely litter (negative), gravel cover (negative) and electrical conductivity (negative, but of low importance in all groups) (Table 2, Fig. 2). As the most species-rich taxonomic group, vascular plants showed patterns largely consistent with those reported for all taxa, except that soil pH (negative), mean soil depth (positive), mowing (positive) and grazing (positive) were more important. The species richness model for bryophytes differed from that for vascular plants mainly in the two climatic variables (mean annual precipitation being unimportant and mean annual temperature important) and in mean herb layer height, grazing and mowing all having a clear negative relationship, while that of stone cover was pronouncedly unimodal with a peak around 45% (Fig. 2). Lichens differed from the two other taxonomic groups by showing a positive relationship with soil pH. Although inclination had a negative quadratic term for all taxa combined and for each group separately (Table 2), an actual unimodal relationship occurred only for all taxa together and for vascular plants, while the curve was monotonously increasing for bryophytes and monotonously decreasing for lichens (Fig. 2).
Diversity-environment relationships across spatial scales
The importance of the analysed variables differed strongly and systematically across the seven spatial grain sizes (Fig. 3; Online Resource 5). Southing and litter cover had a strong negative impact on species richness at most grain sizes, whereas inclination was a strong positive predictor for 10- and 100-m² plots. Mean annual precipitation had a strong unimodal impact on richness at the larger grain sizes (1-100 m²). The smallest grain sizes (0.0001 m² and 0.001 m²) had no predictor with a relative importance greater than 0.5. The explained variance of species richness was highest for 100-m² plots (R²adj. = 0.60) and almost negligible for the two smallest grain sizes (Fig. 4, Online Resource 5).
Species-area relationships
The overall z values of all taxa ranged from 0.16 to 0.31 with a mean of 0.24 (Table 1). The strongest predictors for z values were mean annual precipitation (unimodal), mean annual temperature (positive), and inclination (u-shaped relationship; Online Resource 5). The explained variance of z values was relatively high compared to that of species richness at the different spatial scales (R²adj. = 0.60). The local z values for all taxa ranged from 0 to 0.602 with a mean of 0.235. Across the considered spatial grain sizes, they were highest for the transition from 0.01 to 0.1 m² (Fig. 5).

Table 1 Summary of species richness values (number of species recorded in a plot) for different plot sizes and different taxonomic groups. The z values were derived from power-law species-area relationships fitted across the seven grain sizes of the nested plots. Note that the plots of areas ≤ 10 m² in each biodiversity plot were considered as independent observations. n - number of plots, SD - standard deviation. *Only 10-m² plots from biodiversity plots. **All 10-m² plots. + n = 32 for bryophytes and lichens

Aleksanyan et al. 2020) in rather comparable situations found above-average richness. Also, base-rich alpine grasslands in the Swiss Alps are systematically richer (median of approx. 50 species in 10 m²: Dengler et al. 2020d). Similarly, bryophytes were systematically poorer in species in our plots than the Palaearctic average, except in 10 m² and 100 m² (https://edgg.org/databases/GrasslandDiversityExplorer). By contrast, lichen species richness was clearly higher in the Swiss inneralpine stands of Festuco-Brometea than those elsewhere, albeit at a very low level (e.g. on average 2.2 species vs. 1.1 species in 10 m²; https://edgg.org/databases/GrasslandDiversityExplorer). One explanation for this puzzling finding might be the biogeographical history of the species pool (Zobel 2016).

Table 2 Relative importance of predictors based on multimodel inference fitted to total, vascular plant, bryophyte and lichen species richness in 10-m² plots (n = 161). Importance values ≥ 0.5 (i.e. the variable occurs in more than 50% of plausible models) are in bold. Positive (+) and negative (−) relationships are indicated. In the case of quadratic terms, a negative sign means a unimodal relationship, a positive sign a u-shaped relationship. Predictors are grouped into three categories related to different ecological theories. The variances explained by adjusted R², the fixed effects (R²GLMMm), the random factors site and plot ID (R²GLMMr) and the whole model (R²GLMMc) are also given

The Alps were covered by a huge ice shield during the last glacial period, preventing any in-situ survival of plants with the exception of nunataks on mountain tops (Pan et al. 2020). While the species pool of alpine grasslands probably found sufficiently large refugia along the ice-free margins of the Alps (Tribsch and Schönswetter 2003), the majority of more thermophilous steppe species probably went extinct. Recolonisation from steppe areas further to the east seems to have played a rather minor role for the inner valleys of the Alps (Kirschner et al. 2020), resulting in a significantly smaller pool of steppe species in the Alps compared to other regions.
Species richness at different scales
Differentiated by catchment, our mean vascular plant species richness values at 10 m² were 28.8 (Rhône), 33.6 (Rhine) and 27.7 (Inn). This corresponds quite well to the data from the national monitoring program of the sites of national importance in Switzerland (WBS), which has a higher number of replicates sampled at the optimum phenological stage, with 26.7 for the Western Central Alps (Rhône) and 34.6 for the Eastern Central Alps (Rhine and Inn) (Bergamini et al. 2019). This confirms that our sampling was mostly comprehensive. Compared to the Jura Mts. in Switzerland, with 40.4 vascular plant species in 10 m², the plots in the inneralpine dry valleys are generally poorer in species, which might be attributed to the more pronounced summer drought that excludes various less drought-tolerant species from the grasslands.
Factors influencing diversity of taxonomic groups at 10 m 2
We found pronounced differences in the diversity-environment relationship between all three taxonomic groups considered. This is consistent with results of previous comparative studies on the plot-scale richness of these groups (Löbel et al. 2006;Turtureanu et al. 2014;Kuzemko et al. 2016;Polyakova et al. 2016;Dembicz et al. 2021a) and can be explained by their different ecological requirements and life histories.
Our proxies for the productivity-stress axis of Grime (1973) mostly showed inconsistent patterns across the three taxonomic groups. Only litter cover, which reflects both productivity and absence of disturbance, was negative for all of them. The strongest predictor of this variable group was mean annual precipitation, showing unimodal relationships for vascular plants and total richness, with maxima around 1000-1100 mm, as expected along a stress-productivity gradient (Grime 1973): below that amount of precipitation, edaphically dry sites apparently become so stressful that only few species can exist, while above the maximum, conditions become so benign that competition increasingly excludes species from the system. By contrast, lichen species richness peaked at lower precipitation and bryophyte richness even showed an inverse pattern compared to the vascular plants (u-shaped), which both probably could be explained by asymmetric competition, suppressing a diverse cryptogam layer where vascular plants thrive.

Fig. 2 Effects of inclination, southing, stone cover, mean annual temperature and mean annual precipitation on total, vascular plant, bryophyte and lichen species richness from the full multiple regression models. Displayed are the original richness data and the predictions from general linear models and the linear mixed-effect model. Each dot represents one or several 10-m² plots (n = 161). Inclination, stone cover and mean annual precipitation had quadratic relationships (see Table 2)

Mean annual temperature showed opposite patterns for vascular plants (slightly richer at higher elevations with cooler climate) and non-vascular plants (particularly rich in the warmer lowlands). Our findings on richness-climate relationships for vascular plants, via the strong negative correlation with elevation (r = −0.99), indicate increasing richness with elevation, which differs from the predominantly reported mid-elevational peak (McCain and Grytnes 2010). A possible explanation is that we sampled not the full elevational gradient, but only the increasing part of the hump, and thus missed the decreasing part at higher elevations. This fits the fact that plot-based studies in Swiss grasslands found richness peaks between 1100 and 1600 m a.s.l. (Descombes et al. 2017; Boch et al. 2019b), while our highest plot was at 1574 m a.s.l. A particularly strong negative predictor of total, vascular plant and bryophyte richness was southing, which can be explained by the more stressful conditions on south-facing slopes vs. flat or north-facing areas. This is consistent with the strong negative effect of heat load index (a composite variable mainly based on aspect) found in Transylvanian dry grasslands (Turtureanu et al. 2014). Mean soil pH, which is often a particularly relevant parameter for plot-scale species richness (e.g. Schuster and Diekmann 2003; Löbel et al. 2006; Boch et al. 2016, 2018a), typically with a unimodal response, in our case showed opposing effects for the three taxonomic groups: negative for vascular plants, weak for bryophytes and positive for lichens. This might be related to the fact that vascular plants prefer slightly more developed soils, which coincides with decalcification, whereas lichens profit among the base-rich dry grasslands from the most extreme sites (highest pH) due to lower vascular plant competition.

Fig. 3 Variable importance of predictors for total species richness at seven spatial scales (plot sizes) (n = 68 for 0.0001-10 m²; n = 34 for 100 m²) derived from multimodel inference

Last, mean herb layer height, which, in the way it was measured, reflects standing biomass and vegetation density, was strongly negative for the two non-vascular groups, but slightly positive for the vascular plants themselves. The strong negative effect of standing biomass of vascular plants on the richness of the two other groups again could be explained by asymmetric competition for light (Löbel et al. 2006; Boch et al. 2016, 2018a). According to the Intermediate Disturbance Hypothesis (IDH; Connell 1978; see also Grime 1973; Huston 2014), one should expect maximum richness at intermediate levels of disturbance. We considered here three proxies of disturbance: slope inclination, related to erosion, and the two land-use variables grazing and mowing. While the quadratic term of inclination was negative in all models, we found a peak inside the possible values only for total richness and vascular plants, where the highest diversity occurred at around 40°, thus in agreement with the IDH. By contrast, lichens showed a monotonously decreasing and bryophytes a monotonously increasing relationship with slope angle. This does not conflict with the IDH per se, but highlights the taxon and context dependency of what can be considered "intermediate" (see Huston 2014).

Fig. 4 The explained variance expressed by adjusted R² for the models of total species richness for the individual plot sizes (n = 68 for 0.0001-10 m²; n = 34 for 100 m²)

Fig. 5 Boxplots of local z values as measures of β-diversity across spatial scales (transitions between two subsequent plot sizes). Different letters denote significant differences in mean z values between transitions (Tukey's post-hoc test at α = 0.05)

Grazing and mowing, which we had only as rough binary variables, showed contrasting effects on vascular plant richness (weakly positive) and on the richness of the two non-vascular groups (negative). Since farmers who are managing these legally protected grasslands of national importance must follow restrictions regarding mowing, stocking density and fertilization, land-use intensity in these low-productivity systems can be described as low to moderate. Thus, our plots rather reflect the increasing part of the IDH, which is supported in the case of vascular plants. However, for bryophyte and lichen richness, even this low land-use intensity obviously was already too much. This is in agreement with Allan et al. (2014), who found bryophyte and lichen diversity to be highest at very low land-use intensities and rapidly declining values already at moderate land-use intensity in German grasslands. In addition, Boch et al. (2018b) found strong negative direct effects of land-use intensity and indirect negative effects of land-use intensity via increased vascular plant biomass on bryophyte richness in German and Swiss grasslands. Increased vascular plant biomass can lead to competitive exclusion of bryophyte and lichen species and consequently lower species richness of these groups (Löbel et al. 2006; Boch et al. 2016, 2018b). There is comprehensive evidence that any type of environmental heterogeneity across grain sizes increases species richness (Stein et al. 2014), and this relationship might only be reversed at very fine grain sizes far below 1 m² (Tamme et al. 2010). However, for maximum microrelief, as our most direct measure of within-plot heterogeneity, we found mixed effects: while it positively influenced the species richness of all taxa and of lichens, there were only weak positive effects for bryophytes and even negative ones for vascular plants. This contrasts with the strong positive effect of microrelief on plot-scale richness of vascular plants found in dry grasslands of Sweden and Siberia (Löbel et al. 2006; Polyakova et al. 2016). We do not have a good explanation why the Swiss inneralpine dry grasslands deviate here from similar plant communities elsewhere and from theoretical expectations. We also considered stone cover as a measure of heterogeneity, assuming highest heterogeneity and thus highest richness at intermediate levels of stone cover. However, while the relationships had a negative quadratic term for all three taxonomic groups, only bryophytes showed a peak at intermediate stone cover, while vascular plants showed a monotonously decreasing and lichens a monotonously increasing curve (Fig. 2). Apparently, the areas with very shallow soils directly surrounding rocks and stones created spaces with low competition from vascular plants, beneficial for terricolous bryophytes and even more so for terricolous lichens, which overcompensated the loss of area to the stones themselves (as we did not record saxicolous species). On the other hand, for vascular plants, additional rocks and stones inside the plot just meant a loss of inhabitable area. Finally, we tentatively also listed gravel cover under the heterogeneity-related factors.
However, here we found a negative effect on all taxonomic groups, but particularly so for vascular plants. The difference between gravel and stones or rocks could be that the former, owing to its smaller weight, can be more easily moved by various agents (wind, water, trampling) and can thus damage surrounding plants, particularly small ones. Moreover, gravel, if not embedded in a matrix of fine soil, has an extremely low water-holding capacity and thus could even exclude most species due to desiccation.
Diversity-environment relationships across spatial scales
We found that the explanatory power of our models for total richness was negligible for the two smallest grain sizes and then continuously increased towards 100 m². Similarly, previous studies using the same method in dry grasslands of other regions (Kuzemko et al. 2016; Polyakova et al. 2016; Talebi et al. 2021; but see Dembicz et al. 2021a) found an increasing amount of explained variance with grain size. This makes sense for the following two reasons: (a) ecologically, one should expect species interactions (not covered by the measured environmental variables) to become more important towards finer grain sizes and thus to lead to more variation and lower predictability; (b) methodologically, the scale mismatch also increases towards the smaller grain sizes, as most predictors were measured for 10 m² and the climate variables for even larger grains.
While some predictors had a strong and consistent effect across the studied grain sizes (southing and litter cover), mean annual precipitation was important only for the three largest grain sizes, whereas gravel cover had high importance for the intermediate grain sizes only (0.01-1 m 2 ) and soil pH even changed the sign of the impact from positive at smaller grain sizes to slightly negative at larger ones. These findings are in agreement with the general notion that diversity-environment relationships are strongly scale-dependent (Shmida and Wilson 1985;Field et al. 2009;Siefert et al. 2012). Specifically, the shift in the relative importance of soil factors (like pH and gravel cover) at smaller grain sizes to climatic variables (e.g. mean annual precipitation) at larger ones agrees with theoretical expectations (Shmida and Wilson 1985;Siefert et al. 2012). It is also largely consistent with the results from similar studies on dry grassland vegetation in other parts of the Palaearctic (Turtureanu et al. 2014;Kuzemko et al. 2016;Dembicz et al. 2021a;Talebi et al. 2021).
Species-area relationships
With a mean of 0.24, the z values found in the Swiss inneralpine dry grasslands are intermediate. It exactly matches the mean value reported for more than 2000 nested-plot series of various grasslands and other open habitat types across the Palaearctic modelled in the same way (Dengler et al. 2020a: Supporting Information 9). Specifically for the two studied vegetation classes, Festuco-Brometea (large majority of plots) and Sedo-Scleranthetea, Dembicz et al. (2021c) report z values calculated in log-S space of 0.239 and 0.325, respectively. Thus, in contrast to α-diversity (see above), β-diversity (of which z values are a measure) is not outstanding in the region. Compared to other studies using the same method, z values were well explained by environmental variables, with an R 2 of 0.60. The three most relevant predictors were mean annual precipitation (unimodal), mean annual temperature (positive) and inclination (negative). Maximum microrelief as our most direct measure of within-plot heterogeneity had a positive, but minor effect. These findings for z values of total species richness deviate quite strongly from the comprehensive study of Palaearctic open habitats (Dembicz et al. 2021b: Appendix S5) where the explained variance was low in general, and particularly low for the climate variables mean annual precipitation and mean annual temperature. Why these macroclimatic variables have such a strong influence on fine-grain beta diversity in our study system remains unclear for the time being.
Finally, when analysing the scale dependence of z values, we found a slight peak for the transition from 0.01 to 0.1 m², with decreases towards both the smaller and the larger grain sizes. These findings correspond to those in the regional studies of Turtureanu et al. (2014), Polyakova et al. (2016) and Talebi et al. (2021), while other studies did not find a scale dependence of z values (Kuzemko et al. 2016; Dembicz et al. 2021b). This demonstrates that the scale dependence of z values in dry grasslands of the Palaearctic is generally weak, but if there is one, it always exhibits a peak at grain sizes clearly below 1 m², pointing to a very fine-grained community organisation. In a recent Palaearctic synthesis, Zhang et al. (2021) found the same when basing the calculation of z values on shoot presence, as we did. From a methodological point of view this means that, except for very detailed studies, one can safely analyse dry grasslands using the "normal" power function with a constant z value, as proposed by Dengler et al. (2020a).
Conclusions and outlook
We found that patterns and drivers of species richness vary strongly between the three taxonomic groups (vascular plants, bryophytes and lichens) as well as across the seven studied grain sizes. This variation was mostly consistent with theoretical expectations and previous findings in other grassland types. Thus, our study contributes to an increasing knowledge of the often-neglected phenomena of taxon- and scale-dependence. It follows that one must be cautious, both in ecological research and in biodiversity conservation, when transferring findings from other studies sampled with a different grain size or when considering one taxonomic group as a surrogate for another. While the findings of this study largely confirmed our prior hypotheses, there were some unexpected deviations, e.g. the low importance of microrelief in the models and the high explained variance in the model for within-plot β-diversity (z values). Also, the low alpha diversity of vascular plants in Swiss inner-alpine valleys compared to dry grasslands in mountain valleys elsewhere in the Palaearctic remains puzzling. Understanding the reasons for these unexpected patterns will require analysing alpha diversity at standard grain sizes with high-quality data across many regions of the Palaearctic simultaneously, including many potential drivers, an enterprise for which the steadily growing GrassPlot database (Dengler and Tischew 2018a, Dengler et al. 2018b; Biurrun et al. 2021) provides good opportunities.
Data availability The vegetation-plot data underlying this study are stored in the GrassPlot database (https://edgg.org/databases/GrassPlot), from which they can be requested according to the GrassPlot Bylaws.
Code availability The R code used for the analysis is available upon request from the first author.
Conflict of interest
We declare no conflict of interest.
Ethical approval The field work in protected areas was permitted by the conservation authorities of the cantons of Vaud, Valais and Grisons.
Consent to participate Not applicable.
Consent for publication All authors have approved the submitted manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2020-07-30T02:10:03.707Z
|
2020-07-28T00:00:00.000
|
225464476
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cogitatiopress.com/urbanplanning/article/download/2941/2941",
"pdf_hash": "cec964797cf06cc8aa8e4b4fd827c4577412659d",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44108",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "f8a36e8712a480db962981978e2c1b90499d92dd",
"year": 2020
}
|
pes2o/s2orc
|
Between Hospitality and Inhospitality: The Janus-Faced 'Arrival Infrastructure'
Although 'arrival infrastructure' is central to the experience of migrants arriving in a new city, is it sufficient to form a 'hospitable milieu'? Our article compares newcomers' experiences with 'arrival infrastructure' in two European cities: Brussels and Geneva. Based on ethnographic research with 49 migrants who arrived a few months earlier, we show that arrival infrastructure is Janus-faced. On one hand, it welcomes newcomers and contributes to making the city hospitable. On the other hand, it rejects, deceives and disappoints them, forcing them to remain mobile (to go back home, go further afield, or just move around the city) in order to satisfy their needs and compose what we will call a 'hospitable milieu.' The arrival infrastructure's inhospitality is fourfold: linked firstly to its limitations and shortcomings, secondly to the trials or tests newcomers have to overcome in order to benefit from the infrastructure, thirdly to the necessary forms of closure needed to protect those who have just arrived and fourthly to those organising and managing the infrastructure, with divergent conceptions of hospitality. By using the notion of milieu and by embedding infrastructure into the broader question of hospitality, we open up an empirical exploration of its ambiguous role in the uncertain trajectories of newcomers.
Introduction
The notion of 'arrival infrastructure' has increasingly been used over the last five years to describe the places, services, institutions, technologies and practices with which migrants are confronted in their process of arrival in a new city. The notion of infrastructure allowed scholars to see beyond the 'arrival neighbourhood' and to locate the process of arrival in a much wider context (Meeus, Arnaut, & van Heur, 2019). Although scholars acknowledge the ambivalent role of arrival infrastructure, it mostly bears positive connotations and is sometimes equated with resources. We recognise that the lack of such infrastructure is problematic for migrants, but we also warn against the idea that it is automatically hospitable to newcomers. We argue that, owing to its ambiguity, arrival infrastructure is Janus-faced. On the one hand, it welcomes newcomers and contributes to making the city hospitable. On the other hand, it rejects, deceives and disappoints them, forcing them to remain mobile (to go back home, go further afield, or just move around the city) in order to satisfy their needs and compose what we will call a 'hospitable milieu.' Sometimes, arrival infrastructure even leads newcomers to reconsider their project of settling and to continue their journey.
Through ethnographic research and interviews with newcomers who arrived in Brussels and Geneva no longer than six months earlier, we analysed where they slept, but also where they spent their days and what they did. We argue that arrival infrastructure can be inhospitable in four ways which are paradoxically induced by properties and characteristics designed to stabilize the reception potential of the arrival infrastructure. Firstly, in order to secure a certain turnover and avoid appropriations, infrastructure always comes with limitations and shortcomings in terms of duration, space and amenities. For example, night shelters limit the number of consecutive overnight stays and close during the day. Secondly, the limitation of accessibility implies that benefiting from infrastructure requires overcoming certain trials or tests. These can be administrative (filling out a form) or logistical (arriving at a particular location or picking up a ticket in the morning to get a meal at noon). Thirdly, hospitality necessarily requires forms of closure to protect those who seek refuge. Low-threshold infrastructure can hardly be hospitable while being completely and permanently open and accessible to everyone (Trossat, 2019). Fourthly, social workers, activists and stakeholders organising and managing infrastructure have divergent conceptions of hospitality and aim to foster different types of relationship. Depending on who has the upper hand, infrastructure can be, to varying degrees, the centre of an inhospitable milieu.
What are the consequences of this ambivalent hospitality for newcomers? How do they create for themselves a 'hospitable milieu,' not only to meet their basic needs but also to pursue more consistent and elaborate plans or projects? The comparison of our two cases will raise questions concerning the link between the density of the arrival infrastructure and how easy it will be for newcomers to settle in. On one hand, Geneva is one of the wealthiest cities in the world, offering rather large and diverse arrival infrastructure. However, finding housing and a stable source of income there seems more complicated than in Brussels. On the other hand, some newcomers do not wish to stay in Brussels, but rather see the Belgian capital as a stopover on their way to England. Newcomers' expectations of the arrival infrastructure are therefore variable.
From Arrival Area to Arrival Infrastructure
Studying arrival areas has a long tradition in urban sociology. Chicago School scholars studied the 'ports of first entry' (Park, Burgess, & McKenzie, 1984) in the largest Midwest city. Their ecological and functional model implied that immigrants concentrated in specific areas. Cities with such areas were later labelled 'gateway cities' (Burghardt, 1971). Typically, large metropolitan areas with important immigrant populations were viewed as entrance points for immigrants. Apart from some optimistic depiction of the 'arrival city' (see Saunders, 2012), where newcomers experience upward social mobility as they settle down permanently, arrival areas have also been described as places of exclusion and of fierce competition. Just like the ghetto, the 'arrival city' can be both a sword and a shield (Wacquant, 2005, 2018). It can be both a place of confinement and control, and a place of (self)protection.
The literature also raises the issue of scale: the arrival space ranges from the large metropolitan area such as Los Angeles (Benton-Short, Price, & Friedman, 2005) to a wasteland or a park turned into an ephemeral 'camp,' as we can see with the Calais 'jungle' in France (Agier, 2018; Djigo, 2016), or with Maximilian Park, a public park next to the Brussels North train station which, since 2015, has on several occasions been transformed into a camp for migrants (see Depraetere & Oosterlynck, 2017; see also Carlier & Berger, 2019; Deleixhe, 2018; Lafaut & Coene, 2019). The notion of infrastructure has allowed scholars to see beyond the 'arrival neighbourhood' and-following a post-colonial sensibility-to locate the process of arrival in a much wider context. For instance, Xiang and Lindquist defined 'migration infrastructure' as "the systematically interlinked technologies, institutions, and actors that facilitate and condition mobility" (Xiang & Lindquist, 2014, p. 122). Hall and colleagues argued that the 'migrant infrastructure' "is subject to a multitude of interpretations and events well beyond the confines of the neighbourhood" (Hall, King, & Finlay, 2017, p. 1313). For example, they show that the 'migrant infrastructure' in Birmingham and Leicester is shaped by the reaches of the former British Empire and by a more recent phenomenon like the 2008 financial crisis. They also show that the geography of the 'migrant infrastructure' is connected with the industrial past of these cities, explaining "why certain migrants 'land' in certain parts of the city" (Hall et al., 2017, p. 1315). However, while the notion of 'migration infrastructure' (Xiang & Lindquist, 2014) focuses on what makes people move, 'migrant infrastructure' (Hall et al., 2017) refers to a long-term process of 'migrant sedimentation.' These notions do not exactly focus on the process of arrival or 'transit' (Djigo, 2016).
With the concept of 'arrival infrastructure,' scholars proposed an alternative to teleological and normative understandings of the notion of 'arrival neighbourhood.' This concept "emphasizes the continuous and manifold 'infrastructuring practices' by a range of actors in urban settings, which create a multitude of 'platforms of arrival and take-off' within, against, and beyond the infrastructures of the state" (Meeus et al., 2019, p. 2). Although scholars acknowledge its ambivalent role, arrival infrastructure mostly bears positive connotations. For example, Boost and Oosterlynck (2019, p. 154) explain that "arrival infrastructures provide migrants with (in)formal job opportunities, cheap and accessible housing, supportive social networks." However, scholars also insist on the contingency of the experiences of infrastructure (Schillebeeckx, Oosterlynck, & De Decker, 2019; van Heur, 2017). As Graham and Marvin (2001, p. 11) put it: "The construction of spaces of mobility and flow for some, however, always involves the construction of barriers for others." The question of access to arrival infrastructure is often limited to legal status differences. Undocumented newcomers do not have access to infrastructure to which refugees have access, for example. However, the literature on non-take-up of social benefits shows that the mechanisms preventing people from benefiting from forms of assistance to which they are entitled are manifold, ranging from the difficulty in gathering the necessary information to the shame experienced by potential users. How these factors affect access to arrival infrastructure remains to be investigated.
Then, access is not the only issue, especially in the case of newcomers in transit. Indeed, scholars have criticised the overdetermined and unidirectional trajectory implied by the notion of arrival area: Migrants are considered as having reached their final destination and being engaged in a process of settlement (Schrooten & Meeus, 2019). There is a risk of overlooking forms of migration without settlement, such as movement of guest workers, or of migrants who have not 'arrived' but are on their way to a further and uncertain destination. How do these newcomers 'in transit' experience infrastructure meant to help them settle? In this regard, Price and Benton-Short (2008) suggest other functions to what they call 'gateway cities,' besides that of entry point. Gateway cities could also be "nodes of collection and dispersion of goods and information, highly segregated settings, sites of global cultural exchange, turnstiles for other destinations, and urban immigrant destinations and settlements" (Price & Benton-Short, 2008, p. 34). We will draw on this expanded conceptualisation, implying that such cities do not only welcome people who wish to settle there, but also people who are passing through.
In our research and in this article, we make use of the concept of 'arrival infrastructure' and introduce the idea of a 'hospitable milieu.' The concept of milieu-inherited from the schools of urban ecology, pragmatism and pragmatic sociology (Stavo-Debauge, 2020)-conveys a sense of active transaction between human behaviour and its environment. Derived from von Uexküll's notion of Umwelt, milieu designates the perceived and appropriated environment that emerges amid the attempts of an organism, whether human or non-human, to maintain and locate its form of life. As Dewey (1948) recalls, a milieu is "not something around and about human activities in an external sense." It is rather "intermediate in the execution of carrying out all human activities, as well as being the channel through which they move, and the vehicle by which they go on" (Dewey, 1948, p. 198, emphasis in original). Von Uexküll's metaphor perhaps says it even better: "Every subject spins out, like the spider's threads, its relations to certain qualities of things, and weaves them into a solid web, which carries its existence" (von Uexküll, 2010, p. 53).
By mobilising the notion of milieu, we aim to emphasise that studying a network of infrastructures is not sufficient: What matters is to understand their role in the making of a 'hospitable milieu' that allows for each newcomer, alone or collectively, to take her place-temporarily or in the long term-in the city. We claim that such a shift towards both the question of hospitality and the processual concept of milieu is necessary in order to account for the Janus-faced nature of arrival infrastructure. The hospitality of a milieu depends on its capacity to make room for newcomers, protect them from hostility, fulfil their needs, sustain their 'engagements' (Thévenot, 2007) and help them realise their projects, which may or may not entail a desire to belong to the city. This analytical shift is similar to the one Sen proposed with his 'capability' approach where he invited us to consider not only the distribution but also the condition of appropriation of resources necessary to participate in the constitution of a life judged as 'worth living' (Sen, 1985).
Investigating Newcomers in Brussels and Geneva
To analyse the Janus-face of arrival infrastructure and its (in)ability to constitute relevant hospitable milieux, we designed an ethnographic study focusing on the newcomers and their daily activities during their first months in the city. Our research took place in two (partly) French-speaking cities-Brussels and Geneva-that we consider as 'ordinary cities' (Robinson, 2006). Both cities perform a function of regional and national centrality in a region divided by administrative borders. The Brussels Capital Region comprises 19 municipalities and two linguistic communities for 1.2 million inhabitants concentrated within 161 km², while the Canton of Geneva is made up of 45 communes containing 585,000 inhabitants within 285 km² (the city of Geneva itself forms one of the communes and has around 200,000 inhabitants). On a broader scale, the Brussels metropolitan area's population is over 2.6 million, while the Grand Genève is a cross-border agglomeration encompassing 209 municipalities, some in Switzerland, others in France, with a population of 1 million. We believe it is instructive to compare such different cases in order to develop a transversal approach to cities' hospitality towards newcomers. Both urban areas have a long history of migration and a large population of foreign origin, and both continue to receive newcomers who challenge their (in)hospitality (Necker, 1995; Rea, 2013; Remund, 2012; Wauters, 2017).
Although we also interviewed activists, social workers and stakeholders, and led observations where they work, our analysis focuses here on those who depend upon arrival infrastructure: newcomers. They are more or less welcomed by "those who were already there and who together have appropriated the environment for their use" (Stavo-Debauge, 2017, p. 23). The notion of 'newcomer' reminds us that migrants or foreigners are not necessarily newcomers, as their arrival sometimes goes back years. Moreover, this notion allows for an investigation of the process of arrival, which for the purpose of our research we delimited to the first six months, in order to focus on the early stages of familiarisation. This concept also facilitates the comparison between Geneva and Brussels. Indeed, in Belgium, the notion of 'transmigrant' is commonly used to describe a category of newcomers in transit, as if they were categorically distinct from other kinds of migrants (see Glick-Schiller, Basch, & Blanc, 1995, p. 48, for other contexts where the word describes migrants "whose daily lives depend on multiple and constant interconnections across international borders." For a critical perspective on the notion, see de Massol de Rebetz, 2018). Our interest lies in people who have arrived recently-irrespective of their projects, destination or legal status-and who rely on 'arrival infrastructure' and search for hospitable milieux, even if they might not plan on settling in the city and belonging to its political community.
We focused on newcomers who can be described as poor, not necessarily because they "suffer specific deficiencies and deprivations," as Simmel put it, but because they "receive assistance or should receive it according to social norms" (Simmel, 1965, p. 138). The newcomers we met were unfamiliar with their new environment, they lacked a stable income and faced precarious housing situations. This made them all the more dependent on the infrastructure that is supposed to facilitate their arrival and provide them with an ounce of hospitality.
In Brussels, we interviewed 24 newcomers. They were originally from Afghanistan, Algeria, Chile, Colombia, Egypt, Eritrea, Iran, Morocco, Peru, Romania, Senegal, Sierra Leone, Spain, Syria and Turkey, and aged between 18 and 42 years. They had been in Brussels an average of five months at the time of the interview. In Geneva, we interviewed 25 people, originally from Cameroon, Colombia, Ecuador, France, Gambia, Morocco, Peru, Romania, El Salvador, Senegal, Syria, Turkey and the USA. They had been in Geneva an average of three months at the time of the interview. They were aged between 23 and 55.
We used several recruitment channels. We volunteered in organisations in order to get to know the population better and to get in touch with newcomers. Then, some participants introduced us to other potential participants. Finally, we met participants by chance on the street, in cafés and on trains. With most of them, we had the chance to conduct a semi-structured interview that we recorded and transcribed. With others, we had informal conversations and took notes. In each case, we made sure they understood that their involvement was voluntary, anonymous and that they could withdraw at any time.
We asked newcomers about their first weeks or months. They told us where they had been sleeping, where they had been eating, where they had sought information and advice, where they had been taking language classes and where they had killed time or kept themselves warm (most interviews took place during autumn and winter 2019). Based on their accounts, we tried to understand how they came to attend each part of the infrastructure, and what led them to stop going to such and such place. Newcomers explained how they had been received, but also how they had been rejected, deceived and disappointed, allowing us to distinguish between four dimensions of arrival infrastructure's inhospitality.
From 'Arrival Infrastructure' to 'Hospitable Milieu'
Our proposal to move from the notion of infrastructure to that of milieu is based on four dimensions of the Janus-faced arrival infrastructure. The first has to do with the limitations of the infrastructure itself, in terms of what it can offer to newcomers. The second has to do with the trials that condition access to the infrastructure and what it can offer. However, and this is the third point, accessibility is not necessarily enough and it may even limit hospitality. The fourth element concerns the actors involved in the arrival infrastructure, who may have conflicting understandings of what hospitality is. Lastly, we will insist on how a hospitable milieu lies in a transaction between the individual, with his or her characteristics and aspirations, and an environment that not only allows the newcomer to arrive, but also invites him or her to stay.
The Inevitable Limitations of Arrival Infrastructure
Firstly, institutional infrastructure always comes with limitations and shortcomings in terms of duration, space and amenities. This has to do with two typical and historical concerns of social institutions: the fear of unequal treatment and of abusive appropriation (Pattaroni, 2007). To address these concerns, various rules are set to avoid people staying too long and making themselves at home. The case of night shelters is exemplary. In Brussels and Geneva, most of them limit the number of consecutive overnight stays. For example, the Salvation Army's shelter in Geneva allows ten nights every month. After his ten nights there, Amadou-a 40-year-old Cameroonian we met one month after his arrival-went to the office where the local authorities issued a card that allowed him to stay for 30 nights in an underground shelter on the other side of the city. After a few nights, these confined housing conditions caused him to have epileptic seizures. Twice he woke up in the hospital, and some of his belongings left at the shelter were stolen. Amadou had left his public sector job in Cameroon temporarily with the hope to open an art gallery in Geneva. He never expected such a harsh living and housing experience: "There's no windows, it's a bunker. And there are some people (who) are in bad shape (and) that are very difficult to live with. I am not used to such living conditions." In Geneva, the use of anti-atomic shelters-renamed 'bunkers'-to provide temporary housing has been denounced as a strategy to deter new arrivals or repel newcomers (Del Biaggio & Rey, 2017).
Furthermore, these places close during the day. Although he feels that Geneva is rather generous and does a lot regarding "social issues," Amadou deplores the limited hours of the shelters: "Even on Sundays, you have to wake up at seven in the morning and leave at eight….Even if we have nothing to do, even if it rains or snows." Others, however, have had positive experiences with the shelters. Mehdi is of Moroccan origin and is 50 years old. He arrived 40 days before our interview and spent 25 days in the same shelter as Amadou. By contrast, he is used to living in difficult conditions and although he also complains about the opening hours, he thinks the underground shelter is "really good. It's the best, actually. You sleep, then have a shower, a breakfast….'' These two cases illustrate the conflictual nature of these shelters that welcome newcomers and at the same time are sometimes experienced as so inhospitable that they damage their guests' health. Some staff we spoke to would like to do more to accommodate their guests' needs if they had the means to do so. Others accepted this relative inhospitality, explaining that their primary mission is to provide emergency housing, not to offer long-term solutions. As usually stated by social workers driven by ideals of autonomy and activation (Cantelli & Genard, 2007), hospitality should not lead to dependency. This dimension of an infrastructure's inhospitality is thus not necessarily due to a lack of funding or of resources. The stakeholders organising the arrival infrastructure either wanted to prevent their users from settling in, or wished to focus on one type of service, or on one group of users, and thus voluntarily limited the extent of their hospitality. Incidentally, an important part of their work was to redirect users to other organisations. As a result, newcomers who depended on them had to navigate their way between multiple infrastructures in order to meet their needs.
The Trials of Arrival Infrastructure
Secondly, to profit from infrastructure requires overcoming trials and tests. The literature on 'non-take-up' of social benefits and assistance reveals that people sometimes lack awareness of their rights, but also sometimes lack the capacity to actualise them (van Oorschot, 1991). Indeed, complex administrative procedures complicate access. Moreover, the value of individual responsibility and a moral obligation to be self-sufficient lead people to not claim benefits despite being eligible for them. The same analysis applies to arrival infrastructure. Benefiting from it requires overcoming trials or tests.
The most obvious test is getting to know what is available. In the course of newcomers' first days in the city, social and community workers as well as internet pages and information boards provide addresses where they can seek assistance, food, shelter, clothes, etc. Newcomers also usually rely on word of mouth for recommendations. Those who had met and asked well-informed people, but also those who master French and know how to read information on paper and online, knew a significant amount about the arrival infrastructure. However, even in the smaller city of Geneva, and despite various organisations' communication efforts, the newcomers we met were always unaware of important opportunities and relevant amenities.
Then, knowing about the arrival infrastructure is not enough. To newcomers unfamiliar with the city and its language, finding their way around is a real test. Yonas-from Eritrea-had arrived in Brussels two months before we met. Once, he had an appointment with a lawyer who could have helped him with his asylum application: "I was looking for the address and I was close to there, you know, and my battery went off, my phone…and I've lost the address." Navigating the city and finding addresses are a crucial part of the process of arrival. It is no surprise that many newcomers told us of having invested some of their scarce economic resources in a local SIM card and public transport pass, often right after their arrival.
John, a 24-year-old Portuguese resident born in Gambia, had arrived two months before we met in Geneva. As he intensively searched for work and tried to distribute his resume to as many companies as possible, he insisted on the importance of his phone's GPS: "People tell me 'go to this place, this street,' I would not understand [because I don't speak French]. But when I put it in my phone, I can go directly." A friend of his buys him 30 francs (about 28 EUR) credit every month. These 30 francs might seem a superfluous expense for a person who has to monitor his expenditure scrupulously. But without a smartphone, the arrival infrastructure would be partly inaccessible to newly arrived people. A migrant interviewed by the ARCH research team stated that losing his phone or having his phone stolen was the worst thing that could happen (Mannergren Selimovic, 2019).
Of course, the phone itself is part of a constellation including telecommunications providers, GPS services, apps, etc. Infrastructure can thus be virtual, as in the case of Facebook pages through which newcomers exchange advice and information. The smartphone is not only an audiovisual window and door to their former 'homes' (Guérin, 2019), it is also an essential arrival device, compensating for, as is the case for tourists but in a more vital way, the lack of 'familiarity' (Felder, 2020;Thévenot, 2007). It helps newcomers with 'spatial integration,' what Buhr defines as learning "where to find shelter, soup kitchens or to distinguish safe areas from no-go zones" (Buhr, 2018, p. 3). Importantly, as Buhr reminds us, "learning to navigate a city does not necessarily have to do with one feeling at home in that space or with feeling one belongs there. Rather than having a set of spatial coordinates, urban apprenticeship is about understanding how a city works" (Buhr, 2018, p. 3). However, two newcomers do not have the same understanding of how the city works, as this knowledge is highly personal and localised. The concept of familiarity (Thévenot, 2007) thus better acknowledges the personal and ecological dimensions of newcomers' knowledge of how and where to find help and resources.
Finally, accessing arrival infrastructure also has a socio-psychological cost to reputation and self-worth. As suggested in the classical work of Margalit (1998) on the 'decent society,' what could be institutionally considered as 'just' and legitimate social aid could be experienced as humiliating. Exploring the experience of arrival infrastructure, we better understand how its appraisal depends on one's conception of dignity. Arman, an Iranian atheist seeking asylum in Brussels, stated that he stays away from soup kitchens and other humanitarian infrastructure as he is not at ease with heteronomous and asymmetrical relationships: "I don't like queues," he says, "I'd rather die than be like that" (he mimics begging). His case echoes the one of Diego, who arrived in Geneva from Colombia with a tourist visa and no intent to seek asylum. His uncle, who hosted him in his studio apartment, gave him one month to find a job. Diego attended free French classes but was reluctant to ask for other forms of help than that offered by his uncle: "I want to make a living on my own merit, you understand?" After having dropped dozens of resumes off to businesses, temporary staffing firms and even to passersby, Diego resolved to leave Switzerland and try his luck in Spain, where he at least speaks the local language. His uncle bought him a plane ticket and directed him to an acquaintance in Catalonia. While unquestionably helpful to newcomers, arrival infrastructure (even the highly personal aspects) contains certain barriers to entry.
Openness and Accessibility Are Not Everything
A third way, intrinsic to hospitality, in which infrastructure can both welcome and repel (or even reject) lies in the contradictory combination of openness and protection-which implies appropriation and closure (Stavo-Debauge, 2017). The fact that shelters, soup kitchens and other low-threshold places are open to all paradoxically limits their ability to provide a peaceful and safe place. The collective shelter was not hospitable to Amadou because he did not have control over whom he had to share his room with and had no opportunity of appropriating the place in a personal and familiar manner.
As illustrated by its archetype of welcoming someone into your home, hospitality necessarily requires forms of closure to receive and protect those who seek refuge in its milieu (Stavo-Debauge, 2017). When Major-a young Eritrean we interviewed-first arrived in Brussels, he stayed only three days before going to the Netherlands where he remained for two months and two weeks. He came back and then went to Calais for five months in the hope of reaching the UK, before turning back and deciding to stay in Brussels. While there, Major avoided collective shelters: "[There's] too much stress…, it is too loud, there are a lot of people." He preferred to sleep by himself in what he called the 'Green Hotel' (i.e., the Maximilian Park), but soon stopped going to the park to avoid the company of its other occupants who were in a similar situation. "It's negative to see the others…if you live in the street, you cannot have dreams," he told us. Major abandoned his idea to reach the UK and resolved to seek asylum in Belgium. He was then hosted in two flats by two Belgian citizens who offered him the comfort of a room and the possibility of closing a door behind him. But being able to close a door and to rest in a safe place does not mean living in isolation, cut off from the outdoors. One of Major's hosts offered him a bicycle, which he used not only to reach his temporary home, but also, for example, to reach the language school run by the volunteers and located five kilometres south of the Northern Quarter, knowing that his belongings were stored safely at home. To compose a hospitable milieu, infrastructure cannot be completely and permanently open and accessible, as it shall offer protection from unwelcome social company, from public exposure and inquisitorial gazes and from other drawbacks of street life (Carlier, 2018).
The Human Dimension of Infrastructure
The fourth dimension concerns the actors involved in the arrival infrastructure. The degree to which infrastructure is welcoming and can be considered as a resource and safe, profitable place can be highly variable, as it is caught up in power struggles between parties with different conceptions of hospitality. For example, between 2014 and 2018, some material transformations occurred in and around the immediate vicinity of Maximilian Park in Brussels. While humanitarian NGOs, activists and concerned citizens, like those gathered around the Citizen's Platform (Deleixhe, 2018), tried to facilitate hospitality within the park and to foster a welcoming atmosphere, with various portable facilities and temporary arrangements, others were less inclined to do so. Public benches were displaced, CCTV cameras appeared, trees were cut down and fences were erected (as documented in Dresler, 2019). While the former had done their best to improve the experience of migrants, other actors did what they could to deter their presence.
People also intervene directly in the way infrastructure is experienced. For example, the staff at reception centres usually answer questions and inform, while some newcomers would need them not only to retrieve telephone numbers, but also to make the phone call for them. Newcomers experience this approach as a 'limited' hospitality (Thévenot & Kareva, 2018). The latter is formed and constrained by the 'liberal grammar of communality' where everybody (even unfamiliar newcomers) is treated-and is expected to act-as an 'autonomous individual.' More fundamentally, this raises the question of conflictual understandings of what a good form of hospitality is, liberal forms being based on a non-interference principle whereas other 'grammars of hospitality' (Stavo-Debauge, 2017) expect more active engagement from the hosts.
People facilitating access to infrastructure (or turning a blind eye to heterodox uses of places) is therefore crucial. Be it waiters and waitresses who do not wake a newcomer sleeping at a café, park wardens who ignore or guard sleeping bags in Maximilian Park (Lempereur, 2019), citizens hosting newcomers in their houses or providing transportation with their cars, they are all temporary but essential parts of the infrastructure as they all contribute to ensuring a certain level of hospitality to newcomers. Having many 'qualities' besides a simple 'opening' (Stavo-Debauge, 2018), hospitality is duly judged by the newcomers who happen to be affected by limitations, constraints and requirements of places where they are received. In other words, people and places providing what may appear as valuable resources do not always positively affect newcomers' experience of hospitality.
Sometimes it is the whole city's potential to provide a hospitable milieu for the projects and aspirations of the newcomer that is questioned. Before heading to Geneva, Amadou had experienced staying in a small Swiss city in the Alpine region (population: 20,000), where he first arrived in Switzerland. While he had had the possibility of good housing conditions there, it rapidly became apparent to him that the small city was ill suited for his project to open an African art gallery. Driven by his desire to find an urban environment hospitable to-and suitable for-such a project, he quickly left the small city and went to Geneva, exchanging in the process a warm welcome at a friend's house for basic and precarious accommodation in a Salvation Army centre, before ending up in an underground shelter.
Being hosted by friends or relatives, however, is no guarantee of hospitality. In Brussels, even if he managed to obtain a place in an aunt's apartment, which would seem to offer a good level of hospitality, especially when friends of his slept in Maximilian Park, Omar still decided to leave this setting, judging that the hospitality on offer was "abusive": When I arrived here, the family in Senegal put me in touch with my aunt….In fact, I encountered quite a lot of difficulties. I was the one who bought the food, I helped with the electricity, the bills and everything, even the medicines, I was buying….Her home was her home, she was abusing the situation and that's why I left there.
In Omar's case, his aunt's hospitality was problematic due to being far from unconditional. However, not having to bear a financial burden is not always enough to make one appreciate the hospitality given. Migration scholarship sometimes depicts migrants' social networks only in a positive light. However, as Simone put it, people can be considered as forming part of an inhospitable infrastructure because they engage in transactions not necessarily based on solidarity (Simone, 2004, p. 419) or equity, raising the question of profit-oriented infrastructure and, too often, abusive ones as is the case with 'slumlords' or 'loan sharks.'
Looking for a Hospitable Milieu
Newcomers constantly experience the various dimensions of a Janus-faced arrival infrastructure, requiring active work to constitute a hospitable milieu that will allow them to find a satisfactory way to temporarily or more lastingly take their place in the city. The first side is welcoming and essential for their survival. It offers them a place to spend the night, to eat, to learn the local language, to work on a resume, etc. The other side, however, is less welcoming, as we have just shown. Even if this negative side can be experienced on the first day, it sometimes only appears once the most urgent issues are dealt with, when newcomers start to assess their new lives and try to fulfil their projects and desires. The search for a hospitable milieu may then involve mobility: going back home, going further afield, or just moving around the city.
The last time we met Amadou in Geneva, he was coping with life in the shelters. His plans to open an art gallery were slipping away and he was even considering returning home. One month after we met, Diego had left for Spain. He had been welcomed by his uncle who offered to let him sleep on the couch of his small studio for a month. But after this period, he was unable to find work on the informal labour market, so his uncle asked him to leave. In Brussels, Yonas applied for asylum and was subsequently forced to leave the city. The authorities sent him to an accommodation centre in Liege, where he now lives, despite coming back to Brussels regularly for interviews with migration officers. For newcomers, an obvious consequence of this Janus-faced, ambivalent welcome appears to be the obligation to be mobile. However, this mobility requires caution and risk assessment.
In Brussels, while Yonas remained very mobile, being forced to expand his 'arrival area,' other newcomers restricted their movements and made sure they did not hang around too much in open public spaces, especially at night. For them, the street is a place of 'mistrust' (Le Courant, 2016): mistrust of police control but also of ordinary civil interactions that can go wrong, and then possibly involve the police. Omar, a Senegalese man who once slept in Maximilian Park and now resides in the south of Brussels, often roams in Matongé, a neighbourhood with a large African population (Rea, 2013), but only during daylight. He told us: There are environments where, you see, it's a bit dangerous because often there are controls.…If I'm not working, I'm at home, otherwise I'm in Matongé at my friends' house until seven, eight PM, then I go home.
But then I have friends who go out at night. They ask me to go out and I say 'no, I don't go out at night.' Such a fear is not equally distributed, even among the undocumented newcomers. It varies with their origins and phenotypes (are they part of a visible racialised minority or not?), their stage in the migration process (are they still on the road or settling?), and their gender. In contrast to Omar's situation, Melissa-a 42-year-old Peruvian woman-had family members who hosted her and helped her with her daughter's education and finding a flat. Even if undocumented, she feels safe and she does not even mention the police among possible 'worries': So far we've had a lot of good experiences, we haven't had any problems at all, like racism or…no, no worries.…The most positive case is that even if we don't have proper papers, our children can study. That's the most positive.
These differences highlight the perceptual and relational dimensions of the milieu.
While we emphasised the experiences of inhospitality that cause newcomers to leave or consider leaving, not all of them had plans to stay. These are migrants "in transit who only stay…for the time it takes to find a way to cross the Channel to reach Great Britain," who "do not wish to apply for asylum in Belgium and are therefore neither protected by the Geneva Convention nor eligible for a place in reception centres" (Deleixhe, 2018, p. 131). Among them, some-like Major, whose case we described earlier-eventually build up a sufficiently hospitable milieu to decide to stay. However, others do not abandon their dream of reaching England. Sara had been in Brussels for two months when we met, but she had left Eritrea six months prior to that. She arrived in Italy, stayed there only one day before taking a bus to Brussels. She chose Brussels in order to go to England: "I knew it was good to come here to go to the UK." With a friend in the same situation, they spend, on average, one night outside attempting to travel to England and then one night in a collective shelter or in a 'family,' i.e., enjoying the private hospitality of citizen hosting set up by the Citizen's Platform. With her mind set on arriving in England, she did not care much about her living conditions in Brussels: "I don't care of cooking, of quality of food…the only thing important is 'I go UK.' When I wake up, I think 'I go UK' and that's it." To her, the park is part of a 'departure infrastructure,' a site where she can wait, protect herself from police hostility (Printz & Carlier, 2019) and 'organise' her journey to Great Britain. To people like her, infrastructure proves hospitable when it allows them to rest and sleep during daylight, as the night is a time for the 'try'-that is, when they take their chance to reach the UK.
The four dimensions of Janus-faced infrastructure make it difficult to assess beforehand how hospitable a city will be. We have seen that its ability to become a hospitable milieu for a newcomer depends not only on the characteristics and aspirations of the newcomers themselves, but also on the qualities of the infrastructure, the trials that limit access to it, its ability to provide protection, and finally the people who manage it. However, there are dimensions of the environment that affect all newcomers and either promote or limit their ability to weave, like von Uexküll's spider, a web to sustain their existence.
Although Geneva-one of the richest cities in the world-offers a rather large and diverse arrival infrastructure, finding housing and a stable source of income there seems more complicated than in Brussels. Geneva's saturated housing market and high cost of living can hardly be compensated for by the arrival infrastructure. While providing a more limited arrival infrastructure, Brussels seems more auspicious for the creation of hospitable milieux. However, some newcomers do not wish to stay there, but rather see the Belgian capital as a stopover on their way to the UK. Their expectations of the arrival infrastructure are therefore distinct. The newcomer with no intention of settling will tend to keep a very instrumental relation to infrastructure while this changes when someone starts familiarising themselves with a broader milieu.
Conclusion
This article tackled the Janus-face of arrival infrastructure. Although a lack of such infrastructure is problematic for newcomers, we showed that infrastructure does not automatically prove hospitable. On one hand, it welcomes newcomers and contributes to making the city hospitable. On the other hand, it rejects, deceives and disappoints them, forcing them to navigate between multiple parts of the infrastructure in order to satisfy their needs and compose a hospitable milieu. Indeed, as we have shown, infrastructure offers limited and often conditional resources. Moreover, accessing these resources involves overcoming trials (finding information, locating places, overcoming a sense of stigma, etc.). We have also shown that hospitality is not just a question of access, and that infrastructures that are open to everyone sometimes fail to provide the protective shield that some newcomers need. Finally, we discussed the sometimes conflicting positions of those who manage the infrastructures. Different 'grammars of hospitality' (Stavo-Debauge, 2017) coexist, ranging from a non-interference principle to more active engagement from the hosts.
This analysis casts the arrival infrastructure back into the broader and more ambiguous history of the management of poor and mobile populations. In his history of poverty, Geremek shows that the poor have almost always sparked both compassion and repression. In the Middle Ages, he wrote, "the gallows and the alms house have stood side by side" (Geremek, 1994, p. 8). Today, this tension is particularly salient in the case of the mobile poor who face "compassionate repression" (Fassin, 2005, p. 362). Although migrants face increasing restrictions on their social and legal rights, they are nevertheless offered various forms of assistance by the state, NGOs or private citizens. The motives behind this assistance are de facto much more complex than the simple opposition of compassion and repression as they entail considerations of legal duty, moral responsibility, political solidarity and so on.
To better reflect this complexity, we have proposed the notion of 'hospitable milieu.' This notion of milieu challenges the idea that the hospitality of an environment towards a newcomer can be assessed beforehand as a function of its arrival infrastructure. The milieu, as we have shown, is shaped by a dynamic relationship between the individual and the environment. It emerges in the transaction between the potentialities of an environment and an individual with specific characteristics, aspirations, cognitive and practical skills, resources, and moral and political convictions. Such transaction and the specific role of the different characteristics of newcomers deserve further research. Of special interest is the question of the moral conceptions of what it means to be welcomed and helped in relation to different ideas of dignity and 'good' ways of life.
Notwithstanding those further developments, the notion of milieu appears essential as it reflects, on one hand, what the environment has to offer the newcomer: This includes the arrival infrastructure as understood by Meeus et al. (2019), but also the qualities of the social and built environment which, beyond the moment of arrival, will or will not allow the newcomer to take her place in the broader urban order. These include, for example, the general level of prices, which is much higher in Geneva than in Brussels, or the degree of openness in the labour and housing markets, which seems to be greater in Brussels than in Geneva.
On the other hand, the notion of milieu takes into account the different ways in which newcomers experience this environment and realise their projects in it. Importantly, we pointed out in the case of the 'transmigrants' in Brussels that this project does not always involve settling in. Moreover, the constitution of a milieu does not only depend on infrastructure and resources. For example, we have shown that the public space can be more or less hospitable depending on the gender, race, appearance, and legal status of the newcomer. Finally, hospitality cannot be limited to providing access and enabling survival. A hospitable milieu is one that invites the newcomer to stay.
Luca Pattaroni (PhD) is a Sociologist and Maître d'Enseignement et de Recherche at the Laboratory of Urban Sociology of the EPFL, where he leads the research group "City, Habitat and Collective Action." He has been visiting professor at the Federal University of Rio de Janeiro and visiting scholar at Columbia University. He is a board member of the Swiss Journal of Sociology and of Articulo, the Journal of Urban Research. His work is concerned with the expression of differences and the making of the common in contemporary capitalist cities.
Marie Trossat is an Architect and Socio-Anthropologist, and a PhD student at the Laboratory of Urban Sociology of the Swiss Federal Institute of Technology in Lausanne (EPFL) and at the Metrolab in Brussels. Besides three years of experience in architectural practice in Paris and Brussels, she has explored the question of habitability in prisons, monasteries, campsites, squats, homelessness and occupied land. Her doctoral research investigates forms of hospitality in Brussels by following 25 newcomers' arrival routes and conducting ethnographic studies of diverse arrival infrastructures.
Guillaume Drevon (PhD) is a Geographer. He is developing his research at the Urban Sociology Lab of the Swiss Federal Institute of Technology Lausanne. His research focuses on life rhythms. Dr Drevon develops new conceptual frameworks and methods to analyse urban rhythms. He is currently developing the concept of rhythmology to analyse contemporary mobilities and societies. He has recently published ten articles about life rhythms, a book about rhythmology and three co-edited books concerning urban rhythms.
Arts and culture engagement and mortality: A population-based prospective cohort study
Aims: The aim of this study was to investigate associations between having visited the theatre/cinema and an arts exhibition during the past year and all-cause, cardiovascular disease (CVD), cancer and other-cause mortality. Methods: The 2008 public health postal survey in Scania, Sweden, was distributed to a stratified random sample of the adult population (18–80 years old). The participation rate was 54.1%, and 25,420 participants were included in the present study. The baseline 2008 survey data were linked to cause-of-death register data to create a prospective cohort with 8.3-year follow-up. Associations between visit to the theatre/cinema, visit to an arts exhibition and mortality were investigated in survival (Cox) regression models. Results: Just over a quarter (26.5%) had visited both the theatre/cinema and an arts exhibition during the past year, 36.6% only the theatre/cinema, 4.9% only an arts exhibition and 32% neither of the two. Not visiting the theatre/cinema during the past year was associated with higher all-cause and CVD mortality. Not visiting an arts exhibition was associated with higher all-cause and other-cause mortality. The combination of having visited neither the theatre/cinema nor an arts exhibition during the past year was associated with higher all-cause, CVD and other-cause mortality. Conclusions: There is an association between attending arts and culture activities and a reduced risk of CVD and other-cause mortality but not cancer mortality, although model imperfections are possible.
Introduction
Non-communicable diseases explain approximately 71% of global mortality according to the World Health Organization [1]. In 2019, an estimated 17.9 million people died from cardiovascular diseases (CVDs), representing 32% of all global deaths, and 38% of 17 million premature deaths (below the age of 70) were caused by CVD [2]. Although CVDs are highly preventable, they are the most common cause of death in the world [3]. CVDs can be prevented by addressing behavioural risk factors (including tobacco use, unhealthy diet and obesity, physical inactivity and harmful use of alcohol) [2]. They place a considerable burden on budgets and health-care systems [4].
In Sweden, CVD is the main cause of morbidity and mortality [5]. Measures to reduce the burden of CVD have been prioritised in Sweden with initiatives and programmes implemented by the Swedish National Institute of Public Health [6]. Such initiatives and programmes are combinations of both community- and individual-based strategies [5]. Cancer is a leading cause of death worldwide in countries of all income levels, and cases and deaths are expected to rise rapidly as populations grow, age and adopt lifestyles that increase cancer risk [7]. In the Nordic countries, more than one in four deaths were due to cancer in 2015, and cancer is the disease group causing the most burden in terms of disability-adjusted life years (DALYs), to a higher extent even than CVDs [8].
The use of the arts for health promotion and prevention of illness has increased on a global scale [9] as well as in Scandinavia [10]. Epidemiological studies have shown that engagement with the arts and culture can have positive effects on physical health. Arts and culture engagement may protect against chronic pain in old age [11]. Such engagement has also shown associations with survival [12] (and longevity with prolonged follow-up time [13]) and a lower risk of CVD and cancer mortality [14] and cancer incidence in urban areas [15]. It can also have mental health benefits such as increased life satisfaction and a lower risk of anxiety and depression [16]. Still, only a few prospective cohort studies, including the Norwegian HUNT study [14], are available that look at associations between arts and culture engagement and mortality.
The aim of this study was to investigate associations between having visited the theatre/cinema and an arts exhibition during the past year and all-cause, CVD, cancer and other-cause mortality in a prospective cohort study, adjusting for relevant covariates.
Study population
A public health survey was conducted in Scania, southern Sweden, in the autumn of 2008. This cross-sectional survey was based on a stratified sample of the adult (between 18 and 80 years old) register population on 1 January 2008. A letter of invitation was posted together with a questionnaire, and three postal reminders were sent out to the sample population. A questionnaire version was also available online. Some 28,198 respondents chose to participate (a 54.1% participation rate). This cross-sectional public health survey was conducted by Region Skåne, the regional public authority responsible for the health-care system in Scania. The public health questionnaire included 134 items including, for example, socio-demographic characteristics, self-reported health, self-reported psychological health (e.g. GHQ-12, SF-36), social support, social capital, working conditions, health-related behaviours, discrimination and items related to neighbourhood security. The public health questionnaire in Scania 2008 was designed to achieve a broad general overview of the public health situation in Scania at that point in time. The stratified random sample was stratified geographically by 59 municipalities and city parts (in the four major cities: Malmö, Lund, Helsingborg and Kristianstad). The number of participants in this geographical stratification was based on age, sex and education to obtain statistical power in all areas. The stratified sample was selected by Statistics Sweden from the national population register, where the population weight was created. The weight compensates for the stratification to achieve representativeness of the entire Scania population. The 2008 cross-sectional baseline data were linked to register-based mortality (causes of death register) from the National Board of Health and Welfare (Dödsorsaksregistret at Socialstyrelsen), creating a prospective closed cohort population.
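As a purely illustrative aside (not part of the original analysis pipeline), design weights of this kind are typically applied by weighting each respondent's contribution to an estimate. A minimal sketch, assuming a data frame with a hypothetical pop_weight column:

```python
import pandas as pd

# Hypothetical example data: one row per respondent, with a design weight
# that compensates for the geographical/age/sex/education stratification.
df = pd.DataFrame({
    "visited_theatre_cinema": [1, 0, 1, 1, 0],
    "pop_weight": [310.5, 512.0, 287.3, 295.8, 604.1],
})

# Weighted prevalence of having visited the theatre/cinema during the past year.
weighted_prevalence = (
    (df["visited_theatre_cinema"] * df["pop_weight"]).sum() / df["pop_weight"].sum()
)
print(f"Weighted prevalence: {weighted_prevalence:.1%}")
```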
Dependent variable
All-cause and diagnosis-specific mortality was followed prospectively from between 27 August and 14 November 2008 (depending on the registration date of the individual answer) to 31 December 2016 (8.3 years of follow-up), or until death. A total of 25,420 participants were included in this study, after exclusion of 2642 respondents with internally missing values on any or several of the items analysed in this study. A further 136 participants in 2008 were lost at the follow-up. All respondents with any (one or more) internally missing values on any of the items/variables included in the multiple survival (Cox) regression analyses were thus excluded. The International Classification of Diseases, 10th revision (ICD-10), was used for causes of death. The connection of baseline survey data with national death register data is possible by using the 10-digit person number system. This was conducted by a third party (a private company). The person numbers were erased from the data set before delivery from the National Board of Health and Welfare to the research group.
All-cause (total), CVD (I00-I98), cancer (C00-C97) and all other causes (causes other than I00-I98 and C00-C97) of mortality were included as broad categories. All-cause mortality is the sum of the three broad cause-specific categories above.
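For illustration only, the grouping above could be expressed as a small helper. This is a hedged sketch rather than code from the study; the function name and code handling are assumptions:

```python
# Hypothetical helper: map an ICD-10 underlying cause-of-death code to the
# broad categories used in this study (CVD = I00-I98, cancer = C00-C97,
# everything else = other causes).
def broad_cause(icd10_code: str) -> str:
    chapter = icd10_code[0].upper()
    number = int(icd10_code[1:3])
    if chapter == "I" and 0 <= number <= 98:
        return "CVD"
    if chapter == "C" and 0 <= number <= 97:
        return "cancer"
    return "other"

# Example: I21 (acute myocardial infarction) -> CVD; C50 -> cancer; J44 -> other.
print(broad_cause("I21"), broad_cause("C50"), broad_cause("J44"))
```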
Independent variables
The two items 'visit to the theatre/cinema during the past year' and 'visit to an arts exhibition during the past year' were assessed with yes or no responses. The phrasing 'past year' refers to the period from autumn 2007 to autumn 2008 (depending on the exact date the respondent answered the questionnaire in the autumn of 2008). In Tables I and II, these two items were combined into the four categories: (a) visited both activities during the past year, (b) visited only the theatre/cinema, (c) visited only an arts exhibition and (d) neither of the two (no activity). The two items are part of a larger social participation category item which includes 13 items as well as the option to report none of the 13 activities during the past year at all. Theatre/cinema and arts exhibition were extracted because they are the only two items among the 13 social participation items that depict cultural activities.
With regard to sex, men and women were analysed together in all analyses, and age was included in the analyses as a continuous variable. Country of birth was defined as either born in Sweden (see Table I) or born in any country other than Sweden.
Socio-economic status (SES; by occupation and in relation to the labour market) is divided by Statistics Sweden into non-manual employees in higher, medium and lower positions; skilled and unskilled manual workers; and the self-employed (including farmers). The categories outside the workforce comprise the unemployed (job seekers) as well as the 'not job seeking' categories such as those who have retired early (<65 years of age), old-age pensioners (>65 years of age), students and people on long-term sick leave. There is also an unclassified category.
Information regarding chronic disease was obtained by the item 'Do you have any long-term disease, ailment or injury, any disability or other weakness?' with response options of yes and no (Table I). Smoking was measured with the item 'Do you smoke?' with response options of daily, non-daily and non-smoker (Table I). Leisure-time physical activity (LTPA) was obtained with four alternatives: (a) regular exercise (at least three times per week for at least 30 minutes/occasion, leading to sweating), (b) moderate regular exercise (exercising once or twice per week for at least 30 minutes/occasion, leading to sweating), (c) moderate exercise (more than two hours walking, cycling or equivalent activity/week) and (d) low or no LTPA (less than two hours walking, cycling or equivalent activity/week). The first three alternatives were collapsed as high LTPA (Table I) and the fourth as low LTPA. The LTPA variable in the 2008 survey has been described previously [17]. Alcohol consumption during the past year was measured with the item 'How often have you consumed alcohol during the past 12 months?' with response options of daily or almost daily, several occasions per week, once per week, two or three times per month, once per month, once or a few times per half year and more seldom or never.
Generalised trust in other people is self-reported and reflects self-perceived trust. The statement 'Generally, you can trust other people', with four response options of 'do not agree at all', 'do not agree', 'agree' and 'completely agree', was dichotomised, with the first two options indicating low trust and the latter two indicating high trust (Table I).
Statistics
Prevalence (%) of all variables stratified by participation (visiting both the theatre/cinema and an arts exhibition, only the theatre/cinema, only an arts exhibition or neither the theatre/cinema nor an arts exhibition during the past year) was calculated. The differences between these four categories of participation were assessed using an analysis of variance test for continuous variables and a chi-square test for categorical variables (p-values; Table I). Associations between each item/variable included in this study and all-cause mortality were analysed in univariate survival analyses with hazard rate ratios (HRRs) and 95% confidence intervals (95% CIs; Table II). HRRs with 95% CIs for associations between visits to the theatre/cinema or no visit to the theatre/cinema during the past year and all-cause, CVD, cancer and other-cause mortality were calculated in multiple adjusted models. Five models were calculated: model 0 unadjusted; model 1 adjusted for sex and age; model 2 adjusted for sex, age, country of birth, SES and chronic disease; model 3 additionally adjusted for smoking, leisure-time physical activity and alcohol consumption; and model 4 additionally adjusted for generalised trust in other people (Table III). HRRs with 95% CIs for associations between visits to an arts exhibition or no visit to an arts exhibition during the past year and all-cause, CVD, cancer and other-cause mortality were calculated in multiple adjusted models (Table IV) with corresponding models 0-4 as for visits to the theatre/cinema (Table III). Finally, the theatre/cinema and arts exhibition items were combined and analysed with the groups having visited both the theatre/cinema and an arts exhibition, only the theatre/cinema and only an arts exhibition in the past year in one category (active), and the group who visited neither the theatre/cinema nor an arts exhibition in the past year (not active) in the other category. The active and not active categories were analysed according to all-cause, CVD, cancer and other mortality in multiple adjusted survival models (Table V) with corresponding models 0-4 as in Table III. Follow-up time (days) was included from baseline to death or the last follow-up date (31 December 2016).
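To make the modelling strategy concrete, the sketch below shows how such nested adjustment sets could be fitted with a Cox model in Python using the lifelines package. This is an illustrative approximation only: the study itself used SAS v9.4, the variable names and the synthetic data are hypothetical, categorical covariates are reduced to dummy-coded indicators, and the survey weights and bootstrap are ignored at this stage.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in data (hypothetical; the real data are survey/register based).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "followup_days": rng.integers(100, 3030, n),    # roughly up to 8.3 years
    "death": rng.binomial(1, 0.07, n),              # all-cause mortality indicator
    "no_theatre_cinema": rng.binomial(1, 0.37, n),  # exposure: no visit in past year
    "sex": rng.binomial(1, 0.5, n),
    "age": rng.integers(18, 81, n),
    "born_abroad": rng.binomial(1, 0.15, n),
    "low_ses": rng.binomial(1, 0.30, n),            # SES reduced to a dummy here
    "chronic_disease": rng.binomial(1, 0.30, n),
    "daily_smoker": rng.binomial(1, 0.15, n),
    "low_ltpa": rng.binomial(1, 0.35, n),
    "alcohol_never": rng.binomial(1, 0.10, n),
    "low_trust": rng.binomial(1, 0.25, n),
})

# Nested covariate sets mirroring models 0-4 described above.
covariate_sets = {
    "model 0": [],
    "model 1": ["sex", "age"],
    "model 2": ["sex", "age", "born_abroad", "low_ses", "chronic_disease"],
    "model 3": ["sex", "age", "born_abroad", "low_ses", "chronic_disease",
                "daily_smoker", "low_ltpa", "alcohol_never"],
    "model 4": ["sex", "age", "born_abroad", "low_ses", "chronic_disease",
                "daily_smoker", "low_ltpa", "alcohol_never", "low_trust"],
}

for name, covariates in covariate_sets.items():
    cols = ["followup_days", "death", "no_theatre_cinema"] + covariates
    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="followup_days", event_col="death")
    hrr = cph.hazard_ratios_["no_theatre_cinema"]  # exp(coef) for the exposure
    print(f"{name}: HRR for no theatre/cinema visit = {hrr:.2f}")
```

In an analysis closer to the study design, the population weight could be supplied via the weights_col argument of CoxPHFitter.fit, and interval estimates would come from the bootstrap described next.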
Analysis of sampling variability without distributional assumptions regarding the study population was made possible by bootstrap analysis (SAS/STAT Software Survey Analysis, 2021). Variance estimates on the weighted data, with confidence intervals and p-values, were calculated with bootstrap analyses including 1000 replicates. The assumption of proportional hazards was tested by introducing an interaction term between time and visits to the theatre/cinema and an arts exhibition, respectively, during the past year. Schoenfeld residuals were calculated for theatre/cinema visits during the past year and arts exhibition visits during the past year (active versus not active) and mortality (Figure 1). Calculations were performed using SAS v9.4 (SAS Institute, Cary, NC).
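The analyses above were run in SAS; as a rough illustration only, the sketch below re-creates the same workflow (sequentially adjusted, survey-weighted Cox models, percentile-bootstrap confidence intervals with 1000 replicates, and a proportional-hazards check) in Python with the pandas and lifelines libraries. The column names (followup_days, died, no_theatre_visit, weight and the covariates) are hypothetical stand-ins for the survey variables, not the actual dataset fields.

```python
# Minimal sketch (not the authors' SAS code): survey-weighted Cox regression
# with sequential adjustment, percentile-bootstrap CIs and a PH check.
# All column names are hypothetical stand-ins for the survey variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

BASE = ["followup_days", "died", "weight"]

MODELS = {
    0: ["no_theatre_visit"],
    1: ["no_theatre_visit", "sex", "age"],
    2: ["no_theatre_visit", "sex", "age", "country_of_birth", "ses", "chronic_disease"],
    3: ["no_theatre_visit", "sex", "age", "country_of_birth", "ses", "chronic_disease",
        "smoking", "low_ltpa", "alcohol"],
    4: ["no_theatre_visit", "sex", "age", "country_of_birth", "ses", "chronic_disease",
        "smoking", "low_ltpa", "alcohol", "low_trust"],
}

def fit_model(df: pd.DataFrame, covariates: list) -> CoxPHFitter:
    """Fit a weighted Cox model of follow-up time and death on the covariates."""
    cph = CoxPHFitter()
    cph.fit(df[BASE + covariates], duration_col="followup_days",
            event_col="died", weights_col="weight", robust=True)
    return cph

def bootstrap_hr_ci(df: pd.DataFrame, covariates: list,
                    n_rep: int = 1000, seed: int = 1) -> np.ndarray:
    """Percentile bootstrap 95% CI for the exposure hazard ratio."""
    rng = np.random.default_rng(seed)
    hrs = []
    for _ in range(n_rep):
        boot = df.sample(frac=1.0, replace=True,
                         random_state=int(rng.integers(0, 2**31 - 1)))
        hrs.append(np.exp(fit_model(boot, covariates).params_["no_theatre_visit"]))
    return np.percentile(hrs, [2.5, 97.5])

# Usage on a survey DataFrame `survey_df` (not shown here):
#   cph4 = fit_model(survey_df, MODELS[4])
#   print(np.exp(cph4.params_["no_theatre_visit"]), bootstrap_hr_ci(survey_df, MODELS[4]))
#   proportional_hazard_test(cph4, survey_df[BASE + MODELS[4]],
#                            time_transform="rank").print_summary()
```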
Results
Table I shows that 26.5% had visited both the theatre/cinema and an arts exhibition during the past year, 36.6% had only visited the theatre/cinema, 4.9% had only visited an arts exhibition and 32% had visited neither the theatre/cinema nor an arts exhibition. A significantly higher proportion of men had only visited an arts exhibition or neither the theatre/cinema nor an arts exhibition during the past year. In contrast, a significantly higher proportion of women had visited the theatre/cinema or both the theatre/cinema and an arts exhibition during the past year. Participants with a low SeS, born abroad, with chronic disease, with low LTPA, who smoked daily, who abstained from alcohol and with low generalised trust were more common among those who had visited neither activity (Table I). Respondents with no visit to the theatre/cinema, with no visit to an arts exhibition, who were men, who were older, who had retired early, who were old-age pensioners, who were on long-term sick-leave, who reported chronic disease, who had low LTPA, who smoked and who had low trust all had significantly higher all-cause mortality than their reference groups.
In contrast, respondent groups who had consumed alcohol once a month or more seldom, two to four times a month and two to three times a week displayed significantly lower all-cause mortality than the never consumption reference group.The respondent group born abroad displayed a Hrr of 0.9 (95% CI 0.7-1.0).
Table III shows that no theatre/cinema visit during the past year was associated with significantly higher Hrrs of all-cause mortality throughout the multiple analyses: Hrr 1.3 (95% CI 1.1-1.5) in full model 4 compared to the reference category that visited the theatre/cinema during the past year. Similar patterns with significantly higher Hrrs were observed throughout the multiple analyses for CVD mortality: Hrr 1.5 (95% CI 1.1-2.1) in model 4 compared to the category that visited the theatre/cinema during the past year. In contrast, the statistically significant association for cancer mortality disappeared after adjustment for health-related behaviours in model 3 (Hrr 1.1 (95% CI 0.9-1.4)) and for other-cause mortality after adjustments in model 3 (Hrr 1.3 (95% CI 1.0-1.8)).
Table IV shows that no visit to an arts exhibition during the past year was associated with significantly higher Hrrs of all-cause mortality throughout the multiple analyses: Hrr 1.3 (95% CI 1.1-1.5) in full model 4 compared to the reference category that visited an arts exhibition. A similar pattern was observed throughout the multiple analyses for other-cause mortality: Hrr 1.6 (95% CI 1.2-2.2) in model 4. In contrast, the statistically significant associations for CVD mortality and cancer mortality disappeared after adjustment for health-related behaviours in model 3: Hrr 1.2 (95% CI 0.9-1.6) and Hrr 1.2 (95% CI 0.9-1.5), respectively. The proportionality test with an interaction term between active versus not active (categories 1, 2 and 3, n=17,324, versus category 4, n=8096) and all-cause mortality over the 8.3-year period is not significant (p=0.101), which indicates proportionality.
Table V shows that the not active category (visit neither the theatre/cinema nor an arts exhibition during the past year) had higher all-cause, CVD and other-cause mortality throughout the multiple adjusted survival analyses in models 0-4 compared to the active (theatre/cinema and arts exhibition or only theatre/cinema or only arts exhibition) reference category. In the final model 4, the not active category displayed a Hrr of 1.3 (95% CI 1.2-1.6) for all-cause mortality, 1.4 (95% CI 1.0-1.8) for CVD mortality and 1.6 (95% CI 1.2-2.1) for other-cause mortality. No significant associations between not active and cancer mortality were observed in models 3 and 4.
The Schoenfeld residuals calculated with the active (theatre/cinema and arts exhibition or only theatre/cinema or only arts exhibition) versus not active (neither theatre/cinema nor arts exhibition) categories and all-cause mortality show consistency and stability over the 8.3-year follow-up (Figure 1). The p-value for the interaction term between active/not active and all-cause mortality was 0.101, which indicates proportionality.
Discussion
Not visiting the theatre/cinema during the past year was associated with higher all-cause and CVD mortality. No visit to an arts exhibition was associated with higher all-cause and other-cause mortality. The combination of having visited neither the theatre/cinema nor an arts exhibition during the past year was associated with higher all-cause, CVD and other-cause mortality. The lack of significant associations between the two cultural activities and cancer mortality may be due to other aetiology and pathogenesis, because the biological mechanisms in the psychosocial stress model specifically apply to CVDs. Tests of proportionality indicated that the proportionality criteria were fulfilled.
The findings indicate associations between engaging in arts and culture activities (theatre/cinema and arts exhibition) and mortality, even after adjustments for socio-demographic characteristics, SeS, chronic disease, health-related behaviours and trust. These findings are consistent with other studies from the Nordic countries that have shown that engaging in arts and culture activities can have health benefits [18,19]. However, this study showed no association between culture activities and cancer mortality. In contrast to the results of the present study, a previous study found that individuals who engaged in music, singing and theatre had a reduced risk of cancer-related mortality when compared to non-participants [14], and another study found that people living in urban areas who rarely attend cultural activities have a three-fold higher risk of cancer-related mortality when compared to people who are frequent attendees [20]. Yet, this study found an association between attending arts and culture activities and a reduced risk of CVD and other-cause mortality, which resonates with studies showing that an overall reduced risk of CVD mortality was associated with engaging in creative activities [14], and that high engagement in cultural activities is independently associated with decreased all-cause mortality [18].
Other studies show that engagement in arts and culture activities can be an active ingredient in promoting psychosocial health, including improved psychological well-being and increased social connectedness [9,10,21,22], as well as through biological mechanisms such as the release of cortisol [23]. A possible explanation for the association between participation in arts and culture activities and a reduced risk of cardiovascular and other-cause morbidity is that the positive benefits of arts and culture activities may enhance an existential connection with oneself and the wider world, which can be experienced at different levels. Beyond the psychological, physical and social aspects of existence, they may also promote spiritual and existential health and well-being [24]. In terms of research value, this article adds to existing knowledge of how participation in arts and culture activities can improve different aspects of health and chronic diseases, particularly with a focus on CVD morbidity and other-cause mortality of the population in southern Sweden. This suggests a potential for using arts and culture activities as preventive health measures and in public health promotion.
It is possible in these data to study the cultural activities in relation to specific causes (diagnoses) of death.However, the smaller numbers increase the random error.We identified several specific diagnoses with sufficiently high numbers of deaths to be analysed separately in the present study population (N=25,420).We identified them only from the CVD and other-cause categories because the cancer category was not significantly associated with mortality.These specific diagnoses included ischemic heart disease (I20-I25; n=169), stroke (I60-I69; n=70), pneumonia and influenza (J10-J18; n=15), chronic obstructive pulmonary disease (J44; n=51), accidents (V01-X59; n=37) and intentional self-harm (X60-X84; n=24).When we analyse these specific diagnoses in the same multiple survival (Cox regression) models, we find all associations pointing towards increased mortality risk for the group visiting neither the theatre/cinema nor an arts exhibition during the past year.Several of the effect measures are comparatively high, but all are statistically not significant in the final models possibly due to small numbers (Supplemental Table SII).Autopsies were conducted in only approximately 15% of the deaths (see Supplemental Table SIII).In sum, inclusion of only autopsy-verified specific diagnoses would have yielded even smaller numbers and higher random error than the already wide CIs displayed.
Strengths and limitations
This study is large, population based and longitudinal.The participant population in the 2008 baseline public health survey is representative of the adult population in Scania in 2008 regarding age, sex and education to an acceptable extent.The risk of selection bias is thus moderate [25].
The social participation item has been utilised in Sweden since the 1970s [26,27]. The fact that the numbers of visits to the theatre/cinema and an arts exhibition are not included in these cultural activity items is a limitation of the study. The cultural activity items do not measure frequency. Still, the difference between no visit and one visit is probably more crucial than the difference between one and two or more visits. Swedish register data regarding causes of death have high validity, although cancer diagnoses may have higher validity than other diagnoses. The three aggregate groups of diagnoses are also very broad, and one of them (cancer) most probably has higher validity, which means that the risk of misclassification is smaller than if specific diagnoses were analysed. Also, as demonstrated in the discussion above, analyses of specific diagnoses would incur low statistical power in this questionnaire-based study. It should also be noted that the register data analysed in this study are the same cause of death data as the data from Dödsorsaksregistret analysed in other Swedish register-based studies. The self-reported chronic disease item was included to adjust for chronic disease issues at baseline. This item measures a combination of undiagnosed self-perception of illness and known diagnoses with consequences for perceived health, which may be associated with participation in cultural activities. This item is also associated with all-cause mortality (see Table II), that is, it is a confounder. Unfortunately, we currently do not have access to prescription data or morbidity data (diagnoses) to investigate the validity of the chronic disease item. Relevant covariates and mediators were included in the multiple regression analyses. SeS can be defined according to occupation, education and income. The three SeS dimensions are strongly correlated but not identical dimensions of social status. In this study, SeS was measured as occupation and relation to the labour market because income is not available in the data, and education introduces a substantially higher number of internally missing values in the analyses. Analyses including education did not alter any of the estimates. Country of birth was included because there are people from 183 countries living in Scania and because country of birth is associated with self-rated health, which is a predictor of all-cause mortality [28].
The exclusion of respondents with internally missing values on any of the items included in this study is preferable to the alternative strategies. The alternatives, imputation or letting the number of internally missing values increase in each model as new covariates are introduced in the multiple regression models, are methodologically more problematic. Since the other-cause group includes unnatural deaths such as accidents and suicides, the negative association with going to an arts exhibition might be due to case selection. Yet, case selection would then apply to both an arts exhibition and the theatre/cinema and is therefore less likely.
Conclusions
No visit to the theatre/cinema during the past year was associated with higher all-cause and CVD mortality. No visit to an arts exhibition was associated with higher all-cause and other-cause mortality. The combination of having visited neither the theatre/cinema nor an arts exhibition during the past year was associated with higher all-cause, CVD and other-cause mortality. There is an association between attending arts and culture activities and a reduced risk of CVD and other-cause mortality, although model imperfections are possible.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: The present study was funded by Swedish ALF Government grants.
Figure 1.
Figure 1. Schoenfeld residuals according to active versus not active and all-cause mortality over the 8.3-year period. The 2008-2016 Scania Public Health Survey with 8.3-year follow-up, men and women combined. N=25,420.
Table I .
Descriptive characteristics (%) of age, sex, SeS, country of birth, chronic disease, low leisure-time physical activity, smoking, alcohol consumption and generalised trust in other people by social participation.
Table II .
Crude model: Hrrs with 95% CIs of all-cause mortality.
Table III .
Hrrs with 95% CIs of all-cause, CVD, cancer and other-cause mortality according to theatre/cinema visit at least once during the past year. The 2008-2016 Scania Public Health Survey with 8.3-year follow-up, men and women combined. Weighted hazard ratios. Bootstrap method (1000 replicates) for variance estimation. Model 0 unadjusted; model 1 adjusted for sex and age; model 2 additionally adjusted for socioeconomic status, country of birth and chronic disease; model 3 additionally adjusted for leisure-time physical activity, smoking and alcohol consumption; model 4 additionally adjusted for generalised trust in other people. Statistically significant values are shown in bold.
|
v3-fos-license
|
2022-12-05T16:55:29.781Z
|
2022-11-30T00:00:00.000
|
254240956
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jurnal.unpad.ac.id/pjd/article/download/39296/18669",
"pdf_hash": "f310b612a0b6f2b806c27e8ddba5de90f1f5995a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44111",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d328918aadb3e50626aae2326bf55a7bff096a48",
"year": 2022
}
|
pes2o/s2orc
|
Analysis of FOXE1 rs4460498 and GSTP-1 I105V associated with non syndromic cleft lip and palate among Deutero Malay Subrace in Indonesia
Introduction: FOXE1 rs4460498 and GSTP-1 I105V gene polymorphisms are suspected of having a role in some non-syndromic cleft lip and palate (NS CLP) populations worldwide. This study aims to analyze whether the FOXE1 rs4460498 and GSTP-1 I105V polymorphisms are risk factors for NS CLP among the Deutero Malay Subrace in Indonesia. Methods: This study used a case-control design, with venous blood samples from 102 NS CLP subjects and 102 healthy control subjects. After DNA extraction, the PCR-RFLP method was performed using the TasI restriction enzyme on 100 blood samples of the FOXE1 rs4460498 group and the Alw26I restriction enzyme on 105 blood samples of the GSTP-1 I105V group. The Chi-Square test was used, with the Kolmogorov-Smirnov and Fisher's Exact tests as alternatives. Results: The T mutant allele (OR=0.926, p>0.05) and CT genotype (OR=0.0, p>0.05) of FOXE1 rs4460498 and the G mutant allele (OR=0.988, p>0.05) and AG genotype (OR=0.675, p>0.05) of GSTP-1 I105V are not risk factors for NS CLP. Conclusion: FOXE1 rs4460498 and GSTP-1 I105V gene polymorphisms are not associated with non-syndromic cleft lip and palate among the Deutero Malay Subrace in the Indonesian population.
INTRODUCTION
An orofacial cleft is a common congenital malformation in the craniofacial area which can include cleft lip (CL), cleft palate (CP) or cleft lip and palate (CLP). 1 CLP disorders are characterized by incomplete formation of the upper lip and palate, caused by failure of normal fusion of the lip and palate at the midline during the embryonic period. 2 The prevalence of CLP is estimated to be 1.5 per 1000 live births worldwide, 3 varying with geographic conditions, racial groups and socioeconomic factors. 4 The continent of Asia has the highest prevalence of CLP among all continents. 5 The prevalence of CLP may vary among different countries or populations due to racial, climatic and cultural diversity and differences in the care of pregnant mothers; different environmental situations may also create various risk factors. 6 CLP cases can be syndromic (S) or non-syndromic (NS), based on the presence or absence of other organ malformations. 7,8 The incidence of NS CLP cases is around 65%-70%, while the prevalence of syndromic cases is 30%. NS CLP disorder occurs more in the Asian population than in the African population. 7,8 Syndromic CLP (S CLP) is usually associated with the presence of other malformations or syndromes such as Stickler's syndrome, Van der Woude's syndrome and DiGeorge syndrome, while NS CLP is not associated with other disorders, 9,10,11 and the cases are due to monogenic or Mendelian disorders. 12 The prevalence rate of NS CLP is estimated at 76.8% of a total of 5,918 cases of CLP, with 7.3% of cases being S CLP; these results may vary based on geographic area, ethnicity, and socioeconomic status. 13 CLP results from impairment of the highly complex process of craniofacial morphogenesis and is characterized by a failure of fusion of the frontonasal and maxillary processes, as well as of the palatal shelves of the maxillary processes, during the embryonic period. 14,15 The etiology of NS CLP is multifactorial, with complex interactions between genetic and environmental factors. 16,17 Genetic factors are believed to be the main factor causing CLP. 10,18 Several candidate genes are involved in NS CLP disorders, among them the FOXE1 rs4460498 and GSTP-1 I105V gene polymorphisms. 7,19 Forkhead Box E1 (FOXE1) is located on chromosome 9q22-q33, consists of 1 exon and is expressed transiently in the developing thyroid and the anterior pituitary gland. The FOXE1 rs4460498 polymorphism is located in the downstream region and causes a substitution of base C to T (C>T), 20 a point mutation that changes a single base pair. 21 The FOXE1 gene belongs to a family of transcription factors that contains a DNA-binding forkhead domain that can bind and open chromatin structures and can also aid the binding of transcription factors to DNA. 3 FOXE1 is known to play an important role in the formation of the lip and palate during the embryonic period, and overexpression of FOXE1 contributes to the formation of cleft palate (CP). This is based on experiments in mouse models that have been genetically modified by activating various components of the FOXE1 gene, resulting in abnormal development of the lips and palate and indicating an essential function of FOXE1. 22,23 The Glutathione S-Transferase P1 (GSTP-1) gene is located on chromosome 11 (11q13) and consists of 5 exons. The GSTP-1 I105V gene polymorphism causes a substitution of base A to G at base pair (bp) 313, which eventually results in the substitution of isoleucine (ATT, ATC, ATA) by valine (GTT, GTC, GTA).
19,24 The GSTP-1 gene is the most important isoform at the embryonic development stage. Polymorphism in GSTP-1 causes a decrease in protein enzymatic activity and reduces the catalytic activity of the enzyme. 25 FOXE1 rs4460498 and GSTP-1 I105V polymorphisms have been studied in different populations with various results, but they have not been examined in the Deutero Malay subrace of the Indonesian population, the largest population group in Indonesia. This study therefore aims to analyze whether the FOXE1 rs4460498 and GSTP-1 I105V polymorphisms are risk factors associated with NS CLP among the Deutero Malay Subrace in Indonesia.
Subjects of study
Sampling was done by the consecutive sampling method, using 102 patients with NS CLP and 102 healthy controls without a family history of NS CLP. Among them, the PCR-RFLP method was performed on 100 samples from the FOXE1 rs4460498 group (50 NS CLP subjects and 50 control subjects) and 105 samples from the GSTP-1 I105V group (52 NS CLP subjects and 53 control subjects). All samples were from venous blood, and DNA isolation was done using the manual Home Brew method.
Then, the tube containing the PCR mixture was put into the thermal cycler; the PCR conditions for both polymorphisms consisted of denaturation at 95°C for 1 minute, annealing at 51.5°C for 1 minute and extension at 72°C for 1 minute. The first denaturation step was extended to 5 minutes, the final extension was extended to 3 minutes, and a total of 35 cycles was run. The primers for FOXE1 rs4460498 were 5'ATTCCGCTGTATGTCTTGG3' (forward) and 5'TTTGTTGCTGGTTCCCTA3' (reverse) 22 and for GSTP-1 I105V were 5'GTAGTTTGCCCAAGGTCAAG3' (forward) and 5'AGCCACCTGAGGGGTAAG3' (reverse). 25 The optimal PCR results were evaluated using 2% agarose gel electrophoresis. A 100 bp DNA ladder (universal ladder) was used as a marker of DNA size. The amplified DNA fragments, stained with Nucleic Acid Dye, were then visualized using a UV transilluminator. After optimal PCR results were obtained, PCR-RFLP was performed using the TasI restriction enzyme to evaluate the FOXE1 rs4460498 polymorphism and Alw26I to evaluate the GSTP-1 I105V polymorphism.
The PCR-RFLP mixtures for FOXE1 rs4460498 were incubated at 65°C for 15 minutes and those for GSTP-1 I105V were incubated at 37°C for 15 minutes. The results of the PCR-RFLPs were re-evaluated using 3% agarose gel electrophoresis. The PCR-RFLP results were then verified by the Sanger sequencing method.
Statistical analysis
The examination data were processed descriptively; numerical-scale data were presented as the mean, standard deviation, median and range. To test the significance of the comparison of characteristics between the two groups, numerical-scale data were analyzed with the unpaired t-test when normally distributed and with the Mann-Whitney test otherwise. To analyze allele and genotype frequencies of the FOXE1 rs4460498 and GSTP-1 I105V polymorphisms between patient and control subjects, the chi-square test was used, with Fisher's Exact test and the Kolmogorov-Smirnov test as alternatives.
The odds ratio (OR) was determined from the contingency table of allele or genotype counts in the case and control groups.
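As an illustration of how the odds ratio and the accompanying test statistics can be obtained from such a contingency table, the sketch below uses scipy on a 2 x 2 allele-count table; the counts are placeholders, not the study data, and the Woolf (log) method is assumed for the confidence interval.

```python
# Minimal sketch of the case-control association test: OR with 95% CI and
# chi-square / Fisher's exact p-value from a 2x2 table. Counts are placeholders.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                 mutant allele   wild-type allele
table = np.array([[40, 60],       # NS CLP cases  (hypothetical counts)
                  [42, 58]])      # controls      (hypothetical counts)

a, b = table[0]
c, d = table[1]

or_est = (a * d) / (b * c)                        # odds ratio
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of ln(OR) (Woolf method)
ci_low, ci_high = np.exp(np.log(or_est) + np.array([-1.96, 1.96]) * se_log_or)

chi2, p_chi2, dof, expected = chi2_contingency(table)
# Fisher's exact test is the usual fallback when expected counts are small.
_, p_fisher = fisher_exact(table)

print(f"OR = {or_est:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```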
RESULTS
The results of the PCR products, RFLPs and DNA sequencing of the FOXE1 rs4460498 gene polymorphism are shown in Figures 1, 2 and 3, and those of the GSTP-1 I105V gene polymorphism in Figures 4, 5 and 6. The optimal PCR products and PCR-RFLP results are presented in Figure 1 for FOXE1 rs4460498 and Figure 4 for GSTP-1 I105V. For FOXE1 rs4460498, the optimal PCR product was a single band of 315 base pairs (bp), and for GSTP-1 I105V it was a single band of 433 bp.
Table 9. Comparison of AA and GG genotypes of GSTP-1 I105V (*p-value not statistically significant, p>0.05).
In Figure 5, PCR-RFLPs for GSTP-1 I105V resulted in the AA genotype (wild type) (327 bp and 105 bp), the AG genotype (heterozygous mutant) (105 bp, 107 bp, 220 bp and 327 bp) and the GG genotype (homozygous mutant) (105 bp, 107 bp and 220 bp). To verify the PCR-RFLP results for both polymorphisms, we performed Sanger sequencing on some samples (Figures 3 and 6). The distribution of alleles and genotypes of FOXE1 rs4460498 and GSTP-1 I105V in the NS CLP and control groups is presented in Tables 1 and 2. Neither polymorphism showed a significant association with NS CLP risk. In order to reveal the role of each genotype, we compared the CC, CT and TT genotypes of FOXE1 rs4460498, as shown in Tables 2, 3, 4 and 5. Case-control analysis revealed no significant differences in the genotype comparisons CC vs CT, CT vs CC, CT vs TT and CC vs TT. Comparisons of the AA, AG and GG genotypes of GSTP-1 I105V are shown in Tables 6, 7, 8 and 9; case-control analysis revealed no significant differences in the comparisons AA vs AG, AG vs AA, AG vs GG and AA vs GG (p>0.05).
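As an illustrative aid for reading these band patterns (not part of the study's workflow), the short sketch below maps observed Alw26I fragment sizes to a GSTP-1 I105V genotype call, using the band sizes reported above and a hypothetical size tolerance.

```python
# Illustrative helper (not from the study): call a GSTP-1 I105V genotype from
# the Alw26I fragment sizes (bp) observed on the gel, using the band patterns
# reported above for the 433 bp PCR product. The 5 bp tolerance is assumed.
GSTP1_PATTERNS = {
    frozenset({327, 105}):            "AA (wild type)",
    frozenset({327, 220, 107, 105}):  "AG (heterozygous mutant)",
    frozenset({220, 107, 105}):       "GG (homozygous mutant)",
}

def call_gstp1_genotype(bands_bp, tolerance=5):
    """Match observed band sizes (bp) to a known pattern within +/- tolerance."""
    for pattern, genotype in GSTP1_PATTERNS.items():
        if len(pattern) == len(bands_bp) and all(
            any(abs(obs - ref) <= tolerance for obs in bands_bp) for ref in pattern
        ):
            return genotype
    return "unresolved - repeat the digestion or sequence the sample"

print(call_gstp1_genotype([328, 104]))            # -> AA (wild type)
print(call_gstp1_genotype([326, 221, 106, 105]))  # -> AG (heterozygous mutant)
```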
DISCUSSION
According to our results, the T mutant allele (Table 1) and the heterozygous mutant CT and homozygous mutant TT genotypes (Table 4) were not significantly associated with NS CLP (p>0.05). This indicates that the FOXE1 rs4460498 gene polymorphism is not a risk factor for NS CLP in the Deutero Malay subrace in the Indonesian population. This result is contrary to a study conducted by Ludwig et al. in 2014 in Central Europe and the Mesoamerican Maya, which found significant associations between the FOXE1 rs4460498 polymorphism and NS CLP abnormalities (p=6.5 x 10-5 and p=0.015). 26 A study conducted by Liu et al. in 2015 in Northeast China also found a significant association between the FOXE1 rs4460498 polymorphism and NS CLP abnormalities (p=0.006). 22 A study conducted by Lammer et al. in 2016 showed a contribution of the FOXE1 gene polymorphism to the incidence of NS CLP and NS CP in Hispanic and non-Hispanic populations in California. 27 These differing study results indicate that the prevalence of the FOXE1 rs4460498 polymorphism may vary across geographic areas and ethnic groups, and that the role of this polymorphism in NS CLP may also differ among populations.
The FOXE1 gene is very important in embryonic development; it is part of a family of transcription factors that contain a forkhead winged-helix DNA-binding domain, and it is an intronless single-exon gene that encodes the transcription factor FOXE1 (or Thyroid Transcription Factor-2, TTF-2). 7 FOXE1 regulates transcription of the Thyroglobulin (TG) and Thyroid Peroxidase (TPO) genes by binding to specific regulatory DNA sequences in the promoter region via its forkhead DNA-binding domain. 28 Genome-wide association studies (GWAS) have related FOXE1 to NS CLP in different populations. The FOXE1 rs4460498 polymorphism is associated with a disturbance in FOXE1 activity that can decrease DNA binding and transcriptional activity, which in turn disrupts embryonic development and prevents fusion of the palatal processes.
The FOXE1 rs4460498 gene polymorphism can affect the specific expression pattern of FOXE1 at the time of fusion between the maxillary and nasal processes, which plays an important role in palatogenesis. This expression pattern is found in the oropharyngeal epithelium and the thymus. 7 The FOXE1 gene also regulates two candidate NS CLP genes, namely MSX1 and TGFB3. 22 A study by Venza et al. demonstrated that the MSX1 and TGFB3 genes can be upregulated in response to FOXE1 at the transcriptional and translational levels, with recruitment of FOXE1 to specific binding motifs. 28 However, based on our results, the role of FOXE1 rs4460498 in NS CLP among the Deutero Malay subrace in the Indonesian population cannot yet be explained.
In this study, the G mutant allele (Table 1) and the AG and GG genotypes were not significant risk factors for the incidence of NS CLP (p>0.05) (Table 8). This indicates that the GSTP-1 I105V gene polymorphism is not a risk factor for NS CLP in the Deutero Malay subrace among the Indonesian population. In Table 8, the homozygous mutant GG genotype was found only in the group of NS CLP patients, but the result was not significant. This finding suggests that the GG genotype might still have an influence on the incidence of NS CLP if a larger sample were studied. In a study conducted by Krapels et al. in the Netherlands, significant associations were found between the GSTP-1 I105V gene polymorphism, with or without maternal smoking, and NS CLP disorders. 25 In contrast, a study conducted by Lie et al. in Norway found no significant association between GSTP-1 I105V gene polymorphisms and NS CLP disorders. 29 The GSTP-1 gene is the most important isoform at the embryonic development stage, and the GSTP-1 I105V polymorphism causes a substitution of base A to G; this substitution decreases the protein's enzymatic activity and reduces its catalytic capacity. The GSTP-1 I105V gene polymorphism as a risk factor for NS CLP is closely associated with smoking during pregnancy: smoking can affect the expression of genes involved in palatogenesis, such as matrix metalloproteinases (MMPs), or modify the concentrations of important cofactors, including folic acid. Teratogens in cigarette smoke include nicotine, polycyclic aromatic hydrocarbons (PAHs), arylamines, N-nitrosamines and carbon monoxide. These compounds are absorbed into the mother's blood and reach the fetus; however, the mechanism by which cigarette smoke causes abnormal development is still poorly understood. The presence of developmental abnormalities in infants whose mothers smoked during pregnancy may be related to the level of fetal exposure to teratogens. Exposure may be related to the number of cigarettes smoked, the rate of placental and fetal transfer, and maternal and fetal metabolic biotransformation. 23 Detailed information regarding the location and process of mutation in a gene is not yet fully known, and several mechanisms in DNA synthesis may underlie the emergence of abnormalities in humans. In this study, the FOXE1 rs4460498 and GSTP-1 I105V gene polymorphisms did not affect the risk of developing NS CLP, and no influence of environmental or ethnic factors associated with the FOXE1 rs4460498 and GSTP-1 I105V genes was observed. In this study, there was no association between the FOXE1 rs4460498 and GSTP-1 I105V genes and NS CLP abnormalities in the Indonesian Deutero Malay population.
CONCLUSION
FOXE1 rs4460498 and GSTP-1 I105V gene polymorphisms are not associated as a risk factor of NS CL/P among Deutero Malay Subrace in the Indonesian population.
|
v3-fos-license
|
2021-05-05T00:09:47.101Z
|
2021-03-10T00:00:00.000
|
233652422
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/13073/11844",
"pdf_hash": "88bf064dcd574a32f4ccd0acf2a75f4ed14892a0",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44114",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Environmental Science"
],
"sha1": "165a8b00c725f4bba2be2c3b65e8fbd04a67c504",
"year": 2021
}
|
pes2o/s2orc
|
Leaching from leaves of Sarcomphalus joazeiro and Cenostigma bracteosum stimulate or inhibit the germination of Mimosa caesalpiniifolia?
Allelopathy is an ecological mechanism that influences the development of neighboring plants. The objective was to evaluate the allelopathic potential of Cenostigma bracteosum and Sarcomphalus joazeiro on seed germination and initial growth of Mimosa caesalpiniifolia seedlings. Seeds of this species were placed to germinate on paper towel substrate, and then moistened with extracts from dry leaf of S. joazeiro and C. bracteosum at 1.0; 2.5; 5.0 and 10.0% (w.v), and control (0.0% distilled water) at 25 °C. The variables evaluated: germination, germination speed index, primary root length and root system dry weight of the seedlings. Positive allelopathic effects of S. joazeiro leaf extracts were observed on the vigor of M. caesalpiniifolia; when used in low concentrations (up to 2.6%), C. bracteosum leaf extracts stimulated germination of M. caesalpiniifolia seeds and showed more severe toxic effects when exposed to high concentrations (5.0%). S. joazeiro leaf extracts favor the germination and vigor of M. caesalpiniifolia seedlings, while C. bracteosum leaf extracts cause phytotoxic effects on seed germination and initial growth of M. caesalpiniifolia seedlings from the concentration of 5%. Therefore, there are indications of benefits for regeneration or associated forest composition between M. caesalpiniifolia and S. joazeiro.
Introduction
Allelopathy is considered an ecological mechanism that influences primary and secondary plant succession, community formation, plant dominance, and crop management and productivity (Chou, 1999;Almeida-Bezerra et al., 2020).
Species that produce allelopathic compounds generally have a greater competitive capacity than those that do not have this mechanism (Silva et al., 2019).
Allelopathic compounds are by-products of primary and secondary metabolism released into the environment by plants, through which plants interfere with each other's development in a beneficial or detrimental way; among them are terpenes, tannins, and phenolic and nitrogen compounds (Taiz & Zeiger, 2017; Almeida-Bezerra et al., 2020). Their release into the environment occurs through leaching from living and dead plant tissues, root exudation, tissue decomposition, and volatilization (Reigosa, Pedrol & González, 2005), affecting ecosystem dynamics, structure, composition, and the interactions between plants.
In the Seasonally Dry Tropical Forest located in the semi-arid region of Brazil, known as "Caatinga" (Queiroz et al., 2017), there is still a lack of information on the phytosociological structure, ecological succession processes and natural regeneration, making it relevant to understand the processes that influence regeneration of these environments for plant restoration and biodiversity conservation. In semiarid regions, the allelopathic effects of Senna cearensis Afr. Fern. (Fabaceae), and of the interactions of Pityrocarpa moniliformis (Benth.) Luckow & R.W. Jobson (Fabaceae) and Cynophalla hastata (Jacq.) J. Presl (Capparaceae) with Mimosa tenuiflora (Willd.) Poir. (Fabaceae), have already been evaluated.
As the vegetation of a given area may have a succession model conditioned to the preexisting plants and the chemicals they have released in the environment (Ferreira & Aquila, 2000), it is important to estimate the allelopathic potential of the species that make up the ecosystem. Among the autochthonous species found in "Caatinga", Cenostigma bracteosum (Tul.) Gagnon & G.P. Lewis, Sarcomphalus joazeiro (Mart.) Hauenshild, and Mimosa caesalpiniifolia Benth. can be cited.
C. bracteosum (Fabaceae) is a tree-sized species with high economic potential (Mendonça, Passos, Victor-Junior, Freitas & Souza, 2014). This species has a high capacity for regrowth and rapid growth, revealing potential for recovering degraded areas (Maia, 2012). The species has been observed in associations of plants that develop in stony soils and in humid plains (Chaves et al., 2015).
Thus, the objective was to evaluate the allelopathic potential of C. bracteosum and S. joazeiro on seed germination and initial seedling growth of M. caesalpiniifolia.
The extracts were prepared by grinding 100 g of fresh S. joazeiro or C. bracteosum leaves in a blender with 900 mL of distilled water to obtain an aqueous extract with a concentration of 10.0% (w.v-1). The extracts at 5.0, 2.5 and 1.25% (w.v-1) were obtained from successive dilutions of this concentration and were then used to moisten the substrate used in germination.
The M. caesalpiniifolia seeds were scarified to overcome dormancy (Bruno, Alves, Oliveira & Paula, 2001) and disinfested with 2.5% sodium hypochlorite solution for 5 min. The germination substrate (Germitest® paper) was moistened with an amount equivalent to 2.5 times the weight of the paper substrate (Brasil, 2013), using the following solutions, which corresponded to the five treatments: distilled water (0.0% - control) and leaf extracts at concentrations of 1.25, 2.5, 5.0 and 10.0% (w.v-1). The paper sheets were organized as rolls, packed into transparent plastic bags, and incubated in a Biochemical Oxygen Demand (BOD) germinator under a constant temperature of 25 °C and a photoperiod of 8 h for seven days.
Evaluated variables
The variables evaluated were: germination (%) - the germinated seeds that originated normal seedlings were counted on the 7th day after sowing (Brasil, 2013); germination speed index (GSI) - daily counting of normal seedlings was performed until the 7th day after sowing and the index was calculated according to the equation proposed by Maguire (1962); primary root length (cm seedling-1) - the primary root length of normal seedlings was measured using a ruler graduated in millimeters at the end of the experiment; root system dry weight (mg seedling-1) - the root system of normal seedlings was packed into Kraft paper bags and placed in a dry oven with forced air circulation regulated at a constant temperature of 60 ºC until reaching constant weight.
The seedlings were removed from the dry oven and weighed on an analytical scale (0.001 g).
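For clarity, the germination speed index of Maguire (1962) referred to above is simply the sum of the number of newly germinated seeds on each day divided by the corresponding day of counting; the sketch below shows the calculation with hypothetical daily counts rather than the study data.

```python
# Germination speed index (GSI) after Maguire (1962):
# GSI = sum(n_i / d_i), where n_i is the number of seeds newly germinated
# on day d_i. The daily counts below are hypothetical, not study data.
def germination_speed_index(daily_counts):
    """daily_counts: newly germinated seeds counted on days 1, 2, 3, ..."""
    return sum(n / day for day, n in enumerate(daily_counts, start=1))

# Example: one replication of 25 seeds followed for 7 days.
counts = [0, 5, 8, 6, 3, 1, 0]
gsi = germination_speed_index(counts)
germination_pct = 100 * sum(counts) / 25
print(f"GSI = {gsi:.2f}, germination = {germination_pct:.0f}%")
```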
Experimental design
The experiment was conducted under a completely randomized design consisting of five treatments (0.0, 1.25, 2.5, 5.0 and 10%) with four replications of 25 seeds each. Therefore, the study is characterized as quantitative according to Pereira, Shitsuka, Pereira and Shitsuka (2018). Data were submitted to analysis of variance and polynomial regression, with the significant model (p<0.05 by the F-test) and the highest coefficient of determination (R2) being selected. The analyses were performed using the Assistat statistical software, version 7.7 (Silva & Azevedo, 2016). For determining the extract concentration of maximum response, the maximum points of the regression equations were estimated through the derivative of "Y" with respect to "X".
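The derivative step described above can be reproduced with a quadratic fit whose first derivative is set to zero; the sketch below illustrates it with numpy, using made-up germination means for the five extract concentrations rather than the study data.

```python
# Sketch of the regression step: fit a quadratic model to the treatment means
# and locate the concentration of maximum response by setting dY/dX = 0.
# The germination values are made up for illustration only.
import numpy as np

conc = np.array([0.0, 1.25, 2.5, 5.0, 10.0])       # extract concentration (%)
germ = np.array([78.0, 84.0, 86.0, 70.0, 42.0])    # mean germination (%)

coeffs = np.polyfit(conc, germ, deg=2)             # Y = a*X^2 + b*X + c
derivative = np.polyder(coeffs)                    # dY/dX = 2a*X + b
x_max = np.roots(derivative)[0]                    # stationary point (maximum if a < 0)
y_max = np.polyval(coeffs, x_max)

ss_res = np.sum((germ - np.polyval(coeffs, conc)) ** 2)
ss_tot = np.sum((germ - germ.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"maximum at {x_max:.1f}% extract ({y_max:.1f}% germination), R2 = {r2:.2f}")
```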
Results and Discussion
There was an increase in the M. caesalpiniifolia seed germination percentage up to the concentration of 2.6% (Figure 1A).
From this concentration onwards there was a reduction in germination, with a marked decrease from 5.0% of the C. bracteosum leaf extract concentration. Regarding S. joazeiro, there was no statistical difference between the studied concentrations for germination (Figure 1A). These results corroborate those presented by Oliveira et al. (2012), in which none of the extract concentrations prepared from S. joazeiro leaves was phytotoxic to lettuce seeds (Lactuca sativa L.) or interfered with their viability. However, Coelho et al. (2011) observed reduced germination when testing extracts prepared from S. joazeiro seeds. These results indicate that the active principles responsible for the allelopathic effects of this species are possibly distributed in distinct plant organs. Saponins are observed in the inner bark (phloem) of the plant (Higuchi et al., 1984), while n-alkanes and triterpenoids are observed in the leaves (Oliveira & Salatino, 2000).
Saponins are water-soluble glycosides or insoluble polymers from secondary metabolism that act in defense against pathogens and herbivores, in protection against ultraviolet radiation and in reducing the growth of neighboring plants (Taiz & Zeiger, 2017), while terpenoids are volatile compounds that act in biochemical signaling and plant establishment (Rice, 2012; Almeida-Bezerra et al., 2020).
Even though the S. joazeiro extracts did not negatively interfere in the seed germination percentage, they caused reduced vigor evaluated by the germination speed index from the concentration of 4.9%, for which the maximum germination speed was obtained ( Figure 1B). This is because the allelochemicals have a greater effect on germination speed and synchrony than on the final percentage (Ferreira, 2004). S. joazeiro leaf extract presents saponins, flavonoids, phenols, and tannins, with these being the allelochemicals most likely responsible for this result (Brito et al., 2015).
The dry-leaf extracts of C. bracteosum caused phytotoxic effects on seed germination (Figure 1A) and initial growth (Figures 1B, 1C and 1D) of M. caesalpiniifolia seedlings at the 5.0% concentration. In addition, an aqueous extract of fresh C. bracteosum leaves did not affect the germination of M. caesalpiniifolia seeds, but it exerted a negative effect on the physiological quality of the seedlings of this species (Medeiros, Correia, Santos, Ferrari & Pacheco, 2018). Aqueous extracts of P. moniliformis leaves also do not affect the germination of M. caesalpiniifolia seeds, although they have a negative allelopathic effect on the speed and growth of seedlings (Pacheco et al., 2017).
Regarding the allelopathic potential of C. bracteosum, it is presumed that the extract's toxicity on the germination of
M. caesalpiniifolia seeds is due to the presence of tannin, since this is a phenolic compound abundant in the leaves of this species (Gonzaga-Neto et al., 2001). Phenolic compounds are present in plant decomposition products in the soil that cause widespread cytotoxicity and physiological damage to neighboring plants, such as reduced plant growth and photosynthetic capacity, and impaired absorption of ions, water and mineral nutrients (Rice, 2012;Almeida-Bezerra et al., 2020).
The C. bracteosum extracts also interfered in the germination speed ( Figure 1B), where its reduction occurred from the concentration of 1.3%. In a bioassay performed by Ribeiro et al. (2012), the reduced germination speed indicated a synchrony loss in the metabolic reactions of germination, demonstrating the inhibition of lettuce seed vigor when treated with Stryphnodendron adstringens (Mart.) Coville (Fabaceae) leaf extracts. These changes also indicate interference of the allelochemicals in metabolic reactions during germination (França, 2008).
Also, C. bracteosum extracts reduced the primary root length of M. caesalpiniifolia ( Figure 1C) from the 1.0% concentration. These results are due to the deleterious effects of allelochemicals that are more drastic on root metabolism, especially during initial plant growth, which is characterized by high metabolism and sensitivity to environmental stresses (Cruz-Ortega, Anaya, Hernández & Laguna, 1998).
It is also verified in Figure 1C that the primary root length of M. caesalpiniifolia seedlings increases as the S. joazeiro extract concentration increases to 6.0%. It is a positive allelopathic effect, since the extracts also optimized the germination percentage ( Figure 1A), as well as the germination speed index ( Figure 1B) of the M. caesalpiniifolia seeds.
These results indicate that the natural regeneration of M. caesalpiniifolia can be benefited by S. joazeiro if they are used in the composition of mixed forest plantations for forest restoration.
However, it can be observed that, under high concentrations (from 6.0%), the S. joazeiro extract negatively affects the initial growth of the M. caesalpiniifolia seedlings, the effect being more expressive on the primary root length (Figure 1C) than on the dry weight (Figure 1D), since root length has been one of the most sensitive variables in detecting allelopathic effects in seedlings of forest species.
Few studies report the growth stimulus of one plant relative to the other by allelopathy, with detrimental allelopathic effects being more common than beneficial effects (Rice, 2012). However, Reigosa, Sánchez-Moreiras & González (1999) emphasized that each physiological process has a different response to certain doses of each allelopathic substance, corroborating the results found in the present study.
The influence of the extracts on seed germination and root growth of M. caesalpiniifolia seedlings suggests the existence of relevant allelopathic substances in S. joazeiro and C. bracteosum leaves. We verified both positive and negative effects on the germination process.
Positive allelopathic effects on germinative performance were verified up to the concentration of 2.6% of leaf extracts, while negative effects were intensified with increasing leaf extract concentrations of both species in the present study.
This information can help in understanding the ecological processes among these species in the vegetation of dry forests. In addition, the results obtained can be used to define appropriate proportions of the studied species if they are chosen to compose agroforestry systems or mixed plantations for forest restoration.
Conclusion
S. joazeiro leaf extracts favor the germination and vigor of M. caesalpiniifolia seedlings, while C. bracteosum leaf extracts cause phytotoxic effects on seed germination and initial growth of M. caesalpiniifolia seedlings from the concentration of 5%. Therefore, there are indications of benefits for regeneration or associated forest composition between M. caesalpiniifolia and S. joazeiro. In addition, new field studies must be carried out to verify the effect of the interaction with S. joazeiro on the growth and development of M. caesalpiniifolia.
|
v3-fos-license
|
2019-03-12T13:12:25.937Z
|
2006-10-01T00:00:00.000
|
261072939
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "BRONZE",
"oa_url": "https://www.jmcp.org/doi/pdf/10.18553/jmcp.2006.12.S8-A.1",
"pdf_hash": "2ce1bac9508e5d63450f9c330ee101a00a018a4a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44116",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "72088a92483f4aa2b63304e14da83f617aa5d29a",
"year": 2006
}
|
pes2o/s2orc
|
Managing Patients With Chronic Angina: Emerging Therapeutic Options for Improving Clinical Efficacy and Outcomes
Coronary heart disease (CHD) remains the leading cause of death among American men and women. 1 However, advances in the treatment of acute coronary syndrome and an increasing number of therapies to reduce recurrent cardiac events have led to more patients surviving with chronic CHD. The primary symptom of chronic CHD is angina (chest pain on exertion or under mental or emotional stress). 2 More than 6.5 million Americans suffer from angina, and the prevalence will continue to grow as patients live longer with CHD and as the population ages. 1,2 Angina can severely limit patients' functional status and diminish their quality of life. 3,4 Patients with angina are less satisfied with their care. 5 Moreover, angina is predictive of subsequent acute coronary syndrome and death among CHD outpatients. 6 Given its prevalence and impact on health, chronic angina should not be treated as a benign condition and deserves increased attention from health care practitioners.
While angina is treatable through a range of pharmacologic treatments as well as coronary revascularization, 2 it is often inadequately treated in clinical practice. [7][8][9][10] For example, outpatients with chronic angina report a median frequency of 2 episodes/week, and the majority of these patients perceive their health as "fair" or "poor." 10 There is also a misperception that angina is largely obviated in an era of coronary stenting and early invasive therapy for acute coronary syndrome. Yet, more than one quarter of patients have some angina 1 month after discharge for acute myocardial infarction, 11 and one third of patients report daily to weekly angina 7 months after admission to the hospital for treatment of acute coronary syndrome. 12 Ultimately, many CHD patients are left with varying degrees of residual angina despite treatment. This has provided the impetus to develop new pharmacologic therapies to better manage chronic angina.
The first article in this supplement describes the epidemiology, pathogenesis, and treatment of chronic angina, including the use of vasculoprotective and antianginal drug therapies and coronary revascularization procedures. The approved uses, pharmacology, pharmacodynamics, pharmacokinetics, efficacy, safety, and place in therapy of ranolazine-the first new antianginal drug therapy introduced in more than 20 years for the treatment of chronic angina-are addressed in detail in the second article. In the third article, the economic burden of chronic angina in the United States is quantified, and recent trends in the use of coronary revascularization are characterized. The clinical outcomes from and longterm costs of percutaneous coronary intervention, coronary artery bypass grafting, and medical management are compared in patients with chronic angina.
TARGET AUDIENCE
Managed care pharmacists and other health care practitioners
LEARNING OBJECTIVES
Upon completion of this program, participants will be better able to 1. describe the epidemiology, impact, pathogenesis, patient presentation, and treatment of stable angina; 2. compare and contrast the pharmacology and pharmacodynamics of ranolazine with that of other antianginal drug therapies, discuss the approved uses and pharmacokinetics of ranolazine, summarize the results of clinical studies of the efficacy of ranolazine for the treatment of chronic angina, and name a safety concern associated with the use of ranolazine; 3. estimate the direct and indirect costs of chronic stable angina in the United States and identify the largest components of the direct costs of narrowly defined chronic angina and coronary artery disease; and 4. describe recent trends in the use of coronary revascularization in the United States and compare and contrast the initial and long-term costs and clinical outcomes from percutaneous coronary intervention, coronary artery bypass grafting, and medical management in patients with stable angina.
Stable angina is one of several possible manifestations of coronary artery disease (CAD), a common, deadly, and costly disease in the United States (see the Introduction to this supplement). CAD and other cardiovascular diseases are age-related conditions (Figure 1). 1 While CAD is the leading cause of death for both women and men, women usually develop CAD about 10 years later than men, and they experience myocardial infarction (MI), sudden death, and other serious sequelae roughly 20 years after men. 1 The initial manifestation of CAD usually is angina in women and MI in men. 2 Among patients with CAD, the total number of patients with chronic stable angina is difficult to determine. Chronic stable angina is the initial manifestation of ischemic heart disease in approximately half of patients. 3,4 Using these numbers, along with estimates based on patients surviving MI, it is predicted that between 6 and 12 million Americans have chronic stable angina. The risk of mortality is greatest for white men, followed by white women, black men, and black women. 5
Impact
Angina has a substantial impact on mortality and quality of life. Angina episodes typically are triggered by exertion or emotional stress, so the physical activities of patients with stable angina are limited. 6 In a prospective study of 8,908 Veterans Affairs outpatients with CAD who were followed for an average of 2 years, the risk of death increased progressively with the self-reported degree of physical limitation due to angina. 7 The average age of participants was 67 years, 98% were male, 66% were white, and 25% had diabetes mellitus (DM). There were 896 deaths. A high degree of physical limitation increased the risk of death 2.5 times compared with little or no physical limitation, a difference that is significant. The degree of physical limitation may reflect the extent of atherosclerosis, which narrows the coronary arteries and reduces the blood and oxygen supply to the myocardium.
The patient characteristics, frequency of angina attacks, and impact of angina on perceived well-being were assessed in 5,125 outpatients with chronic stable angina living in a variety of geographic areas. 8 The average patient age was 69 years, 53% of patients were women, 70% had more than 1 associated illness, and 64% received more than 1 cardiovascular drug. The median frequency of angina was approximately twice weekly. Ninety percent of patients experienced angina during activity, and 47% also had angina at rest. The frequency of angina was significantly correlated with patients' perception of their overall well-being, with poorer health associated with higher frequencies of angina.
Pathogenesis
Angina is the result of myocardial ischemia, which is due to an imbalance between myocardial oxygen supply and demand. In a healthy person, the myocardial blood flow and oxygen supply increase in response to increases in oxygen demand during physical exertion through various humoral, neural, and metabolic mechanisms that regulate vascular resistance and coronary blood flow. 2 Angina episodes in patients with chronic stable angina are typically precipitated by an increase in myocardial oxygen demand (MVO2) in the setting of a fixed decrease in supply. The major determinants of MVO2 include heart rate, myocardial contractility, and intramyocardial wall tension. Intramyocardial wall tension is the leading contributor to increased MVO2 and is directly related to the radius or size of the ventricular cavity and blood pressure, but indirectly related to the ventricular muscle mass. The rate of increase of MVO2 can be as important as the total amount of MVO2. The rate-pressure product, or double product, is a common noninvasive measure of MVO2; it is the product of the heart rate and systolic blood pressure (SBP). However, any change in contractility or volume loading of the left ventricle (LV) is not considered by the double product.
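Because the double product is simply the heart rate multiplied by the systolic blood pressure, its use as a bedside proxy for MVO2 can be shown in a few lines; the resting and exertion values below are hypothetical.

```python
# Rate-pressure product (double product) as a noninvasive proxy for MVO2:
# RPP = heart rate (beats/min) x systolic blood pressure (mm Hg).
# Values are hypothetical; contractility and LV volume loading are not captured.
def rate_pressure_product(heart_rate_bpm, systolic_bp_mmhg):
    return heart_rate_bpm * systolic_bp_mmhg

rest = rate_pressure_product(70, 120)        # 8,400 at rest
exertion = rate_pressure_product(140, 170)   # 23,800 during exertion
print(f"RPP rest = {rest}, exertion = {exertion}, increase = {exertion / rest:.1f}x")
```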
The etiology of the fixed decrease in supply is long-standing, well-developed atherosclerotic plaques. Coronary plaques that contribute to exertional angina symptoms usually obstruct ≥70% of the epicardial coronary vessel lumen. 2 The reduction in supply is a result of obstruction of coronary blood flow by a large plaque compared with a ruptured plaque as in an acute coronary syndrome. The plaques in chronic stable angina patients are more stable, have a reduced lipid pool, and rupture infrequently. Since their geometry does not typically change acutely, they provide a relatively fixed decrease in myocardial oxygen supply.
The plaques provide a resistance to coronary blood flow in the epicardial vessels that generally do not offer any resistance to flow in patients without disease. Increases in MVO 2 are met by vasodilation of endocardial vessels that feed the myocytes. In patients with a fixed coronary lesion in the epicardial vessels, the endocardial vessels must dilate to provide adequate oxygen and blood supply to the myocytes at rest. During periods of increased MVO 2 in these patients, the endocardial vessels are already maximally vasodilated and, therefore, can provide no additional myocardial oxygen supply. The increased MVO 2 can come from increased physical activity or emotional stress. Since the increased MVO 2 demand cannot be satisfied due to the fixed reduction in supply, and maximal endocardial vasodilation at rest, angina is precipitated.
Patient Presentation
The diagnosis of stable angina involves a detailed history to characterize the nature, timing, and location of chest discomfort; precipitating factors; and the response to palliative measures (i.e., nitroglycerin or rest). 2,9 The PQRST mnemonic (Precipitating factors and Palliative measures, Quality of pain, Region and Radiation of pain, Severity of pain, Temporal factors) often is used by clinicians to evaluate chest pain and help rule in or rule out a cardiac cause. Other elements of a diagnostic work-up for a patient with suspected angina and CAD should include a physical examination; a history of risk factors for CAD (cigarette smoking, dyslipidemia, hypertension, DM, and family history of premature CAD); electrocardiography; chest X-ray; and possibly echocardiography, radionuclide imaging studies, and/or coronary angiography. 9

The typical complaints of a patient with chronic stable angina include chest pain that is precipitated by exertion, such as walking, gardening, house cleaning, or sexual activity. Upon exertion, MVO 2 exceeds what can be supplied through the fixed obstruction created by the occlusive atherosclerotic plaque. The chest pain is typically relieved by rest or sublingual nitroglycerin (SL NTG). The quality of anginal chest pain is often described as squeezing, crushing, heaviness, or tightness in the chest. It can also be more vague and described as numbness or burning in the chest. Chest pain that is described as sharp, pain that increases with inspiration or expiration, or pain that is reproducible with palpation is usually not cardiac pain. The region of the pain is substernal, and it may radiate to the right or left shoulder, right or left arm (left more commonly than right), neck, back, or abdomen. The severity of cardiac chest pain can be difficult to quantify because pain is a subjective measure, but the pain is usually considered severe and ranked 5 or higher on a 10-point scale. It is important to remember that women and the elderly may present with atypical chest pain, and patients with DM may have decreased sensation of pain due to complications of neuropathy. By definition, the timing or duration of the chest pain in patients with chronic stable angina is less than 20 minutes, and it is usually around 5 to 10 minutes. A variety of noncardiac causes of chest pain must also be considered, such as gastroesophageal reflux, esophageal motility disorders, biliary colic, costosternal syndrome, or musculoskeletal disorders. 2,9

The most recent guidelines from the American College of Cardiology (ACC) and American Heart Association (AHA) for managing patients with chronic stable angina were released in 2002. 9 Important advances have been made since then, but the guidelines remain relevant. According to the ACC/AHA guidelines, the goal of treatment in most patients with stable angina is the complete or nearly complete elimination of chest pain and a return to normal activities with minimal adverse effects, although the goal for an individual depends on his or her clinical characteristics and preferences. 9 Prevention of MI and death is another therapeutic objective in this patient population. A 3-pronged approach to treatment is outlined in the guidelines, including modification of CAD risk, antianginal therapy, and patient education about the risk factors, pathophysiology, complications, and treatment of CAD. 9
Although the patient with chronic stable angina has already developed CAD, risk-factor reduction and management remain important to prevent progression of atherosclerotic disease.
Smoking cessation and lifestyle modification (i.e., diet, exercise, and weight reduction if overweight) for patients with dyslipidemia or hypertension also are recommended to reduce CAD risk. Antilipemic and antihypertensive drug therapy in accordance with the guidelines of the National Cholesterol Education Program's Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults and the Joint National Committee for Prevention, Detection, Evaluation, and Treatment of High Blood Pressure, respectively, may be required. 10,11 Management of DM through lifestyle modification and, if needed, antidiabetic drug therapy is advised because DM is considered a CAD risk equivalent. 10 Low-dose aspirin (81 mg/day) is recommended, with clopidogrel as an alternative for patients with contraindications to aspirin. 9 The antithrombotic effect of aspirin and clopidogrel reduces the risk of MI and death. 9

Antianginal Drug Therapy

The antianginal drug therapies used in patients with stable angina provide their benefit mainly by reducing the different components of MVO 2 (Table 1). Some agents may also provide some coronary vasodilation and increased myocardial oxygen supply, but this is not their primary mechanism of benefit, and results are not consistent between agents. Aspirin, clopidogrel, and angiotensin-converting enzyme (ACE) inhibitors are vasculoprotective therapies that reduce the risk of MI and death in patients with stable angina. 9

All patients with chronic stable angina should have access to SL NTG tablets or spray, as recommended in the ACC/AHA guidelines. 9 Regardless of whether the patient is managed with medical therapy, revascularization, or a combination of approaches, treatment for acute attacks of angina is needed. About 75% of all exertional angina episodes will be relieved with the first SL NTG dose, and another 10% to 15% of patients achieve relief with the next 2 doses. 12 The tablets also may be used prophylactically before situations likely to provoke angina, such as exercise. 12 A major contributing factor to successful use of SL NTG is appropriate patient education from the pharmacist. If patients do not receive appropriate counseling on the use and storage of this agent, the opportunities for successful utilization are greatly reduced. 13

Beta-blockers are commonly used in the management of patients with chronic stable angina. By reducing heart rate, myocardial contractility, and intramyocardial wall tension (Table 1), beta-blockers affect all of the major contributing factors of MVO 2 . Heart-rate reduction may also improve myocardial oxygen delivery by prolonging diastole and increasing the time for myocardial perfusion. Beta-selectivity does not affect the efficacy of beta-blockers in the treatment of chronic stable angina, and all agents appear equally effective. Beta 1-selective agents would be preferred in patients with chronic obstructive pulmonary disease (COPD), peripheral vascular disease, DM, dyslipidemias, or sexual dysfunction. 9

Calcium channel blockers (CCBs) are also effective in reducing angina episodes in patients with chronic stable angina. Like beta-blockers, nondihydropyridine (NDHP) CCBs reduce all of the components of MVO 2 (Table 1). The similarity in the pharmacodynamic effects of beta-blockers and NDHP CCBs suggests that either drug class might be used as initial antianginal therapy in patients with stable angina.
All dihydropyridine (DHP) CCBs provide blood pressure reduction and, therefore, a reduction in intramyocardial wall tension. However, there is variation between the DHP CCBs in their impact on contractility and development of reflex tachycardia. Both DHP and NDHP CCBs provide some increase in myocardial blood flow. This increase in flow is due to the ability of the CCBs to decrease the cellular uptake of calcium and dilate epicardial coronary arteries. Most studies comparing beta-blockers and CCBs in patients with stable angina focused on the impact on the number and duration of angina episodes or the increase in exercise time to 1-mm ST segment depression. 9 Few studies have compared the impact of these drug therapies on cardiovascular outcomes or mortality in patients with stable angina.
TABLE 1. Impact of Antianginal Drug Therapies on Myocardial Oxygen Demand 2,9,12

Calcium channel blockers and beta-blockers are equivalent in efficacy for relieving angina and increasing exercise time, although some studies suggest that beta-blockers are more effective than DHP CCBs. 9 In a randomized, double-blind, parallel-group study of 330 patients with chronic stable angina, both the beta-blocker bisoprolol and the DHP CCB nifedipine reduced the number of angina episodes (from 8.1 to 3.2 episodes/48 hours for bisoprolol and from 8.3 to 5.9 episodes/48 hours for nifedipine) and their duration (from 99.3 to 31.9 minutes/48 hours for bisoprolol and from 101 to 72.6 minutes/48 hours for nifedipine). While both groups demonstrated a significant benefit over baseline, bisoprolol was significantly more effective than nifedipine for both outcomes (P <0.001). 14 In a randomized, double-blind, placebo-controlled study of 280 patients with stable angina, the beta-blocker metoprolol and nifedipine both increased exercise time (66 seconds for metoprolol and 43 seconds for nifedipine; P <0.01 for both compared with baseline), although metoprolol was more effective than nifedipine (P <0.05). 15 These findings suggest that an NDHP may be a better choice than a DHP for patients with stable angina if a CCB is chosen. In a randomized, double-blind, parallel-group study of the beta-blocker atenolol, the DHP CCB nifedipine, and a combination of these 2 drugs in 682 patients with chronic stable angina, there was a nonsignificant trend toward a lower incidence of cardiac death, nonfatal MI, and unstable angina with combination therapy compared with monotherapy (12.8% atenolol, 11.2% nifedipine, and 8.5% combination therapy; P = 0.14), and there was no significant difference between atenolol and nifedipine. 16 In 809 patients with chronic stable angina, there were no significant differences between metoprolol and the NDHP CCB verapamil in mortality (5.4% metoprolol vs. 6.2% verapamil; P = NS [not significant]) or quality of life. 17 These findings suggest that cardiovascular outcomes and mortality are similar regardless of whether a beta-blocker or a CCB is used as initial antianginal therapy in patients with stable angina.
According to ACC/AHA guidelines, a beta-blocker should be used as initial antianginal drug therapy in patients with stable angina in the absence of contraindications to beta-blocker use. 9 The guidelines call for the addition or substitution of an NDHP CCB if beta-blockers are contraindicated, cause unacceptable adverse effects, or are ineffective in controlling angina episodes. The ACC/AHA recommendation for use of beta-blockers as initial antianginal therapy in patients with stable angina is based largely on robust evidence of a survival benefit from beta-blocker therapy in other patient populations that often develop stable angina (e.g., patients with a recent MI or hypertension). 9 The choice between a beta-blocker and CCB for an individual with stable angina probably will hinge on patient characteristics and the contraindications and adverse effects associated with the drugs (Table 2). For example, a beta-blocker is appropriate for a patient with a recent MI, but an NDHP CCB may be preferred for a patient with COPD and no history of MI.
The choice of add-on therapy for a patient with inadequate control of angina episodes from a beta-blocker may depend on whether the patient has continued hypertension. A long-acting nitrate may be added if hypertension is absent, or a DHP CCB could be added if hypertension is present. Tachycardia associated with nitrates or DHP CCBs is attenuated by beta-blockers and NDHP CCBs. 9 Long-acting nitrates (e.g., nitroglycerin ointment and transdermal patches, isosorbide dinitrate, and isosorbide mononitrate extended-release tablets) dilate coronary arteries, which increases myocardial oxygen supply, and they decrease intramyocardial wall tension and MVO 2 , although they may cause reflex tachycardia. 12 Long-acting nitrates increase exercise tolerance and prevent or delay the onset of angina. 12 Long-acting nitrates should not be used as monotherapy in patients with stable angina because a 10- to 14-hour nitrate-free interval is needed on a daily basis to prevent the development of tolerance, which limits the efficacy of such therapy. 12,13 The use of another antianginal agent in combination with the long-acting nitrate is needed to provide protection from ischemia during this nitrate-free interval. Ideally, a long-acting nitrate is added to a beta-blocker or NDHP CCB in a patient whose heart rate is at or near the goal of 55 to 60 beats per minute at rest. Long-acting nitrates also are beneficial for patients with heart failure. 12

Current antianginal drug therapy options may prove inadequate for managing anginal episodes for a variety of reasons. A patient may have contraindications to the use of one or more drugs or be unable to tolerate initial or larger therapeutic dosages. Additive hemodynamic effects from combination therapy (e.g., the blood pressure-lowering effects of beta-blockers and CCBs) may cause problems before angina relief is achieved. Angina may persist despite the use of combination antianginal therapy at maximum tolerated dosages, even with careful monitoring of blood pressure and heart rate. Patients may be dissatisfied with therapy despite a measurable improvement in exercise tolerance and/or a reduced number of episodes. Patients' perceptions of what to expect from therapy and what is delivered can vary widely.

Surgical revascularization plays an important and growing role in the treatment of chronic stable angina. Revascularization options usually consist of coronary artery bypass grafting (CABG) surgery or percutaneous coronary intervention (PCI) with or without stent placement. Other options are available and under development, but they are less established. The goal with revascularization is to prolong life and eliminate or reduce symptoms. 9 Whereas most of the pharmacologic approaches reduce MVO 2 , revascularization increases myocardial oxygen supply in vessels with critical stenosis. This is accomplished by opening the vessel via PCI with or without stent placement or by using transplanted vessels to bypass a critical stenosis in the setting of CABG surgery. While both of these therapies provide significant improvement in the care of patients with chronic stable angina and have advantages in certain groups of patients over a pharmacologic approach, both revascularization treatments have limitations.
PCI involves insertion of a catheter with a deflated balloon at the tip into the femoral artery in the groin area and threading of the catheter through the aorta into a coronary artery narrowed by an atherosclerotic plaque. 18 Inflation of the balloon compresses the plaque, restoring patency to the vessel. In most cases, a drug-eluting intracoronary stent is inserted in the reopened vessel to prevent restenosis and reduce the need for repeat revascularization. 1 In CABG surgery, a segment of the saphenous vein or the internal mammary artery is grafted onto the coronary vasculature to circumvent an occluded coronary artery and restore perfusion to an area of the myocardium with an inadequate blood supply. 9 New approaches to CABG surgery have been developed in an attempt to minimize the morbidity related to the operation. One of these approaches is off-pump coronary bypass surgery, which is performed on a beating heart. Patients undergoing off-pump bypass experience the same relief from angina, vessel patency, and mortality benefit (evaluated out to 1 year) as with traditional CABG surgery. 19 Patients undergoing off-pump bypass with sternotomy can undergo multivessel bypass, but data on patients with left main disease and impaired LV function are limited. By reducing the need for cardiopulmonary bypass and avoiding clamping of the aorta, this approach produces a significant reduction in adverse neurologic events, length of hospitalization, and cost. 19

Revascularization does not always eliminate angina episodes or the need for antianginal medications. In 1,205 patients with multivessel disease who underwent PCI or CABG in the hope of achieving angina relief, approximately 10% to 20% continued to experience angina and roughly 60% to 80% required antianginal medication 1 year after the procedure. 20 In another group of 1,755 patients who underwent PCI for angina symptoms or acute MI, angina persisted 1 year after the intervention in 26% of patients. 21

Role of ACE Inhibitors

ACE inhibitors have no effect on angina, but they may reduce the risk of MI and death in patients with stable angina. 9 The results of studies of the use of ACE inhibitors in patients with CAD are somewhat controversial. Several, but not all, studies demonstrated a reduction in morbidity and mortality, especially in patients with heart failure, DM, or a previous MI. 9,21-24 Therefore, ACE inhibitors have a role in the management of stable angina despite their lack of an impact on angina episodes. However, the use of ACE inhibitors may be limited by hypotension from antianginal drug therapies, or ACE inhibitors may limit the ability to use certain antianginal drug therapies for the same reason.
Conclusion
Patients with chronic stable angina make up a significant portion of patients with CAD. The goals of therapy in these patients are to reduce angina symptoms and prolong life. Current medical therapy with beta-blockers, CCBs, and nitrates has been shown to improve angina symptoms but not to reduce mortality. Proper patient evaluation and screening will aid in appropriate antianginal medication selection for each individual. When used in appropriate situations, revascularization can improve angina control compared with medical therapy alone. Regardless of whether a medical and/or revascularization approach is used, patients require aggressive risk-factor reduction targeting smoking, hypertension, and hyperlipidemia. The pharmacist must play a key role not only in recommending and monitoring the prescribed therapy but also in educating patients.
DISCLOSURES
This article is based on a presentation given by the author at a symposium titled "Emerging Therapies for Management of Patients with Stable Angina: Focus on Clinical Efficacy and Outcomes" at the Academy of Managed Care Pharmacy's 18th Annual Meeting and Showcase in Seattle, Washington, on April 5, 2006. The symposium was supported through an educational grant from CV Therapeutics, Inc. The author received an honorarium from CV Therapeutics, Inc. for participation in the symposium. He discloses no potential bias or conflict of interest relating to this article.

Advances in the Management of Stable Angina

TOBY C. TRUJILLO, PharmD, BCPS

ABSTRACT

OBJECTIVE: To describe the approved uses, pharmacology, pharmacodynamics, pharmacokinetics, efficacy, safety, and place in therapy of ranolazine, the first new antianginal drug therapy introduced in more than 20 years for the treatment of chronic angina.

SUMMARY: The mechanism of action of ranolazine is unknown, but it may involve inhibition of the late sodium current in the myocardium, thereby preventing sodium-induced intracellular calcium overload during ischemia. This mechanism differs from that of other antianginal agents, which primarily affect myocardial oxygen supply or demand through hemodynamic effects. Ranolazine undergoes extensive metabolism, primarily by cytochrome P-450 (CYP) 3A4, so interactions with drugs that are moderate to potent inhibitors of CYP3A4 need to be considered. Ranolazine is also a P-glycoprotein (P-gp) substrate and inhibitor, and it may interact with other P-gp substrates and inhibitors. In patients with an inadequate response to other antianginal agents, the addition of ranolazine to existing antianginal therapy increases exercise duration and the time to angina on an exercise treadmill test, and it decreases the frequency of angina attacks and nitroglycerin use. The drug produces antianginal effects without significantly affecting either heart rate or blood pressure. Ranolazine prolongs the QT interval on the electrocardiogram, but the overall electrophysiologic effects of the drug suggest that it is not expected to cause torsades de pointes.

CONCLUSION: Ranolazine has a unique mechanism of action that may be complementary to that of conventional antianginal agents in the treatment of chronic angina. An understanding of the potential for drug interactions, disease interactions, and contraindications is needed to ensure safe and effective use of the drug.

Despite documented efficacy in reducing the burden of angina, conventional antianginal agents (i.e., beta-blockers, calcium channel blockers, long-acting nitrates) are limited by contraindications (e.g., beta-blockers in asthma), as well as intolerance because of side effects or adverse hemodynamic manifestations. While revascularization with either percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG) effectively reduces the number of anginal attacks, not all patients will receive complete relief. In addition, many patients are not candidates for revascularization therapy even when they experience continued anginal attacks while on aggressive medical therapy with conventional antianginal agents. Given the limitations of current treatment options for angina, there is a clear need for new therapies, particularly therapies without the limitations of conventional antianginal medications. Ranolazine, the first new antianginal drug introduced in more than 20 years, appears to be such an agent.
Ranolazine was approved by the U.S. Food and Drug Administration (FDA) in January 2006 for the treatment of chronic angina in combination with amlodipine, beta-blockers, or nitrates. 1 Ranolazine is not indicated for monotherapy at this time and should be reserved for patients with an inadequate response to other antianginal drugs. 1 Ranolazine, available as 500 mg extended-release tablets, should be initiated at a dose of 500 mg twice daily and may be titrated up to 1,000 mg twice daily as needed based on clinical symptoms. 1

Pharmacology

Conventional antianginal agents reduce myocardial oxygen demand, usually by decreasing heart rate and blood pressure, or increase myocardial oxygen supply as a result of coronary vasodilation (see the preceding article by Dobesh in this supplement). Unlike most other antianginal agents, the antianginal effect of ranolazine does not depend on a reduction in heart rate or blood pressure. 1,2 While it is established that ranolazine does not significantly affect hemodynamics, recent work has potentially identified its mechanism of action. In the past, ranolazine was thought to maintain myocardial function during ischemia by inhibiting fatty acid oxidation and shifting myocardial energy production from fatty acid oxidation to glucose oxidation, which produces a higher amount of adenosine triphosphate per oxygen molecule consumed. 3 However, these effects are observed only at plasma concentrations that far exceed those achieved with doses used clinically. Therefore, at this time, metabolic modulation does not appear to play a significant role in the relief of angina by ranolazine. 2,4 More recent clinical evidence indicates that the antianginal effect of ranolazine may involve an electrophysiologic mechanism.
Depolarization of myocardial cell membranes is initiated by the rapid influx of sodium ions into the myocardial cell through sodium channel openings. 5 Within milliseconds of this rapid influx of sodium into the cell, sodium channels are rapidly inactivated through a gating mechanism. However, under normal conditions, a certain percentage of sodium channels fail to inactivate, resulting in a small but detectable late sodium current during the plateau phase of the action potential. Recent investigations indicate that this late sodium current is augmented in pathologic conditions such as ischemia or heart failure. This increase in intracellular sodium ultimately results in an increase in intracellular calcium, likely through the reverse mode of the sodium-calcium exchanger. Elevated intracellular calcium results in myocardial dysfunction, as well as increased left ventricular diastolic wall stiffness (i.e., increased myocardial oxygen demand). Additionally, elevated wall tension causes extravascular compression of coronary vessels, which may decrease oxygen supply to the myocardium. These effects may create a cycle of progressively worsening ischemia. 2,5

Recent animal studies have identified that, at clinically relevant plasma concentrations, ranolazine selectively inhibits late sodium entry into the cell without significantly affecting the rapid upstroke of sodium at the onset of the action potential. 2,5 Consequently, ranolazine would be expected to prevent consequences of ischemia, such as myocardial dysfunction, elevated wall tension, and reduced oxygen supply. In fact, based on this proposed mechanism, ranolazine would be expected to produce greater clinical benefit in patients with more severe or frequent angina. It is important to note that conventional antianginal agents work to prevent myocardial ischemia from developing through restoration of the balance between myocardial oxygen supply and demand. 6 Because ranolazine would be expected to prevent the consequences of ischemia once it develops, the drug should be an effective complement to conventional antianginal agents in treating patients with chronic stable angina.
Pharmacokinetics
Ranolazine is rapidly and extensively metabolized in the intestine and liver, and its absorption is variable. 1 Peak plasma concentrations of ranolazine are reached 2 to 5 hours after oral administration of the extended-release formulation. 1 The bioavailability of extended-release tablets is 76% compared with oral ranolazine solution. 1 Food does not have a clinically important effect on the peak plasma concentration or area under the plasma concentration-time curve (AUC) of ranolazine. 1 Therefore, the drug may be taken with or without meals.
The apparent terminal half-life of ranolazine is 7 hours. 1,7 Steady-state ranolazine plasma concentrations are achieved within 3 days of twice-daily dosing with the extended-release preparation. 1 The peak-to-trough ranolazine plasma concentration ratio is 1.6 to 3.0, suggesting that the drug will produce relatively consistent therapeutic effects throughout the dosing interval. 1 At steady-state and therapeutic dosages, the relationship between dosage and both peak plasma concentration and AUC is nearly linear, but these pharmacokinetic measures increase slightly more than proportionally to the dosage. 1 Ranolazine is approximately 62% bound to plasma proteins (primarily α1-acid glycoprotein) at therapeutic plasma concentrations. 1,7 Ranolazine undergoes extensive metabolism in the liver and intestine, with less than 5% of an oral dose excreted unchanged in the urine and feces. 1 After a single oral dose of ranolazine oral solution, approximately 75% of the dose was excreted in the urine and 25% was excreted in the feces. 1 Ranolazine is metabolized primarily by cytochrome P-450 (CYP) 3A4 and to a lesser extent (10% to 15% of a given dose) by CYP2D6. 1,2,7,8 It is unknown whether the metabolites of ranolazine are pharmacologically active. 1
Clinical Trials
Initial studies with ranolazine utilized an immediate-release formulation dosed 3 times a day. While these studies did produce favorable results for ranolazine in terms of increasing exercise tolerance at peak concentrations, the peak-to-trough ratio was unfavorable. Subsequently, the efficacy of the extended-release formulation of ranolazine in the treatment of patients with chronic stable angina was demonstrated in 3 pivotal phase 3 clinical trials. Participants in these studies were primarily white, mostly male, with an average age between 60 and 65 years. As would be expected of patients with chronic stable angina, many patients had a history of diabetes mellitus, heart failure, hypertension, myocardial infarction (MI), and PCI or CABG. 1,2,9,10,12

Monotherapy Assessment of Ranolazine in Stable Angina (MARISA) Study

The MARISA study was a randomized, double-blind, placebo-controlled, 4-period crossover study of 191 adults with coronary artery disease (CAD) and angina. 9 Patients were eligible for the study if they had at least a 3-month history of stable angina that responded to either beta-blockers, calcium channel blockers, or long-acting nitrates. Upon discontinuation of their current antianginal medications, patients were enrolled if they developed exercise-limiting angina or electrocardiogram (ECG) changes indicative of ischemia on 2 exercise treadmill tests. Patients were randomly assigned to receive extended-release ranolazine 500 mg, 1,000 mg, 1,500 mg, or placebo orally twice daily for 1 week (ranolazine monotherapy is not approved by the FDA, but it was used in this study). Overall, the study had 4 treatment periods, with each patient crossing over to each treatment arm in a random fashion. At the end of each week of treatment, exercise treadmill testing was performed at 4 hours and 12 hours after drug administration, times that correspond to peak and trough ranolazine plasma concentrations.
The average patient was aged 64 years, 73% of the patients were male, and 91% of patients were white. 9 The primary efficacy analysis included 175 patients who completed 3 of the 4 treatment periods. At both times corresponding to trough and peak plasma ranolazine concentrations, all 3 ranolazine dosages significantly increased the exercise duration, time to angina, and time to 1 mm ST-segment depression on the ECG compared with placebo (Table 1). A dose-response relationship was demonstrated for ranolazine on all 3 measures. No clinically significant changes in heart rate or blood pressure were observed at rest or during exercise. The incidence of adverse effects in the ranolazine 500 mg twice-daily group was similar to that in the placebo group (16%). Dose-related adverse effects (dizziness, nausea, asthenia, and constipation) occurred substantially more often in the ranolazine 1,500-mg twice-daily group than in the 1,000 mg twice-daily group. Study withdrawal also was more common in the 1,500 mg twice-daily group than in other treatment groups.
Of the original 191 randomized patients, 143 agreed to participate in an open-label observational study in which extended-release ranolazine 750 mg was given twice daily, with titration to 1,000 mg twice daily over a 1- to 6-week period based on angina relief. 9 The addition of other antianginal agents was permitted. The survival rate was 96.3% in 115 patients who continued ranolazine for 1 year and 93.6% in 100 patients who continued the drug for 2 years in this open-label study. While comparative treatment differences in survival can only be adequately addressed through an appropriate randomized trial, the yearly mortality rate observed in MARISA compares favorably with historical controls (9%) with similar Duke treadmill scores (13.7) at baseline. 9

Combination Assessment of Ranolazine in Stable Angina (CARISA) Study

The CARISA study was a randomized, double-blind, placebo-controlled, parallel-group trial of 823 adults with symptomatic chronic angina despite treatment with conventional antianginal drug therapy. 10 Eligibility and enrollment criteria were similar to those of the MARISA study except that patients in the CARISA study were allowed to continue on monotherapy with fixed doses of atenolol 50 mg/day, amlodipine 5 mg/day, or extended-release diltiazem 180 mg/day. Patients were randomly assigned to receive extended-release ranolazine 750 mg or 1,000 mg or placebo orally twice daily as add-on therapy for 12 weeks. Sublingual nitroglycerin was allowed. Exercise treadmill testing was performed 4 hours after drug administration after 2 weeks and 12 weeks of treatment, and 12 hours after drug administration after 2 weeks, 6 weeks, and 12 weeks of treatment.
The average patient was aged 64 years, and roughly 3 out of 4 patients were male. 10 After 2 weeks of treatment, the exercise duration and time to angina were significantly increased by both ranolazine dosages compared with placebo at the times corresponding to both peak and trough plasma drug concentrations. The time to 1 mm ST-segment depression on the ECG was significantly increased by both ranolazine dosages compared with placebo only at the time of peak plasma drug concentration ( Table 2). All improvements were sustained over the 12 weeks of therapy.
At the time of trough plasma ranolazine concentrations, the average exercise duration was 24 seconds longer with both ranolazine dosages than with placebo, and the average time to angina was 26 to 30 seconds longer with ranolazine than with placebo. 10 The magnitude of increase in exercise duration or time to angina is comparable with those observed in studies of conventional antianginal agents, although studies directly comparing ranolazine with conventional agents are needed. 12,13 Ranolazine also demonstrated benefits in other clinical end points. At baseline, the average number of angina attacks per week was 4.5. 10 After 12 weeks of treatment, the frequency of angina was reduced to a significantly greater extent by both ranolazine dosages than by placebo. The average number of attacks per week was 3.3 in the placebo group, 2.5 in the ranolazine 750 mg twicedaily group, and 2.1 in the ranolazine 1,000 mg twice-daily group after 12 weeks of treatment. The average number of sublingual nitroglycerin doses used per week after 12 weeks of treatment also was significantly lower in both ranolazine groups (2.1 with 750 mg twice daily and 1.8 with 1,000 mg twice daily) compared with the placebo group (3.1).
Ranolazine demonstrated minimal effects on blood pressure and heart rate. 10 Dose-related adverse effects from ranolazine were similar to those reported in the MARISA study. 10 Tolerance to ranolazine did not develop during the 12 weeks of treatment. 1 Long-term survival rates in patients continuing ranolazine in an open-label extension of the CARISA study were similar to those reported in the open-label extension of the MARISA study. The survival rate was 98.4% in 480 patients who continued taking ranolazine for 1 year and 95.9% in 173 patients who continued the drug for 2 years. 10
Efficacy of Ranolazine in Chronic Angina (ERICA) Study
The multicenter, randomized, placebo-controlled, parallel-group ERICA study involved 565 patients with chronic angina. 2,11 After a 2-week qualifying phase in which an oral placebo was given twice daily along with amlodipine 10 mg/day (the maximum recommended dosage for treating angina), patients with 3 or more anginal attacks per week were randomly assigned to receive extended-release ranolazine 500 mg or placebo orally twice daily for 1 week, followed by titration to ranolazine 1,000 mg or placebo twice daily as tolerated over the subsequent 6-week double-blind treatment phase. Patients randomized to placebo for the first week received placebo for the subsequent 6 weeks (i.e., there was no crossover between treatments). Amlodipine was continued throughout the study in both treatment groups. Sublingual nitroglycerin was used as needed to treat angina episodes. Long-acting nitrates were used in conjunction with amlodipine in 43% of patients randomized to placebo and 46% of patients randomized to ranolazine therapy. 11

The characteristics of the 2 treatment groups (ranolazine and placebo) were similar at baseline. 11 The mean age was 62 years, 72% of patients were male, and 99% were white. 11 Most patients (89%) had hypertension, 80% had a history of MI, 51% had congestive heart failure, 23% were current smokers, and 19% had diabetes mellitus. 11 The average frequency of angina attacks (5.6 attacks per week) and nitroglycerin consumption (4.6 times per week) were similar in the 2 groups despite the use of amlodipine in all patients and long-acting nitrates in nearly half of patients.
At the end of the 6-week treatment phase, the average weekly number of angina attacks had decreased to a significantly greater extent in the ranolazine group (to 2.88) than in the placebo group (to 3.31). 11 A significantly greater decrease in the average number of times weekly that nitroglycerin was used also was observed in the ranolazine group (to 2.03) than in the placebo group (to 2.68). 11 These effects appeared consistent regardless of patient age (less than 65 years versus 65 years or older) and use of long-acting nitrates. 2,11 Stratification of the angina frequency and nitroglycerin use data by baseline angina severity revealed that ranolazine had a greater impact in patients with more angina attacks at baseline. 11 Similar to previous trial experience, ranolazine was well tolerated in the ERICA study, with most adverse effects classified as mild or moderate in severity. 11 The adverse effects with the greatest increase in incidence with ranolazine compared with placebo were constipation (8.9% versus 1.8%), dizziness (3.9% versus 2.5%), nausea (2.8% versus 0.7%), and headache (2.8% versus 2.5%). 14 There were no significant changes from baseline in supine or standing systolic or diastolic blood pressure or heart rate measurements in either treatment group. 11

Safety

More than 3,300 patients received ranolazine in clinical trials, including nearly 1,200 patients who received the drug in the 3 pivotal phase 3 clinical trials, for a total of 2,710 patient-years of exposure to the drug. 1,9,10,11 In open-label study extensions, 639 patients were exposed to ranolazine for more than 1 year, 578 patients were exposed to the drug for more than 2 years, and 372 patients were exposed for more than 3 years. 1 While the effect of ranolazine on long-term mortality is not known and is the focus of ongoing studies, currently available information does not demonstrate an adverse effect of the drug on mortality. In the CARISA study, the longest of the 3 pivotal phase 3 studies, 3 (1%) of 269 placebo-treated patients, 2 (0.7%) of 279 patients in the ranolazine 750 mg twice-daily group, and 1 (0.4%) of 275 patients in the ranolazine 1,000 mg twice-daily group died during the study. 10 Previously reported survival rate data from the open-label portions of the MARISA and CARISA studies also suggest that ranolazine does not have an adverse effect on survival. 9,10 In controlled studies, the rate of discontinuation of the study drug because of adverse effects was 6% with ranolazine and 3% with placebo. 1
Disease and Drug Interactions
Ranolazine interactions with various diseases and drugs are well characterized. Mild, moderate, and severe hepatic impairment (Child-Pugh classes A, B, and C) are contraindications to the use of extended-release ranolazine. 1 The plasma concentrations of ranolazine were increased 1.3- and 1.6-fold in patients with mild (Child-Pugh class A) and moderate (Child-Pugh class B) hepatic impairment, respectively, compared with healthy volunteers. 1 Renal impairment is not a contraindication to the use of extended-release ranolazine. Nevertheless, the drug should be used with caution in this patient population, especially in patients with severe renal impairment. The pharmacokinetics of extended-release ranolazine (an 875 mg loading dose followed by four 500 mg doses every 12 hours) were evaluated in 8 healthy subjects and 21 subjects with mild-to-severe renal impairment. 14 At steady state, the ranolazine AUC for the 12-hour period after drug administration was increased by 72%, 80%, and 97% in subjects with mild, moderate, and severe renal impairment, respectively, compared with the healthy subjects. Since plasma concentrations of ranolazine may increase by 50% in patients with varying degrees of renal impairment, 1 careful assessment of patient response and tolerability should take place prior to dose titration, and 500 mg twice daily may represent the maximum dose that should be used in this patient population. The pharmacokinetics of ranolazine in patients receiving dialysis have not been evaluated.
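As a rough, back-of-the-envelope illustration (assuming the nearly linear dose-exposure relationship described under Pharmacokinetics and taking the 97% AUC increase reported for severe renal impairment), capping the dose at 500 mg twice daily in severe renal impairment yields an exposure on the order of that produced by the 1,000 mg twice-daily maximum in patients with normal renal function:

\[
\mathrm{AUC}_{500\ \mathrm{mg,\ severe\ renal\ impairment}} \approx 1.97 \times \mathrm{AUC}_{500\ \mathrm{mg,\ normal}} \approx \mathrm{AUC}_{1{,}000\ \mathrm{mg,\ normal}}.
\]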
Blood pressure should be monitored regularly after initiating ranolazine in patients with severe renal impairment because the mean diastolic blood pressure increased approximately 10 to 15 mm Hg in 6 subjects with severe renal impairment who received 500 mg twice daily. 1 Concurrent use of ranolazine with diltiazem and other potent or moderately potent CYP3A4 inhibitors is contraindicated because ranolazine is metabolized primarily by CYP3A4. 1 These CYP3A4 inhibitors (e.g., ketoconazole, other antifungal agents, diltiazem, verapamil, macrolide antibiotics, protease inhibitors for the treatment of human immunodeficiency virus, grapefruit juice) can substantially increase ranolazine plasma concentrations. 1,15 Ketoconazole 200 mg twice daily increased average plasma ranolazine concentrations more than 3-fold when it was administered concurrently with extended-release ranolazine 1,000 mg twice daily. 15 Diltiazem 180 mg/day and 360 mg/day increased steady-state plasma ranolazine concentrations 1.8-fold and 2.3-fold, respectively, when diltiazem was used concomitantly with ranolazine 1,000 mg twice daily. 1,15 Less potent CYP3A4 inhibitors (e.g., cimetidine) do not increase plasma concentrations of ranolazine and are not contraindicated during ranolazine therapy. 1 Verapamil 120 mg 3 times daily doubled steady-state plasma ranolazine concentrations when used concurrently with extended-release ranolazine 750 mg twice daily. 1 Verapamil is a P-glycoprotein (P-gp) inhibitor as well as an inhibitor of CYP3A4. In vitro studies indicate that ranolazine is a P-gp substrate. 1 Verapamil and other P-gp inhibitors (e.g., ritonavir, cyclosporine) may increase ranolazine absorption and bioavailability. Ranolazine also inhibits P-gp, and the dosage of other P-gp substrates (e.g., digoxin, simvastatin) may need to be reduced when ranolazine is used concurrently. Coadministration of extended-release ranolazine (1,000 mg twice daily) and digoxin 0.125 mg/day increased plasma digoxin concentrations approximately 1.5-fold. 1 A 2-fold increase in plasma concentrations of simvastatin and its active metabolite was observed when extended-release ranolazine 1,000 mg twice daily and simvastatin 80 mg/day were used concomitantly. 1 Steady-state ranolazine plasma concentrations were increased 1.2-fold when 20 mg/day of paroxetine, a potent CYP2D6 inhibitor, and extended-release ranolazine 1,000 mg twice daily were used simultaneously. 1 However, no adjustment in extended-release ranolazine dosage is required when the drug is used with paroxetine or other CYP2D6 inhibitors because CYP2D6 plays a limited role in ranolazine metabolism. Conversely, ranolazine may inhibit the activity of CYP2D6 and thereby the metabolism of certain other drugs (e.g., tricyclic antidepressants, some antipsychotic agents) by this isoenzyme. Although ranolazine can also inhibit CYP3A4, ranolazine and its most abundant metabolites are not known to inhibit the metabolism of substrates for CYP1A2, 2C9, 2C19, or 2E1. 1
QT Interval Prolongation
Ranolazine is contraindicated in patients receiving drugs that prolong the QT interval on the ECG, including class Ia antiarrhythmic agents (e.g., quinidine), class III antiarrhythmic agents (dofetilide, sotalol), erythromycin, and certain antipsychotic agents (e.g., thioridazine, ziprasidone), and in patients with preexisting QT interval prolongation. 1 Ranolazine has been shown to prolong the QT interval corrected for heart rate (QTc) in a dose-and plasma-concentration-related manner. 1 Several agents that cause QT prolongation have been associated with proarrhythmia, specifically torsades de pointes, and sudden cardiac death. 16,17 The relationship between QT prolongation and proarrhythmia has not been studied in patients receiving ranolazine, but the possibility of additive prolongation of the QT interval and a higher incidence of proarrhythmia should be considered when ranolazine is used in a patient who has preexisting QT interval prolongation, is receiving another drug that prolongs the QT interval, or has an elevated risk for torsades de pointes (e.g., uncorrected hypokalemia or hypomagnesemia).
FIGURE 1. The Torsades de Pointes Perpetuator
A baseline ECG should be obtained before initiating ranolazine. The relationship between change in QTc interval and ranolazine plasma concentration is well established. At ranolazine plasma concentrations up to 4 times higher than those associated with the maximum recommended dosage, the relationship is linear with a slope of about 2.6 msec per 1,000 ng/mL. 1 The slope is steeper in patients with hepatic impairment, with 3-fold greater increases in QTc interval prolongation for each increment in plasma concentration compared with patients without hepatic impairment. 1 In patients without hepatic impairment, the average QTc interval prolongation is 6 msec at the maximum recommended dosage, although a prolongation of at least 15 msec has been observed in the 5% of the population with the highest plasma concentrations.
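For illustration only (the plasma concentration below is an assumed value, chosen so that the result matches the 6-msec average prolongation reported at the maximum recommended dosage), the linear concentration-response relationship implies:

\[
\Delta \mathrm{QTc} \approx 2.6\ \frac{\text{msec}}{1{,}000\ \text{ng/mL}} \times 2{,}300\ \text{ng/mL} \approx 6\ \text{msec}.
\]

Because the slope is roughly 3-fold steeper in patients with hepatic impairment, the same concentration would be expected to produce a proportionally larger prolongation, one reason hepatic impairment is a contraindication.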
Drugs that produce QT prolongation immediately raise concern in clinicians that the risk of proarrhythmia may be increased. However, it is well established that not every agent that produces QT prolongation is associated with an elevated risk for proarrhythmia. Although torsades de pointes is associated with some drugs that prolong the QT interval (e.g., dofetilide), the incidence of torsades de pointes is low in patients treated with other drugs that cause QT interval prolongation (e.g., amiodarone). 16-18 Thus, a better surrogate measure of the proarrhythmic potential of drug therapies is needed.
Recent work indicates that prolongation of repolarization (as observed by an increased QT interval) alone is not sufficient to increase the risk of proarrhythmia, specifically in this case torsades de pointes. Rather, the risk for torsades is increased when QT prolongation is accompanied by an increase in early afterdepolarizations (EADs) and increased dispersion of repolarization (Figure 1). 19 The risk of EADs increases as the action potential duration increases. EADs can produce ectopic beats and extrasystoles, serving as the trigger to initiate and then perpetuate torsades de pointes. Dispersion of repolarization refers to the spatial variability among different parts of the ventricular wall (i.e., the endocardium, midmyocardium, and epicardium) in the time to repolarization (i.e., refractoriness). The dispersion is the difference between the longest and shortest action potential durations in different areas. An increase in the dispersion of repolarization can be viewed as the necessary substrate for proarrhythmia, setting the stage for reentry (i.e., abnormal cardiac impulse conduction). Certain class III antiarrhythmic agents associated with an increased incidence of torsades de pointes increase the action potential duration in all parts of the ventricular wall, but they increase it to a much greater extent in the midmyocardium, thereby increasing the dispersion of repolarization. 20 Because QT prolongation must be accompanied by an increase in EADs, as well as an increase in the dispersion of repolarization throughout the myocardium, for torsades de pointes to occur, a more thorough assessment of the electrophysiologic effects of ranolazine is needed to predict its risk of proarrhythmia.
Animal work has demonstrated that ranolazine prolongs the action potential duration and the QT interval, but it suppresses EADs and reduces dispersion of repolarization. 21 In addition, it was noted that ranolazine suppresses the proarrhythmic effects of some drugs that prolong the QT interval (e.g., d-sotalol). 21 Overall, it appears that the electrophysiologic effects of ranolazine are similar to those of amiodarone, which is associated with a very low incidence of proarrhythmia and torsades de pointes. 18,21,22 Because the cellular electrophysiology underlying the effect of ranolazine on the QT interval is fundamentally different from that of drugs known to cause torsades de pointes, ranolazine is not expected to cause torsades de pointes. To date, no cases have been reported in clinical trials with ranolazine. However, adequate postmarketing surveillance will likely be necessary to adequately define the proarrhythmic potential of the agent.
Place in Therapy
Ranolazine therapy appears to be useful as add-on therapy in patients with extensive CAD and angina that is not controlled with conventional antianginal agents. Ranolazine may be particularly beneficial for the subset of patients who are not candidates for revascularization and remain symptomatic despite the use of maximum dosages of multiple antianginal agents, or have hemodynamic limitations that preclude initiation or titration to optimal dosages of conventional antianginal agents. As with all drug therapies, the risks associated with ranolazine use (e.g., potential for drug interactions) need to be considered before initiating therapy.
Additional clinical experience will help clarify the place in therapy for ranolazine. The long-term efficacy and safety of ranolazine treatment for up to 12 months will be evaluated in approximately 5,500 patients with non-ST elevation acute coronary syndromes treated with standard therapy in the Metabolic Efficiency with Ranolazine for Less Ischemia in Non-ST Elevation Acute Coronary Syndromes study (also referred to as MERLIN-TIMI 36). This phase 3, international, randomized, double-blind, placebo-controlled, parallel-group study began in October 2004. The primary end point is the time to first occurrence of any element of the composite of cardiovascular death, MI, or recurrent ischemia. Additional end points include exercise tolerance test performance, quality of life, and pharmacoeconomic benefit.
Conclusion
Ranolazine has a unique mechanism of action that may be complementary to that of conventional antianginal agents. Ranolazine, when added to conventional antianginal therapy, is effective for the treatment of chronic angina. Ongoing studies will better define the effect of ranolazine on hard outcomes such as mortality, as well as its place in therapy in the treatment of patients throughout the spectrum of CAD. Despite the drug's proven benefit, providers will need to be familiar with the potential for drug interactions, disease interactions, and defined contraindications in order to use the medication in the safest manner possible. Although ranolazine can prolong the QT interval, it is not expected to cause proarrhythmia because of its overall electrophysiologic effects.
DISCLOSURES
This article is based on a presentation given by the author at a symposium entitled "Emerging Therapies for Management of Patients with Stable Angina: Focus on Clinical Efficacy and Outcomes" at the Academy of Managed Care Pharmacy's 18th Annual Meeting and Showcase in Seattle, Washington, on April 5, 2006. The symposium was supported through an educational grant from CV Therapeutics, Inc. The author received an honorarium from CV Therapeutics, Inc. for participation in the symposium. He has served as a consultant for CV Therapeutics, Inc.

Chronic stable angina limits daily activities and has an adverse impact on quality of life despite the availability of a variety of therapeutic modalities. 1 One in 3 previously employed patients is unable to return to work within 1 year after revascularization. 2 Stable angina has a staggering societal and economic impact. In the United States, the annual direct and indirect costs of angina, including lost productivity and work days, are measured in tens of billions of dollars. 3

The direct costs of chronic stable angina from a societal perspective in the year 2000 were estimated by developing a cost-of-illness model based on medical utilization data from National Center for Health Statistics databases, national average Medicare reimbursement rates, International Classification of Diseases, Ninth Revision (ICD-9) codes, and databases of medications valued at average wholesale prices. 4 Because angina is a manifestation of coronary artery disease (CAD), and because estimates based on the ICD-9 code for CAD might overestimate costs while estimates based on the ICD-9 code for narrowly defined chronic angina (NCA) might underestimate them, a range of estimates was calculated. The lower end of this range was based on the ICD-9 code for NCA, and the upper end of the range was based on the ICD-9 code for CAD. The true cost of chronic angina is thought to lie between the lower and upper ends of this range. Because chronic angina is not always the primary diagnosis and limiting the analysis to primary diagnoses might underestimate costs, separate estimates were made for NCA and CAD when they were listed as any diagnosis as well as when they were the primary diagnosis. Medicare reimbursement rates were used because they are readily available, most patients with angina and CAD are elderly, and Medicare is the primary payer for this age group.
The total direct cost of illness was conservatively estimated at $1.8 billion for NCA as a primary diagnosis, with $8.9 billion for NCA as any diagnosis. 4 Less conservative estimates of $33 billion and $75 billion were made for the total direct cost of illness when CAD was the primary diagnosis and when CAD was listed as any diagnosis, respectively. The largest components of the direct costs of NCA as the primary diagnosis were outpatient visits (38%), hospitalizations (16%), and prescription medications (15%). By contrast, the largest components of the direct costs of CAD as the primary diagnosis were hospitalizations (74%), nursing home stays (22%), and outpatient visits (10%). Hospitalizations contributed a much larger portion of the direct costs of CAD as a primary diagnosis than of NCA as a primary diagnosis (74% versus 16%), largely because of the expense of revascularization and treatment of acute myocardial infarction (MI). The average cost of hospitalization per utilization ranged from $3,744 for NCA as a primary diagnosis to $12,024 for CAD as a primary diagnosis.
Estimates by the Health Care Financing Administration (now the Centers for Medicare & Medicaid Services) of the total health care expenditures for CAD ranged from $54 billion to $105 billion (adjusted for 2000 dollars). 5 Another estimate of the total direct expenditures for heart disease in the United States was $71 billion (adjusted for 2000 dollars). 6 The cost of hospitalization was the largest component, accounting for roughly 60% to 75% of these totals. It includes medications and all other therapies provided during a hospital stay. 8-10

In one study, the median cost of the initial hospitalization for PCI with planned stent insertion for coronary heart disease involving a single vessel was $10,452 in 2003 dollars. 8,9 Another study compared the costs of routine stent implantation (i.e., primary stenting) with those of provisional stenting (i.e., the insertion of stents during balloon angioplasty only if the results of angioplasty were less than optimal) in patients with single-vessel disease. 8,10 The mean cost of the initial hospitalization was higher ($11,694) for primary stenting than for provisional stenting ($10,681). However, the mean total cost after 6 months was lower ($12,925) for primary stenting than for provisional stenting ($13,285). The investigators concluded that primary stenting improved clinical outcomes at a cost comparable to or slightly less than that of provisional stenting in patients with single-vessel disease. 8,10

Multivessel Disease

The 2004 review by Nagle and Smith also compared the costs of PCI and CABG in 3 studies of patients with multivessel disease, in 2003 dollars. 8,11-13 The costs of multivessel stenting in 100 patients and CABG in 200 patients who were followed for a median of 2.8 years were compared in a retrospective, matched cohort study. 8,11 The mean initial hospitalization cost was significantly lower (P <0.001) in the multivessel stenting group ($13,454) than in the CABG group ($23,438). The mean total cost after 2 years remained significantly lower in the multivessel stenting group than in the CABG group ($20,088 vs. $27,669), despite a significantly higher need for at least one repeat revascularization procedure in the multivessel stenting group.
Two other longer studies comparing the costs of PCI and CABG in patients with multivessel disease suggest that the cost gap between PCI and CABG narrows over time because of the need for repeat revascularization after PCI. 8,12,13 In a randomized, controlled study, the mean initial hospitalization cost was $6,627 lower in patients undergoing PCI than in patients undergoing CABG. 12 The difference between PCI and CABG in the mean total cost was $5,153 after 3 years, and it decreased to $2,605 after 8 years.
In 2003 dollars, the mean total 8-year cost was $56,343 in the PCI group and $58,948 in the CABG group, a difference that is not significant.
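For readers who want to trace the arithmetic, the sketch below simply restates the figures quoted above and recomputes the narrowing cost gap between PCI and CABG; it is an illustration of the reported numbers, not a re-analysis of the underlying trial data.

```python
# Minimal arithmetic check of the reported cost figures (2003 dollars).
# Values are taken directly from the text above.
pci_total_8yr = 56_343
cabg_total_8yr = 58_948

difference = cabg_total_8yr - pci_total_8yr
print(f"8-year cost difference (CABG - PCI): ${difference:,}")  # $2,605

# The initial-hospitalization gap of $6,627 narrows to $5,153 at 3 years and
# $2,605 at 8 years, consistent with repeat revascularization after PCI
# eroding its early cost advantage.
gap_by_horizon = {"initial": 6_627, "3 years": 5_153, "8 years": 2_605}
for horizon, gap in gap_by_horizon.items():
    print(f"PCI cost advantage at {horizon}: ${gap:,}")
```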
The total lifetime costs for initial angioplasty with primary stenting, initial angioplasty with provisional stenting, CABG with primary stenting, CABG with provisional stenting, and CABG without stenting in patients with multivessel disease were modeled using data from a substudy of the Bypass Angioplasty Revascularization Investigation. 13 The total lifetime costs were similar, ranging from $154,018 to $163,587 in 2003 dollars. 8
Drug-Eluting Stents
The initial treatment costs, follow-up costs, and total 1-year costs were compared in 1,058 patients with complex stenoses in a single coronary vessel who planned to undergo PCI and were randomly assigned to implantation of a drug-eluting stent or a bare-metal stent after PCI. 14 The initial treatment cost was $2,856 per patient higher in the drug-eluting stent group than in the bare-metal stent group, a difference that was significant (P = 0.001). However, the mean follow-up costs per patient over the subsequent 12 months were $2,571 lower in the drug-eluting stent group than in the bare-metal stent group, largely because of a lower need for repeat revascularization. The total 1-year cost was $309 higher in the drug-eluting stent group than in the bare-metal stent group, a difference that was not significant.
The economic impact, from a hospital perspective, over a 5-year period of a proposed change in Medicare reimbursement policy for drug-eluting stents and converting from bare-metal stents to drug-eluting stents was simulated by a computer model. 15 An annual patient volume of 3,112 and the use of drug-eluting stents in 85% of stent implants during the first year were assumed in the model. 15 In 2003 dollars, the model predicted a shift from a $2.01 million annual profit to a $5.41 million loss in the first year and a $6.38 million annual loss in subsequent years. 8,15 Thus, more than $28 million in revenue would be diverted from the hospital over a 5-year period under the conditions of the model (i.e., adoption of the Medicare reimbursement policy for drug-eluting stents). The potential for loss of revenue may, in part, explain lower rates of use of drug-eluting stents in some hospitals than in others.
Justifying the Costs
In summary, both PCI and CABG are costly procedures. The costs of PCI for single-vessel disease are less than the costs of PCI for multivessel disease. 8 In patients with multivessel disease, the initial costs of PCI with or without stenting are lower than the initial costs of CABG, but the long-term costs of PCI and CABG are similar. Drug-eluting stents have the potential to greatly affect the economics of revascularization, but additional data are needed to quantify the impact.
The long-term benefits of PCI and CABG are unclear and controversial. [16][17][18][19] Although short-term improvements in anginal symptoms and quality of life have been demonstrated with revascularization, these improvements may subside over time. 20 One in 4 patients has recurrence of angina within 1 year after revascularization, and many of these patients require antianginal medications. 20,21 Twenty-three percent of patients undergoing PCI or CABG report their health as poor or fair 5 years after the procedure. 2
Comparison with Medical Management
Evidence suggests that revascularization often is considered before medical therapy has been given an adequate trial. 22 Guidelines of the American College of Cardiology and American Heart Association for the treatment of chronic stable angina call for the use of medical therapy unless contraindicated before considering revascularization (see the article by Trujillo in this supplement). 3
Meta-Analysis
A meta-analysis of 11 randomized trials comparing PCI with conservative medical treatment in a total of 2,950 patients with stable CAD found no significant difference between the 2 groups in mortality (n = 95 vs. n = 101, respectively), a composite of cardiac death or MI (n=126 vs. n=109, respectively), nonfatal MI (n = 87 vs. n = 66, respectively), the need for CABG (n = 109 vs. n = 106, respectively), or the need for PCI during follow-up (n = 219 vs. n = 243, respectively). 23 There was an increase in relative risk of nonfatal MI by approximately 30% in the PCI group compared with the conservative medical treatment group, largely related to the PCI procedure. The difference between the 2 groups was not significant. A possible survival benefit was seen for PCI in trials of patients with a recent MI. Thus, in the absence of a recent MI, PCI did not offer any benefit in terms of reduced risk of death, MI, or need for repeat revascularization compared with conservative medical treatment.
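The approximately 30% relative increase in nonfatal MI can be reproduced from the event counts alone if one assumes, purely for illustration, that the PCI and medical-treatment arms were of roughly equal size (about 1,475 patients each out of the 2,950 randomized); the sketch below makes that assumption explicit.

```python
# Rough illustration of the ~30% relative increase in nonfatal MI reported in
# the meta-analysis. Assumption (not stated in the text): the PCI and medical
# arms were of roughly equal size, i.e., ~1,475 patients per arm.
pci_events, medical_events = 87, 66
n_per_arm = 2_950 / 2  # assumed equal allocation

risk_pci = pci_events / n_per_arm
risk_medical = medical_events / n_per_arm
relative_risk = risk_pci / risk_medical

print(f"Risk of nonfatal MI: PCI {risk_pci:.3f}, medical {risk_medical:.3f}")
print(f"Relative risk: {relative_risk:.2f}")  # ~1.32, i.e., roughly 30% higher with PCI
```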
Randomized Intervention Treatment of Angina (RITA-2)
The costs of PCI and medical management were compared in several studies. In the second Randomized Intervention Treatment of Angina (RITA-2) trial, 1,018 patients with stable CAD were randomly assigned to undergo PCI or receive continued medical management. 24 Health service resource use data were collected prospectively over a 3-year follow-up period.
At the end of the 3 years, the incidence of the composite end point of death or MI was significantly higher (P=0.025) in the PCI group (7.3%) than in the medical management group (4.1%), largely due to procedure-related nonfatal MI. 24 The incidence of grade 2 or worse angina was significantly lower (P <0.001) in the PCI group (17%) than in the medical management group (27%) after 1 year of follow up, but there was no significant difference (P = 0.43) between the 2 groups in this end point after 3 years of follow up (20% versus 22%, respectively). After the initial treatment strategy in the RITA-2 study, the number of subsequent PCI procedures was higher in the medical management group than in the PCI group (118 PCI procedures in 102 patients in the medical management group versus 73 PCI procedures in 62 patients in the PCI group), but the number of coronary angiograms was higher in the PCI group than in the medical management group (171 coronary angiogram procedures in 131 patients in the PCI group versus 110 coronary angiogram procedures in 93 patients in the medical management group). 24 The number of CABG procedures was similar in the 2 groups (37 CABG procedures in 37 patients in the medical management group and 38 CABG procedures in 37 patients in the PCI group). As expected, the use of antianginal medications (beta-blockers, calcium channel blockers, and long-acting nitrates) was higher in the medical management group than in the PCI group. The use of community-based resources (general practitioner visits, district nurse visits, and trial research assistants) was similar in the 2 groups.
The average hospital unit cost, which includes medical and nursing staff, standard procedure-related drugs and anesthetics, equipment, consumables, and overhead, was nearly twice as high in the PCI group as in the medical management group. 24 The difference between the total costs of the 2 therapeutic approaches did not diminish over time (Figure 1). The cost of PCI as an initial strategy exceeded the cost of medical management as an initial strategy by 74% over 3 years.
Medical, Angioplasty, or Surgery Study (MASS-II)

The clinical outcomes and effective costs of medical management, PCI with stenting, and CABG were compared after 1 year in the MASS-II, a randomized study of 611 patients with multivessel CAD and preserved left ventricular function. 25 The baseline characteristics of the 3 treatment groups were similar, except for a higher incidence of previous acute MI in the PCI plus stenting group than in the other 2 groups and a higher incidence of class III or IV angina pectoris in the CABG group than in the other 2 groups. The incidence of death during 1 year of follow-up was similar in the 3 groups: 1.9% with medical management, 4.4% with PCI plus stenting, and 3.9% with CABG. 25 However, significantly larger percentages (P <0.0001) of patients in the PCI plus stenting group (79%) and CABG group (88%) remained angina-free after 1 year than patients in the medical management group (49%). The need for angioplasty was significantly higher (P = 0.0003) in the PCI plus stenting group (8.3%) than in the medical management group (3.5%) and the CABG group (0.5%). The average time to first event (acute MI, need for revascularization procedure, or death) was similar in the 3 groups: 4.6 months in the medical management group and PCI plus stenting group and 3.7 months in the CABG group.
The analysis of effective costs was performed taking into consideration clinical outcomes as well as the costs of treatment over a 1-year period. 25 Expected costs, costs per event-free year of life gained from treatment, and costs per angina- and event-free year of life gained from treatment were determined for all 3 interventions. The expected costs were lowest for medical management, higher for PCI plus stenting, and highest for CABG. The event-free cost of 1 year of life gained with medical management, PCI plus stenting, and CABG was $2,454, $10,348, and $12,404, respectively. The cost per angina- and event-free year of life gained from medical management, PCI plus stenting, and CABG was $5,006, $13,099, and $14,095, respectively. Thus, medical management presented the lowest cost but the greatest incremental increase in cost. The effective costs of PCI plus stenting and CABG were similar when clinical outcomes were considered in the cost analysis. The most stable costs were presented by the CABG group.
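The "effective cost" figures above are cost-effectiveness ratios: a treatment cost divided by the event-free (or angina- and event-free) years of life it gains. The sketch below shows the structure of that ratio; the event-free-year input is a hypothetical placeholder, and the published MASS-II values are echoed only for comparison.

```python
# Sketch of the cost-effectiveness ratio used in the MASS-II analysis:
# cost per event-free year of life gained = treatment cost / event-free years.
# The example inputs are hypothetical; only the ratio structure reflects the
# analysis described above.

def cost_per_event_free_year(total_cost: float, event_free_years: float) -> float:
    """Return the cost per event-free year of life gained."""
    return total_cost / event_free_years

# Hypothetical example: a $10,000 strategy yielding 0.9 event-free years.
print(f"${cost_per_event_free_year(10_000, 0.9):,.0f} per event-free year")

# Reported MASS-II values for comparison (1-year horizon):
reported = {"medical management": 2_454, "PCI + stent": 10_348, "CABG": 12_404}
for strategy, cost in reported.items():
    print(f"{strategy}: ${cost:,} per event-free year of life gained")
```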
Trial of Invasive Versus Medical Therapy in Elderly Patients With Chronic Angina (TIME) Study
In the TIME study, the costs and benefits of using either PCI or CABG were compared with those of medical therapy over a 1-year period in 188 elderly patients (aged 75 years or older) with chronic CAD and angina. 26 The primary end point was quality of life and freedom from major adverse clinical events (death, nonfatal MI, or hospitalization for uncontrolled symptoms or acute coronary syndrome, with or without the need for revascularization).
The incidence of major adverse clinical events over the 1-year TIME study period was significantly lower (P <0.0001) in the invasive therapy group (0.38 events per patient) than in the medical therapy group (1.0 event per patient). 26 Angina severity decreased and quality of life improved from baseline in both treatment groups, with no significant differences between the 2 groups after 1 year.
The average cost was significantly higher (P <0.0002) with invasive therapy than with medical therapy during the first 30 days, but the cost in the subsequent 11 months was significantly higher (P = 0.004) with medical therapy than with invasive therapy. 26 The total cost over the 1-year study period was slightly lower in the medical therapy group compared with the invasive therapy group, but the difference was not significant (P = 0.08).
Analysis of the incremental cost to prevent a major adverse clinical event favors the use of invasive therapy instead of medical therapy in this patient population. 26 However, little improvement in quality of life is associated with substitution of medical therapy with invasive therapy.
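The incremental analysis referred to above divides the extra cost of invasive therapy by the major adverse clinical events it prevents (1.0 versus 0.38 events per patient over 1 year). The sketch below illustrates that calculation; the cost difference is a hypothetical placeholder because the study reports only that total 1-year costs were similar between strategies.

```python
# Illustrative incremental cost per major adverse clinical event prevented,
# following the structure of the TIME analysis. Event rates are taken from the
# text; the cost difference is a hypothetical placeholder.
events_medical = 1.0    # events per patient over 1 year
events_invasive = 0.38  # events per patient over 1 year
events_prevented_per_patient = events_medical - events_invasive  # 0.62

hypothetical_extra_cost_invasive = 1_500  # assumed extra cost per patient, 1-year horizon
icer_per_event_prevented = hypothetical_extra_cost_invasive / events_prevented_per_patient
print(f"Events prevented per patient: {events_prevented_per_patient:.2f}")
print(f"Incremental cost per event prevented: ${icer_per_event_prevented:,.0f}")
```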
Conclusion
Chronic stable angina is associated with large direct and indirect costs, with a large share of the costs associated with hospitalization and revascularization. Revascularization is sometimes used without an adequate trial of medical management, despite higher costs and a lack of clear evidence of long-term clinical benefits.
DISCLOSURES
This article is based on a presentation given by the author at a symposium entitled "Emerging Therapies for Management of Patients with Stable Angina: Focus on Clinical Efficacy and Outcomes" at the Academy of Managed Care Pharmacy's 18th Annual Meeting and Showcase in Seattle, Washington, on April 5, 2006. The symposium was supported through an educational grant from CV Therapeutics, Inc. The author received an honorarium from CV Therapeutics, Inc. for participation in the symposium. She discloses no potential bias or conflict of interest relating to this article.

17. Which of the following statements about the initial treatment costs and follow-up costs of bare-metal stents and drug-eluting stents in patients with complex stenoses in a single coronary vessel is correct?
a. The initial treatment cost and follow-up costs are lower with drug-eluting stents than with bare-metal stents because of volume discounts.
b. The initial treatment cost is lower with drug-eluting stents than with bare-metal stents because of a lower rate of procedure-related nonfatal myocardial infarction (MI), but the follow-up cost is similar with drug-eluting and bare-metal stents.
c. The initial treatment cost is higher with drug-eluting stents than with bare-metal stents, but the follow-up cost is lower with drug-eluting stents than bare-metal stents because of a lower need for repeat revascularization.
d. The initial treatment cost and follow-up costs are higher with drug-eluting stents than with bare-metal stents because of patent protection for drug-eluting stents.
18. Which of the following statements about the comparative initial and long-term costs of PCI and CABG in patients with stable angina is correct?
a. The initial costs for PCI are lower than those for CABG, but the long-term costs of PCI are higher than those for CABG.
b. The initial costs for PCI are higher than those for CABG, but the long-term costs of PCI are lower than those for CABG.
c. The initial costs for PCI are higher than those for CABG, but the long-term costs of PCI are similar to those for CABG.
d. The initial costs for PCI are lower than those for CABG, but the long-term costs of PCI are similar to those for CABG.

19. Which of the following statements best summarizes the findings from the meta-analysis of 11 randomized trials comparing PCI with conservative medical treatment in patients with stable CAD?
a. In the absence of a recent MI, PCI did not offer any benefit in terms of reduced risk of death, MI, or need for repeat revascularization compared with conservative medical treatment.
b. In all patients, PCI significantly reduced the risk of death, MI, and need for repeat revascularization compared with conservative medical treatment.
c. In the absence of a recent MI, PCI significantly reduced the risk of death, MI, and need for repeat revascularization compared with conservative medical treatment.
d. In patients with a recent MI, PCI significantly increased the risk of death, MI, and need for repeat revascularization compared with conservative medical treatment.
20. Which of the following statements best summarizes the findings of the RITA-2 study comparing the costs of PCI and medical management over a 3-year period in patients with stable CAD?
a. The initial costs were nearly twice as high with PCI as with medical management, but the cost gap narrowed over the 3-year period.
b. The initial costs were nearly twice as high with medical management as with PCI, but the cost gap narrowed over the 3-year period.
c. The initial costs were nearly twice as high with PCI as with medical management, and the cost gap did not diminish over the 3-year period.
d. The initial costs were nearly twice as high with medical management as with PCI, and the cost gap did not diminish over the 3-year period.
To complete this continuing education activity, go to the ASHP Advantage CE Processing Center at www.ashp.org/advantage/ce to access the posttest and evaluation.
Effects of Hermetia illucens Larvae Meal and Astaxanthin as Feed Additives on Health and Production Indices in Weaned Pigs
Simple Summary

Weaning is a stressful period that reduces digestive capacity and increases oxidative stress and disease susceptibility in piglets. Feed additives can protect the piglets' health status in a natural way. This study aimed to evaluate the effects of full-fat H. illucens larvae meal (HI) and astaxanthin (AST) supplementation on the growth performance and health status of weaned pigs. HI contains bioactive substances (chitin, antimicrobial peptides, lauric acid) with immunostimulatory, antimicrobial, and anti-inflammatory properties. Astaxanthin is a carotenoid pigment with strong antioxidant and anti-inflammatory capacities. The results showed that astaxanthin supports the inhibition of oxidative stress. In the experiment lasting from 35 to 70 days of age, 48 weaned pigs (about 8.7 kg body weight) were involved. Both supplements were tested separately or combined in feed mixtures. The 2.5% HI and AST supplementation can reduce the susceptibility of pork fat to oxidation. However, a higher concentration of HI (5%) was not beneficial because of the adverse changes in some of the red cell indices and thus should be combined with the antioxidant AST to improve these indices. Both supplements did not negatively affect the piglets' productivity.

Abstract

Weaning is a critical period in farming, and therefore, searching for health-promoting feed additives of natural origin is necessary. This study aimed to evaluate the effects of full-fat H. illucens larvae meal (HI) and astaxanthin (AST) supplementation on the growth performance and health status of weaned pigs. The experiment was carried out on 48 pigs (8.7 kg) divided into six groups: I—control; II—2.5% HI; III—5% HI; IV—2.5% HI and AST; V—5% HI and AST; VI—AST. The experiment lasted from the 35th to 70th day of age, and animals were fed ad libitum. The results obtained indicate that HI meal and astaxanthin had no effect on feed intake and utilization, weight gain, or organ weight. Additionally, blood parameters remained within the norms. It seems that astaxanthin supports the inhibition of oxidative stress, which became apparent in the case of some red blood cell parameters. The 2.5% HI and AST supplementation can reduce the susceptibility of pork fat to oxidation (lower adipose tissue TBARS). However, 5% HI in feed was not beneficial because of the adverse changes in some red cell indices, and it should be combined with the antioxidant AST to improve these indices.
Introduction
One of the major problems generating economic losses in pig farming is the weaning period of piglets [1]. This is a very stressful period of the animal's life, involving separation from the sow, environmental and nutritional changes increasing exposure to pathogens and food antigens [2], and a new group hierarchy. Weaning from the sow disrupts the intestinal integrity of piglets, reduces the digestive capacity of the digestive system, and increases oxidative stress and susceptibility to disease.
Ethical Approval
All procedures included in this study relating to the use of live animals were approved by the First Local Ethics Committee for Experiments with Animals in Cracow, Poland (Resolution No. 420/2020, date 22 July 2020). Throughout the experimental period, the health status of post-weaning pigs was regularly monitored by a veterinarian.
Animals and the Layout of the Experiment
The experiment was conducted on forty-eight 35-day-old post-weaning pigs (barrows) weighing about 8.7 kg (±0.2 kg). The barrows were of the Polish Landrace (PL) breed. The pigs were divided into six groups, with eight pigs in each: group I-control, group II-addition of 2.5% Hermetia illucens (HI) larvae meal, group III-addition of 5% H. illucens larvae meal, group IV-addition of 2.5% H. illucens larvae meal and astaxanthin, group V-addition of 5% H. illucens larvae meal and astaxanthin, group VI-addition of astaxanthin. The Hermetia illucens larvae meal was a full-fat product obtained from commercial sources (HiProMine S.A., Robakowo, Poland). The astaxanthin originated from Haematococcus pluvialis (Podkowa AD 1905 sp. z o.o., Lublin, Poland) and was added in the amount of 0.025 g per 1 kg (25 mg per kg) of feed mixture. All piglets were fed an iso-protein and iso-energetic diet, meeting the requirements according to the Polish standards of pig feeding [23]. The ingredient composition and nutritive value of the diets are shown in Table 1. Basic chemical analyses of feed mixture samples were performed according to standard methods [24]. The experimental fattening lasted 35 days. The pigs were kept in individual pens and received feed and water ad libitum. The animals were individually weighed on the experiment's first and last day. Daily feed intake and conversion, as well as animal weight gain, were calculated. At the end of the experiment, all pigs were slaughtered. The animals were killed with an approved standard method, stunning with a specialized penetrating-bolt device (Blitz, Germany) using 9 × 17 mm cartridges dedicated to slaughtering pigs. Blood was collected in tubes for biochemical and hematological analysis. Intestine sections, kidneys, stomach, liver, and spleen were collected for weighing. Samples of muscle (longissimus m.) and adipose (backfat) tissue were also taken from the area between the last thoracic and first lumbar vertebrae. The dissected intestine sections (duodenum, jejunum, ileum, cecum, and large intestine) were rinsed, weighed, and measured. The pH of the stomach, duodenum, jejunum, ileum, large intestine, and caecum digesta was measured with a HI 99163 pH meter (Hanna Instruments Inc., Woonsocket, RI, USA), with automatic temperature compensation from −5 to 105 °C, equipped with a pH/T FC 232 combination electrode.
Meat and Backfat Sample Collection and Analysis
Samples of meat (longissimus m.) and adipose tissue (backfat) were taken from the area between the last thoracic and the first lumbar vertebrae. Basic chemical analyses (dry matter, crude protein, crude fat, and crude ash) of meat samples were performed according to standard methods [24]. Thiobarbituric acid reactive substances (TBARS) were analyzed in meat and backfat samples after 3 months of storage at −20 °C, using a modified method proposed by Pikul et al. (1989) [26]. In brief, 10 g of shredded sample was homogenized with 50 mL of 4% perchloric acid with butylated hydroxytoluene. After filtration, 5 mL of the filtrate was mixed with 5 mL of 2-thiobarbituric acid (0.02 M). The solution was heated in a test tube for 1 h, in a boiling water bath, and then cooled under running water for 10 min. The measurement was carried out at 532 nm against a calibration curve containing a blank sample.
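As a rough illustration of how the 532-nm absorbance readings are converted into TBARS values, the sketch below applies a linear calibration curve and scales by extract volume and sample mass; the slope, intercept, and example absorbance are hypothetical, since the published method relies on its own standard curve.

```python
# Hedged sketch of converting a 532-nm absorbance reading into a TBARS value
# (mg malondialdehyde equivalents per kg sample) via a linear calibration
# curve. Calibration parameters and the example reading are hypothetical.

def tbars_mg_per_kg(absorbance_532: float,
                    slope: float,
                    intercept: float,
                    extract_volume_ml: float,
                    sample_mass_g: float) -> float:
    """Convert absorbance to mg MDA equivalents per kg of sample."""
    mda_ug_per_ml = (absorbance_532 - intercept) / slope  # from calibration curve
    total_mda_ug = mda_ug_per_ml * extract_volume_ml       # total MDA in the extract
    return total_mda_ug / sample_mass_g                    # ug/g equals mg/kg

# Hypothetical example: 10 g sample homogenized in 50 mL, A532 = 0.25.
value = tbars_mg_per_kg(absorbance_532=0.25, slope=0.45, intercept=0.01,
                        extract_volume_ml=50, sample_mass_g=10)
print(f"TBARS ≈ {value:.2f} mg MDA equivalents per kg")
```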
Statistical Analysis
Data were analyzed by two-way ANOVA using the Statistica® ver. 13.3 software package (StatSoft Inc., Tulsa, OK, USA) [27]. The model included two main factors, (1) the Hermetia illucens larvae meal share (2.5% vs. 5.0%) and (2) the presence of astaxanthin in the feed mixture, and their interaction. Each individual piglet served as an experimental unit (n = 8 per group). Before the data analysis, the normality of the data was tested using the Shapiro-Wilk test and histograms were evaluated. Duncan's test was used to compare differences between averages when the difference was found to be significant (p < 0.05).
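A minimal sketch of an equivalent factorial analysis in Python (rather than Statistica) is given below, using simulated data with the same 6-group, 8-piglets-per-group layout; because Duncan's test is not available in the standard Python statistics stack, Tukey's HSD is used here as a stand-in post-hoc comparison.

```python
# Hedged sketch of the factorial analysis described above: HI meal share and
# astaxanthin presence as factors, plus their interaction. Data are simulated.
import numpy as np
import pandas as pd
from scipy.stats import shapiro
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
rows = []
for hi in ["0", "2.5", "5"]:            # H. illucens meal share (%)
    for ast in ["no", "yes"]:           # astaxanthin presence
        for _ in range(8):              # 8 piglets per group, 48 in total
            rows.append({"hi": hi, "ast": ast,
                         "adg_g": rng.normal(430, 30)})  # simulated average daily gain
df = pd.DataFrame(rows)

# Normality check (Shapiro-Wilk), as in the described workflow.
print("Shapiro-Wilk p =", shapiro(df["adg_g"]).pvalue)

# Two-way ANOVA with interaction.
model = smf.ols("adg_g ~ C(hi) * C(ast)", data=df).fit()
print(anova_lm(model, typ=2))

# Post-hoc pairwise comparison across the six groups (Tukey HSD as a stand-in
# for Duncan's test).
print(pairwise_tukeyhsd(df["adg_g"], df["hi"] + "/" + df["ast"]))
```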
Growth Performance
All animals were healthy during the experiment and showed no signs of disease. Indicators of weight gain, feed conversion ratio, average daily gain, feed intake, and parameters collected during dissection are shown in Tables 2 and 3. There were no statistically significant differences between the groups.
Blood Indices
The effects of insect meal from Hermetia illucens larvae administered at different doses and astaxanthin on the biochemical blood indices, as well as the interaction between these factors, are shown in Table 4. The lipid profile was not affected by the HI meal, except for HDL (p = 0.03) and LDH content (p < 0.01), nor by the astaxanthin supplementation in feed. Analyzing the hepatic/pancreatic, renal, and osteological profiles, some varied effects of the experimental nutritional factors were observed. HI meal lowered the GLU content (p < 0.05) when added at 5% in the feed, while the astaxanthin supplementation increased the GLU and ALP contents. However, in the case of ALP as well as ALB content, the interaction was statistically significant: these parameters were higher when astaxanthin was added to the feed mixture together with the HI meal. The 2.5% HI meal supplementation in feed increased the phosphorus (P) level (p < 0.01) and decreased the CREA level (p = 0.02) in piglets' blood, while 5% HI meal supplementation lowered the Ca level (p < 0.01). The Mg content in the blood was not affected by the HI meal addition in feed. The astaxanthin increased CREA, Ca, and Mg levels (p < 0.01). The interaction (p < 0.01) between both nutritional factors was noticed in the TP amount in the blood, which was the lowest in piglets receiving a feed mixture containing HI meal without astaxanthin.
The results of the hematological analysis of piglet blood are shown in Table 5. Astaxanthin supplementation did not affect white blood cell counts, while the 5% HI meal increased LYM counts (p = 0.04). Significant interactions indicate that MON and GRA were affected only when both dietary factors were used together, and the highest amount of MON and GRA was observed in piglets fed a mixture containing 5% HI meal along with AST (p = 0.01 and 0.02, respectively). Both HI meal and AST affected red blood cell parameters (p < 0.05), but the interaction was significant for HCT and MCV only. The lowest values of these parameters were read for the groups fed 5% HI meal supplementation (p < 0.01; p = 0.02). Analyzing the main factors, a significant increase in the level of RDWC after the addition of AST and 5% HI was noticeable (p < 0.01). The number of RBCs increased after the addition of AST (p < 0.01) but was not affected by HI meal in the diet. The Fe level was lower in the blood of piglets fed with HI meal (p = 0.01) but was about 30% higher after the addition of astaxanthin (p < 0.01). HGB level decreased after supplementation with AST (p < 0.01) and 5% HI meal (p = 0.02). Both AST and 5% HI meal decreased MCH (p < 0.01). In the case of platelet parameters, the only effect was observed in PDW when 2.5% HI meal was used in the feed mixture, which significantly reduced this value (p < 0.01).
Meat and Backfat Analysis
The effects of astaxanthin and H. illucens larvae meal on the basic chemical composition of meat are shown in Table 6. The highest dry matter of meat was determined in piglets treated with 2.5% HI meal or 2.5% HI meal together with AST (interaction p = 0.02). The lowest percentage of ash in meat (calculated in dry matter) was determined in the group treated with 2.5% HI meal (p < 0.01) and in groups not treated with AST (p = 0.03). The protein and fat content in meat (calculated in dry matter) were not affected by HI meal or AST supplementation in feed. The results of measurements of oxidative stability of meat and adipose tissue from pigs fed with a mixture containing Hermetia illucens meal or astaxanthin are also presented in Table 6. Both HI meal and AST significantly decreased the TBARS in adipose tissue (backfat) after 3 months of frozen storage (p < 0.01), and the interaction between these factors was also significant (p < 0.01). In comparison to the control group, the 2.5% HI concentration was more effective than the 5% HI concentration (TBARS decreased by 80% vs. 69%), and the AST was more effective alone or together with 2.5% HI added to the feed mixture (TBARS decreased by about 77%). However, in the case of meat, the HI meal supplementation did not influence the TBARS value, while the AST supplementation increased this parameter (p < 0.05).
Growth Performance
The inclusion of H. illucens larvae meal in the diet did not adversely affect the growth performance of the piglets involved in this study, and no effect of HI meal was observed on the weight of organs and digestive tract sections of piglets (calculated as % of body weight). In contrast, in the experiment of Yu et al. (2020) [28], piglets fed with a mixture containing 0%, 1%, 2%, or 4% of HI meal showed a linear increase in the pancreas and small intestine in response to this diet supplementation. No negative effects on feed intake, feed conversion ratio, or average daily gain were observed. The fact that the presence of HI meal in the feed did not impair the feed intake of the piglets is a favorable result and confirms that insect-originated feed is palatable to these animals. The interest of piglets and their willingness to eat black fly larvae have also been observed by other authors [29]. Conclusions similar to ours were reached by Biasato et al. (2019) [30], who carried out an experiment on weaned piglets fed defatted H. illucens larvae meal. The HI larvae meal was included in increasing amounts (0%, 5%, or 10%) in diets formulated for two feeding phases: I (from day 1 to 23) and II (from day 24 to 61). No significant differences in growth performance were observed, except for average daily feed intake in phase II, which showed a linear response to increasing levels of HI meal. Additionally, no effect was observed on the growth performance of weaned piglets fed diets containing up to 8% full-fat HI meal for 15 days [31]. No differences in piglets' performance were found also by Driemeyer (2016) [32] when fish meal was partially replaced by HI meal. The researcher fed piglets (10 to 28 days of age) on a four-week phase feeding schedule with a diet containing 3.5% HI meal. There were no significant differences obtained for feed intake and average daily gains of the animals. In contrast, in the study by Chia et al. (2021) [33], an effect of H. illucens meal on increased daily weight gain was observed. Carcass weights of pigs fed diets with HI meal as a replacement for a fish meal at 50%, 75%, or 100% (w/w) were higher than those of pigs fed a control diet. In the groups receiving 50% and 100% insect meal in place of fish meal, final body weight was significantly higher than in the control and 25% insect meal-treated groups. In our experiment, no significant differences in final body weight were observed among groups, and no significant differences in feed conversion ratio (FCR) were shown. In contrast, in the experiment with 50%, 75%, or 100% insect meal, FCR was significantly lower than in the control and 25% insect meal groups [33]. In another study [28], crossbred pigs weighing approximately 76.0 kg were assigned to three groups in which they received increasing levels of H. illucens meal (0%, 4%, or 8%). The results showed that the 4% HI diet significantly increased the final body weight and average daily weight gain of the pigs and decreased the feed to gain ratio compared to the 0% and 8% HI diets. There were no differences in average daily feed intake among all three groups. One study [34] was conducted for 40 days to investigate the effect of increasing levels of HI larvae oil supplementation on the growth performance of newly weaned pigs (at 21 days of age) reared in a three-phase feeding program. 
It was found that supplementation with 0%, 2%, 4%, or 6% of insect oil linearly increased (p < 0.05) body weight on days 14, 21, 25, 33, and 40, but did not affect the feed intake throughout the whole experiment. However, daily weight gains and feed conversion ratios were linearly improved only in the first rearing period from 0 to 14 days of the experiment. When the weaned piglets received a feed mixture containing 5%, 10%, or 20% of HI meal [35], no significant linear effect was observed in weight gain and feed efficiency. Looking at the nutritional factor, which was an insect product from Hermetia illucens, it is conceivable that the variety of results observed in the studies cited above may be due to both the type of product (meal, oil) and the period in which the pigs were included the experiment. This statement is consistent with the observation of a linear improvement in both ADG and FCR when the supplement of HI meal in feed increased from 0%, 1%, 2%, to 4% in the two first weeks post-weaning, whereas no differences were found for a four-week feeding period [36].
No significant effect of astaxanthin supplementation at 25 mg per kg of feed on the growth performance of weaned piglets was observed in the present experiment. Similarly [37], the addition of astaxanthin to the pigs' diet (1.5 or 3 mg per kg of feed) did not affect the average daily gain, average daily feed intake, or feed conversion ratio. When analyzing the nutritional factor astaxanthin, it is important to keep in mind the small number of papers describing the effect of AST supplementation on production performance in pigs. Therefore, the discussion must be expanded to include other monogastric species. Ao and Kim (2019) [38] experimented on Peking ducks that were fed astaxanthin originating from Phaffia rhodozyma. A total of 1440 female 1-day-old Peking ducks (approximately 52 g) were divided into three groups: control group-0 mg AST/kg diet, group I-3458 mg AST/kg diet, and group II-6915 mg AST/kg diet. It was found that on days 22 to 42, the inclusion of AST increased weight gain and decreased the feed to gain ratio. Throughout the experiment, weight gain and final body weight were greater in the AST treatment compared to the control group. AST supplementation in the amount of 25 mg per 1 kg of feed, as in the present experiment, did not affect organ weights. In an experiment by Jeong and Kim (2014) [39], 1-day-old male chicken broilers were used to test the effect of AST originating from P. rhodozyma on animal rearing rates.
The birds received a supplement of 0, 2.3, or 4.6 mg AST/kg feed. The inclusion of AST improved weight gain at finishing and throughout the experimental period and reduced the feed conversion ratio at finishing. Thus, it was suggested that AST supplementation may improve weight gain and reduce the feed conversion ratio. Lei and Kim (2014) [40] evaluated the effects of AST derived from Phaffia rhodozyma on the performance and nutrient digestibility of finishing pigs. For this purpose, crossbred pigs (initial body weight of about 58 kg) were treated with 0%, 0.1%, or 0.2% supplementation of P. rhodozyma, in which AST content was 2.305 mg/kg after fermentation and freeze-drying. The results showed that the addition of P. rhodozyma improved feed efficiency and dry matter digestibility. Evaluating the effect of increasing dietary astaxanthin (0, 5, 10, or 20 mg/kg) on late-finishing pig performance [41], it was found that the growth performance of pigs fed the astaxanthin did not differ from pigs fed a control diet. In our study, astaxanthin was derived from Haematococcus pluvialis, which could explain the lack of significant changes between groups compared to work where the source of AST originated from Phaffia yeast. However, as shown in studies [42,43], a diet with 133 or 266 mg/kg of Haematococcus pluvialis algae caused faster weight gain and significantly higher breast muscle mass, and higher feed efficiency in broiler chickens. Perhaps the AST dose used in this study was too low to be effective in the productivity indicators.
Blood Indices
Although statistically significant differences were observed between groups, the hematological and biochemical blood indices were within the physiological norms [44], indicating that the use of HI insect meal and astaxanthin did not adversely affect the health status of the weaned piglets. When studying the interaction between H. illucens meal and astaxanthin on hematological blood indices, attention should be paid to the effects of these factors both together and separately, as the multi-component nature of insect meal and the specific antioxidant and anti-inflammatory properties of astaxanthin will complement or exclude each other. In the groups where lymphocytes levels were higher than in the other groups, the pigs showed no signs of disease and the rearing parameters remained within normal limits. Similarly [30], it was found that the inclusion of H. illucens meal in the diet did not significantly affect the blood and serum indices in pigs, but there was an increase in the number of monocytes and neutrophils as the level of this additive increased. Unexpected in our study was the reduction in hemoglobin level in pigs treated with 5% HI larvae meal. Similarly, in the case of serum iron concentration, the addition of HI meal at both levels resulted in a decrease in this parameter. From a physiological point of view, this is detrimental to the body, as the lower the hemoglobin concentration, the worse the circulation of oxygen in the body, and thus the worse the performance of the animal [45]. The lower serum iron levels in the groups with HI larvae meal only were reflected in the red blood cell distribution width (RDWC; p < 0.01) and mean corpuscular hemoglobin (MCH; p < 0.01). These results contrast with those [45] that showed that replacing 25%, 50%, 75%, or 100% of fish meal with HI meal did not worsen hematological blood parameters, and RBC, HGB, HCT, and RDW were even higher (however, p > 0.05) in groups supplemented with HI meal when compared to the control group. In their experiments, HI meal supplementation significantly decreased the platelets counts, while in the present experiment, this parameter was not affected. The lipid fraction of Hermetia illucens larvae contains lauric acid in the amount of about 38.43% by weight [46]. It belongs to the saturated fatty acids that exacerbate dyslipidemia, and it is lauric acid that raises circulating cholesterol levels contributing to cardiovascular disease [47]. In our experiment, the supplementation of feed with 2.5 or 5% HI meal (36.5 g of lauric acid per 100 g of all estimated acids) did not influence the cholesterol content in the blood. In contrast, in the experiment by van Heugten et al. (2022) [34] where the HI larvae oil (36.5-37.3 g of lauric acid/100 g) was used in the amount of 2%, 4%, or 6% in the feed, the increase in total cholesterol level (by about 17% compared to control group) was the only significant effect observed in piglets' biochemical blood indices. These authors, however, did not notice any effect of lauric acid present in HI oil on the hematological parameters.
One mechanism of cardiovascular disease is eryptosis. Some studies have confirmed that lauric acid stimulates eryptosis in human red blood cells [47]. In addition, a mechanism that affects eryptosis is oxidative stress [48], and this stress, according to the above study, is triggered by lauric acid [47]. Hence, it can be assumed that in the present experiment, exposure to lauric acid, in the form of supplementation of H. illucens meal, resulted in a decrease in the level of selected red cell parameters. Analyzing further results, a beneficial effect of astaxanthin on these parameters (RBC, Fe, HCT, RDWC) is noticeable. Thus, it can be thought that astaxanthin partially prevents the excessive oxidative stress contributing to eryptosis. The beneficial effect on limiting oxidative stress was confirmed in studies [49] on broiler chickens receiving from 20 to 80 mg/kg of AST, in which increased catalase and superoxide dismutase levels were observed in plasma. Biochemical blood indices were studied by Yu et al. (2020) [36] on weaned piglets receiving 0%, 1%, 2%, or 4% HI meal in feed. These authors observed that 2% HI meal increased total protein, IL-10, and IgA while decreasing urea and triglyceride concentration. In the present experiment, the concentration of these biochemical indices was not affected by the HI meal supplementation in feed.
Meat and Backfat Analysis
In the conducted experiment, a significantly higher TBARS value for meat (longissimus m.) was noted in the groups receiving astaxanthin, and no effect of HI meal was noticed after storage at −20 °C for 3 months. On the other hand, the astaxanthin added to the feed mixture significantly decreased the TBARS value in adipose tissue (backfat) stored in the same conditions. A significant interaction between the experimental factors was also noted: the highest TBARS value for backfat was in the control group, while the most effective combination of dietary supplements for lowering the TBARS was 2.5% HI meal together with the astaxanthin. The efficiency of these supplements in improving the shelf life of pork fat was about 80% (2.5% HI meal group) and 77% (AST group and AST + 2.5% HI meal group) when compared to the control group. TBARS, expressed as malondialdehyde, is a valuable index of lipid peroxidation and oxidative susceptibility. It reflects the degree of oxidation: the higher the TBARS value, the more intensive the lipid oxidation. The beneficial effect of astaxanthin was observed in another study [50], in which longissimus m. chops originating from astaxanthin-supplemented pigs had TBARS values more than 60% lower than chops from control pigs after 7 days of retail exposure. Improvement in meat quality was also noticed [49] in broiler chickens fed with 20, 40, or 80 mg/kg of AST, which improved the antioxidant status of breast meat, reduced malondialdehyde levels, and increased the redness and yellowness of the meat. These results suggest a beneficial effect of AST against lipid oxidation. The results are consistent with the antioxidant activity of AST, which helps protect membrane phospholipids and other lipids from peroxidation [51]. However, some studies [37] did not confirm any significant effects of 1.5 or 3 mg of AST supplementation in the feed for fatteners on the meat TBARS value, drip loss, meat color, or marbling values. This additive was fed to pigs for 14 days only, which could be too short a period to significantly affect meat quality and oxidative stability.
In the present experiment, there was a significantly lower percentage of crude ash in the meat of pigs treated with HI larvae meal. A similar result was obtained in another study where the concentration of ash in breast muscles in broiler chickens (Pectoralis major) decreased linearly as the proportion of HI larvae meal in the diet increased [52]. The authors attribute this result to the use of full-fat HI larvae meal, which was also used in our experiment.
Conclusions
The results of the present study indicate that the inclusion of full-fat meal from H. illucens larvae and astaxanthin did not adversely affect feed intake and utilization, daily weight gains, or organ weights in weaned piglets. Both factors, separately and in interaction, had no negative effect on biochemical and hematological blood parameters, which remained within the norms. It seems that astaxanthin, supplemented even in small amounts, supports the inhibition of oxidative stress, which became apparent in the case of some red blood cell parameters. The 2.5% full-fat H. illucens larvae meal and astaxanthin, used in the feed mixture separately or together, can reduce the susceptibility of pork fat to the oxidation process and improve its shelf life. It is suggested that the higher concentration of H. illucens meal (5%) should not be used, as the presence of lauric acid can cause adverse changes in some of the red cell indices. However, using the HI meal along with the antioxidant astaxanthin improves these indices.

Institutional Review Board Statement: All procedures included in this study relating to the use of live animals were approved by the First Local Ethics Committee for Experiments with Animals in Cracow (Protocol code: 420/2020, date: 22 July 2020).
Clinical Application of Virtual Reality for Upper Limb Motor Rehabilitation in Stroke: Review of Technologies and Clinical Evidence
Neurorehabilitation for stroke is important for upper limb motor recovery. Conventional rehabilitation such as occupational therapy has been used, but novel technologies are expected to open new opportunities for better recovery. Virtual reality (VR) is a technology with a set of informatics that provides interactive environments to patients. VR can enhance neuroplasticity and recovery after a stroke by providing more intensive, repetitive, and engaging training due to several advantages, including: (1) tasks with various difficulty levels for rehabilitation, (2) augmented real-time feedback, (3) more immersive and engaging experiences, (4) more standardized rehabilitation, and (5) safe simulation of real-world activities of daily living. In this comprehensive narrative review of the application of VR in motor rehabilitation after stroke, mainly for the upper limbs, we cover: (1) the technologies used in VR rehabilitation, including sensors; (2) the clinical application of and evidence for VR in stroke rehabilitation; and (3) considerations for VR application in stroke rehabilitation. Meta-analyses for upper limb VR rehabilitation after stroke were identified by an online search of Ovid-MEDLINE, Ovid-EMBASE, the Cochrane Library, and KoreaMed. We expect that this review will provide insights into successful clinical applications or trials of VR for motor rehabilitation after stroke.
Introduction
Stroke is one of the leading causes of disability and socioeconomic burden worldwide [1]. Although the age-standardized stroke incidence has decreased in most regions, the growth of aging populations, who are at risk of stroke, may lead to an increase in the crude incidence of stroke [2]. According to a policy statement by an American Heart Association working group, approximately 4% of US adults will have a stroke by 2030 [3]. Stroke-related mortality has shown a remarkable decline due to better management in the acute phase, which means there are more people living with disabilities after stroke [1,3]. In this comprehensive narrative review of the application of VR in motor rehabilitation after stroke, we will cover (1) the technologies used in VR rehabilitation including sensors, haptic devices, and VR displays; (2) the clinical application and evidence for VR in motor rehabilitation in stroke; and (3) considerations for VR application in stroke rehabilitation. We expect that this review will provide insights into successful clinical applications or trials of VR for motor rehabilitation after stroke.
Definition of VR
VR technology can give users the experience of being surrounded by a computer-generated world. With VR, users experience inclusive and extensive surroundings, with vivid illusions of a virtual computer generated environment in which both realistic and unrealistic events can occur. So, users can interact as though they are in a real environment and may not even recognize that they are existing in a virtual environment [19]. Therefore, in VR, participants can be fully immersed in the surrounding virtual environment and interact naturally with virtual objects in the virtual world [20]. Because VR content responds to a user's movements in a natural and valid manner, such as showing the corresponding scene on the display when the user looks at it, the interaction evokes a feeling of existing in a virtual environment, which is referred to as "presence." Moreover, control of the avatar's body movements by those of the user can even induce a feeling of ownership in which the user regards the avatar's body parts as surrogates of their own, a phenomenon called "virtual embodiment" [21]. Based on these factors, users are fully immersed, which allows them to experience where they are and what they do there in a way that is similar to real/lived experience.
Non-Immersive and Immersive VR
Non-immersive VR allows users to experience a virtual environment as observers and interact with the virtual environment by using devices that cannot fully overwhelm sensory perceptions [22], which results in a lesser feeling of immersion in the virtual world. Non-immersive VR systems are mainly characterized by users' ability to control their surroundings while perceiving stimuli around them, such as sounds, visuals, and haptics. Non-immersive VR systems are primarily based on a computer or video game console, flat screen, or monitor and input devices such as keyboards, mice, and controllers. Non-immersive VR systems can also use other physical input devices, such as racing wheels, pedals, and speed shifters, to augment users' realistic experiences. Using various input devices, users can interact with VR content on a display. To enhance the level of immersion, some non-immersive VR systems provide a first-person view for users to associate themselves with their virtual avatar. To allow users to perceive objects as 3D, stereoscopic vision technology can be used: the user wears special goggles, and each eye receives the same scene from a slightly different angle, which allows the user to perceive the third dimension from a 2D monitor or screen [23].
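A minimal sketch of the stereoscopic idea is shown below: the same scene would be rendered from two viewpoints separated by an assumed interpupillary distance, one image per eye. The values and the absence of an actual rendering call are simplifications; real systems rely on the graphics or VR runtime's own stereo projection.

```python
# Minimal sketch of stereo viewpoints: two camera positions offset by the
# interpupillary distance (IPD). The IPD value and head pose are assumptions.
import numpy as np

def eye_positions(head_position: np.ndarray, right_axis: np.ndarray, ipd_m: float = 0.063):
    """Return (left_eye, right_eye) positions offset along the head's right axis."""
    half = 0.5 * ipd_m * right_axis / np.linalg.norm(right_axis)
    return head_position - half, head_position + half

head = np.array([0.0, 1.6, 0.0])   # viewer at the origin, eyes at 1.6 m height
right = np.array([1.0, 0.0, 0.0])  # head's right direction
left_eye, right_eye = eye_positions(head, right)
print("left eye:", left_eye, "right eye:", right_eye)
# Each eye position would feed its own view matrix, producing the slightly
# different angles that the brain fuses into a 3D percept.
```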
Immersive VR, on the other hand, improves the feeling of presence, enabling people to feel more like they are actually in the virtual environment, which means that users are more likely to interact with the stimuli given by the computer and related devices providing visual, auditory, and haptic sensations. The main goal of immersive VR is to make it possible for users to experience the illusion of being in the computer-generated environment rather than the real-world environment. By wearing a head-mounted display (HMD), tracking devices, haptic devices, and data gloves and by using wireless controllers, users can be placed in virtual environments and interact with a computer-generated world. However, the real world has a greater variety of senses including smell, taste, the feeling of warm and cold, etc., which may increase the gap between the virtual and real worlds. These could be further covered by complete immersive VR, but the need for a sophisticated artificial stimulator to provide variable sensations may require more space and have a higher cost. HMD-based immersive VR could also be enriched using physical objects or devices placed in the physical space by tracking their positions precisely in relation to where the user stands. By using this paradigm, the user could perceive the texture or temperature of objects without any awkwardness when touching it because the physical object is tracked to be placed at the same position as that in the virtual space; thus, the user touches the physical object when they touch the virtual object. Another issue to overcome is that the user must be placed in a limited space; therefore, their walking area is constrained. Using a VR treadmill allows users to physically walk or run toward any place in a virtual environment by solving two problems: realistic synchronized simulation of the user's walk and no requirement for a large space. The Cave Automatic Virtual Environment (CAVE) has been introduced as another way to provide visual information for immersive VR, instead of using an HMD [24]. CAVE uses six large walls on which scenes are displayed so that the participant can be placed in the CAVE and experience the surrounding virtual environment with a large field of view.
Immersive and non-immersive environments can be better differentiated by their level of immersion. Immersive VR strengthens the level of immersion because less mental effort is required to be immersed in the virtual environment since the hardware systems cover most sensory perceptions. In contrast, non-immersive VR requires more mental effort to be immersed in the virtual environment. Therefore, non-immersive VR may reduce the level of spatial presence, which is defined as "the sense of being in an environment" [25,26].
Technologies for Motion Tracking and Feedback for Virtual Rehabilitation
Virtual rehabilitation is a method of rehabilitation via gamification through synchronization with software content or by providing a motion guide. Various studies have been conducted to investigate the application of VR for upper limb rehabilitation (Table 1).
For motor rehabilitation of limbs, the patient's body part must be captured by motion tracking sensors and synchronously transferred to an object in VR. Sensors to track the patient's motion are mandatory for movement visualization and can be selected from a mouse and joystick, depth-sensing cameras, electromagnetic sensors, inertial sensors, bending sensors, data gloves, and so on. The sensor performance is important to precisely track the motion, but the subjective perception and preferences are also important factors to be considered, in addition to cost [27].
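A minimal sketch of this capture-and-transfer loop is given below, assuming a hypothetical sensor call that returns a tracked wrist position; the position drives a virtual object, and the distance to a reach target provides a simple feedback cue. Real systems would read from a depth camera, inertial sensor, data glove, or robot SDK instead of the placeholder function.

```python
# Hedged sketch of the capture-and-transfer loop described above. The sensor
# interface is a placeholder, not a real device SDK call.
import math
import random
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def read_wrist_position() -> Vec3:
    """Placeholder for a sensor call returning the tracked wrist position (m)."""
    return Vec3(random.uniform(-0.3, 0.3), 1.0 + random.uniform(-0.1, 0.1), 0.4)

def distance(a: Vec3, b: Vec3) -> float:
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

target = Vec3(0.2, 1.1, 0.4)             # reach target placed in the virtual scene
for frame in range(5):                    # a few frames of the update loop
    wrist = read_wrist_position()         # 1) capture the patient's movement
    virtual_hand = wrist                  # 2) transfer it to the virtual object
    error_m = distance(virtual_hand, target)
    print(f"frame {frame}: distance to target = {error_m:.2f} m")  # 3) feedback cue
```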
A sensor technology that recognizes motion is essential for virtual rehabilitation. Such technologies are divided into wearable and nonwearable devices that recognize upper limb rehabilitation motions. Nonwearable devices are further divided into those using a vision sensor and those using a robot-based controller or a controller with three degrees of freedom (DOF) either alone or in combination. Wearable devices are usually divided into those using data gloves and those using an exoskeleton. Some studies have used both types together. Sensing using cameras in nonwearable devices has recently changed from tracking markers or color patches using webcams to tracking body or hand signals through depth sensing methods. In this way, the users' movements are sensed within a limited space without obstacles. With wearable devices, the sensor is attached to collect high frequency data and force or torque can be tracked as well as position and movement.
Most studies have primarily used visual and auditory feedback through content and some studies have applied tactile and force feedback (which are haptic feedback). We divided the studies into visuomotor and visuohaptic feedback. Visuomotor feedback provides visual information by applying measured movements through sensors to content in real time. Visuohaptic feedback refers to providing haptic feedback with visual information. Haptic feedback could be divided into tactile or force feedback, depending on whether resistance is present. Tactile feedback provides feedback to users through the sense of touch using vibration, skin deformation, or small forces. Force feedback, or kinesthetic force feedback, simulates real-world physical touch using motorized motion or resistance rather than by fine touch [28]. Research on virtual rehabilitation can be categorized according to the use of fine motor tracking during upper limb rehabilitation, which is distinguished by the use of wearable or nonwearable devices. In the case of camera methods among nonwearable devices, hand tracking is possible with a HMD for VR, and this has been released as a commercial product (e.g., Oculus Quest, Facebook Technologies, LLC, Menlo Park, CA, USA). The sensor types used in previous studies are summarized in Figure 2. In studies using only visual feedback, auditory feedback could possibly be used. Abbreviations: I, immersive; NI, non-immersive; SI, semi-immersive; T, tactile; F, force; V, visual; A, auditory; EMG, electromyography; DOF, degrees of freedom; VR: virtual reality.
Examples of Commercialized VR Upper Limb Rehabilitation Systems
Commercialized virtual rehabilitation devices that can provide gamification through content or rehabilitation guides are similar to those used in VR rehabilitation research, but they are simplified and more focused on ease of use. Sensing methods are divided into wearable and nonwearable methods using cameras, joysticks, and robots ( Table 2).
Literature Search
Studies for upper limb VR rehabilitation after stroke were identified by an online search of Ovid-MEDLINE, Ovid-EMBASE, the Cochrane Library, and KoreaMed on 18 June 2020. The search queries are presented in Supplementary Table S1. Titles and abstracts were reviewed for screening by Y.K. and non-English papers, animal studies, commentaries, case series, narratives, book chapters, editorials, nonsystematic reviews, and conference papers were excluded. Duplicated publications between databases were also excluded. A total of 339 studies were included for the full text review and Y.K. and W.S.K. selected systematic reviews and meta-analyses for review. Six meta-analyses were included for our evidence summary [63][64][65][66][67][68].
Clinical Evidence
The general characteristics of the included meta-analyses are presented in Table 3. Two studies included randomized controlled trials (RCTs) and quasi-randomized controlled trials [64,68], and three other studies only included RCTs [63,65,67]. Karamians et al. included RCTs and prospective studies [66]. The number of studies and participants included in each meta-analysis ranged from 21 to 72 and 562 to 2470, respectively. The interventions were VR rehabilitation and the controls were either conventional therapy (dose-matched or not) or no intervention (Table 4). In the selected meta-analyses, VR rehabilitation included rehabilitation using both custom-built virtual environments and commercial video gaming consoles (e.g., Nintendo Wii or Xbox Kinect). Outcomes were usually the composite outcomes of upper limb function or activities; Karamians et al. only included studies using one of the following outcome measures: Fugl-Meyer Assessment (FMA), Wolf Motor Function Test (WMFT), and Action Research Arm Test (ARAT) [66]. Mekbib et al. only included studies using one of the three following outcomes: FMA, Box and Block Test (BBT), and Motor Activity Log (MAL) [65]. Methodological quality of the included meta-analyses was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR 2) instrument by two authors (S.C. and W.S.K.) and was categorized as high, moderate, low, or critically low [69]. Any disagreements were resolved through discussion to reach consensus. Most of the included meta-analyses showed moderate to high methodological quality (Table 4).
In one high-quality meta-analysis from 2014, VR rehabilitation showed better improvements in body function (standardized mean difference (SMD) = 0.43, 95% confidence interval (CI) = 0.22 to 0.64) and activities (SMD = 0.54, 95% CI = 0.28 to 0.81) when compared to conventional therapy [64]. However, rehabilitation using commercially available gaming consoles failed to show a significant beneficial effect, owing to the small number of studies (Table 4). In a recent Cochrane systematic review with high methodological quality, VR rehabilitation was not superior to conventional therapy for the composite outcome of upper limb function (primary outcome), but upper limb function measured by FMA was significantly improved with VR rehabilitation (SMD = 2.85, 95% CI = 1.06 to 4.65) [68]. When VR rehabilitation was applied in addition to conventional therapy, it showed significant beneficial effects on the composite outcome of upper limb function (SMD = 0.49, 95% CI = 0.21 to 0.77). Two meta-analyses by Aminov et al. [63] and Lee et al. [67] also showed similar moderate effect sizes for upper limb function in VR rehabilitation (Table 4). Mekbib et al. [65] only included RCTs using dose-matched conventional therapy and calculated the mean differences of FMA, BBT, and MAL, which all represent upper limb function. Although VR showed better improvements in all outcomes when compared to conventional therapy, the improvements were smaller than the minimal clinically important difference [70,71].
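The pooled effects above are reported as standardized mean differences with 95% confidence intervals. As a rough illustration of how such a value is obtained from two groups' summary statistics, the sketch below computes Hedges' g; the group means, standard deviations and sample sizes are hypothetical and are not taken from any of the cited trials.

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference (Hedges' g) with an approximate 95% CI."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))     # small-sample correction factor
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical post-treatment FMA scores: VR group vs. dose-matched conventional therapy
g, ci = hedges_g(m1=42.0, s1=9.0, n1=30, m2=38.5, s2=9.5, n2=30)
print(f"SMD = {g:.2f}, 95% CI = {ci[0]:.2f} to {ci[1]:.2f}")
```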
HMDs and Motion Sickness
HMDs give users a more immersive experience in a 3D artificial world and allow interaction with virtual objects using motion tracking sensors. Considering the therapy time and active motion during the rehabilitation, the HMD must be light, comfortable to wear, positioned stably on the head, and cool enough during operation (HMDs typically generate heat). The HMD may also benefit from being wireless (with enough battery life). Although VR rehabilitation can induce eye strain or physical fatigue during extended therapy, the most common issue to overcome is motion sickness. Motion sickness can be elicited when there is a lag in processing the visual response to user input interactions, resulting in conflicting signals to the brain from the eyes, vestibular systems in the inner ear, and proprioceptive sensory receptors (sensory conflict theory) [72]. Motion sickness can be affected by the system (e.g., head tracking, rendering, field of view, optics); application and user interaction (lack of control over visual motion, visual acceleration or deceleration, longer duration of VR experience, frequent head movement during VR play); and individual perceptual factors (age, motion sickness history, lack of VR experience). The following approaches can be employed to reduce motion sickness when designing VR rehabilitation programs [73]: "(1) to make patients actively control their view points and be responsible for initiating movement, (2) to avoid or limit linear or angular accelerations or decelerations without corresponding vestibular stimulation, (3) to display visual indicators or motion trajectories, (4) to display visual cues that remain stable as the patient moves, and (5) to perform dynamic blurring of unimportant areas."
Differences in Movements in VR
The movement kinematics of the upper limb in patients with stroke differ between VR and real environments. Viau et al. reported that patients with hemiparesis used less wrist extension and more elbow extension at the end of the placing phase during reaching, grasping, and performing tasks in VR than in a real environment [74]. Similarly, several studies using reaching tasks also demonstrated that the movements in VR using HMDs were slower than those in the real environment and that spatial and temporal kinematics differ between VR and real environments [75][76][77]. Lott et al. reported that the range of the center of pressure during reaching in standing (usually used for balance training) was different between real environments, non-immersive VR with 2D flat-screen displays, and immersive VR with HMDs [78]. Considering the rehabilitation purpose of improving independence in real-world living, these different movement kinematics can affect the transfer of learning in VR to real environments and therefore must be considered when designing a VR-based rehabilitation program.
Transfer of Learning in VR to the Real World
The transfer of improved function after rehabilitation to the performance of activities of daily living is important in upper limb rehabilitation after stroke. Constraint-induced movement therapy (CIMT) comprises repetitive tasks/shaping practice with constraint of the hemiparetic upper limb, emphasizing the transfer package to foster compliance and use of the hemiparetic upper limb in the real world as a key component to improve function following CIMT [79]. Therefore, the transfer of novel rehabilitation therapeutic approaches based on repetitive movements to the real environment, such as robot-assisted arm rehabilitation [80] and VR-based rehabilitation [81], is an important issue to be discussed.
The transfer of learning effects in VR to real environments remains inconclusive. Rose et al. showed that the effect of simple sensorimotor task training is comparable between VR and real environments [82]. However, several recent studies have shown that training in VR did not translate to better performance in the real environment [83][84][85][86]. In the virtual BBT simulated using a 2D flat screen and depth-sensing camera, the number of boxes moved in VR showed good correlation (a high correlation coefficient) with that in the real BBT, but the actual number of boxes moved was much lower in the VR condition [33]. The weak transfer of effects from VR to real environments may be associated with different sensory-motor symptoms and spatiotemporal organization, especially the differences in depth perception in VR during upper limb rehabilitation (reach, touch, grasp, and release tasks). Although an HMD improves depth perception compared to a 2D flat screen display [87], further improvements in VR depth perception, and thereby fidelity, are needed. Possible strategies include object occlusion; effects of lighting and shadow; color shading; and relative scaling of objects by considering depth, perspective projection, and motion parallax [88]. Other methods to improve the interaction can be visual (e.g., color change) or auditory feedback when touching objects in VR. Haptic feedback can further improve the interaction and thereby the fidelity of the VR training. Ebrahimi et al. demonstrated that the errors and time to complete the task during reaching and pointing tasks using a stylus in immersive VR with an HMD were decreased with the addition of visuohaptic feedback compared to the condition without it [89]. It has also been suggested that matching the VR interaction dimensions with the control dimension of the task in the real world could improve the transfer of the VR rehabilitation effect [90].
Gamification
Gamification has been broadly and clearly defined as the "use of game design principles in non-game contexts" by Deterding et al. [91]. Gamification of VR-based rehabilitation systems can motivate patients to participate in rehabilitation actively with enjoyment, which could lead to more movements of the hemiparetic arm and better recovery [92]. The strategies to apply gamification to virtual rehabilitation design have been thoroughly reviewed by Charles et al. [88] and Mubin et al. [93].
Briefly, the VR rehabilitation system must give the patients clear feedback for meaningful play, such as the therapist's verbal and emotional encouragement, with a clear goal to be achieved during the occupational therapy. The difficulty level or challenge during the rehabilitation game should be adapted according to the patient's ability to facilitate meaningful play and handle failures [94,95] as patients with stroke may experience multiple failures and can be frustrated during upper limb rehabilitation due to motor impairments. Various types of feedback, including visual, auditory, and haptic feedback, can be applied and approaches to possibly promote motor learning should be considered (inducing variability of tasks, amplification of visual errors, and manipulating task physics for implicit behavioral guidance) [81].
Barriers
In addition to the barriers caused by patients (physical and cognitive disabilities, low adoptability, and compliance to technology), it has been suggested that there are also barriers at the therapist level, which can lead to underuse of VR rehabilitation [96]. Glegg et al. recently reviewed the barriers and facilitators influencing the adoption of VR rehabilitation, which include "the ability to grade the degree of training, transfer of training to real life, knowledge about how to operate the VR clinically, therapist self-efficacy and perceived ease of use, technical and treatment space issues, access to the technology, and time to learn practice for VR rehabilitation" [97]. They gave three recommendations to promote the use of VR rehabilitation, which were "(1) enhance collaboration, (2) ensure knowledge transfer interventions are system-and context-specific, and (3) optimize VR effectiveness through an evidence-based approach" [97].
Combinational Approaches with VR in Stroke Rehabilitation
Neuroplasticity is the ability of the human brain to adapt to certain experiences, environments, and extreme changes, including brain damage [98][99][100]. Several novel therapeutic approaches to enhance neuroplasticity can be considered as combinational approaches to VR rehabilitation. The brain-computer interface (BCI) is one method used to improve neuroplasticity after stroke; it is based on motor imagery, which is defined as the mental simulation of a kinesthetic movement. The BCI provides sensory feedback of ongoing sensorimotor brain activities, thereby enabling stroke survivors to self-modulate their sensorimotor brain activities [101]. BCI for motor rehabilitation involves the recording and decoding of brain signals generated in the sensorimotor cortex areas. The recorded brain signals can be used (1) to objectify and strengthen motor imagery-based training by providing stroke patients with real-time feedback on an imagined motor task; (2) to generate a desired motor task by producing a command to control external rehabilitative tools, such as functional electrical stimulation, robotic orthoses attached to the patient's limb, or VR; and (3) to understand cerebral reorganizations of lesioned areas by quantifying plasticity-induced changes in brain networks and power spectra in motor-related frequency bands (i.e., alpha and beta) [102]. A previous meta-analysis reported that BCI had an SMD of 0.79, which represented a medium to large effect size comparable with those of conventional rehabilitation therapy such as CIMT (SMD = 0.33), mirror therapy (SMD = 0.61), and mental practice (SMD = 0.62) [101]. Pichiorri et al. showed that BCI combined with VR may further improve upper limb rehabilitation outcomes and may be used to predict motor outcomes by analyzing brain activity in patients with stroke [103]. A more recent study also demonstrated the clinical feasibility of using a combination of BCI and VR in post-stroke motor rehabilitation and confirmed that this combinatory method may benefit patients with severe motor impairments who have little ability for volitional movement [104].
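As a concrete illustration of the motor-related frequency bands mentioned above, the following sketch estimates alpha- and beta-band power for a single EEG channel using Welch's method. The sampling rate, signal and band limits are assumptions for illustration only and do not describe any particular BCI system cited here.

```python
import numpy as np
from scipy.signal import welch

fs = 256                        # assumed sampling rate in Hz
eeg = np.random.randn(30 * fs)  # placeholder for 30 s of one sensorimotor EEG channel

def band_power(signal, fs, band):
    """Average power in a frequency band, estimated with Welch's method."""
    f, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[mask], f[mask])

alpha_power = band_power(eeg, fs, (8, 13))    # mu/alpha band
beta_power = band_power(eeg, fs, (13, 30))    # beta band
print(alpha_power, beta_power)
```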
Another novel strategy to increase neuroplasticity using noninvasive brain stimulation, such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), has also been suggested by various researchers [105][106][107]. Noninvasive brain stimulation methods can be used to (1) enhance the ipsilesional brain activity by high-frequency rTMS [108] or anodal tDCS [109,110]; (2) inhibit contralesional brain activity by low-frequency rTMS [111,112] or cathodal tDCS [113]; (3) produce an additive effect by simultaneously applying anodal tDCS over the ipsilesional area and cathodal tDCS over the contralesional area, referred to as bihemispheric tDCS [114]; or (4) modulate the somatosensory input from nerve fibers to the brain [115,116]. These noninvasive brain stimulation methods have been reported to have acceptable tolerability and safety with no significant adverse effects in various populations, including patients with stroke [117,118]. Several studies have shown a positive effect of combinational approaches (noninvasive brain stimulation plus VR-based rehabilitation) in patients following stroke [119][120][121].
Together with BCI and noninvasive brain stimulation, a telerehabilitation approach may also be important for motor rehabilitation after stroke in terms of better accessibility and prolonged usage at home. Telerehabilitation, by definition, can provide the inputs of multidisciplinary skilled personnel for rehabilitation, including physiatrists, physiotherapists, and occupational therapists, which are often unavailable at home or challenged by transportation restrictions for disabled patients [122]. A recent Cochrane systematic review showed moderate-quality evidence that there was no difference in activities of daily living in patients with stroke between those who received telerehabilitation and those who received usual care (SMD = −0.00, 95% CI = −0.15 to 0.15) [123]. There was also low-quality evidence of no difference in upper limb functions between the use of a computer program to remotely retrain upper limb function and in-person therapy (mean difference = 1.23, 95% CI = −2.17 to 4.64) [123]. Several studies have shown that VR based telerehabilitation can be used for motor rehabilitation of upper extremity functions with improvements in FMA of the upper extremity, Brunnstrom stage, manual muscle test, and action research arm test [124,125].
Summary
VR-based rehabilitation is a promising tool to actively engage patients in the rehabilitation program and can lead to better motor recovery. Although current clinical evidence shows that VR-based rehabilitation is beneficial as an adjunct therapy to conventional rehabilitation therapy, the interventions in the studies included in the meta-analyses were heterogeneous, and it is unclear who benefits more from VR rehabilitation (e.g., severity of impairment, time since onset of stroke) and what type of VR (e.g., immersive vs. non-immersive) and feedback is more effective. Further research, including large, well-designed RCTs, is required to identify the factors influencing the effects of VR rehabilitation.
To improve the efficacy of VR-based rehabilitation, VR programs should be designed to improve the transfer of VR training to real environments and to incorporate gamification and feedback that promote active patient participation and neuroplasticity. The user interface and user experience must be designed to be more user-friendly for patients and therapists, considering both the patients' physical and cognitive impairments and the therapists' needs. VR can be integrated with novel therapeutic modalities that enhance neuroplasticity (e.g., BCI and noninvasive brain stimulation), and such combinational approaches are expected to induce better recovery, which warrants further investigation.
|
v3-fos-license
|
2019-04-22T13:13:07.705Z
|
2018-12-06T00:00:00.000
|
125166109
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "http://ejournal.uin-malang.ac.id/index.php/Math/article/download/5879/pdf",
"pdf_hash": "7b171f7871cbac5da91d68a5b8628db3cc22c7a2",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44122",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7b171f7871cbac5da91d68a5b8628db3cc22c7a2",
"year": 2018
}
|
pes2o/s2orc
|
Geographically Weighted Regression to Predict the Prevalence of Hypertension Based on the Risk Factors in South Kalimantan
Hypertension is a non-communicable disease that constitutes a major public health problem. Uncontrolled hypertension can trigger degenerative diseases such as congestive heart failure, renal failure and vascular disease. Hypertension is called the silent killer because the condition is often asymptomatic and can cause a fatal stroke. Given the increasing prevalence of degenerative diseases, hypertension among them, this study aims to identify the variables that play the largest role as risk factors for hypertension. A clear understanding of these risk factors can serve as a reference for prevention and control, reducing the prevalence of hypertension and preventing deaths from degenerative diseases, especially hypertension. The results show that when the prevalence of hypertension in South Kalimantan Province is modeled with linear regression, no factor has a significant effect on the incidence of hypertension. The prevalence of hypertension is spatially distributed: there is heterogeneity between observation locations, meaning that observations at one location depend on observations at nearby locations, so spatial regression modeling with the adaptive Gaussian kernel function was carried out, resulting in 5 groups. Group I consists of the districts Tanah Laut and Tanah Bumbu; Group II, Kota Baru; Group III consists of Banjar, Kota Banjar Baru and Kota Banjarmasin; Group IV is Barito Kuala Regency; and Group V consists of Tapin, H S Selatan, H S Tengah, H S Utara, Tabalong and Balangan.
INTRODUCTION
Research that is influenced by territorial (spatial) characteristics needs to take spatial data into account in the model. Spatial data are data that contain location information. In spatial data, observations at one location often depend on observations at other, nearby (neighboring) locations.
The first law of geography, advanced by Tobler in 1979, states that everything is related to everything else, but near things are more related than distant things [1]. This law is the basis for examining problems in terms of location effects, that is, with spatial methods. In modeling terms, when the classical regression model is used as the analysis tool for spatial data, the conclusions may be inaccurate
because the assumptions of mutually independent errors and homogeneity are not met [2]. Hypertension is a non-communicable disease that constitutes a major public health problem. Uncontrolled hypertension can trigger degenerative diseases such as congestive heart failure, renal failure and vascular disease. Hypertension is called the silent killer because the condition is often asymptomatic and can cause a fatal stroke [3]. Although it cannot be cured, prevention and management can decrease the occurrence of hypertension and the accompanying diseases. Hypertension is the third leading cause of death after stroke and tuberculosis, accounting for 6.7% of deaths across all ages in Indonesia. The problem of hypertension in South Kalimantan cannot be separated from the factors causing essential hypertension, where genetic, environmental, behavioral and health-service factors all contribute to the high number of hypertension cases in South Kalimantan Province [4]. To identify the many causative factors involved, a spatial analysis or spatial model development may be needed [5].
Spatial effects are tested through tests of spatial heterogeneity and spatial dependence. If spatial heterogeneity is present, it can be handled with a point approach. Point-based spatial regression methods include Geographically Weighted Regression (GWR), where the response variable is measured on an interval or ratio scale; Geographically Weighted Poisson Regression (GWPR), where the response is count data; and Geographically Weighted Logistic Regression (GWLR), where the response variable is measured on a nominal scale [6].
Based on [7], the prevalence of hypertension in Indonesia obtained through measurement at age >18 years is 25.8%; the highest is in Bangka Belitung at 30.9%, followed by South Kalimantan at 30.8%, East Kalimantan at 29.6% and West Java at 29.4%. The prevalence of hypertension in Indonesia obtained through questionnaires is 9.4% for diagnosis by health workers and 9.5% for diagnosis by health workers or taking antihypertensive medication, so 0.1% are self-medicating. Respondents with normal blood pressure who are nevertheless taking hypertension medication amount to 0.7%. Overall, the prevalence of hypertension in Indonesia is 26.5%.
The problem of hypertension in South Kalimantan cannot be separated from the factors causing essential hypertension, where genetic, environmental, behavioral and health-service factors all contribute to the high number of cases in South Kalimantan Province. Identifying the many causative factors involved may require an analysis or the development of a model. Given the increasing prevalence of hypertension in the region, this study aims to identify the variables that play the largest role as risk factors for hypertension. A clear understanding of the risk factors for hypertension, obtained with the GWR approach, is expected to serve as a reference for prevention and control so that the prevalence of hypertension can be reduced.
METHODS
The data used are the Basic Health Research data [7]. The data analyzed in this research are hypertension incidence data for the regencies/cities of South Kalimantan Province. Based on the literature reviewed previously, hypertension is presented in the following conceptual framework [8].
The response variable Y used in this research is the percentage of diagnosed hypertension patients per sub-district in South Kalimantan Province, while the predictor variables Xi used are listed in Table 1.
Table 1. The predictor variables [9]

X1 - The percentage of the population who are male
X2 - The percentage of the population whose highest education is completed primary school (SD/MI)
X3 - The percentage of the population who smoke every day
X4 - The percentage of the population doing physical activity
X5 - The percentage of the population who consume fruit 7 times in 1 week
X6 - The percentage of the population who consume vegetables 7 times in 1 week
X7 - The percentage of the population who consume salty food more than once per day
X8 - The percentage of the population who consume fatty/fried food more than once per day
X9 - The percentage of the population with health insurance

The variables X1 and X2 describe the spread of environmental aspects according to their geographic region. Behavioral aspects are described by the variables X3, X4, X5, X6, X7 and X8, while the health-service aspect is represented by the variable X9.
Description of the Prevalence of Hypertension
The description of this research includes the mean and standard deviation of each research variable, presented in detail in the following table. Table 2 shows that the prevalence of hypertension is highest in the sub-district Pulau Laut Tanjung Selayar at 29.508 percent and lowest in the sub-district Tatah Makmur at 1.471 percent, while the average prevalence of hypertension is 14.978 percent with a standard deviation of 5.764 percent. The percentage of the population who are male (X1) is highest in the sub-district Danau Panggang (55.556 percent) and lowest in the sub-district Pulau Laut Tanjung Selayar (32.787 percent). The percentage of the population whose highest education is completed primary school (SD/MI) (X2) is highest in the sub-district Pamukan Barat at 69.230 percent and lowest in the sub-district Tatah Makmur at 7.320 percent, with an average of 33.660 percent and a standard deviation of 12.340 percent. The percentage of the population who smoke every day (X3) has an average of 23.750 percent with a standard deviation of 7.507 percent. The percentage of the population doing physical activity (X4) has an average of 86.855 percent with a standard deviation of 9.988 percent. The percentage of the population who consume fruit 7 times in 1 week (X5) has an average of 20.289 percent with a standard deviation of 7.070 percent. The percentage of the population who consume vegetables 7 times in 1 week (X6) has an average of 55.470 percent with a standard deviation of 15.2 percent. The percentage of the population who consume salty food more than once per day (X7) has an average of 45.08 percent with a standard deviation of 13.170 percent. The percentage of the population who consume fatty/fried food more than once per day (X8) has an average of 31.670 percent with a standard deviation of 15.62 percent, and the percentage of the population with health insurance (X9) has an average of 15.002 percent with a standard deviation of 11.946 percent.
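A minimal sketch of how such a summary can be produced, assuming the sub-district data are available as a CSV file with the hypothetical column names Y and X1-X9:

```python
import pandas as pd

# Hypothetical CSV with one row per sub-district and columns Y, X1..X9
df = pd.read_csv("riskesdas_kalsel.csv")
cols = ["Y"] + [f"X{i}" for i in range(1, 10)]
summary = df[cols].agg(["mean", "std", "min", "max"]).T
print(summary.round(3))
```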
Regression modeling the prevalence of hypertension
Before performing the analysis with the spatial regression method, multiple linear regression modeling is carried out first. The prevalence of hypertension and the factors suspected to influence it are modeled with linear regression using the Ordinary Least Squares (OLS) parameter estimation method [10], which aims to identify the variables that significantly affect the prevalence of hypertension globally. The first step is to detect multicollinearity, i.e., whether there are relationships among the predictor variables; this is followed by multiple linear (global) regression modeling, covering tests of the significance of the parameters simultaneously and partially, and tests of the IIDN residual assumptions [11].
Multicollinearity Detection
One of the conditions in multiple regression analysis with several predictor variables is the absence of multicollinearity, that is, no predictor variable is strongly correlated with the other predictor variables. Multicollinearity is traced using the Variance Inflation Factor (VIF) value of each predictor variable. Based on Table 3, all predictor variables have VIF values of less than 10, indicating that there is no multicollinearity, i.e., no predictor variable is strongly correlated with the other predictor variables.
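A minimal sketch of this VIF screening step, assuming the statsmodels package and the same hypothetical data file as above:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("riskesdas_kalsel.csv")            # hypothetical file name
X = df[[f"X{i}" for i in range(1, 10)]]             # the nine predictor variables
X_const = sm.add_constant(X)

vif = pd.Series(
    [variance_inflation_factor(X_const.values, i) for i in range(1, X_const.shape[1])],
    index=X.columns,
)
print(vif)   # values below 10 suggest no multicollinearity concern
```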
Significance Tests of the Linear Regression Parameters for the Prevalence of Hypertension
The following are the significance tests of the linear regression parameters, both simultaneous and partial, to determine the influence of the predictor variables used. The hypotheses for the simultaneous test of the significance of the linear regression parameters are as follows [6]:
H0: β1 = β2 = ... = β9 = 0 (none of the parameters has a significant effect on the model)
H1: at least one βk ≠ 0; k = 1, 2, ..., 9 (at least one parameter has a significant effect on the model)
Table 4 gives an F-statistic of 2.14 and a p-value of 0.030. At a significance level (α) of 5 percent with F(0.05;9;142) = 1.46, the decision is to reject H0 because the F-statistic > F(0.05;9;142), or equivalently the p-value < 0.05. This can be interpreted as meaning that at least one parameter has a significant effect on the prevalence of hypertension.
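The overall F-test can be reproduced with an ordinary least squares fit; a hedged sketch using the same hypothetical column names is shown below, and the per-coefficient t-tests in the model summary correspond to the partial tests discussed next.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("riskesdas_kalsel.csv")                  # hypothetical file name
X = sm.add_constant(df[[f"X{i}" for i in range(1, 10)]])  # nine predictors plus intercept
model = sm.OLS(df["Y"], X).fit()

print(model.fvalue, model.f_pvalue)   # simultaneous test: H0 that all slope parameters are zero
print(model.summary())                # per-coefficient t-tests correspond to the partial tests
```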
Next, to determine which predictor variables have a significant influence, partial tests of the significance of the parameters are performed, as presented in Table 5. The following are the hypotheses for the partial significance tests of the parameters of the linear (global) regression model [6].
Testing the IIDN Residual Assumptions
After testing the significance of the parameters simultaneously and partially, the next step is to test the residual assumptions of identical, independent and normally distributed (IIDN) errors.
- Test of the identical residual assumption. One of the assumptions of OLS regression is that the residuals must have homoscedastic (identical) variance rather than being heteroscedastic. Heteroscedasticity can be identified by fitting a regression model between the residuals and the predictor variables: when any predictor variable significantly affects this model, the residuals can be said to be non-identical, i.e., heteroscedastic. Testing the identical residual assumption indicates that there is no heteroscedasticity, i.e., the residuals are identical, at a significance (α) of 0.
- Test of the independent residual assumption [11]. The independent residual assumption test is used to determine whether or not there is a relationship between residuals across observations. The test statistic used is the Durbin-Watson statistic. Based on Attachment 4, the obtained value leads to the decision to fail to reject H0, which shows that there is no relationship between the residuals, so the independent residual assumption is fulfilled.
- Test of the normal distribution assumption
The normal distribution assumption is tested with the following Kolmogorov-Smirnov test.
H0: the data are normally distributed
H1: the data are not normally distributed
Figure 2 shows that the red points are spread close to the linear (normal) line, which means that the data are normally distributed. This can also be seen from the p-value, which is greater than 0.15, so the decision is to fail to reject H0 at a significance level (α) of 5%, because the p-value is greater than α. This means that the data meet the normal distribution assumption. Based on the results of the assumption tests, it can be concluded that the residuals of the linear (global) regression model meet the independence assumption and are normally distributed, but the identical (homoscedasticity) assumption is not fulfilled.
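A sketch of the three residual checks is given below; the Breusch-Pagan test is used here as a common stand-in for the residual-on-predictor regression described above, and the file and column names remain hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

df = pd.read_csv("riskesdas_kalsel.csv")                  # hypothetical file name
X = sm.add_constant(df[[f"X{i}" for i in range(1, 10)]])
resid = sm.OLS(df["Y"], X).fit().resid

# Identical (homoscedastic) residuals: Breusch-Pagan regresses squared residuals on the predictors
bp_lm, bp_p, _, _ = het_breuschpagan(resid, X)

# Independent residuals: Durbin-Watson statistic (values near 2 indicate no autocorrelation)
dw = durbin_watson(resid)

# Normally distributed residuals: Kolmogorov-Smirnov test against a fitted normal
ks_stat, ks_p = stats.kstest(resid, "norm", args=(resid.mean(), resid.std()))

print(bp_p, dw, ks_p)
```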
Spatial Regression Modeling the Prevalence of Hypertension
The analysis using the GWR method aims to identify the variables that affect the prevalence of hypertension at each observation location (regency/city) in South Kalimantan Province. The first step in obtaining the GWR model is to determine the latitude and longitude coordinates of each location, compute the Euclidean distances, and determine the optimum bandwidth value based on the cross-validation (CV) criterion. The next step is to construct the weighting matrix with one of the kernel functions (fixed Gaussian, fixed bi-square, adaptive Gaussian, adaptive bi-square) and estimate the GWR model parameters. The weighting matrix obtained for each location is then used to form the model, so that the model varies across observation locations [12].
GWR model hypothesis testing consists of two tests: a goodness-of-fit test of the GWR model and significance tests of the GWR model parameters. The hypotheses for the GWR model test are as follows [6]:
H0: there is no significant difference between the linear (global) regression model and the GWR model
H1: at least one parameter differs between the two models
Table 6 shows the comparison of the GWR models with different weighting estimates. The goodness-of-fit test of the GWR model uses the difference between the residual sums of squares of the GWR model and the global regression model: the GWR model differs significantly from the global regression model if it reduces the residual sum of squares significantly. Table 6 shows that the smallest AICc value belongs to the GWR model with the adaptive Gaussian kernel weighting, amounting to 956.153. At the 5 percent significance level, it can therefore be concluded that the GWR model differs significantly from the global regression model. This means that the GWR model with adaptive Gaussian kernel weighting is more appropriate for describing the percentage prevalence of hypertension in South Kalimantan Province.
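A minimal sketch of fitting a GWR model with an adaptive Gaussian kernel and a CV-selected bandwidth, assuming the open-source mgwr package and hypothetical coordinate columns; this illustrates the approach and is not the authors' original code.

```python
import pandas as pd
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

df = pd.read_csv("riskesdas_kalsel.csv")                  # hypothetical file with coordinates
coords = list(zip(df["longitude"], df["latitude"]))       # hypothetical coordinate columns
y = df[["Y"]].values
X = df[[f"X{i}" for i in range(1, 10)]].values

# Adaptive Gaussian kernel: the bandwidth is a number of nearest neighbours, chosen by CV
bw = Sel_BW(coords, y, X, kernel="gaussian", fixed=False).search(criterion="CV")
results = GWR(coords, y, X, bw, kernel="gaussian", fixed=False).fit()

print(bw, results.aicc)   # AICc values like this are what Table 6 compares across kernels
```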
Next, partial significance tests of the parameters of the GWR model with adaptive Gaussian kernel weighting are carried out to determine which parameters affect the prevalence of hypertension at each observation location. Regencies/cities that share the same significant variables are then grouped together.
Figure 1. GWR modeling flow in the case of hypertension
Figure 2. Probability plot of the residuals of the hypertension prevalence model
Table 3. The VIF value of each predictor variable
Table 4. ANOVA table for the prevalence of hypertension in South Kalimantan Province
Table 5. Results of the partial tests of the linear regression model parameters
Table 6. Estimation of the GWR models with different kernel weighting functions
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2012-09-13T00:00:00.000
|
2265779
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-6-295",
"pdf_hash": "0795216a25ca6d28eb262dcf8956f9eff1a804d5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44123",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c76ec3f50b450267316280a86ca8a77af4c48b6e",
"year": 2012
}
|
pes2o/s2orc
|
Primary squamous cell carcinoma of the pancreas: a case report and review of the literature
Introduction Primary squamous cell carcinoma of the pancreas is a rare tumor with poor prognosis and is found in the literature only as case reports. The optimal management course remains poorly defined. We present a case of primary basaloid squamous cell carcinoma of the pancreas metastatic to the liver, which was treated with surgery and systemic chemotherapy. Our patient survived for 15 months: the longest survival reported in the literature to date. Case presentation A 70-year-old Caucasian man presented to hospital with a three-month history of weight loss, pruritus and icterus. Imaging studies confirmed the presence of an operable mass lesion in the head of the pancreas. Following a pancreaticoduodenectomy, histology results led us to make a diagnosis of squamous cell carcinoma. Postoperative restaging showed multiple metastases in the liver. He underwent palliative systemic chemotherapy with cisplatin and 5-fluorouracil achieving partial response and an excellent quality of life. He then went on to start second-line chemotherapy, but unfortunately died of sepsis soon thereafter. Conclusions This case report emphasizes that achievement of a worthwhile objective and symptomatic palliative response is possible using platinum-based chemotherapy in squamous cell carcinoma of the pancreas.
Introduction
Squamous cell carcinoma of the pancreas is rare. Squamous cells are not present in the normal pancreas and hence the pathogenesis of this carcinoma remains uncertain. Even though squamous cell carcinomas arising in other parts of the body are considered to be radiosensitive and chemosensitive, and sometimes have better outcomes, the prognosis of squamous cell carcinoma in the pancreas remains poor, as for other pancreatic carcinomas.
Case presentation
A 70-year-old Caucasian man who was a non-smoker presented to our facility with a three-month history of 12kg weight loss and recent generalized itching and jaundice. Serum biochemistry and ultrasound of the abdomen confirmed obstructive jaundice due to a mass in the head of the pancreas. The lesion was 4.6×4.1cm in size with no evidence of metastatic disease on staging computed tomography (CT) scan. He underwent endoscopic retrograde cholangiopancreatography (ERCP) with a 5cm 10F plastic stent inserted into the common bile duct followed by pylorus-sparing pancreaticoduodenectomy two months after diagnosis and recovered uneventfully.
Macroscopically, there was a 5.5cm diameter, lobulated tumor within the head of the pancreas extending to the superior mesenteric vessel margin. This entire tumor was sampled for histological examination. Microscopically, the tumor was composed of large nests of basaloid cells with areas of central necrosis ( Figure 1) and scattered small foci of squamoid differentiation ( Figure 2). There was no evidence of acinar or glandular differentiation morphologically. There was extensive intravenous and intralymphatic invasion, together with foci of perineural invasion. The tumor invaded into the duodenum, around the base of the ampulla, into the pancreatic duct and widely into peripancreatic fat. Tumor involvement of the superior mesenteric vessel margin was confirmed microscopically. Metastatic tumor (with similar morphology) was present in six of 43 sampled lymph nodes.
The morphological appearances of the tumor were considered to be those of a poorly differentiated (basaloid) squamous cell carcinoma, which was confirmed with diffuse strong immunostaining with cytokeratin 5/6 and p63. Adenosquamous carcinoma and pancreatoblastoma were considered in the differential diagnosis. Adenosquamous carcinoma is a rare variant of pancreatic ductal adenocarcinoma, showing both glandular differentiation and squamous differentiation. However, no glandular differentiation was seen in our tumor, despite extensive sampling. Pancreatoblastoma occurs in childhood, but rare cases have been described in adults. By definition, it shows acinar differentiation and squamoid nests. However, our tumor showed no acinar growth pattern and no immunostaining with trypsin or α-fetoprotein. There were only occasional neuroendocrine cells (immunopositive for synaptophysin and CD56) within the tumor, ruling out a neuroendocrine carcinoma.
Squamous cell carcinoma of the pancreas is extremely rare and, therefore, the possibility was raised that this could be secondary involvement of the pancreas. A repeat staging CT scan was performed postoperatively and revealed possible metastatic spread to the liver, but did not identify any other tumor sites. He underwent an [18F]-2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography-computed tomography (PET-CT) scan one month postoperatively, which was reported to show FDG-avid lesions in both lobes of the liver and a solitary FDG-avid lymph node anterior to the renal vein. Again, no other occult potential primary site was visible. Two months after surgery, he started systemic palliative chemotherapy with intravenous cisplatin at 80 mg/m2 given on day one and 5-fluorouracil 4000 mg/m2 as a continuous intravenous infusion over four days (days one to five). On completion of eight three-weekly cycles, a restaging CT scan demonstrated what amounted to a partial response in his liver metastases by Response Evaluation Criteria In Solid Tumors (RECIST) scoring. Following this, he was placed on follow-up, achieving an excellent quality of life.
He remained active but developed epigastric pain six months later. A restaging CT scan showed progression in the previously noted metastatic liver disease and a 3.3cm mass in the retroperitoneum, suggestive of local recurrence. As he remained active with performance status of 1 on the WHO scale, he commenced second-line chemotherapy with docetaxel a month later. Four days after the first cycle of docetaxel had been administered he was admitted to hospital with sepsis. Blood cultures obtained on admission grew Klebsiella pneumoniae and Clostridium perfringens sensitive to ciprofloxacin, tazocin, gentamicin and metronidazole. Despite treatment with tazocin and gentamicin he died of sepsis within 48 hours of admission.
Discussion
Primary squamous cell carcinoma of the pancreas is an extremely rare entity with an incidence of 0.5% to 2% of all exocrine pancreatic neoplasms [1]. Cases are described in the literature only as case reports [2][3][4][5][6][7][8][9][10][11]. In view of its rarity, squamous cell carcinoma (SCC) in the pancreas is presumed to be metastatic from another primary site, until proven otherwise [1,8]. However, it should be noted that metastases to the pancreas are also rare. Metastatic spread of primary lung SCC to the pancreas is noted to be common in both surgical and autopsy series with pancreatic involvement usually occurring as part of widespread disease [1]. A case of an isolated metastasis from asymptomatic occult esophageal squamous cell cancer to the pancreas has also been reported to masquerade as primary pancreatic cancer [12].
Our patient was presumed to have operable primary pancreatic cancer. The surprise histological finding of squamous cell carcinoma raised the possibility that this may have been a metastatic deposit and a PET-CT scan was thus performed to attempt to localize a possible occult primary. However, the PET-CT scan showed only multiple liver metastases and no other candidate primary site (Figures 3 and 4). Historically, most of the reported cases used CT scans of head, neck and chest, otorhinolaryngological examination and endoscopic examination of the gastrointestinal tract to search for the primary. To the best of our knowledge, this is the only reported case to have used a PET-CT scan to search for the primary in this context.
We undertook a MEDLINE database search to identify cases of pure squamous cell carcinoma of the pancreas reported in the English literature and their management and outcomes using the following key words: carcinoma, squamous cell, pancreas, exocrine, pancreatic neoplasms. Brown et al. in 2005 identified 36 autopsy/registry cases of pure squamous cell carcinoma of the pancreas and 25 cases diagnosed based on ante-mortem histology results [10]. We have identified (Table 1) an additional 14 cases diagnosed ante-mortem since 2005 and a further case diagnosed ante-mortem in 2000 by Fonseca et al. [11], which was not included in the excellent review by Brown et al. [10]. Including the present case, a total of 40 cases of pure squamous cell carcinoma of the pancreas, diagnosed based on histology findings, have been reported in the English literature.
The normal pancreas is devoid of squamous cells and the origin of pancreatic SCC is uncertain. Various mechanisms postulated for the evolution of pure squamous cell carcinoma include malignant transformation of squamous metaplasia, squamous metaplastic change in a pre-existing adenocarcinoma, and differentiation with malignant transformation of primitive multipotent cells [1]. Squamous metaplasia occurs in chronic pancreatitis and following pancreatic/biliary stents. However, our patient had no previous episodes of pancreatitis and the stent was only inserted pre-operatively. Surgery still remains the cornerstone in the management of this rare cancer, as for the much more common ductal adenocarcinoma of the pancreas. Median survival for pancreatic SCC was noted to be seven months, with a range of six to 16 months, in the seven patients who underwent curative resection [10]. In our literature search we identified three patients whose overall survival was three, 10 and 11 months following curative resection [2,4,7]. We could not establish the overall survival of the fourth patient who had curative resection from the Mayo Clinic series [3]. Median survival for those who did not have curative resection was three months (range one-quarter of a month to nine months) [3,10]. A total of 17 out of 39 patients had treatment for locally advanced, recurrent or metastatic squamous cell cancer of the pancreas. The treatment included systemic chemotherapy and/or radiotherapy. Cisplatin was used in combinations with fluorouracil (three patients) [10,13], etoposide (one patient) [10] and vinblastine (one patient) [10]. Gemcitabine was used in combination with carboplatin (one patient) [8] and fluorouracil (one patient) [10]. The survival in these patients was poor except for one patient who recurred locally after surgery at four months. He received local radiotherapy and had an overall survival of 11 months.
Even though our patient relapsed early after surgery with multiple liver metastases, he appears to have benefited considerably from palliative systemic chemotherapy. He achieved an overall survival of 15 months from the surgery. He tolerated chemotherapy well and had a good quality of life during chemotherapy and for the majority of the seven months between completing chemotherapy and his presentation with symptomatic relapse. We elected to treat our patient with a platinum-based systemic chemotherapy regimen as it is active and used in various combinations for the treatment of squamous cell carcinomas arising at different sites, including the lung, head and neck, cervix and skin.
Conclusions
Pure primary squamous cell carcinoma of the pancreas is a rare carcinoma and, therefore, spread from other primary squamous cell cancers should always be considered and carefully excluded. Surgery remains the cornerstone of treatment but unfortunately is not curative, with the clinical course complicated by local and/or distant relapses. The current case report emphasizes that achievement of a worthwhile objective and symptomatic palliative response is possible using platinum-based chemotherapy in squamous cell carcinoma of the pancreas. The treatment of these rare cancers is challenging and may only be improved by centralized national registries.
Consent
Written informed consent was obtained from the patient's next of kin for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-06-01T00:00:00.000
|
832762
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/molecules15064261",
"pdf_hash": "8fa4a13450abe92465e501c223fbd590afdcbdf2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44124",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "8fa4a13450abe92465e501c223fbd590afdcbdf2",
"year": 2010
}
|
pes2o/s2orc
|
Synthesis of Bosutinib from 3-Methoxy-4-hydroxybenzoic Acid
This paper reports a novel synthesis of bosutinib starting from 3-methoxy-4-hydroxybenzoic acid. The process starts with esterification of the starting material, followed by alkylation, nitration, reduction, cyclization, chlorination and two successive amination reactions. The intermediates and target molecule were characterized by 1H-NMR, 13C-NMR, MS and the purities of all the compounds were determined by HPLC.
Introduction
The Src family of non-receptor protein tyrosine kinases (SFKs) plays key roles in regulating signal transduction, including cell growth, differentiation, cell shape, migration, and survival, and specialized cell signals [1]. However, c-Src was also identified as a proto-oncogene based on decades of research on an avian RNA tumor (sarcoma) virus. In some abnormal cases, such as mutation of the c-Src or over-expression, these enzymes can become hyperactivated, resulting in uncontrolled cell proliferation [2]. Bcr-Abl, the constitutively activated fusion protein product of the Philadelphia chromosome (Ph) is the principal oncogene underlying the pathology of chronic myelogenous leukemia (CML). Abl shares significant sequence homology with Src, and in its active conformation, bears remarkable structural resemblance with most SFKs. As a result, ATP-competitive compounds originally developed as Src inhibitors frequently exhibit potent inhibition of Abl kinase [3,4]. The second generation
Bcr-Abl kinase inhibitors target both Src and Abl kinases to combat imatinib resistance, for example the Wyeth drug bosutinib (10, SKI-606, Scheme 1) has been evaluated for the treatment of CML in a Phase III clinical trial [5,6].
Several methods for the synthesis of bosutinib have been reported. Boschelli and Wang [5,7] started their route with protection of the hydroxy group of methyl 4-hydroxy-3-methoxybenzoate. The drawbacks are that the cyclization reaction requires a demanding temperature (-78 ºC) and gives only a 47% yield; more importantly, the overall yield is only 8.3%. Sutherland et al. [8] designed a novel method for synthesizing bosutinib starting from 2-methoxy-5-nitrophenol. The overall yield was 32.1%, but the raw materials of this route are costly and the reaction time is long, especially for the last steps, which required 66 h.
In this report, we present a novel approach to synthesizing bosutinib. Compared with Boschelli's route, it avoids the demanding conditions of the cyclization reaction and the overall yield is higher (21.7%). On the other hand, the starting materials are cheaper than those of Sutherland's route.
Results and Discussion
Our novel synthesis of bosutinib (Scheme 1) started from 3-methoxy-4-hydroxybenzoic acid (1). This compound was esterified and then alkylated with 1-bromo-3-chloropropane to afford the intermediate 3 in 90.0% yield. Nitration of 3 with nitric acid in acetic acid gave compound 4, which was reduced with powdered iron and ammonium chloride to give compound 5 in satisfactory yield (91.5%). In an attempt to improve the reduction, Pd/C was also tried instead of Fe/NH4Cl to reduce compound 4; however, it gave incomplete conversion (85% yield) even after long reaction times (18 h). Compound 5 was reacted with 3,3-diethoxypropionitrile to obtain compound 6. In this step, 5 did not react directly with 3,3-diethoxypropionitrile; rather, under trifluoroacetic acid catalysis, 3,3-diethoxypropionitrile was converted to 3-oxopropanenitrile, which then reacted with the amine 5 to form a Schiff base. Because of the p-π conjugation, the Schiff base adopts the form shown as 6.
Cyclization of 6 with sodium hydroxide and chlorination with POCl3 afforded compound 8. The final product was obtained after two reactions with different amines. Compared with other methods, this new method is less costly because of the much cheaper starting materials used, consumes less time, and gives high yields. The results reported here point to the possibility of industrial production.
General
All reagents were purchased from commercial sources and used without further purification. Melting points were measured in open capillaries and are uncorrected. 1H-NMR spectra were recorded in CDCl3/DMSO-d6 on a Bruker Avance 300 spectrometer; chemical shifts (δ) are reported in parts per million (ppm) relative to tetramethylsilane (TMS), used as an internal standard. Mass spectra (MS) were obtained from Agilent 1100 LC/MS Spectrometry Services. All compounds were routinely checked by TLC with silica gel GF-254 glass plates and viewed under UV light at 254 nm. The reported HPLC purity is the peak area calculated using Class-VP software on a Shimadzu 2010 instrument.

(2). Thionyl chloride (30.0 g, 0.50 mol) was added dropwise at room temperature to a solution of 3-methoxy-4-hydroxybenzoic acid (1, 44.3 g, 0.26 mol) in methanol (500 mL). The mixture was stirred at room temperature for 2 h and the solvent was concentrated in vacuo. The resulting oil was dissolved in ice-water (50 mL), and the pH was adjusted to 7-8 with saturated aqueous sodium bicarbonate solution. The solution was left standing in the refrigerator overnight, then the precipitate was collected by filtration and air dried to give a brown powder (49.0 g, 98% yield, 97.2% HPLC purity).

(4). Nitric acid (84.5 mL, 66%) was added dropwise at room temperature to a solution of methyl 4-(3-chloropropoxy)-3-methoxybenzoate (3, 51.6 g, 0.20 mol) in a mixture of acetic acid (150 mL). This mixture was stirred at 60 ºC for 3-4 h. The mixture was then washed with ice-water (2 × 50 mL). The organic layer was washed with saturated sodium bicarbonate to neutrality. The oil formed was stirred until it solidified and then collected by filtration to afford the product as a light yellow solid (54.0 g, 89% yield, 98.7% HPLC purity).

(5). Powdered iron (5.6 g, 0.10 mol) and ammonium chloride (8.4 g, 0.157 mol) were added to a mixture of methanol (70 mL) and water (30 mL). The resulting suspension was heated at reflux for 10 min, then a solution of methyl 4-(3-chloropropoxy)-5-methoxy-2-nitrobenzoate (4, 9.1 g, 0.03 mol) in heated methanol (100 mL) was added dropwise. The mixture was heated at reflux for 4 h. The catalyst was filtered off, and the methanol was evaporated from the filtrate. The residue was air dried to afford the product as a white solid (7.

(7). A solution of (E)-methyl 4-(3-chloropropoxy)-2-(2-cyanovinylamino)-5-methoxybenzoate (6, 1.5 g, 4.36 mmol) in ethanol (20 mL) was adjusted to pH 12-13 with sodium hydroxide. The solution was then stirred at room temperature for 6 h and adjusted to neutrality with water. The solid formed was filtered off and air dried to afford a light yellow solid (1.16 g, 85.8% yield, 98.6% HPLC purity).

(8). A mixture of 7-(3-chloropropoxy)-4-hydroxy-6-methoxyquinoline-3-carbonitrile (7, 1.0 g, 3.42 mmol) and phosphorus oxychloride (4.7 g, 30.10 mmol) in toluene (10 mL) was heated at reflux for 2 h. The solution was concentrated, and the pH was adjusted to 7 with saturated aqueous sodium bicarbonate.
Conclusions
A novel synthesis of bosutinib starting from 3-methoxy-4-hydroxybenzoic acid has been established. Compared with the existing methods, this new method is less costly because of the much cheaper starting materials used, consumes less time, and gives higher yields. These results reported here offer the possibility of industrial production.
|
v3-fos-license
|
2019-03-08T15:44:58.684Z
|
2019-02-28T00:00:00.000
|
67864026
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-019-39630-3.pdf",
"pdf_hash": "905f72f2a7692ab5563d3151ea12f8d66f4710b0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44125",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "30901ac52f71bf3a510cdeb24fd4519fe055566b",
"year": 2019
}
|
pes2o/s2orc
|
Development and validation of a new standardised data collection tool to aid in the diagnosis of canine skin allergies
Canine atopic dermatitis (cAD) is a common hereditary clinical syndrome in domestic dogs with no definitive diagnostic tests, which causes marked morbidity and has a high economic impact internationally. We created a novel questionnaire for Labrador (LR) and Golden retriever (GR) owners to evaluate canine skin health with respect to clinical signs of cAD. 4,111 dogs had fully completed questionnaires (2,803 LR; 1,308 GR). ‘Cases’ (793) had a reported veterinary diagnosis of cAD, and ‘controls’ (1652) had no current or past clinical signs of cAD and were aged >3 years. Remaining dogs (1666) were initially categorised as ‘Other’. Simulated annealing was used comparing ‘Cases’ and ‘Others’ to select a novel set of features able to classify a known case. Two feature sets are proposed, one for use on first evaluation and one for dogs with a history of skin problems. A sum for each list when applied to the whole population (including controls) was able to classify ‘Cases’ with a sensitivity of 89% to 94% and specificity of 71% to 69%, respectively, and identify potentially undiagnosed cases. Our findings demonstrate for the first time that owner questionnaire data can be reliably used to aid in the diagnostic process of cAD.
Results
In total, there were 4,479 responses to the carefully tailored owner-friendly questionnaire (see Supplementary Table S1 for all questions asked) in the 4 months during which it was active; of these, 60 were automatically disqualified, 417 were labelled partial, and 4,002 were labelled as completed. Of the 417 labelled partial, 58 had not completed the registration page, so their responses were deleted, as they provided no consent to store their details. Ninety-four partials had registered their dog but had not answered any of the questions in the questionnaire, and a further 114 had gone on to complete the questionnaire in a separate response (so their partial entries were removed), leaving 148 responses where the questionnaire was partially completed. Of the 4,002 completed questionnaires, 42 dogs appeared in the dataset twice because their owner had completed the questionnaire on two occasions for reasons unknown; the first entry for each duplicate was removed, retaining only the second entry. The final dataset consisted of 4,111 responses, of which 3,963 (96%) were complete and 148 (4%) were partially completed.
Subjects. Of the 4,111 useable responses, 2,803 (68%) were Labrador retrievers (1432 M:1371 F) and 1,308 (32%) were Golden retrievers (668 M:640 F). A total of 88% of the dogs were Kennel Club registered, 9% were not and 3% were not known. The majority of dogs (92%) were from the United Kingdom, 5% were from the United States, 1% from Canada, 1% from Ireland and less than 1% from other countries. The majority of responses were from people informed of the project by The Kennel Club (62%), with 15% through Facebook and 6% simply saying they had been emailed about it; the remainder came via other sources or did not answer. The vast majority (84%) of dogs were described as pets, 8% were trained as gundogs, 3% were show dogs, 3% breeding dogs, and the remainder were 'Other' types of dog (2%) or working dogs (1%) (see Supplementary Table S2 for full number breakdowns).
The mean age of dogs in the sample was 6.1 years (SD ± 2.7), with 18 dogs that were less than 1 year of age and 11 dogs aged 17. With regard to neuter status, 66% (2732) of dogs in the sample were spayed or castrated, whilst 33% (1346) were intact, 1 was 'Not Known' and 32 were missing responses. The median age at neutering was 12 months.
Diagnostic criteria. Using logic-based criteria (Supplementary Fig. S2, Supplementary Table S3) 40% of the sample was designated as a Control, 19% as a Case with a veterinary diagnosis of cAD, and 41% Other (Table 1).
Simulated annealing applied to the Case and Other data identified two new sets of potential diagnostic questions from the questionnaire that could classify the known cases (Table 2). The full list of 13 identified features contained questions that could only be answered for dogs with a past history of skin problems (full list), whilst retaining only those features that could be answered based purely upon presenting clinical signs left 8 questions (reduced list). As all of the features were binary in nature, a simple sum could be made and the appropriate cut-off for prediction maximising the sensitivity was selected.
The full list classified cases with a sensitivity of 94% and specificity of 69% (overall accuracy 74%) if 3/13 features were answered positively (PPV for a known case 42% and NPV 98%). The reduced list, with a cut-off of 2/8 features being answered positively, was able to classify cases with a sensitivity of 89% and specificity of 71% (overall accuracy 74%, PPV for a known case 42%, NPV 97%). The summed scores were linearly associated with a dog being a case or not, with dogs with higher summed scores having increased chances of being a case. For every additional feature answered positively in the full list, the dog's odds of being a case increased by 1.13 (GLM: OR 2.13, Z 29.19, p < 0.001) and for the reduced list by 1.35 (GLM: OR 2.35, Z 27.94, p < 0.001, Fig. 1). Both methods have a 'false positive' rate of just below 6/10; however, it is worth noting that only 0.2% of Controls were classified as a Case by the reduced list, and 0.4% by the full list (out of 1,652). The 'false positives' are therefore not dogs with healthy skin, but dogs with skin problems that resemble cAD that could be thus far undiagnosed cases and would warrant further clinical evaluation.
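As an illustration of how such summed-score criteria can be applied and evaluated, the following minimal Python sketch classifies dogs by summing binary feature answers against a cut-off and then derives sensitivity, specificity, PPV and NPV. The column names, cut-off value and data frame are hypothetical placeholders, not the exact cAD-RQ items.

```python
import pandas as pd

def classify_summed_score(df: pd.DataFrame, feature_cols, cutoff: int) -> pd.Series:
    """Predict 'case' (1) when at least `cutoff` binary features are answered positively."""
    return (df[feature_cols].sum(axis=1) >= cutoff).astype(int)

def diagnostic_metrics(y_true: pd.Series, y_pred: pd.Series) -> dict:
    """Sensitivity, specificity, PPV and NPV from binary truth/prediction vectors."""
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical usage with a reduced list of 8 binary questions and a cut-off of 2:
# dogs["predicted_case"] = classify_summed_score(dogs, REDUCED_LIST_COLS, cutoff=2)
# print(diagnostic_metrics(dogs["known_case"], dogs["predicted_case"]))
```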
In order to investigate how similar the non-cases classified as 'false positives' are to the known cases on all facets of their skin health, the 'false positives' identified by the reduced list (the first that would be used when a dog presents with clinical signs) were then evaluated as a separate group from known cases, controls and dogs with 'Other' skin conditions. We refer to these as Potential Cases, as they had not received a diagnosis of cAD at the time of the questionnaire but have been predicted to be cases.
Comparison of the answers provided for Cases (with existing veterinary diagnosis), Controls, Potential Cases and Others allows us to establish the construct validity of the questionnaire for identifying dogs with and without the disease by ascertaining whether our findings met expectations based upon what is currently known about cAD. For example, Cases would be expected to score highest on the modified-Edinburgh pruritus scale, and Controls the lowest, whilst Potential Cases should score similarly to Cases if they are indeed dogs with undiagnosed cAD.

improve, and 55.3% said it had not helped. The majority (79.6%) of respondents did not know whether their dogs' relatives had the same or similar skin conditions, with 10.6% saying No and 7.1% saying Yes. A total of 18.5% had not tried steroids, but of those who had, 89.5% said they had seen their dog's skin improve, and 10.5% said it had not helped. 25.3% of owners had not tried medication other than steroids. Of those who had, 89.7% said other medication had resulted in improvement, and 10.3% said it had not helped. In regard to seasonality, 50.4% of Cases had a recognisable seasonal pattern to their skin problems. Of these, the majority (63.2%) were worse in the summer, with 31.2% worse in the spring/autumn and only 5.6% worse in the winter. In terms of age of onset, 81% of Cases reportedly began showing signs when the dog was 3 or younger, and 19% when aged 4 or over. The most common period for cAD to begin was between 6m-3yrs (56% of dogs).
Skin conditions in controls, cases, potential cases and other. Four groups of dogs were compared on their skin health responses: known-cases; controls; potential cases (non-cases classified as cases using the feature lists) and 'other' (dogs with some form of skin condition classified as a non-case using the feature list).
With regards to co-morbidity with cAD, Cases had a significantly higher proportion of other diagnosed skin problems than any other group (Tables 3 & 4). Just over half (51%) of Cases had also received a diagnosis of otitis externa (ear infections), compared to 5% of Controls, 42% of Others and 34% of Potential Cases. A quarter (25%) of dogs with cAD and 11% of Potential Cases also had diagnosed food allergies. Considering differences between Potential Cases and dogs with a diagnosis of cAD (Cases), the Potential Cases were significantly less likely than diagnosed Cases to have a diagnosis of food allergies, yeast skin infections, bacterial skin infections, flea allergic dermatitis and otitis externa, but more likely to have a diagnosis of 'wet eczema' (acute moist dermatitis) (Table 3). There was no difference in occurrence of mange between any group. Considering Potential Cases and Others, there was no significant difference in disease occurrence for flea allergic dermatitis and 'wet eczema', but the Potential Cases were significantly more likely than the Others to be diagnosed with food allergies, yeast skin infections and bacterial skin infections, and significantly less likely to be diagnosed with otitis externa.

In addition to the diseases described in Table 4, 19 golden retrievers were reported in the free text boxes to have ichthyosis, whilst a further 19 dogs were reported to have dandruff/flaky skin, and 2 further dogs as having their skin turn black.
Skin questions relevant to all groups. Exactly 50% (397) of Cases had been allergy tested, compared to 9% of Potential Cases, 3% of Other and 1% of Controls. Compared to Controls, Cases, Potential Cases and Other dogs were reported as having significantly more gastric issues (frequent loose stools) (GLM: p < 0.001, Wald = 44.59) and vomiting (GLM: p < 0.001, Wald = 23.02), but were not significantly different from each other for either variable. There was a significant difference in modified-Edinburgh Pruritus Scale scores for all skin groups (Fig. 2, ANOVA, df = 3, F = 1517.32, p < 0.001) [although it must be noted that Controls were classified as dogs whose owners answered 'no' to an earlier question indicating that they had had no "current or past signs of abnormal itchiness"]. Potential Cases scored similarly to Cases, but Cases had significantly higher mean pruritus scores (Tukey post-hoc mean difference 0.80, p < 0.001).
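The group comparison described in this paragraph (a one-way ANOVA on pruritus scores followed by Tukey post-hoc tests) can be reproduced in outline with the sketch below. This is not the authors' SPSS workflow; the data frame and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_pruritus(df: pd.DataFrame):
    """One-way ANOVA plus Tukey HSD on pruritus scores across the four skin groups."""
    data = df.dropna(subset=["pruritus_score", "skin_group"])
    groups = [g["pruritus_score"].values for _, g in data.groupby("skin_group")]
    f_stat, p_val = f_oneway(*groups)                      # overall group difference
    tukey = pairwise_tukeyhsd(endog=data["pruritus_score"],
                              groups=data["skin_group"])   # pairwise post-hoc tests
    return f_stat, p_val, tukey

# f, p, tukey = compare_pruritus(dogs)
# print(tukey.summary())
```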
Amongst the Cases, 77% were reported as having had abnormal-appearing skin now or in the past, compared to 69% of Potential Cases and 20% of Others, whilst 88% of Cases and 86% of Potential Cases were reported as having had itchy skin now or in the past, compared to 22% of Others (no controls answered 'yes' here, as answering 'no' to these was part of the selection criteria for a control). Cases and Potential Cases showed the highest proportion of allergy signs in terms of whether they could be seen to recurrently and frequently: scratch, paw lick/chew; lick/chew other areas; rub their face; sneeze; have a runny nose; or have watery eyes (Fig. 3). 60-70% of Potential Cases and Cases, respectively, displayed abnormal levels of scratching, and paw licking/chewing was seen in 53-63% of these groups. 44-59% displayed abnormal levels of licking/chewing other body areas, and 46% of Cases and 34% of Potential Cases repeatedly rubbed their faces. Comparatively, only 2-7% of Controls and 10-19% of Others were seen exhibiting these behaviours. It must be noted that these allergy signs were not used to classify the Potential Cases.
Questions relevant only to dogs with skin conditions. For dogs reported to have had a skin condition, 48% of Cases said their vet had suggested a diet trial, compared to 16% of Potential Cases, and 6% of Other.
With regards to abnormal and damaged skin, Cases and Potential Cases exhibited a much greater proportion of reddened, damaged, rough/scaly skin and bald/thinning fur than did dogs in the Other category (Fig. 4). Other dogs were least likely to exhibit every form of abnormal skin, and Potential Cases exhibited slightly to moderately fewer of all abnormalities than Cases, with all differences statistically significant save for 'greasy' skin (Supplementary Table S4).
The main areas of the dogs' bodies that were affected by the skin abnormalities shown in Fig. 5, for Cases and Potential Cases compared to Others, were: fore paws (41% and 29% vs. 3%), hind paws (28% and 21% vs 1%), armpits (axillae: 36% and 23% vs 10%); underbelly (57% and 36% vs 27%) and elbows (14% and 18% vs 4%) (Supplementary Fig. S4 and Supplementary Table S5). Potential Cases had significantly fewer lesions than Cases in every region except for the elbows, collar region and back, but they had significantly more lesions than Others everywhere except for the inside of their ears, and their back (Supplementary Table S5). Although less affected, Potential Cases showed a similar pattern to Cases in the location of skin abnormalities by body area (Fig. 5).
Of the seven a priori expectations that were tested in order to provide evidence for construct validity (whether a test measures what it purports to measure) of the questionnaire, all seven were met (Table 5).
Discussion
The aim of this study was to develop a questionnaire for completion by dog owners with accompanying classification criteria that could identify dogs with diagnoses of cAD and other skin conditions, as well as suitable controls, and enable us to predict those dogs that may have thus-far undiagnosed cAD. During the 4-months that the survey was open we were able to recruit over four thousand dog owners, with 4,111 useable questionnaires. Epidemiological evaluation of the questionnaire answers revealed four groups of dogs: Cases (dogs with diagnosed cAD); Controls; 'Potential Cases' which may have undiagnosed cAD and Other (dogs with non-cAD skin conditions).
It must be acknowledged that there was no control over diagnosis in this study. The questionnaire, named the canine atopic dermatitis research and diagnosis questionnaire (cAD-RQ), relies upon owner report of veterinary diagnoses. As there is no standard diagnostic test for cAD, practitioners must rely upon the presence and severity of a range of clinical signs and their own experience to make a diagnosis 7 . It is therefore possible that dogs may have been misdiagnosed due to similarities with other conditions. The cAD-RQ is therefore considered to evaluate cAD sensu lato (in the broad sense), i.e. potentially arising from the widest range of allergens. Control animals were over the age of 3 years and had exhibited no signs of skin disease.
To establish construct validity of the cAD-RQ as evaluating true cases of cAD, and identifying true potential cases, the results of this study were compared to what would be expected for dogs with clinically sound presentation. All of the findings presented here match well to what would be expected if our group of owner-reported Cases were true cases of cAD, providing support for the construct validity of the cAD-RQ as a tool for evaluating cAD via owner report.
The pattern of affected body areas for Cases and Potential Cases matched well to the pattern that would be expected for dogs with cAD 11 , with the most commonly affected areas being the front and back paws, axillae, inner ears and underbelly/groin. The muzzle and eye area would also be expected to be a commonly affected body region for cAD 11 , but here only Cases exhibited itching in the muzzle area for more than 20% of dogs. As with previous studies, the majority of Cases developed clinical signs when under 3 years of age; 81% of dogs here compared to 78% previously reported 19 , confirming that one fifth of Cases develop cAD when aged over 3. Further, Cases scored significantly higher on the modified-Edinburgh Pruritus Scale than any other group and had a significantly higher cumulative incidence than any other group for associated skin conditions including ear, yeast and bacterial infections and food allergies as would be expected 20 . Based upon these results there is good reason to believe that the dogs reported here as having cAD by their owners do indeed represent the standard phenotypic presentation for the disease and can as such be considered true cases. Further, these dogs can be considered representative of the diagnosed population of dogs, showing the full spectrum of the disease, rather than a narrower group that might be included in other case-control studies focusing on an extreme phenotype.
It is expected that prevalence estimates for cAD based upon veterinary diagnosis are underestimated due to mild cases not being taken to the vets and a lack of recognition/recording of certain signs as being indicative of cAD 1 . Here, by collecting data directly from owners, the cAD-RQ has shown its strength. Further evaluation of the questionnaire answers allowed us to identify two new sets of criteria within the cAD-RQ itself that were able to correctly classify 89-94% of diagnosed cases and exclude 99.9% of controls. The specificity of these criteria means that 29-31% of dogs identified by them did not, at the time of the survey, have a diagnosis of cAD. However, investigations of the clinical signs reported by the owners of this group of 'false positives', referred to here as Potential Cases, revealed that they presented very similarly to diagnosed cases, if less severely, and very differently from Other non-cases and controls. It is highly plausible therefore to propose that these 'false positives' could in fact be considered dogs with an allergic skin disease that has thus far gone undiagnosed and that they truly are potential cases. Reasons for this could be varied: some owners reported in the free text that their dog had only just started showing signs of skin problems and was currently undergoing veterinary investigation, others that their dog's skin problems did not impact the dog's quality of life or had been considered too mild to warrant veterinary attention. The implications of this are vast when considering future management options for the disease, especially where breeding programmes are concerned, as many dogs may have the condition but lack an official diagnosis. Tools such as the questionnaire proposed here could help to identify such dogs. We envisage that the cAD-RQ and the associated diagnostic criteria presented here could be utilised in a clinical setting prior to clinical evaluation to capture skin health information relevant to diagnosis of cAD in a standardised manner. The criteria are suitable for use as a diagnostic aid only once clinical signs have begun to be noticed, so it would not be recommended that they be used to identify a dog as being 'clear' of allergies whilst it is still of an age where it is at risk (i.e. under 3 years of age).
A similar questionnaire to the cAD-RQ was presented in a recent study of Finnish dogs (published after this study had concluded) by Hakanen and colleagues 21 . They were able to show that allergy signs could be meaningfully combined to form an 'allergy index' . In our study, a much higher proportion of dogs with a known diagnosis and Potential Cases exhibited allergy signs than did controls and 'others' , yet allergy signs themselves weren't included in the classification criteria. It is possible that an allergy index similar to that presented by Hakanen and colleagues could be used to further classify the severity of the cases and undiagnosed cases.
An important incidental 'finding' of this study was the strong willingness for dog owners to take part in this research, even when their dog had no skin problems. This allowed us to collect a large amount of skin health data in a relatively short period of time and, in conjunction with the strong construct validity of the questionnaire, acts as a proof-of-concept for the epidemiological evaluation of complex health conditions like cAD via owner report questionnaires.
Whilst this study only included purebred Labradors and Golden retrievers, it is anticipated that the tool could be used for any breed of dog due to the general nature of the criteria identified here and the focus on clinical signs common between breeds, which would allow for intra-breed comparisons. Previous diagnostic criteria have been shown not to be heavily impacted by breed variations 10 although a validation study will need to be conducted to confirm this and make breed-specific adjustments to the cAD-RQ if necessary. The nature of this disease is too complex for any one tool or set of diagnostic criteria to be used in isolation. However, the questions presented here could be of value not only for researching cAD (in the broad sense) and associated skin conditions but for use in the veterinary setting as a method of collecting standardised data on the life history and clinical features of presenting dogs prior to a clinical evaluation. Recorded veterinary histories tend to be functional rather than detailed, and by using this questionnaire, detailed standardised data could be collected regarding skin health
recorded directly by the owner prior to their consult, saving time in already time-pressured consultations and assisting in the diagnostic process.

Table 5. Expected associations between Cases, 'false positives', Controls and other groups if the Cases and 'false positives' were to reflect accepted clinical phenotypes of cAD. References are given where evidence for an expectation exists in the published literature. Construct validity for the questionnaire as identifying true cases of cAD is shown when these expectations are met.
1. Cases should score highest on the modified-Edinburgh pruritus scale, whilst Potential Cases should score similarly to Cases. Met: Yes.
2. Other skin conditions including ear infections, food allergies and yeast infections should have a higher disease incidence amongst Cases and Potential Cases than Controls and Others. Met: Yes.
3. Acute moist dermatitis should have a higher disease incidence in Golden retrievers than Labradors (Wiles et al. 25). Met: Yes.
4. Ear and yeast infections should have a higher disease incidence in Labradors than Golden retrievers (Layne & DeBoer 24). Met: Yes.
5. Cases and Potential Cases should exhibit far greater recurrent signs of allergies than Controls and Others. Met: Yes.
6. Cases should have an itch and lesion distribution focused mainly on their fore and hind paws, axillae, inner ears, muzzle, eyes and ventral abdomen (Hensel et al. 11). Met: Yes (although eyes and muzzle were less affected).
7. Approximately 4/5 of Cases should have first developed clinical signs when under 3 years of age (Favrot et al. 19). Met: Yes.
Methods
Questionnaire development. A questionnaire for dog owners was developed to evaluate their dog's skin health with respect to the clinical signs of cAD. Initial questions were asked of all respondents and were used to determine which further questions were asked. All participants were asked to record any skin-related diagnoses that had been made by a veterinarian, as well as any undiagnosed skin conditions. Participants were further asked whether their dog had now or in the past exhibited clinical signs related to pruritic skin conditions and were asked to score their dog on a modified Edinburgh pruritus scale 17 ; modifications were minor wording changes and additional descriptions of itch-related behaviour to further define the categories. Participants who answered that their dog had had no diagnosed skin problems or signs of skin disease were sent to the end of the questionnaire after completing the first page of the skin health questions. All other participants were asked further questions presented in an owner-friendly manner, relating to their dog's skin condition, using question logic to ask questions only when relevant (i.e. based upon the owner's answer to the previous question, see Supplementary Table S1 for all questions). These further questions were developed based upon dermatological literature regarding the clinical signs and diagnosis of cAD 11,15,22 . Additional questions were also asked regarding the demographics of the dog and living environment, which will be evaluated in a separate publication.
The questionnaire was piloted with owners of six atopic dogs from a local veterinary practice and feedback on the questions' applicability, user-friendliness and clarity was requested to aid question refinement. Further refinement of the questions to aid user-friendliness was conducted via an in-depth review by researchers within the Centre for Evidence-based Veterinary Medicine (CEVM).

Distribution. In order to limit the variability in the presentation of clinical signs and to gather a large enough number of responses to help validate the tool, for the purpose of this study only two of the UK's most popular breeds were included: purebred Labrador and Golden retrievers. The final questionnaire (see Supplementary Table S1 and Supplementary Fig. S1 for all questions as they were trialled) was hosted on SurveyGizmo.com and could be accessed via a project-specific website (www.itchydogproject.co.uk). The project was advertised through relevant media sources (i.e. Vet Record, the Vet Times and dog magazines), via social media (Facebook and Twitter), and was listed on The Dog Science Group webpage and The Kennel Club BARC site. Further recruitment was conducted via The Kennel Club, who sent direct emails out to registered owners of Labradors and Golden retrievers providing details of the project and inviting them to participate. All owners of these breeds were invited to participate no matter what the condition of their dog's skin, with it being made clear that they did not have to have atopic dermatitis or itchy skin to take part. Owners of dogs aged less than 3 years of age with no skin conditions at all were excluded, due to the possibility that their dog may yet develop cAD and so could not be considered to be a control. If a dog had a skin condition it could participate at any age. Dogs of breeds other than purebred Labradors and Golden retrievers were excluded upon registration.
The online questionnaire was open for 4 months between March 11th and July 10th, and the full dataset was downloaded on the 10th of July as a CSV file for cleaning in Excel.
This project was conducted in accordance with the University of Nottingham's Code of Research Conduct and Research Ethics and has been approved by the University of Nottingham, School of Veterinary Medicine and Science research ethics committee (identification reference 1979,170217). All potential participants were fully informed about the project and were provided with contact details to ask further questions or withdraw from participation. Written, informed consent to take part in the project and for the data to be used was gained upon registration for the project at the beginning of the questionnaire, without which they could not participate.

Data cleaning. The data was cleaned by removing disqualified responses, i.e. those where no consent was given or where the dog was a breed other than purebred Labrador or Golden retriever. Partial responses were compared against completed responses (using the columns containing the owner's name and the dog's name) to identify owners who had subsequently completed the questionnaire. Where this occurred, the partial response was removed.
Due to the nature of the 'Date' question in SurveyGizmo.com, people had the option to type in the date their dog was born, which led to great variation in the format the date took, and in many cases, left no way to distinguish whether they had used a British date format (dd/mm/yyyy) or US date format (mm/dd/yyyy). For this reason, exact ages at the time of survey completion could not be calculated, so the last two digits from every date of birth entry were selected to provide the year the dog was born. The year they were born was subtracted from '17' to calculate the dog's age to the nearest year.
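The year-of-birth extraction described above could be implemented along the following lines; this is an illustrative sketch rather than the authors' Excel procedure, and the helper name and survey year are hypothetical.

```python
from typing import Optional

import pandas as pd

def age_from_dob_string(dob: str, survey_year: int = 17) -> Optional[int]:
    """Approximate age (to the nearest year) from a free-text date of birth.

    Because dd/mm/yyyy and mm/dd/yyyy entries cannot be distinguished, only the
    last two digits (the year) are used, and age = survey year minus birth year.
    """
    digits = "".join(ch for ch in str(dob) if ch.isdigit())
    if len(digits) < 2:
        return None                      # unparseable entry: leave the gap
    birth_year = int(digits[-2:])        # last two digits of the date of birth
    return survey_year - birth_year

# dogs["age_years"] = dogs["date_of_birth"].apply(age_from_dob_string)
```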
The term 'hot spots' is often used by the public to refer to wet eczema, whilst 'acute moist dermatitis' is the veterinary terminology. Where owners did not select 'wet eczema' under skin conditions but went on to report that their dog had hot spots or acute moist dermatitis in the text boxes associated with the questions "Another skin or ear disease not listed here" or "Undiagnosed skin problems", they were classified as having had wet eczema.
All checkbox answer options were dummy coded within Excel so that a selected answer was scored with a '1' and an unselected answer was scored as a '0'. Any questions that had NA as an option were recorded as NA.

Statistical analysis. Skin health data was analysed using SPSS v.22 (SPSS Inc., Chicago, IL). Dogs were first classified as a Case, Control or Other using a two-step process according to the logic outlined in Supplementary Table S1. For dogs in the Other group, further skin conditions were identified from information provided in the free text boxes when owners selected that their dogs had either "Another skin or ear disease not listed here" or "Undiagnosed skin problems".
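For illustration, a minimal sketch of the dummy-coding step in Python (rather than Excel) is shown below; the handling of blanks and the 'NA' option mirrors the description above, but the function and column handling are hypothetical.

```python
import numpy as np
import pandas as pd

def dummy_code(series: pd.Series) -> pd.Series:
    """Checkbox answers: selected -> 1, blank/unselected -> 0, explicit 'NA' option -> missing."""
    return series.map(
        lambda x: np.nan if x == "NA" else 0 if (pd.isna(x) or x == "") else 1
    )

# coded = raw_answers.apply(dummy_code)   # raw_answers: DataFrame of checkbox columns
```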
A new set of diagnostic criteria was identified from the questionnaire itself using simulated annealing. Simulated annealing is a data reduction technique that allows the selection of the most relevant subset of predictors from a much larger set of predictors in predictive modelling 23 . For large datasets, predictors can be redundant and noisy, and using the whole dataset can lead to overfitting and hence poor predictions. Therefore, reducing the number of predictors, also known as feature selection, is an important step in predictive modelling. Several different heuristic optimisation algorithms exist that can find a subset of features that minimise the prediction errors. Here, the error function selected to minimise was based on the performance of the classification, defined as 1 - overall accuracy, where overall accuracy is the proportion of dogs classified correctly. Accuracy and performance used to produce the cost functions were obtained using a random forest classifier. The partitioning of the classifier was performed using a 5-fold cross validation: the dataset was divided into 5 subsamples, where 4 subsamples were used for training and 1 subsample for validation. The process was then repeated 5 times until each of the 5 subsamples was used once for validation. The cost of the classifier was computed using the average cost function for all 5 subsamples. A series of question subsets from 5 to 24 were selected, with the maximum number of iterations set to 30, the maximum number of sub-iterations set to 10 and the initial temperature set to 10 with a temperature reduction rate of 0.99. The code for the simulated annealing procedure was written in Matlab R2017a (MathWorks Inc.).
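The Python sketch below illustrates the general shape of such a simulated-annealing feature search wrapped around a cross-validated random forest. It is not the authors' Matlab implementation: the neighbourhood move (swapping one feature), the random-forest settings and the data interface are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def subset_cost(X, y, mask):
    """Cost = 1 - mean 5-fold cross-validated accuracy on the selected columns."""
    if mask.sum() == 0:
        return 1.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=5, scoring="accuracy").mean()
    return 1.0 - acc

def anneal_features(X, y, n_features, max_iter=30, temp=10.0, cooling=0.99, rng=None):
    """Search for a feature subset of size `n_features` that minimises the cost."""
    rng = rng or np.random.default_rng(0)
    n_total = X.shape[1]
    mask = np.zeros(n_total, dtype=bool)
    mask[rng.choice(n_total, size=n_features, replace=False)] = True
    cost = subset_cost(X, y, mask)
    best_mask, best_cost = mask.copy(), cost
    for _ in range(max_iter):
        # Propose a neighbour: swap one selected feature for an unselected one.
        new_mask = mask.copy()
        out_idx = rng.choice(np.where(new_mask)[0])
        in_idx = rng.choice(np.where(~new_mask)[0])
        new_mask[out_idx], new_mask[in_idx] = False, True
        new_cost = subset_cost(X, y, new_mask)
        # Always accept improvements; accept worse subsets with a cooling probability.
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            mask, cost = new_mask, new_cost
            if cost < best_cost:
                best_mask, best_cost = mask.copy(), cost
        temp *= cooling
    return best_mask, best_cost
```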
Following the simulated annealing procedure, each of the predictors in the subsets obtained was tested for significance against case/non-case status using a Fisher test, and the number of times each predictor appeared in a set of features was calculated. The most relevant features were considered to be those that appeared in 5 or more feature lists obtained through simulated annealing and that were significant at p < 0.05 in the Fisher test. In this way a list of questions that could be used to predict diagnosis of cAD was selected. Since all questions were binary, summed scores were tested for classification performance on the whole dataset of 4,111 dogs. Using the summed score on the entire population enabled us to isolate potential undiagnosed cases of atopic dermatitis ('false positives') from the group of dogs with 'Other' skin conditions, creating four groups for comparison 24 : Cases, Controls, Potential Cases and Others.
|
v3-fos-license
|
2023-10-13T14:02:07.072Z
|
2023-10-13T00:00:00.000
|
263912688
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://animalmicrobiome.biomedcentral.com/counter/pdf/10.1186/s42523-023-00271-7",
"pdf_hash": "8e525f3d2b6501bdeb02e04a8fb623c11fb6d3c8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44127",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "ad96a0a17e366b155a2fd067f86952e85d5dc786",
"year": 2023
}
|
pes2o/s2orc
|
Host phylogeny and environment shape the diversity of salamander skin bacterial communities
The composition and diversity of animal-associated microbial communities are shaped by multiple ecological and evolutionary processes acting at different spatial and temporal scales. Skin microbiomes are thought to be strongly influenced by the environment due to the direct interaction of the host’s skin with the external media. As expected, the diversity of amphibian skin microbiomes is shaped by climate and host sampling habitats, whereas phylogenetic effects appear to be weak. However, the relative strength of phylogenetic and environmental effects on salamander skin microbiomes remains poorly understood. Here, we analysed sequence data from 1164 adult salamanders of 44 species to characterise and compare the diversity and composition of skin bacteria. We assessed the relative contribution of climate, host sampling habitat, and host phylogeny to the observed patterns of bacterial diversity. We found that bacterial alpha diversity was mainly associated with host sampling habitat and climate, but that bacterial beta diversity was more strongly associated with host taxonomy and phylogeny. This phylogenetic effect predominantly occurred at intermediate levels of host divergence (0–50 Mya). Our results support the importance of environmental factors shaping the diversity of salamander skin microbiota, but also support host phylogenetic history as a major factor shaping these bacterial communities. Supplementary Information The online version contains supplementary material available at 10.1186/s42523-023-00271-7.
Introduction
Throughout evolutionary history, microbial communities have established multiple symbiotic interactions with animals [1-3]. Different animal organs (e.g., gut, skin) represent distinct and unique microhabitats that enable colonisation of different microbial taxa, some of which establish mutualistic relations with their host [4-6]. The composition and diversity of these animal-associated microbial communities (microbiomes) are shaped by multiple ecological and evolutionary processes acting at different spatial and temporal scales [7-10]. In many animal groups, closely related host species harbour microbiotas with similar composition [1,4,6], which can be attributed to host phylogenetic effects. However, microbiome similarities are also shaped by host-associated factors, such as immunity and diet [11,12], and environmental factors such as microhabitats and climate [4,5].

The relative influence of distinct factors depends on the nature of the animal-microbial interaction (e.g., organ system) and the physiological and immunological characteristics of the host [3,6,11,12]. Unlike gut microbiomes, skin microbiomes are thought to be more strongly influenced by environmental factors due to the direct interaction of the host's skin with the external media. For instance, the skin microbiota of amphibians is largely influenced by large-scale climatic factors (e.g., precipitation, temperature) and microhabitats (e.g., arboreal, terrestrial, or aquatic lifestyles) [9,13,14]. Also, host developmental transitions [15] linked to immunological changes [16], innate immunity, and host genetic diversity (e.g., at the major histocompatibility complex [17]) play a major role in shaping these microbial communities [11,18]. Skin microbiomes in amphibians function as an extension of the host immune system [19] and could partially explain the variability in susceptibility of amphibian species to emerging pathogens [14].

The role of host phylogeny in shaping the skin microbiota of amphibians remains unclear, but evidence suggests host phylogenetic history has a significant, albeit weak, effect in shaping skin microbiomes [9,12]. However, several climatic factors likely carry a phylogenetic signal due to niche conservatism across the amphibian tree of life, which could be masking underlying host-phylogenetic effects. Furthermore, animal-microbiome interactions vary through evolutionary time [4,6] and the magnitude of phylogenetic effects on skin microbiome assemblages depends on the evolutionary scale being analysed for both hosts (e.g., species within genera) and microbes (e.g., bacterial orders) [4,6]. Indeed, several studies have found substantial differences in skin microbial diversity among distinct amphibian families, genera, species, and subspecies [9,20-23] and have suggested a more prominent effect of host phylogeny on skin microbiomes.

The contribution of different environmental and host-associated factors in shaping the skin microbiomes across the amphibian tree of life is not yet fully explored. Most studies addressing these questions have focused on frogs and toads (Anura) [9,12,14,18,24] and less attention has been paid to salamanders (Caudata) [15,22,23,25-27]. A global analysis of amphibian microbiomes included over 200 anuran species but fewer than 30 salamander species [9]. Salamanders have two key biological features that allow for a comprehensive assessment of the relative contribution of different factors shaping skin microbiomes: (i) adult salamanders can be either fully terrestrial (e.g., many plethodontids), fully aquatic (e.g., axolotls, hellbenders) or a combination of both, where adults move into aquatic habitats for reproduction but juveniles are terrestrial (e.g., newts), whereas only a few frog species are fully aquatic as adults [28,29]; and (ii) salamanders are geographically and climatically more restricted than frogs and toads, mostly inhabiting more temperate climates in the Northern hemisphere [28,29].

To address this knowledge gap, here we assess the relative contribution of host sampling habitat, climate, elevation, and host phylogenetic relationships to the diversity and structure of skin bacterial communities in salamanders. We compiled available 16S rRNA sequence data of salamander skin bacterial communities, including newly generated data by our working group, and used sampling habitat and climatic data to test for host-associated and environmental correlations with bacterial diversity. Finally, we constructed a dated phylogeny for extant salamanders to test for evidence of a phylogenetic signal in the composition of skin bacteria.
Host habitat and taxonomy influence skin bacterial diversity
We used 16S rRNA amplicon sequence data from 21 datasets to compile 1,164 adult salamander samples across 87 localities and characterise the diversity and composition of skin bacterial communities in 44 host species; two host species are represented only by captive individuals (Echinotriton andersonii and Eurycea waterlooensis). Most of the sampled localities lay within centres of salamander species richness, especially in North America (Fig. 1). The 44 host species represent five out of the ten currently recognized salamander families (Fig. 2; Additional file 1: Fig. S1), but the majority of sampled species belong to the two largest families, Plethodontidae and Salamandridae. Most species are represented by individuals sampled from either terrestrial or aquatic habitats, with the exception of five species of Salamandridae in which individuals were sampled from both terrestrial and aquatic habitats (Fig. 2a).

After bioinformatic processing of the data we obtained Amplicon Sequence Variants (ASVs) that were taxonomically assigned, and focused our analyses on the bacterial order and family levels (see Methods). We found 223 bacterial orders and 453 bacterial families across all salamander samples. We identified 25 bacterial orders and 23 families shared among all salamander species, irrespective of host sampling habitat or family (Additional files 9 and 10); sixteen shared orders were successfully assigned to recognized bacterial taxa and were used for subsequent analyses. These shared orders comprised 16.6-77.4% of the relative abundances of ASVs across all host species (Fig. 2b). Thirteen bacterial orders had a median prevalence > 80% across host species, and five of these orders (Rhizobiales, Sphingobacteriales, Pseudomonadales, Xanthomonadales, and Actinomycetales) kept high levels of prevalence (> 80%) in three quarters or more of the sampled salamander species (Additional file 1: Fig. S2). However, other shared bacterial orders showed more variable prevalence among host species (e.g., Myxococcales).
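A prevalence calculation of the kind summarised above can be sketched as follows: for each bacterial order, the fraction of samples per host species in which the order is present. The table and column names are hypothetical placeholders, not the study's actual data objects.

```python
import pandas as pd

def order_prevalence(abund: pd.DataFrame, meta: pd.DataFrame) -> pd.DataFrame:
    """Prevalence of each bacterial order per host species.

    `abund` is a samples x orders relative-abundance table; `meta` is indexed by
    the same sample IDs and holds a 'host_species' column.
    """
    present = (abund > 0).astype(int)                 # presence/absence per sample
    present = present.join(meta["host_species"])      # attach host species labels
    return present.groupby("host_species").mean()     # fraction of samples with the order

# prev = order_prevalence(order_table, sample_metadata)
# shared = prev.columns[(prev > 0).all()]             # orders detected in every host species
```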
By implementing Linear Discriminant Analysis Effect Size (LEfSe) we identified bacterial orders whose relative abundances explain differences among samples from distinct sampling habitats or salamander families. We found 91 bacterial orders with statistically significant differences between the terrestrial and aquatic sampling habitats, but only 69 could be taxonomically assigned to named orders: 44 with higher abundances in terrestrial habitats and 25 with higher abundances in aquatic habitats (Additional file 11). In turn, we found 18 bacterial orders with differences among host families, out of which 13 could be taxonomically assigned to named orders. All but one of the bacterial orders with differences among families were also identified as differentially abundant among host sampling habitats (Additional file 12). Overall, we found evidence of different abundance profiles solely by sampling habitat in 74 bacterial orders (out of 91), whereas only one order (out of 18) showed higher abundances by host family, specifically in the Cryptobranchidae family.
Climate influences skin bacterial diversity
We fitted a linear mixed model to assess the influence of climatic variables on alpha diversity of the salamander skin bacteria, while accounting for the effects of host habitat and taxonomy. Our model included host sampling habitat, host family, seven bioclimatic variables, elevation, and two monthly variables as fixed effects, together with random-effect terms (Additional file 1: Table S1). This model showed that bacterial alpha diversity varied the most as a function of salamander habitat and family, yet climatic variables also had a non-negligible influence on alpha diversity (Additional file 1: Fig. S4). The fixed effects of this model accounted for 23.6% of the observed variance (marginal R²); when the random effects were considered (conditional R²) the model accounted for 33.1% of the variance.

In the multivariate context, salamanders sampled from terrestrial habitats showed higher levels of bacterial alpha diversity relative to those sampled from aquatic habitats (the reference level), whereas those belonging to the families Cryptobranchidae, Plethodontidae, and Salamandridae exhibited higher levels of alpha diversity relative to those from the family Ambystomatidae (the reference level) (Additional file 1: Fig. S4). The model included climatic variables associated with temperature and precipitation, as well as elevation, but only one bioclimatic variable (precipitation of the driest quarter, bio17) showed a significant negative effect on bacterial alpha diversity (Additional file 1: Fig. S4). That is, alpha diversity was lower in samples taken from localities with higher dry-season precipitation (while controlling for all other factors), so samples from localities with more pronounced 'dry' seasons tend to have more diverse bacterial assemblages.
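A linear mixed model of this general form could be specified as in the sketch below; the diversity metric, variable names and the choice of grouping for the random effect are illustrative assumptions rather than the model actually fitted in the study.

```python
import statsmodels.formula.api as smf

def fit_alpha_model(samples):
    """Alpha diversity as a function of habitat, host family, climate and elevation,
    with a random intercept per source study (the grouping is an assumption)."""
    model = smf.mixedlm(
        "shannon ~ habitat + host_family + bio17 + elevation + precip_month + temp_month",
        data=samples,
        groups=samples["study_id"],
    )
    return model.fit()

# result = fit_alpha_model(sample_table)
# print(result.summary())   # fixed-effect coefficients and random-effect variance
```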
To disentangle the relative contributions of climatic and host factors to bacterial beta diversity we performed a distance-based redundancy analysis (dbRDA) using the weighted UniFrac (wUF) and unweighted UniFrac (uwUF) dissimilarity matrices; we fitted models using both climatic and host factors. Our models for both wUF and uwUF included the effects of host sampling habitat and family, nine bioclimatic variables, and two monthly variables (Additional file 1: Tables S2-S3). For wUF, we retrieved 12 statistically significant canonical axes (p value < 0.05) that collectively explained 25.26% of the observed variance in beta diversity across samples; for uwUF we retrieved 16 statistically significant axes explaining 14.45% of the variance. Overall, our models showed that climatic variables had a larger influence than host sampling habitat and family on skin bacterial composition (uwUF) and structure (wUF). A PERMANOVA over each variable showed that bio2 (mean diurnal range), precipitation, and bio18 (precipitation of the warmest quarter) had the three largest effect sizes (p = 0.001; df = 1) on both the wUF and uwUF matrices (Additional file 1: Tables S2-S3). Host sampling habitat had a significant (p = 0.001; df = 4), but smaller, effect size on both wUF and uwUF distances.
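The single-factor PERMANOVA step can be illustrated with scikit-bio as below; the distance matrix, sample IDs and metadata are assumed inputs, and this sketch does not cover the dbRDA itself.

```python
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

def permanova_by_factor(wuf_matrix, sample_ids, grouping, permutations=999):
    """PERMANOVA for one factor (e.g., host sampling habitat) on a wUF matrix."""
    dm = DistanceMatrix(wuf_matrix, ids=sample_ids)
    return permanova(dm, grouping=grouping, permutations=permutations)

# result = permanova_by_factor(wuf, ids, metadata["habitat"])
# print(result)   # pseudo-F statistic and permutation p value
```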
Salamander host phylogeny is correlated with skin bacterial community structure
Based on mitochondrial and nuclear loci, we reconstructed a Maximum Likelihood phylogenetic tree for 580 species of Caudata (~82% of extant salamander species) and performed fossil-based molecular dating (Additional file 1: Fig. S1). We assessed the influence of the salamander host phylogeny on bacterial beta diversity by employing Mantel and partial Mantel tests using pairwise patristic distances (here in time units) and bacterial Bray-Curtis dissimilarities at different levels of bacterial taxonomy. To account for topological and branch-length uncertainty in the salamander phylogeny, we estimated Mantel correlations using a sample of 100 bootstrap trees plus the best-scoring ML tree (n = 101), resulting in median rM and rMp estimates ranging from 0.04 to 0.35 and 0.01 to 0.26, respectively, across bacterial taxonomic levels (Additional file 1: Fig. S5). Overall, the Mantel correlation tests consistently revealed a significant positive phylogenetic signal in skin bacterial structure at the bacterial order and family levels (Fig. 4; Additional file 1: Fig. S6; Additional file 1: Table S4), where the dissimilarity of skin bacterial assemblages increased as evolutionary distances increased among host species (Fig. 4a). The partial Mantel tests consistently retrieved significant positive correlations, after controlling for climatic distances among host species, only for the bacterial order level (Additional file 1: Fig. S5).

To test the evolutionary scale at which positive phylogenetic signals were occurring, we estimated Mantel correlograms to assess how rM varied at different temporal scales across the salamander phylogeny. While accounting for uncertainty in the host phylogeny, we found a positive phylogenetic signal of bacterial order composition, but only within the first four distance classes, which are roughly equivalent to the last 50 million years of salamander evolution (Fig. 4b). This pattern was also observed when using bacterial dissimilarity matrices at the family level (Additional file 1: Fig. S6).
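A basic Mantel test between host patristic distances and community dissimilarities can be sketched with scikit-bio as follows; the inputs are assumed to be square species-by-species matrices with matching order, and the correlation method shown (Pearson) is an assumption rather than the study's exact setting.

```python
from skbio import DistanceMatrix
from skbio.stats.distance import mantel

def phylosignal_mantel(patristic, bray_curtis, species_ids, permutations=999):
    """Mantel correlation between host phylogenetic and bacterial Bray-Curtis distances."""
    dm_phylo = DistanceMatrix(patristic, ids=species_ids)
    dm_bact = DistanceMatrix(bray_curtis, ids=species_ids)
    r, p, n = mantel(dm_phylo, dm_bact, method="pearson", permutations=permutations)
    return r, p, n

# rM, p_value, n_species = phylosignal_mantel(patristic_mat, bc_mat, species)
```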
Discussion
Here, we assessed the relative contributions of host habitat, climate, and host phylogenetic relationships to the diversity and structure of skin bacterial communities in salamanders. In agreement with previous studies on amphibian skin microbiota, we found that host sampling habitat (terrestrial/aquatic) [13,30], precipitation, and seasonality play a major role in shaping the diversity of salamander skin bacterial communities [9,14,15,22]. We also inferred that host phylogenetic relationships have an important effect in shaping these bacterial communities [22], which contrasts with previous studies in which phylogenetic effects were minor [9,13,26].

A recent study across a wide diversity of animal-associated microbiomes showed that bioclimatic variables related to temperature and precipitation were relevant in shaping host-associated external microbiomes, in contrast with internal microbiomes, which are mainly influenced by host diet and phylogeny [11]. Specifically for amphibians, two studies on anurans at continental to global scales explored the relative contributions of distinct biotic and abiotic factors and found evidence that skin bacterial diversity is mostly influenced by long-term temperature and precipitation averages [9,14]. Our findings agree with these results in revealing an effect of average climate regimes (specifically precipitation seasonality [9]) on salamander skin bacterial diversity. However, relying on long-term climate averages (e.g., yearly bioclimatic variables from WorldClim [31]) leads to loss of information on local, year-to-year variations in climate; in this case, samples can share the same climate averages but differ in their levels of bacterial diversity due to short-scale temporal variation. Indeed, by incorporating monthly climatic variables into our analyses, we found that precipitation at the time of sampling (month) had a significant and positive effect on bacterial beta diversity. This agrees with observations of significant variation in bacterial communities across long- and short-term time scales [8,13,15]. We hypothesise that long-term seasonal effects may explain higher bacterial alpha diversity in salamanders' skin due to increased temporal turnover in community composition [9]. In addition, short-term increases in precipitation may result in higher bacterial turnover due to increased interchange of bacteria across multiple sources facilitated by rain and water movement across the ecosystem [32]. To further explore the temporal and spatial dynamics of amphibian skin microbiomes, researchers should include more precise spatial and temporal data on climate and other environmental factors (e.g., water pH, salinity), and more detailed information on hosts' life history traits and behaviour at the population level.

We found that local-scale host sampling habitat (e.g., terrestrial, aquatic) had a major influence on skin bacterial alpha and beta diversity. Environmental bacteria are considered one of the main sources of microbial diversity for amphibian skin microbiomes [25,26,33,34], and evidence has shown that host habitat is one of the major drivers of anuran and caudate skin microbial diversity [12,26,30,35]. Our results showed differences between individual salamanders sampled from aquatic and terrestrial habitats and that specific bacterial orders differed in relative abundances between these habitats. The sampling habitats we included do not reflect the entire set of habitats explored by salamander species and only refer to the habitat where individuals were found. Therefore, the bacterial communities described here are likely a subset of the species' entire bacterial diversity. We also identified a set of bacterial taxa shared among all sampled salamander species, yet only a small proportion of salamander species have been sampled and several salamander families remain unsampled (Additional file 1: Fig. S1). These results should be taken with caution because varying sampling effort across salamander hosts likely impacts estimation of bacterial prevalence across samples. Furthermore, the data we gathered revealed that most studies on salamander microbiomes are focused on single host species from either aquatic or terrestrial habitats; out of 21 studies, only four included samples from both habitats, whereas only three included samples from different families. This complicates teasing apart the influence of study design (e.g., differences in sampling or sequencing among studies [36]) from that of biological factors (e.g., habitat or family).

Our results showed that host habitat and family were confounded and some bacterial taxa appeared enriched simultaneously by both factors. Thus, some of the differences in bacterial relative abundances we see across habitats may be related to host phylogenetic history. Indeed, our analyses show that the host family, independent of habitat, is an important factor influencing alpha and beta diversity of skin bacterial communities in salamanders. These results agree with previous inference of a phylogenetic effect when comparing skin bacterial communities between different host orders [20] or genera within families [22,27]. However, in other cases the effect of habitat/environment has been stronger in host species within the same genera [26] or genera within the same family [37]. Most of these analyses do not use direct measures of phylogenetic distance among species (e.g., divergence times or branch lengths) and instead rely on comparisons among different taxonomic entities (e.g., genera or families).
To tackle the effect of host phylogeny on skin bacterial diversity we constructed a dated salamander phylogeny and directly used branch-length distances among host species (in millions of years). By doing this, we found a significant role of host phylogenetic relationships in shaping skin bacterial composition, even after controlling for climatic differences among host ranges. More specifically, we found significant positive correlations between bacterial community distances (Bray-Curtis) and host phylogenetic distances, where similarity in salamander skin bacterial communities increases with decreasing host phylogenetic distance. These correlations are robust to topological and divergence-time uncertainty of the salamander phylogeny. In other words, we uncovered a general tendency where skin bacterial communities of closely related host species resemble each other more than those of host species drawn at random from the same tree. Recent meta-analyses spanning several amphibian families (mainly anurans) have found significant but weaker effects of host phylogeny (relative to other factors) using topological congruence analysis and other proxies of host phylogeny (i.e., nMDS of patristic distances) [9,11,12,14]. In these cases, the weaker phylogenetic signal probably stems from loss of statistical power, because distances in microbiota compositions based on dendrograms or nMDS and raw (true) distances are moderately to poorly correlated [4].

Furthermore, based on the results of previous studies [20,26,27,37] we believe that the scale at which the host-phylogenetic effect is being analysed might explain some of the discrepancies found in the strength of phylogenetic effects. Interestingly, we observed that the phylogenetic effect on salamander skin bacteria was stronger at intermediate levels of host divergence, even after controlling for climatic distances among host species, roughly corresponding to the last 50 million years of salamander evolution, and that deeper salamander divergences do not correlate with skin bacterial differentiation. We also observed that the phylogenetic signal was significant, albeit with varying strength, when using bacterial dissimilarities at different taxonomic levels (i.e., class, order, family). Although the phylogenetic signal decreased at higher taxonomic ranks, the current 16S data do not allow for a robust test at higher bacterial taxonomic ranks due to uncertainty in taxonomic assignments (e.g., genera).

Host-mediated environmental filtering (through traits unaccounted for in the analyses) may be playing a substantial role in determining skin bacterial composition in salamanders. Phylogenetic signals can be produced by host-mediated ecological filtering, in which host traits selectively filter microbes from the environment [4-6]. Internal microbiomes (e.g., gut) have been shown to have strong phylogenetic signal [4,6,38,39], whereas superficial microbiomes (e.g., skin) show weaker phylogenetic signals, specifically in amphibians [9]. This is thought to be the result of the latter being more prone to the effects of exogenous factors converging across the host phylogeny (e.g., climatic niche preferences). The strength of the phylogenetic signal would depend on the degree of phylogenetic correlation of the specific traits involved in ecological filtering of microbes. In our case, we assessed the weight of climate preferences of hosts in explaining the phylogenetic signal in skin bacterial communities and found that climatic distances explain some of the variance in beta diversity across hosts; however, most of the phylogenetic signal remains unaccounted for by these climate factors.

Overall, skin bacterial similarity in salamanders appears to be driven by recent host (and bacterial) evolution [4,39]. The evidence of phylogenetic signal across multiple levels of host divergence does not support an overarching effect of ecological filtering through environmental host preferences (climate and habitat), at least at the level of salamander families. However, our findings do not preclude an important role of host-mediated ecological filtering of skin microbes occurring at lower taxonomic ranks (e.g., between salamander genera or closely related species). Here, we argue that phylogenetic signal associated with variation in specific putative traits (e.g., genetic diversity, major histocompatibility complex, antimicrobial peptides) may be important to explain differences in skin bacterial composition, but that these putative traits are probably associated with the evolutionary history of salamander hosts [6].
16S rRNA amplicon sequence data
We gathered available published data on skin bacterial communities from salamander species (order Caudata) (last updated December 2022) from 20 studies [13,15,17,20,25,27,30,40-52]; these include data deposited in the National Center for Biotechnology Information (NCBI), the European Bioinformatics Institute (EBI), the Dryad repository, and additional data obtained through peer-to-peer data requests. We performed an extensive search using the SRA Run Selector tool from NCBI (https://0-www-ncbi-nlm-nih-gov.brum.beds.ac.uk/Traces/study/) to select studies with publicly available 16S rRNA sequence data generated with the Illumina platform. We downloaded the SRA sequences and associated metadata from Entrez search results. Most studies had sequence data for the V3-V4 and V4 ribosomal regions, but a couple of studies sequenced the ribosomal regions V2 and V3 (Additional file 2). Metadata obtained from each of the studies included: locality (latitude and longitude), sampling habitat (terrestrial/aquatic), sequencing primers, sequencing technology, collection date, and sample origin (wild/captive). Sampling habitat was determined based on information obtained from the metadata and methods originally provided in the published datasets/papers; in several cases we consulted directly with authors about sample provenance. This variable refers to the place and time where the animals were found and sampled and does not describe a life-history trait of the salamander species. We consider this approach more suitable for studying bacterial assemblages because it has been shown that animals of the same species sampled in different habitats show marked differences in bacterial composition (e.g., Sabino-Pinto et al. [30]).
The publicly available data and metadata we used included 16S rRNA amplicon sequences for 1031 samples from 37 salamander species. We added new bacterial data for seven Mexican salamander species generated by our working group. Specifically, we included 111 samples for six species of Plethodontidae (Aquiloeurycea cafetalera, Chiropterotriton nubilus, Parvimolge townsendi, Pseudoeurycea granitum, Pseudoeurycea lynchi, Pseudoeurycea nigromaculata) and 22 samples of Ambystoma mexicanum (Ambystomatidae). These species were sampled from wild populations except for A. mexicanum, which was sampled from simulated outdoor environments (mesocosms). For these species, we obtained samples by rinsing the skin with 25 ml of sterile water to eliminate transient microorganisms and then swabbing the skin with a sterile cotton swab. We extracted total genomic DNA from each swab using a Qiagen DNeasy Blood and Tissue kit (Qiagen, Germantown, USA) and amplified the V4 region using barcoded primers (515F-806R). Single-end amplicons were sequenced on an Illumina MiSeq 300 platform at the Dana Farber Cancer Institute. We also extracted and sequenced negative controls (dummy swabs), but these did not amplify during 16S library construction and thus were not included in subsequent sequencing.
The complete dataset includes the following samples taken from captive animals (Additional file 2): Echinotriton andersonii (22 samples) and Eurycea waterlooensis (28 samples). For E. andersonii and E. waterlooensis, captive individuals were taken from indoor environments that are not representative of their native habitat. All samples from captive individuals were included in the estimation of diversity metrics and bacterial relative abundances. The samples for A. mexicanum were taken from animals under simulated outdoor environments (mesocosms) that are located within the species' former native range (Xochimilco Lake). These are exposed to the same climate conditions as the original natural habitat and use water sourced directly from the lake. Considering the above, samples for A. mexicanum were included in all analyses, including linear mixed models and Mantel tests.
In sum, the 16S amplicon sequence data comprised 1,164 samples and contained a total of 2,677,200 reads, which were processed using semi-automated pipelines in QIIME2 version 2021.2.0 [53]. Prior to importing the sequence data into QIIME2, we aligned and assembled forward and reverse reads using the Paired-End reAd mergeR (PEAR) [54] and discarded sequence reads with a quality score below 20 or a length shorter than 100 bp using Trimmomatic [55]. We then processed the filtered data using QIIME2. All samples were rarefied to 2,300 reads per sample. Sequences were grouped by study of origin and processed using DADA2 [56]. To merge sequence reads from different 16S regions, we used the 'feature-table merge' and 'fragment-insertion' plugins implemented with the SILVA database tree in QIIME2 [53]. This allowed estimation of alpha and beta diversity indices at the ASV level.
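For readers who want to reproduce the even-depth subsampling step outside QIIME2, the following Python sketch illustrates rarefying one sample's ASV counts to a fixed depth (2,300 reads here). It is only a conceptual stand-in for the QIIME2 implementation, and the toy counts are invented.

```python
import numpy as np

def rarefy(counts, depth, seed=0):
    """Randomly subsample a vector of per-ASV read counts to a fixed depth
    without replacement; samples with fewer reads than `depth` are dropped."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    if counts.sum() < depth:
        return None  # discard the sample, as done when rarefying a feature table
    # Expand to one label per read, subsample, and re-tabulate per ASV.
    reads = np.repeat(np.arange(counts.size), counts)
    keep = rng.choice(reads, size=depth, replace=False)
    return np.bincount(keep, minlength=counts.size)

sample = [1200, 900, 450, 50, 0, 3]   # toy ASV counts for one sample
print(rarefy(sample, depth=2300))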
Climate and elevation data. We obtained bioclimatic data for each sampled locality using the corresponding geographic coordinates (latitude and longitude) provided in the original studies. We extracted information for the 19 bioclimatic variables from WorldClim 2.1 [31] at a 30 arc-second (~1 km²) spatial resolution; these data represent climate averages over the period 1970-2000 (Additional file 2). In addition, we used the reported collection date (month and year) at each sampling locality to obtain historical climate data from the monthly series available in WorldClim 2.1 [31]. These climate series include monthly data for precipitation and temperature (minimum and maximum) over the period 1960-2018. For each sample, we used the GPS coordinates to extract data for the corresponding month/year at a spatial resolution of 30 arc-seconds; for 193 samples taken in the period 2019-2021, we used the latest data available, from 2018. We extracted elevation data for each locality from the GTOPO30 global digital elevation model (US Geological Survey's Earth Resources Observation and Science Center) at a spatial resolution of 30 arc-seconds. Data extraction was performed using the raster [57] and sp [58,59] packages in R [60].
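The point-wise extraction of raster values can equally be sketched in Python with rasterio (the study used the R raster and sp packages); the GeoTIFF file name and the coordinates below are placeholders.

```python
import rasterio  # reads GeoTIFF rasters such as the WorldClim bioclim layers

def extract_at_points(raster_path, lonlat_pairs):
    """Return the raster value at each (longitude, latitude) pair."""
    with rasterio.open(raster_path) as src:
        # DatasetReader.sample yields one array of band values per coordinate.
        return [float(vals[0]) for vals in src.sample(lonlat_pairs)]

# Hypothetical usage: annual mean temperature (bio1) at two sampling localities.
coords = [(-96.92, 19.52), (-99.10, 19.27)]
print(extract_at_points("wc2.1_30s_bio_1.tif", coords))
```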
Salamander phylogeny
To assess the influence of host phylogeny, we constructed a species-level dated phylogeny for salamanders and estimated evolutionary distances among species. Briefly, we assembled a molecular sequence alignment for Caudata from NCBI's GenBank vertebrate database (last updated May 2020) using the semi-automated pipeline PyPHLAWD [61]; this automatic pipeline retrieves molecular data from GenBank, generates 'clusters' of likely ortholog sequences using the Basic Local Alignment Search Tool [62], and aligns each cluster with the Multiple Alignment using Fast Fourier Transform (MAFFT) algorithm [63]. We queried for 'Caudata' sequences longer than 400 bp, with a minimum sequence identity of 0.2 and a minimum coverage of 0.65. We complemented the molecular matrix with sequence data for Ambystoma mavortium, A. tigrinum, and an unnamed species (Pseudoeurycea sp.) with available 16S sequence data, which were aligned using Clustal 2.0 [64] and AliView [65]. The resulting alignment included both nuclear and mitochondrial markers, with a total length of 224,266 bp for 580 living species of salamanders.
We estimated a maximum likelihood (ML) phylogeny with RAxML v.8 [66] using the GTR CAT model, 1000 bootstrap replicates, and substitution parameters estimated for each partition independently. We constructed an initial ML tree to check for 'rogue taxa' and evaluate the overall accuracy of the estimated tree topology. After this initial check, we constructed a final ML tree in which we conservatively applied several constraints on the topology and used three species of Xenopus (Anura) as the outgroup. Overall, the ML tree showed high support values (> 0.85) for relationships at the genus level and above, but lower support for relationships at the species level (Additional file 3). We compiled a set of 38 fossil specimens for Caudata to perform fossil-based molecular dating (Additional file 4); in sum, these fossils calibrate 18 distinct nodes in the salamander phylogeny. The relationships of fossils to extant species were based on the original assignments of the fossils and were vetted against the proposal by Marjanovic and Laurin [67]. We performed the fossil-based molecular dating of the ML tree with a Penalized Likelihood (PL) approach as implemented in treePL [68], with a smoothing parameter of 0.00001 determined through cross-validation; we dated the best-scoring tree and 100 bootstrap trees to account for uncertainty in divergence time estimates across the phylogeny (Additional files 5 and 6). For all dating analyses, we set age constraints for the stem and crown nodes of Caudata of 227-280 Mya and 166.1-280 Mya, respectively.
Bacterial diversity analyses
We used QIIME2 [53] to assign bacterial taxonomy to ASVs using the Ribosomal Database Project [69] and to estimate the relative abundance of bacterial taxa across all samples. We employed the core-metrics-phylogenetic pipeline in QIIME2 to estimate alpha and beta diversity using the estimates of the relative abundance of ASVs across samples. We estimated bacterial alpha diversity using the Shannon Diversity Index (Additional file 2) and bacterial beta diversity using the phylogeny-based weighted (wUF) and unweighted (uwUF) UniFrac dissimilarity indices (Additional file 7). We explored differences in microbial alpha diversity among salamander sampling habitats and families using a Wilcoxon test and a Kruskal-Wallis test, respectively; we also performed pairwise Wilcoxon tests to assess differences among families using a Bonferroni correction for multiple comparisons. We explored differences in beta diversity among salamander habitats and families by performing non-metric multidimensional scaling (nMDS) on the wUF and uwUF matrices, followed by a Permutational Analysis of Variance (PERMANOVA) [70]. We did not attempt to stratify by study ID in the alpha and beta diversity analyses because most studies were performed on single species from one habitat type. A stratified permutation would not be appropriate because permutations would be limited to within studies [70], leading to the permutation of samples with the same grouping variables. All statistical tests were performed using the vegan [71] package in R [60].
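As a minimal illustration of the alpha-diversity comparison (Shannon index per sample, followed by rank-based group tests), here is a hedged Python sketch; the feature table and the habitat and family labels are fabricated, and the actual analyses were run in QIIME2 and R/vegan.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over nonzero proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Toy feature table: rows are samples, columns are ASV counts.
table = np.array([[120, 30, 5, 0],
                  [80, 60, 10, 2],
                  [10, 10, 200, 40],
                  [5, 15, 180, 60]])
habitat = np.array(["aquatic", "aquatic", "terrestrial", "terrestrial"])
family = np.array(["Plethodontidae", "Plethodontidae", "Salamandridae", "Salamandridae"])

h = np.array([shannon(row) for row in table])

# Two habitats: Wilcoxon rank-sum (equivalent to the Mann-Whitney U test).
print(mannwhitneyu(h[habitat == "aquatic"], h[habitat == "terrestrial"]))

# More than two groups (e.g., host families): Kruskal-Wallis test.
groups = [h[family == f] for f in np.unique(family)]
print(kruskal(*groups))
```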
We searched for bacterial orders and families shared among all salamander species using their relative abundances; for this we used the taxonomy of ASVs at levels four and five of the Ribosomal Database Project, which in general coincide with bacterial orders and families. However, there are exceptions for which additional subclass levels are included in the taxonomic annotation, and thus levels four and five correspond to subclass and order, respectively. We therefore manually edited the taxonomy of some ASVs to match levels four and five with orders and families, respectively. We then estimated the prevalence of shared bacterial taxa within each host species; for each bacterial taxon and host species, prevalence was estimated as the percentage of host samples in which the bacterial taxon was detected. Finally, we assessed whether particular bacterial taxa could discriminate among samples from different salamander habitats or families by employing Linear discriminant analysis Effect Size (LEfSe) analyses [72], using habitat and family as response variables separately. The LEfSe analyses were performed using the relative-abundance tables at the bacterial-order level, and only bacterial taxa with an LDA score > 2.0 were considered informative [35,72-74]. For the LEfSe analysis using host families we employed a 'strict' strategy [72] to identify differentially abundant bacterial taxa; here, the abundance profile of a feature (taxon) has to be significantly different among all classes tested (families).
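The prevalence computation is simple enough to show directly; the pandas sketch below uses invented host species and bacterial orders purely for illustration.

```python
import pandas as pd

# Toy relative-abundance table: one row per sample.
df = pd.DataFrame({
    "host_species": ["A. mexicanum", "A. mexicanum", "P. lynchi", "P. lynchi", "P. lynchi"],
    "Burkholderiales": [0.30, 0.00, 0.10, 0.25, 0.00],
    "Pseudomonadales": [0.05, 0.12, 0.00, 0.00, 0.00],
})

taxa = ["Burkholderiales", "Pseudomonadales"]
# Prevalence = % of samples per host species with a nonzero abundance of the taxon.
prevalence = (df[taxa] > 0).groupby(df["host_species"]).mean() * 100
print(prevalence)
```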
Drivers of bacterial alpha diversity
To assess the influence of different factors on bacterial phylogenetic alpha diversity (log-transformed Shannon Diversity Index), we fitted a linear mixed model that included the fixed effects of host sampling habitat, host family, climatic variables and elevation, while allowing for random effects on the intercept across studies; these random effects are intended to capture differences in levels of alpha diversity among studies due to sampling and sequencing techniques. We implemented a two-step approach to select the least-correlated bioclimatic variables that remained strong predictors of bacterial alpha diversity: (1) a stepwise forward and backward regression that uses the Akaike Information Criterion (AIC) to select bioclimatic variables with significant effects on alpha diversity; (2) pairwise Pearson correlations among the selected variables to identify and discard those with a pairwise correlation higher than r = 0.7. We used the selected variables, together with salamander sampling habitat and family, to fit a linear mixed model using the lme4 [75] package in R [60]. The resulting model was further simplified by estimating variance inflation factors (VIFs) for all variables using the performance [76] package in R [60]. We identified and discarded variables with a VIF > 10 and fitted a new simplified linear mixed model; the fitted model takes the form (see Additional file 2 for variable names):
Shannon Diversity ∼ pre + tm_max + elevation + bio2 + bio6 + bio8 + bio10 + bio17 + bio18 + bio19 + Habitat + Family + (1 | Dataset)
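A rough Python analogue of this mixed-model step is sketched below with statsmodels (the study itself used lme4 and performance in R); the data frame, its variable names and its values are placeholders that only mirror the structure of the formula above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Fabricated data frame standing in for the per-sample alpha-diversity table.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "shannon": rng.normal(4, 1, n),
    "pre": rng.normal(1200, 300, n),        # precipitation
    "tm_max": rng.normal(25, 5, n),         # maximum temperature
    "elevation": rng.normal(1500, 600, n),
    "habitat": rng.choice(["aquatic", "terrestrial"], n),
    "dataset": rng.choice([f"study_{i}" for i in range(10)], n),
})

# Collinearity check among continuous predictors (VIF > 10 would be discarded).
X = df[["pre", "tm_max", "elevation"]].assign(const=1.0).to_numpy()
for i, name in enumerate(["pre", "tm_max", "elevation"]):
    print(name, variance_inflation_factor(X, i))

# Linear mixed model: fixed effects plus a random intercept per study (dataset).
model = smf.mixedlm("shannon ~ pre + tm_max + elevation + habitat",
                    data=df, groups=df["dataset"])
print(model.fit().summary())
```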
Drivers of bacterial beta diversity
To determine the major factors influencing bacterial beta diversity, we employed a distance-based redundancy analysis (dbRDA) [77] to evaluate the influence of host family, host sampling habitat, climatic variables, and elevation on bacterial beta diversity; we used the wUF and uwUF dissimilarity matrices as the response variables, separately. Briefly, dbRDA performs classical multidimensional scaling on a dissimilarity matrix and then conducts a redundancy analysis using the ordination scores to examine how much variation is explained by a given set of explanatory variables [74]. Prior to the analyses, we z-scored the climatic variables and performed variable selection as described above for the linear mixed model. We used the selected variables, together with salamander sampling habitat and family, to perform the dbRDA using the vegan package [71] in R [60] and employed a permutational approach to test for the significance of the effect of individual predictor variables. After identifying and discarding variables with a VIF > 10, the final fitted model takes the form (see Additional file 2 for variable names):
Beta diversity ∼ pre + tm_max + bio2 + bio8 + bio17 + bio19 + Habitat + Family
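Conceptually, dbRDA is a principal coordinates analysis of the dissimilarity matrix followed by a redundancy analysis of the ordination scores on the predictors. The Python sketch below illustrates only that core idea on random placeholder data; it omits the permutation tests and is not a substitute for vegan's dbrda.

```python
import numpy as np

def pcoa(d):
    """Classical MDS: return principal coordinates of a distance matrix."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centred Gower matrix
    evals, evecs = np.linalg.eigh(b)
    keep = evals > 1e-10                         # keep positive eigenvalues only
    return evecs[:, keep] * np.sqrt(evals[keep])

def dbrda_r2(d, predictors):
    """Fraction of ordination variance explained by the predictors."""
    y = pcoa(d)
    x = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    fitted = x @ beta
    return fitted.var(axis=0).sum() / y.var(axis=0).sum()

# Placeholder data: a random dissimilarity matrix and two z-scored predictors.
rng = np.random.default_rng(0)
pts = rng.random((30, 4))
d = np.sqrt(((pts[:, None] - pts[None]) ** 2).sum(-1))
predictors = rng.normal(size=(30, 2))            # e.g., stand-ins for bio2 and bio8
print(f"constrained proportion of variance: {dbrda_r2(d, predictors):.3f}")
```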
Host phylogenetic effect
We used the dated salamander phylogeny to explore the correlation between host phylogenetic distances and bacterial community distances using the Bray-Curtis dissimilarity index. More specifically, we used Mantel and partial Mantel tests to assess the strength of the correlation between host phylogenetic distances and microbiome dissimilarity while controlling for climatic distances among host species; in other words, our Mantel tests can be formulated as assessments of whether microbiome distances are structured in 'phylogenetic space'. For this, we obtained a Bray-Curtis dissimilarity matrix by averaging the ASV relative abundances across samples for each salamander species and then used the ASV-assigned taxonomy to estimate the relative abundances of bacterial taxa at different taxonomic ranks (Additional file 8). We estimated host phylogenetic distances as pairwise patristic distances among salamander species, measured in millions of years since the most recent common ancestor of each pair of species. We estimated evolutionary distances for each of the dated phylogenetic trees obtained with treePL (bootstrap trees and best-scoring tree, n = 101) using the adephylo [78] package in R [60]. We estimated climatic distances between species using the climatic data extracted for every sampled locality in our database and performed a Principal Component Analysis (PCA) of the climatic variables using the ade4 [79] package in R [60]; we summarized the scores of the principal components for each salamander species and estimated the pairwise Euclidean distances between all pairs of salamander species to obtain a climatic dissimilarity matrix.
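The Mantel procedure itself reduces to correlating the off-diagonal entries of two distance matrices and assessing significance by permuting one of them. The Python sketch below is a minimal illustration with random stand-ins for the patristic and Bray-Curtis matrices; the study used R with 999 permutations plus additional partial (climate-controlled) tests not shown here.

```python
import numpy as np

def mantel_test(d1, d2, n_perm=999, seed=0):
    """One-tailed permutation Mantel test between two square distance matrices.

    d1, d2 : (n, n) symmetric distance matrices with zero diagonals.
    Returns the Pearson correlation of the upper-triangle entries and a
    permutation p-value obtained by shuffling the rows/columns of d2.
    """
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)            # upper-triangle index pairs

    def corr(a, b):
        return np.corrcoef(a[iu], b[iu])[0, 1]

    r_obs = corr(d1, d2)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)              # permute species labels
        if corr(d1, d2[np.ix_(p, p)]) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Illustrative use with random placeholder matrices standing in for
# host patristic distances and Bray-Curtis dissimilarities.
rng = np.random.default_rng(1)
x = rng.random((10, 3))
phylo_dist = np.sqrt(((x[:, None] - x[None]) ** 2).sum(-1))     # fake patristic distances
bray_dist = phylo_dist + rng.normal(0, 0.1, phylo_dist.shape)   # correlated fake dissimilarities
bray_dist = (bray_dist + bray_dist.T) / 2
np.fill_diagonal(bray_dist, 0.0)
r, p = mantel_test(phylo_dist, bray_dist)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```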
Finally, we employed Mantel correlograms to evaluate the evolutionary scale at which correlations between host phylogenetic and bacterial community dissimilarities occur. The correlogram depicts the variation in the Mantel correlation as a function of phylogenetic distance classes, which are estimated directly from the data; we corrected for multiple comparisons in the correlograms using the false discovery rate (FDR). We used the mpmcorrelogram [80] package in R [60] to estimate Mantel correlograms while controlling for climatic distances (partial correlograms). For all tests, we evaluated significance by performing 999 permutations.
Fig. 1
Fig. 1 Geographic and climatic distribution of localities sampled for salamander skin bacterial communities. a Geographic distribution of sampling localities coloured by salamander family. The size of circles is proportional to the number of samples per geographic location. The colour scale on the map depicts salamander species diversity at a 10 × 10 km resolution obtained from https://biodiversitymapping.org. The location of samples from captive salamanders (representing two species) is not shown. b Annual temperature and c annual precipitation distributions across sampling localities
Fig. 2
Fig. 2 Salamander phylogeny, host sampling habitat, and relative abundance of the shared skin bacterial orders. a Species-level dated phylogeny for salamanders showing phylogenetic relationships and divergence times, and the proportion of samples taken from different habitats (aquatic or terrestrial). The tree represents a 'pruned' version of the complete species-level phylogeny that includes species with skin microbiome data (see Additional file 1: Fig. S1). b Relative abundances of the 16 shared bacterial orders for each salamander host species. Orders are arranged from left to right in the stacked graphs and from upper-left to bottom-right in the legend
Fig. 3
Fig. 3 Influence of host habitat and taxonomy on the diversity of the salamander skin bacterial communities. a-b Distribution of bacterial alpha diversity (Shannon Diversity Index) by salamander habitat (a) and family (b). Black circles correspond to samples from captive host species. c-f Non-metric multidimensional scaling (nMDS) of beta diversity estimated using weighted UniFrac (wUF) (c, d) and unweighted UniFrac (uwUF) (e, f) distances. Colours indicate the corresponding classifications. Circles with a black outline correspond to samples from captive host species
Fig. 4
Fig. 4 Association between salamander phylogenetic distances and skin bacterial community dissimilarity. a Bacterial dissimilarity (Y-axis) at the order level as a function of host species evolutionary distances (X-axis) estimated by fossil-based molecular dating of the best-scoring ML tree of extant salamanders. The solid black line represents the slope estimated with a Mantel test between matrices. b Correlogram showing the variation in the Mantel correlation coefficients as a function of host species evolutionary distances (in millions of years). Open circles connected by a solid black line represent the correlations estimated with the best-scoring ML tree. Solid circles represent the correlations estimated with evolutionary distances from the fossil-based molecular dating of the bootstrap trees. Colours indicate the corresponding p-values of the correlations
|
v3-fos-license
|
2019-05-07T14:15:59.416Z
|
2019-01-01T00:00:00.000
|
146743454
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2019/10/epjconf_up2019_09036.pdf",
"pdf_hash": "bc3217663190045f555e547bca788b1c30eaec49",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44129",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"sha1": "8e3fa90a132903bcfffdfdf4cfbc225ca4e3bdaf",
"year": 2019
}
|
pes2o/s2orc
|
Femtosecond nonadiabatic dynamics in photosynthetic light harvesting
Fast and efficient energy transfer in photosynthetic antennas supports all life on earth. Nonadiabatic energy transfer drives unusual vibrations through tight coupling with electronic motion. Polarization dependent vibrational motion drives polarization independent femtosecond energy transfer.
Introduction
Photosynthesis supports essentially all life on earth. In photosynthesis, light is harvested by antennas containing thousands of light absorbing pigment molecules and transferred to reaction centers, which initiate a series of reactions that store energy by synthesizing high energy molecules. Energy transfer [1] and the primary charge separation [2,3] are ultrafast and extraordinarily efficient, inspiring many investigations of their mechanism. Two-dimensional spectroscopy has revealed unusual quantum beat signatures for both processes, [4] which were initially hypothesized to arise from protein protection of electronic coherence. Recent experiments [5] support an alternative mechanism [1] in which vibrations that are delocalized over more than one pigment are amplified by coupling to electronic motion. We report unusual aspects of such electronically amplified vibrations.
Model and calculation
In the model we have used, intramolecular vibrations of the pigments become delocalized by the energy-transfer coupling between excited states. Eq. (1) shows the vibrational-electronic interactions for the very simplest Hamiltonian of this type, which describes a dimer in which two coupled pigments have unequal electronic excitation energies separated by a gap Δ.
The two electronic states with one pigment excited (A and B) have a coordinate-independent coupling J that causes partial electronic delocalization. These delocalized electronic states are known as Frenkel excitons. The pigment vibrations are also delocalized. The delocalized, anti-correlated vibration of the pigment pair, q_- = (q_A - q_B)/√2, is coupled into the electronic dynamics because it tunes the pigment energy gap. The strongest coupling occurs when the vibrational frequency is resonant with the energy gap between delocalized electronic states; this causes a breakdown of the Born-Oppenheimer approximation, which assumes that vibrations and electronic motion are separable. Figure 1 shows the coupled vibrational and electronic dynamics for electronic excitation of the donor. Here, the donor pigment B is excited at t = 0.
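To make the model concrete, the sketch below builds a minimal linear vibronic-coupling Hamiltonian of this kind in Python (two excitonically coupled electronic states plus one shared anti-correlated mode in a truncated harmonic basis) and propagates the donor-excited initial state. The parameter values are illustrative only and are not those used for Fig. 1.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (arbitrary energy units, hbar = 1).
delta = 1.0      # electronic energy gap between pigments A and B
J = 0.3          # energy-transfer coupling
omega = 1.1      # frequency of the anti-correlated mode q_- (near resonance)
kappa = 0.2      # vibronic coupling: q_- modulates the A/B energy gap
n_vib = 15       # truncated harmonic-oscillator basis size

# Electronic operators in the {|A>, |B>} one-exciton basis.
sigma_z = np.diag([1.0, -1.0])                   # |A><A| - |B><B|
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
ide = np.eye(2)

# Vibrational operators for the shared mode.
a = np.diag(np.sqrt(np.arange(1, n_vib)), k=1)   # annihilation operator
q = (a + a.T) / np.sqrt(2)                       # dimensionless coordinate
num = a.T @ a
idv = np.eye(n_vib)

# Total vibronic Hamiltonian; the donor B lies delta above the acceptor A.
H = (np.kron(-0.5 * delta * sigma_z + J * sigma_x, idv)
     + np.kron(ide, omega * num)
     + kappa * np.kron(sigma_z, q))

# Initial state: donor pigment B excited, vibration in its ground state.
psi = np.kron(np.array([0.0, 1.0]), np.eye(n_vib)[0]).astype(complex)

# Propagate and record the acceptor (A) population.
dt, steps = 0.05, 400
U = expm(-1j * H * dt)
proj_A = np.kron(np.diag([1.0, 0.0]), idv)       # projector onto pigment A
for step in range(1, steps + 1):
    psi = U @ psi
    if step % 100 == 0:
        pA = np.real(psi.conj() @ proj_A @ psi)
        print(f"t = {step * dt:5.2f}  P(acceptor A) = {pA:.3f}")
```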
Results
The parameters for Fig. 1 were chosen to mimic one pair of excitons and their resonant vibration in the highly efficient FMO antenna protein [1].
Figure 1 shows that progressive changes in electronic character are accompanied by a vibration of progressively larger amplitude. The energy transfer reaches a maximum at about 600 fs, when the electronic character is that of the acceptor A (bright green or purple) and switches sign from |A> to -|A> (purple to green) as the vibrational wavepacket oscillates from side to side with each vibrational half period. Because this model does not include additional vibrations, the energy then goes back to the donor and the vibration dies away. This indicates that ideal damping through vibrational relaxation or vibronic decoherence should occur on a 600 fs timescale. Interestingly, this matches the timescale for vibronic dephasing reported [5] in a recent 2D experiment on FMO at a temperature of 77 K. Investigations of this damping process are under way. The vibrational dynamics are polarization dependent. Figure 2 shows that the direction of vibration depends on the polarization of the exciting light in the molecular frame of the dimer. Typically, vibrations are excited through a change in equilibrium bond length or angle upon electronic excitation; this determines the direction and amplitude of vibrational motion (Franck-Condon principle). Figure 2 shows not only that the vibrational amplitude grows in time (as seen in Figure 1), but also that the initial direction of the vibrational motion is polarization dependent. This type of vibrational motion underlies the appearance of vibrational quantum beats in 2D spectra with the double-crossed polarization configuration (-π/4, π/4, π/2, 0). [1,5] Interestingly, although the vibrational dynamics vary with excitation polarization, the electronic dynamics always steadily maximize population on the lowest adiabatic state.
Conclusions
Signatures of vibrational-electronic resonance have been found in femtosecond 2D spectra for many light harvesting proteins, and also in the reaction center of higher plants. This study has revealed tightly coupled vibrational and electronic motions that generate these signatures at liquid nitrogen temperatures. Fluctuations and damping at physiological temperature [6] are under investigation.
This material is based upon work supported by the National Science Foundation under Grant No. CHE-1405050. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
|
v3-fos-license
|
2022-12-07T17:59:34.629Z
|
2022-12-01T00:00:00.000
|
254327876
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2306-5354/9/12/745/pdf?version=1669875275",
"pdf_hash": "dca6ee5620392279fe3e2071348fdc30091ea844",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44132",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "5ef1a4436828eb1198aaab7626456d4e30dc910e",
"year": 2022
}
|
pes2o/s2orc
|
Enhancing Molecular Testing for Effective Delivery of Actionable Gene Diagnostics
There is a deep need to navigate within our genomic data to find, understand and pave the way for disease-specific treatments, as the clinical diagnostic journey provides only limited guidance. The human genome is enclosed in every nucleated cell, and yet at single-cell resolution many unanswered questions remain, as most sequencing techniques use a bulk approach. Therefore, heterogeneity, mosaicism and many complex structural variants remain partially uncovered. As a conceptual approach, nanopore-based sequencing holds the promise of being a single-molecule-based, long-read and high-resolution technique, with the ability to uncover the nucleic acid sequence and methylation almost in real time. A key limiting factor of current clinical genetics is the deciphering of key disease-causing genomic sequences. As the technological revolution expands the available genetic data, the interpretation of genotype–phenotype correlations should be made with great caution, as more and more evidence points toward the presence of more than one pathogenic variant acting together, as a result of intergenic interplay, in the background of a certain phenotype observed in a patient. This is in line with the observation that many inheritable disorders manifest as a phenotypic spectrum, even in an intra-familial way. In the present review, we summarize the relevant data on nanopore sequencing in clinical genomics and highlight the importance and content of pre-test and post-test genetic counselling, yielding a complex approach to phenotype-driven molecular diagnosis. This should significantly lower the time to the right diagnosis, as well as the time required to complete a currently incomplete genotype–phenotype axis, which will boost the chance of establishing a new actionable diagnosis followed by a therapeutic approach.
Introduction
A great proportion of genetic disorders manifest phenotypically by early adulthood, resulting in a cumulative incidence of observed rare diseases between 1.5-6.2% in the general population [1,2]. Due to the wide phenotypic heterogeneity and lack of robust molecular testing strategies, diagnosis is challenging and is often delayed by several years. According to the latest telomere-to-telomere human genome assembly, the size of the human genome is of the order of 3.055 Gbp [3]. Besides the relatively large size of the genome, dysfunctional methylation, histone modifications and RNA expression may also elicit a phenotypic burden; therefore, a systematic approach is needed to correctly address genotype-phenotype correlations [4]. Currently used diagnostic cytogenetic and molecular biology techniques depict only certain types of human genetic alterations (e.g., due to DNA fragmentation during library preparation for next-generation sequencing (NGS), the detection of short tandem repeats such as trinucleotide repeat expansions is not feasible, and the detection of structural variants is also quite limited) [5]. The resolution of G-band karyotyping is mostly limited to 3 Mbp, and the higher-resolution array-comparative genome hybridization (array-CGH) approach cannot detect low-level mosaicism or balanced translocations, nor can it uncover copy number variations in regions where oligoprobes are not present.
Molecular Testing-Timing and Approach
The timing of molecular testing is of essential importance. The molecular testing of germline cells is limited. In males, germline cells can be tested from a testis biopsy for (1) nucleotide-level variations in a targeted fashion with Sanger sequencing (SS), NGS or a nanopore-based sequencing (ONT) approach, or (2) structural-level variations in a targeted fashion with fluorescence in situ hybridization (FISH) or with ONT. For a whole-exome or whole-genome approach to detect nucleotide variations, NGS or ONT can be applied. To detect structural variations, array-comparative genome hybridization (array-CGH), ONT or optical genome mapping (OGM) can be applied [13]. For methylation-based analysis, a methylation-sensitive multiple-ligation-based assay (MS-MLPA), pyrosequencing or ONT can be applied. Therefore, ONT provides a comprehensive genome-testing method whilst preserving native nucleic acid modifications [14]. As for preconceptional testing, carrier screening for most diseases showing an autosomal recessive inheritance pattern is available in many countries at a reasonable price. Preimplantation genetic testing during in vitro fertilization, via the detection of pathogenic structural variants in embryos, could be of key importance [15]. For targeted prenatal testing, ONT has been successfully applied to the testing of fetal DNA from maternal blood samples [16]. As for postnatal testing, a world-record time of 7 h and 18 min from sample to whole-genome sequencing-based diagnosis has been achieved with the ONT approach [17].
High-throughput deep-sequencing enables the uncovering of variant frequencies and methylation CpG patterns in heterogenous samples. The recently developed nanopore Cas9-targeted sequencing (nCATS) method has proven to provide the aforementioned quality at a targeted level in a cost-effective way [14].
From a methodological approach, the following two major points need to be addressed to deliver an effective molecular testing strategy: (1) what type of molecular alteration should be detected, and (2) what are the functional consequences of the detected variants?
At the DNA level, molecular alterations can be categorized into nucleotide variants and structural variants. Nucleotide variants are usually DNA sequence variations ranging in size between 1-100 bp that may be of critical etiological value in several inheritable disorders and somatic pathogenic variants. Structural variants, ranging in size from several hundred base pairs up to a few Mbp, may also elicit inheritable disorders and tumor predisposition, or may be of benign polymorphism value (Figure 1).
Figure 1.
Reliable detection of genetic variants by size and technique throughput. Different techniques allow the detection of certain nucleotides or structural variants. The throughput of the different techniques also varies. Of note, throughput is also instrument-dependent, and this is not reflected entirely in the figure, as in some cases the instrumentation may change the throughput order represented in the figure. Created with BioRender.com. (accessed on 06 September 2022).
Nanopore Sequencing
Recent advances have led to the rapid and highly efficient deciphering of pathogenic variants using nanopore-based sequencing, allowing rapid clinical diagnosis [18]. ONT allows the uncovering of targeted nucleic acid sequences or whole-genome, epigenome (5-methylcytosine), transcriptome and epitranscriptome (N6-methyladenine) analysis [18]. Nanoscale-sized nanopores act as biosensors, detecting ionic current changes in real time as single-stranded DNA or RNA molecules (unwound by a motor protein possessing helicase activity) pass through in a step-by-step manner [18]. A typical workflow starts with high-molecular-weight DNA extraction coupled with optional fragmentation or size selection (to remove overrepresented small DNA fragments). For library preparation, a relatively short DNA repair and adapter ligation strategy can be used, followed by loading onto the nanopore flow cells and real-time sequencing. By applying hybrid error-correction tools, the long-read error rate is nowadays between 1-4%, approaching that of short reads. Four main branches could use the advantages of ONT in clinical settings: (1) identifying the background of genetic diseases; (2) molecularly diagnosing cancer patients (e.g., acute leukaemias and solid tumors, where certain molecular alterations may greatly influence the therapy of choice [19,20]); (3) rapid pathogen identification in an infectious disease scenario; and (4) rapid sequencing of the major histocompatibility genes for recipient-donor tissues in transplantation medicine. The strength of nanopore sequencing lies in resolving long-range information, which is one of the main limitations of short-read sequencing technologies [21]. By coupling the unique molecular identifiers (UMIs) used in single-cell transcriptomics with genomic long-read transcriptomes, transcriptomes may be sequenced at single-cell resolution with ONT [22]. In Figure 2 we present two methodological approaches for targeted nanopore-based sequencing.
Figure 2.
Principles of targeted nanopore sequencing: adaptive sampling and Cas9-assisted methods. High-molecular-weight DNA (HMW DNA) is extracted from relevant biological samples. After quality checking for concentration and purity, two highly potent methods can be applied for a targeted sequencing approach. The adaptive sampling approach can be used for selective enrichment of regions of interest to be sequenced. Alternatively, to enrich prior to sequencing during library preparation with designed sgRNAs, the region(s) of interest can be selectively enriched and loaded onto the nanopore-based sequencing platform. After quality control assessment, two pipelines can be run, one for methylation pattern analysis and another for detection of nucleotide and structural variations. This highlights the unique power of nanopore sequencing: parallel detection of both sequence and methylation pattern. Created with BioRender.com (accessed on 6 September 2022).
(2) Targeted molecular testing in a time-dependent manner for the establishment of an actionable diagnosis, e.g., gene therapy, a modified therapeutic approach or enzyme replacement therapy. Recently, a feasibility study designed to address targeted gene analysis from noninvasive prenatal testing (NIPT) samples using ONT has been described [16]. ONT provides direct construction of haplotypes through relative haplotype dosage analysis from maternal blood plasma, which is of importance in diseases such as ß-thalassaemia, spinal muscular atrophy, Duchenne's and Becker's muscular dystrophies and Hunter's disease [16]. As summarized in Table 1, the molecular diagnosis of actionable rare diseases and haematological malignancies is feasible in a rapid and reliable fashion with ONT. Identified pathogenic variants in the GLA (leading to Fabry's disease), GBA (leading to Gaucher's disease) or PAH (leading to phenylketonuria) genes warrant close follow-up, and the affected patients may be eligible for specific enzyme replacement therapy and/or substrate-reduction therapy [8,33,34]. As for rare disease diagnostics, Fragile X syndrome, one of the most common causes of intellectual disability, caused by STR alterations in the FMR1 gene, as well as other genes of interest implicated in intellectual disability, skeletal disorders or inborn heart disorders, have been successfully analysed on the ONT platform. One of the key questions in tumor diagnostics is to rapidly identify actionable gene alterations that may modify the therapeutic strategy, as well as to conclude whether a detected alteration is limited to the somatic tumor-associated tissue or is a germline alteration. The second part of Table 1 depicts gene alterations that are actionable findings in tumoral settings.
(3) Multiomic and high-resolution testing for the depiction of heterogeneity and mosaicism. In several in vitro studies involving cell lines and primary cells, ONT has been successfully applied, providing further evidence of the applicability of nanopore sequencing for variant analysis (Supplementary Table S1).
The current limitations of ONT include the slightly different approach required for DNA extraction and library preparation. The input and quality of the sample DNA may limit the sequencing throughput, and post-sequencing, the sequence and the methylation pattern require different analysis pipelines. Also, although the calling of SNVs has been significantly improved in recent years (error rates varying between 1-4%), it needs further enhancement to bring the error rate below 1% [5]. (Abbreviations for Table 1: NA, no available data on the kit used; NGS, next-generation sequencing; SS, Sanger sequencing; SV, structural variation; STR, short tandem repeats.) Actionable genetic diagnosis: in line with the ACMG's newest guidelines, an actionable genetic diagnosis can be defined as one for which a medical intervention is available for the given genetic disorder that reduces morbidity and mortality and enhances the quality of life [47]. Possible medical interventions arise from a gene-therapeutic standpoint, addressing pathogenic variants at the DNA level in a tissue-specific/tropic fashion. Next, acting at the RNA level, small interfering RNAs or antisense oligonucleotides may act at the cellular level to reduce the phenotypic burden. At the protein level, enzyme-replacement therapy is available for quite a few inborn errors of metabolism, and molecular chaperones enhancing enzymatic activity or substrate-reducing agents with a disease-modifying effect are available for selected lysosomal storage disorders. Medical interventions based on actionable genetic diagnoses are nowadays applied in the postnatal lifecycle; however, disease-modifying therapies may also be implemented in the prenatal period [48], or, as a new core concept, a possible new era of in utero gene therapy may arise [49] (Figure 3).
Figure 3.
Actionable genetic diagnosis-identification of certain genetic diagnoses opens the pathway for either disease-specific therapy or medical actions that may significantly lower the burden of the phenotype and enhance the quality of patients' life. Created with BioRender.com (accessed on 16 November 2022).
Genetic Counselling-State of the Art
Genetic counselling reflects the backbone of human genomics analysis by finding the answer to the following three fundamental questions: (1) Why is it recommended to opt for a genetic test? (2) What is our target nucleic acid sequence that should be analyzed by a suitable test available at that particular timepoint, highlighting the strengths and limitations of it? (3) How should we critically interpret and obtain insight about the data provided by the genetic analysis?
One of the most challenging tasks of genetic counselling is to maximize the clinical utility and, at the same time, minimize the uncertainty of information [50]. This could be enhanced by setting up multidisciplinary professional healthcare teams who can synthesize and collaborate to precisely define and follow up phenotypic spectra, which may maximize the uncovering of phenotype-genotype associations. Thus, the clinical pathway of patients can be significantly improved, often influencing the screening strategies and/or therapeutic approach.
Role of Pre-Test Genetic Counselling
To define the genotype-phenotype correlation as precisely as possible, detailed phenotyping and pedigree building are essential, and both can be enhanced by artificial intelligence (AI). Phenotyping begins with a detailed genetic anamnesis. This should include a preconceptional anamnesis (age of the biological mother and biological father at conception, mode of conception and use of periconceptional vitamins) and a prenatal anamnesis (use of or contact with teratogenic agents during the embryonal and/or fetal critical period; TORCH screening, pathological ultrasound findings, the use of prenatal vitamins, drug use, and prenatal genetic tests such as NIPT, G-banding, chromosomal microarray or whole-exome sequencing that may have been conducted) [51]. The anamnesis of the perinatal period should cover the mode of delivery, possible hypoxic events, the age of gestation, the weight, height and head circumference at birth, the APGAR score, neonatal cardiorespiratory adaptation, the use of O2 therapy, newborn hearing screening, newborn feeding difficulties and breastfeeding. With special emphasis on the achievement of developmental milestones and the use of early childhood developmental therapies, the development of language, social skills and learning ability should be questioned. At the end of the genetic anamnesis, the building of a four- or five-generation genetic pedigree is advised. For the digitalization of pedigrees, different software packages (e.g., Evagene Clinical-a free open-source software available at www.evagene.com (accessed on 6 September 2022), or GenoPro (Waterloo, Ontario, Canada)-a paid software option available at www.genopro.com (accessed on 6 September 2022)) can be used. Critical questions should be covered during the pedigree assessment, such as (1) the spontaneous abortion history of the index patient's mother and maternal grandmother; (2) any known perinatal death in the family; (3) the occurrence of sudden cardiac death in the family; (4) any malignant tumor development before the age of 45 in the family; (5) any recurring deep-vein thrombosis or pulmonary embolism in the family; (6) any consanguineous marriage in the family; and (7) any infertility in the family.
Next, a genetic physical assessment should include the detailed phenotyping of minor anomalies. Detailed phenotyping is an important way to evaluate the impact of penetrance, the possibility of uncovering a second genetic alteration, and the possibility of expanding the phenotypic variability of a molecular finding [52-55]. The use of prenatal data may also enhance the findings of a high-throughput molecular analysis [51]. If a targeted approach is used, the detailed phenotype will determine the test of choice [56]. Guidelines on the standard use of systematic phenotyping are available [57-59]. The use of artificial intelligence for the refinement of minor anomalies (e.g., Face2Gene-a free open-source online platform [60]) can also be useful. Systematic phenotyping includes the depiction of facial minor anomalies, which should include the size of the nasal bridge, a nasal tip and nares assessment, canthal folds, endocanthal lengths, interpupillary distance, exocanthal length, palpebral fissure slanting, eyelash length and density, eyebrow thickness and/or conjoined eyebrows, philtrum size, lower and upper lip thickness and slanting, intercommissural distance, enlarged interdental space, a high narrow palate, bifurcated uvula, tongue size and asymmetry, bilateral philtrum-mandibular angle distance, forehead size and protrusion, bitemporal distance, ear set height, ear asymmetry, rotation of the external ear, jaw position and mandibular size. Next, the shape and size of the skull as well as the hairline insertion and hair thickness anomalies should be assessed. Skin alterations (e.g., café-au-lait spots, angiokeratomas) and minor anomalies of the hands and feet should also be assessed as part of a regular physical examination. A detailed depiction of minor anomalies and their annotation should be performed according to the Human Phenotype Ontology (HPO) [61]. Next, to objectify minor anomalies, the precise measurements taken during their depiction should be converted into age- and gender-specific percentile values. In addition, a detailed depiction of growth curve tendencies by a comparison of percentiles specific to age, gender and diagnosis (e.g., growth percentiles for girls between ages 0-2 diagnosed with Down's syndrome) should be conducted. Based on the evaluation of the detailed genetic anamnesis, physical examination and pedigree, the best-suited targeted or high-throughput analysis should be chosen. Detailed information about the usefulness, benefits and limitations of the proposed test should be provided. Then, informed consent should be signed, with special emphasis on the reporting of secondary findings according to the latest ACMG guidelines [47] (Figure 4).
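For the measurement-based objectification step mentioned above, converting a raw measurement into an age- and gender-specific percentile can be sketched as follows in Python; the reference mean and standard deviation here are invented placeholders, and real assessments rely on published, diagnosis-specific reference charts.

```python
from scipy.stats import norm

def measurement_percentile(value, ref_mean, ref_sd):
    """Convert a measurement to a percentile against a normally distributed
    age- and gender-specific reference (placeholder values, not a clinical chart)."""
    z = (value - ref_mean) / ref_sd
    return 100 * norm.cdf(z)

# Hypothetical example: head circumference of 47.0 cm against a reference
# of 48.5 cm (SD 1.4 cm) for the child's age and gender.
print(f"{measurement_percentile(47.0, 48.5, 1.4):.1f}th percentile")
```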
Role of Post-Test Genetic Counselling
The role of post-test genetic counselling is the critical evaluation of the results of the applied genetic test in light of the phenotype. The evaluation of genotype alterations should be in line with the observed phenotype. It is recommended, when possible, that positive findings be confirmed by another technique. Of note, if variants of unknown significance have been identified, or if the methylation pattern has also been determined in addition to the sequence, these findings could be re-evaluated in the near future as accumulating data on variants of unknown significance shed light on their benign or pathogenic effect. Once the genotype-phenotype correlation has been established, the genetic medical report should contain the precise genetic alteration in both cytogenetic and molecular nomenclature (preferably according to the latest assembly, e.g., GRCh38), the accession number/link of the detected variant, the interpretation of the variant at the cellular level, and a detailed statement of its clinical significance. In the detailed clinical significance, the possible therapeutic approaches and follow-up strategy should be described and discussed with the patient and/or legal guardian, taking into account the national human genetics laws of the specific country.
Concluding Remarks
The precise uncovering of our genetic information of interest should be one of our highest priorities, and it should be conducted by providing the most complete molecular landscape in order to establish next-generation genetic counselling and guidance. As technology advances, AI has become more prevalent in molecular medicine, and genetic data are being generated and sequenced at ever larger scales. Timing is of the essence in molecular diagnosis: the earlier the diagnosis is made, the greater the patient's opportunity to benefit from specific medical care and/or treatment. Precise and detailed phenotyping should be performed in order to establish the correct genotype-phenotype correlation so that medical care can be provided as soon as possible for every actionable genetic diagnosis.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2017-04-06T15:15:07.063Z
|
2014-04-21T00:00:00.000
|
4587414
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0095591&type=printable",
"pdf_hash": "6223e4e247b88bf42ee4ec9d1900ffd2e63d2237",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44134",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "6223e4e247b88bf42ee4ec9d1900ffd2e63d2237",
"year": 2014
}
|
pes2o/s2orc
|
Population Growth of the Cladoceran, Daphnia magna: A Quantitative Analysis of the Effects of Different Algal Food
In this study, we examined the effects of two phytoplankton species, Chlorella vulgaris and Stephanodiscus hantzschii, on growth of the zooplankton Daphnia magna. Our experimental approach utilized stable isotopes to determine the contribution of food algae to offspring characteristics and to the size of adult D. magna individuals. When equal amounts of food algae were provided (in terms of carbon content), the size of individuals, adult zooplankton, and their offspring increased significantly following the provision of S. hantzschii, but not after the provision of C. vulgaris or of a combination of the two species. Offspring size was unaffected when C. vulgaris or a mixture of the two algal species was delivered, whereas providing only S. hantzschii increased the production of larger-sized offspring. Stable isotope analysis revealed significant assimilation of diatom-derived materials that was important for the growth of D. magna populations. Our results confirm the applicability of stable isotope approaches for clarifying the contribution of different food algae and elucidate the importance of food quality for growth of D. magna individuals and populations. Furthermore, we expect that stable isotope analysis will help to further precisely examine the contribution of prey to predators or grazers in controlled experiments.
Introduction
Cladocerans in freshwater ecosystems are among the most important biological entities that contribute to the complexity of food web structure and function [1]. They are typically primary consumers that utilize phytoplankton as their food source. Many species of cladocerans are filter-feeders that obtain food from the water by filtration [2], [3], and are sometimes used as control agents against phytoplankton proliferation in a method known as biomanipulation [4], [5]. Some cladocerans consume organic particles or rotifers [6], [7]; however, most species rely on energy obtained from phytoplankton for population growth.
The nutrient content and biomass of prey phytoplankton are important factors in cladoceran growth [8], [9]. Therefore, the quality and quantity of phytoplankton consumed [10], [11] are crucial factors controlling the growth of cladoceran populations. Previous studies have investigated the size and morphology of algal species (e.g., [12], [13]), and Ahlgren et al. [8] provided comprehensive data on phytoplankton nutritional status. These studies suggest that the quality of prey phytoplankton affects cladoceran population growth. Urabe and Waki [14] provided evidence that changes in biochemical composition of the diet clearly affected the growth of herbivorous species such as Daphnia. However, comparisons of algal growth and composition with cladoceran growth are required to quantify the direct contribution of algal intake to cladocerans.
Even if highly nutritious algae are available, they do not contribute to the growth of individuals or populations unless high assimilation rates are maintained; otherwise, the majority of this energy resource remains confined to the gut contents and is ultimately ejected. Determining the quantitative contribution of prey to consumers is challenging, and few studies addressed this topic [15], [2], [16] prior to the emergence of stable isotope analysis. Phillips and Koch [17] recommended the isotope mixing model, which enables determination of the contribution of the most abundant algal species to the growth of an individual consumer. Although the stable isotope signature does not accurately represent assimilation rate, results obtained using this approach can provide information on the quantitative contribution of prey to the consumer, which can be interpreted as the assimilation rate. Stable isotope analysis can therefore be used to explain the contribution of algal species to the population growth of cladocerans.
In this study, we experimentally investigated the relationship between a cladoceran species and its prey phytoplankton from the perspective of the algal contribution to offspring characteristics and size of adult individual zooplankton. Two phytoplankton species, Chlorella vulgaris and Stephanodiscus hantzschii, were used as food algae; the zooplankton studied was the cladoceran Daphnia magna. Offspring characteristics and adult individual size were measured. Daphnia magna is one of the most popular herbivorous cladocerans for use in culture experiments and C. vulgaris is frequently used in Daphnia growth experiments. Stephanodiscus hantzschii is an important phytoplankton species, particularly in Far-East Asian countries, where the species proliferates in the winter [18], [19], [20]. Therefore, these species were conducive to understanding the contribution of two algal species to a common grazer. Stable isotope analysis was conducted to quantitatively determine the contribution of food algae to D. magna.
Plankton Subculture
D. magna obtained from the National Institute of Environmental Research (NIER) of South Korea were grown in Elendt M4 medium [21] in a growth chamber (Eyela FLI-2000, Japan) at 20°C, with a 12L:12D light-dark cycle. Subcultures of the green alga C. vulgaris (strain number UMACC 001) and the diatom S. hantzschii (strain number CPCC 267) were maintained in a growth chamber (Eyela FLI-301N, Japan) at 10°C, with a 12L:12D light-dark cycle. S. hantzschii tolerates a wide range of temperatures, but favors relatively low temperatures [22], [23]. In contrast to S. hantzschii, the optimal temperature for C. vulgaris growth is >20°C. Excessive population growth often occurs at optimal temperatures, which may affect the constancy of food algae provision (see experimental protocol). Therefore, we maintained the C. vulgaris subculture at 10°C for the experiment.
We chose approximately 100 D. magna offspring born within 24 h (most <0.8 mm long) with similar life-history traits (e.g., birth time and size) from the stock culture, and allowed them to reach the offspring-production stage (hereafter referred to as SC, sampled culture). Generally, daphniid females that are larger at birth grow more rapidly and are larger at maturity than those that are smaller at birth [24]. To obtain a similar-sized cohort, we first sorted and eliminated extraordinarily large or small maternal Daphnia from the SC (29 individuals were removed). We then randomly selected 30 individuals from the remaining D. magna in the SC using a scaled loupe (graduated in mm). The 30 adults were between 3 and 4 mm long and contained eggs in their brood chambers. We transferred the 30 D. magna to a new beaker filled with fresh Elendt M4 medium, and provided sufficient food (C. vulgaris) until offspring were produced. In determining the quantity of food algae to provide, we considered the supply level that would be appropriate for zooplankton population growth. Previous research [25] indicated that an algal carbon content of approximately 2.5 mg carbon L⁻¹ (units shown as mg C L⁻¹ hereafter) in a given volume of zooplankton medium would be sufficient for zooplankton survival and population growth. The first reproduction event occurred 4 to 5 days after selection. However, the number of neonates from the first reproduction was small, and we used the second clutch from the selected D. magna for the experiment. In summary, the initially selected D. magna adults were used to produce the offspring employed in the main experimental procedure. The offspring from the second clutch were collected after birth (within 6 h) and were used for the main experiment.
Experimental Protocol
The overall experimental design is shown in Figure 1. We transferred 150 of the collected offspring into three experimental groups as follows: (1) C. vulgaris only (CHL); (2) S. hantzschii only (STE); and (3) mixed algae (MIX). For each group, we prepared 10 replicates in 500-mL sterilized beakers filled with 500 mL Elendt M4 medium, and five acclimated D. magna individuals were placed in each beaker.
Food algae were supplied at a quantity sufficient to maintain 2.5 mg C L⁻¹ in each beaker throughout the experiment. This quantity can be determined from the relationship between algal density and carbon content [25] when algal cell size is known. Thus, before the experiment, we obtained the size information of C. vulgaris and S. hantzschii by measuring their diameters 50 times and calculating the average size of each algal species (Table 1). Based on this size information, we determined the daily food algal supply such that the number of cells equivalent to 2.5 mg C L⁻¹ was 8652 cells for C. vulgaris and 8802 cells for S. hantzschii. The total daily injection volume was determined accordingly. For example, if the daily density of the S. hantzschii stock was 687 cells mL⁻¹, we injected ca. 12.81 mL of S. hantzschii stock (687 cells mL⁻¹ × 12.81 mL ≈ 8800 cells). The density of food algae changed during culture maintenance, so the required injection volume was recalculated daily prior to administering the food supply. We injected food algae between 3 and 4 PM.
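The dosing arithmetic described above reduces to converting a daily cell target into an injection volume from the measured stock density. The short sketch below (Python) walks through that conversion using the worked figures quoted in the text; the helper function name is our own, and the numbers are only the example above, not a dosing table for the study.

```python
def injection_volume_ml(target_cells: float, stock_density_cells_per_ml: float) -> float:
    """Volume of algal stock (mL) needed to deliver a target cell count to a beaker."""
    return target_cells / stock_density_cells_per_ml

# Worked example from the text: a daily S. hantzschii target of ~8802 cells
# (the count equivalent to 2.5 mg C per litre for this cell size) and a stock
# measured at 687 cells per mL give an injection volume of roughly 12.8 mL.
volume = injection_volume_ml(target_cells=8802, stock_density_cells_per_ml=687)
print(f"Inject {volume:.2f} mL of stock")  # ~12.81 mL
```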
Unlike the single-species treatments, one treatment (MIX) received a mixture of the two food algae, and the food algal supply for this group was determined with particular care. To maintain the daily dosage of food algae at 2.5 mg C L⁻¹ across the two different algal species, each species was supplied at 1.25 mg C L⁻¹. We determined the required quantity of each algal species by daily enumeration of food algal density.
The experiment was conducted using a plant growth chamber (Eyela FLI-2000, Japan). The aforementioned maintenance conditions for D. magna were also applied to the experiment (20°C; photon flux density = 30 µmol m⁻² s⁻¹; 12L:12D light-dark cycle). The experiment was terminated when adult D. magna produced offspring from the second clutch. The shortest duration in which the second clutch was obtained was 8 d, which is generally accepted as an appropriate turnover time for the assimilation of carbon and nitrogen from D. magna food [26]. Each day during the experiment, we transferred D. magna to fresh Elendt M4 medium before providing food algae, to maintain the supply of algae at 2.5 mg C L⁻¹. After termination of the experiment, we randomly collected two adults from each beaker of the three experimental groups (total n = 20 for each experimental group), measured their body lengths, and counted the food algal cells in their guts. We took microscope images (×200 magnification, Axioskop 40, Carl Zeiss Microscopy, Germany) of the adult D. magna and used an image processing program (AxioVision Rel 4.8, Carl Zeiss Microscopy, Germany) to measure body length, following the manufacturer's protocol for calibrating image length to actual length.
Gut contents were examined to determine the pattern of food algal consumption, particularly in the MIX group. For this investigation, we eviscerated the guts of D. magna (n = 20) from the MIX group and counted the algal cells of each species in the gut of each individual. Food algal cells tended to be broken as they progressed through the gut, which could complicate enumeration. Empirically, just after consumption, algal cells resided in the upper part of the gut (approximately 1 mm from the mouth) and were relatively fresh and unbroken. Therefore, we divided the total gut length (approximately 3 mm) into three sections (fore-, mid-, and rear-gut; each approximately 1 mm) and counted algal cells in the foreguts to minimize enumeration error due to broken cells.
Typically, filter feeders such as daphniids are not selective feeders; therefore, the ratio of the two consumed algal species in the foregut would be maintained during passage through the gut.
To determine offspring size, two randomly sampled offspring from each beaker (total n = 20 in each experimental group) were measured. The size measurement of offspring was based on application of the image-processing program, as performed for the size analysis of D. magna adults. To determine the total number of offspring per adult, we counted the number of offspring in each beaker and divided that number by five (i.e., the number of adults in each beaker).
The density and size measurements of zooplankton individuals and of algal cells in the gut samples were carried out using a microscope (Axioskop 40, Carl Zeiss Microscopy, Germany) at ×200 and ×400 magnification, respectively.
Stable Isotope Analysis
The remaining 30 adult D. magna individuals in each experimental group and the two food algae were used for stable isotope analysis. The D. magna samples contained phytoplankton in their guts; therefore, we transferred those individuals into fresh Elendt M4 medium for more than 24 h without provision of additional food algae. This allowed these individuals to eject their gut contents; they were then included in the stable isotope analysis. The 30 D. magna individuals were divided into six groups (n = 5 per group); three of these groups (n = 15 individuals) were used for detection of the carbon signature and three groups were used for detection of the nitrogen signature.
Carbon and nitrogen measurements of D. magna were conducted separately. It is necessary to extract tissue lipids for accurate interpretation of trophodynamics using carbon stable isotope data. The carbon isotope signature depends on the protein content of the tissue, and the presence of lipids can affect the reliability of the isotope analysis. Lipid content varies with tissue type and is ¹³C-depleted relative to proteins; tissue samples that contain lipid may therefore produce an unstable carbon isotope signature. In contrast, lipid extraction affects δ¹⁵N. Therefore, we divided the samples into separate groups comprising carbon- and nitrogen-signature samples, and lipids were removed only from the carbon-signature samples. Comparison between the two sample types was accomplished by δ¹³C and δ¹⁵N analyses [27]. The carbon-signature samples were placed in a solution of methanol-chloroform-triple-distilled water (2:1:0 ...). For stable isotope analysis of food algae, we prepared 5 mL of algal suspension from each species and analyzed it in triplicate. The algal samples were treated with 1 mol L⁻¹ hydrochloric acid (HCl) to remove inorganic carbon. The samples were then rinsed with ultrapure water to remove the HCl.
The prepared samples (two algal species and D. magna) were freeze-dried and then ground with a mortar and pestle. The powdered samples were maintained at −70°C until analysis. When all samples were collected, carbon and nitrogen isotope ratios were determined using continuous-flow isotope ratio mass spectrometry. Dried samples (approximately 0.5 mg of animal samples and 1.0 mg of algae) were combusted in an elemental analyzer (EuroVector), and the resultant gases (CO2 and N2) were introduced into an isotope ratio mass spectrometer (CF-IRMS, model ISOPRIME 100, Micromass Isoprime) in a continuous flow, using helium as the carrier gas. Data were expressed as the relative per-mil (‰) difference between the sample and the conventional standards (Pee Dee Belemnite (PDB) carbonate for carbon, atmospheric N2 for nitrogen), according to the following equation:

δX (‰) = [(R_sample/R_standard) − 1] × 10³,

where X is ¹³C or ¹⁵N, and R is the corresponding ¹³C:¹²C or ¹⁵N:¹⁴N ratio. To determine which of the two food sources (C. vulgaris and S. hantzschii) was assimilated more readily by D. magna, we calculated two-source isotope mixing models. The carbon isotope values of C. vulgaris and S. hantzschii differed significantly. The model is defined as

δ¹⁵N_M = f_X(δ¹⁵N_X + Δ¹⁵N) + f_Y(δ¹⁵N_Y + Δ¹⁵N), with f_X + f_Y = 1,

where X, Y, and M represent the two food sources and the mixture (consumer), respectively; f represents the proportion of N from each food source in the consumer's diet; and Δ¹⁵N is the assumed trophic fractionation (i.e., the change in δ¹⁵N over one trophic step from prey to predator) [28], [17]. Trophic fractionation was assumed to be constant, at either 3.4‰ or 2.4‰ [29].
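Written out, this section's calculation has two steps: express each measured isotope ratio in delta notation, then invert the two-source mixing model for the source proportions. The Python sketch below implements that arithmetic; the function names and input δ values are illustrative placeholders rather than measurements from this study, and the mixing form is the standard single-isotope, two-source model with a fixed trophic fractionation.

```python
def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Delta notation (per mil): relative deviation of a sample isotope ratio
    from the conventional standard (PDB for carbon, atmospheric N2 for nitrogen)."""
    return (r_sample / r_standard - 1.0) * 1000.0

def two_source_mixing(delta_mixture: float, delta_x: float, delta_y: float,
                      fractionation: float = 3.4) -> float:
    """Proportion f of source X in the consumer's diet from the standard
    two-source, one-isotope mixing model:
        delta_M = f*(delta_X + D) + (1 - f)*(delta_Y + D),
    where D is the trophic fractionation per trophic step."""
    return (delta_mixture - fractionation - delta_y) / (delta_x - delta_y)

# Illustrative delta values only (not data from this study): the printed number
# is the fraction of the consumer's assimilated material attributed to source X.
f_x = two_source_mixing(delta_mixture=12.0, delta_x=9.0, delta_y=4.0)
print(f"Contribution of source X: {f_x:.0%}")
```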
Statistical Analysis
For statistical assessment of the experimental groups, we applied one-way nested ANOVA (two-tailed, α = 0.05) to compare the sizes of adults and offspring. Although we prepared 10 replicates (beakers) for each experimental group, pseudo-replication had to be carefully considered [30]. Therefore, we set the different food algal treatments as the primary factors and the 10 beakers as nested subgroups within every treatment.
Comparison of the numbers of D. magna offspring was performed using one-way ANOVA. Student's t-tests (two-tailed, α = 0.05) were used to compare the cell numbers of the two algal species in the guts of the MIX group. Tukey's post-hoc tests were employed to identify groups with different average values. All statistical tests were performed using the package SPSS Statistics ver. 20.
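For readers who want to reproduce this kind of design outside SPSS, the sketch below (Python, SciPy/statsmodels) mimics the nested structure, that is, replicate animals within beakers and beakers within food treatments, together with Tukey and t-test comparisons. The data are randomly generated and all column names are our own placeholders; with a balanced design, averaging the replicates to one value per beaker and running a one-way ANOVA on the beaker means gives the same test of the treatment effect as a full nested ANOVA, which is the shortcut taken here.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Toy data: 3 food treatments x 10 beakers x 2 measured adults per beaker.
rows = []
for food, mean in [("CHL", 2.78), ("STE", 3.18), ("MIX", 2.99)]:
    for beaker in range(10):
        beaker_effect = rng.normal(0.0, 0.02)        # beaker-to-beaker variation
        for _ in range(2):                           # two animals measured per beaker
            rows.append({"food": food, "beaker": f"b{beaker}",
                         "length_mm": mean + beaker_effect + rng.normal(0.0, 0.05)})
df = pd.DataFrame(rows)

# Treatment effect tested at the beaker level (the nested experimental unit).
beaker_means = df.groupby(["food", "beaker"], as_index=False)["length_mm"].mean()
groups = [g["length_mm"].to_numpy() for _, g in beaker_means.groupby("food")]
print(stats.f_oneway(*groups))                       # one-way ANOVA on beaker means

# Tukey's post-hoc comparison of the three treatments.
print(pairwise_tukeyhsd(beaker_means["length_mm"], beaker_means["food"]))

# Two-tailed t-test, as used for the per-species cell counts in MIX-group guts
# (the two count arrays here are synthetic).
print(stats.ttest_ind(rng.poisson(50, 20), rng.poisson(48, 20)))
```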
D. magna Response to Different Food Algae Resources
A clear difference in the size of adult D. magna was observed between the experimental groups (Figure 2 and Table 2). Adult D. magna that consumed S. hantzschii were significantly larger than those that fed on C. vulgaris (mean ± standard deviation; STE: 3.18 ± 0.05 mm; CHL: 2.78 ± 0.07 mm). Adult size in the MIX group was intermediate (2.99 ± 0.07 mm; Figure 2A). Although subgroups (i.e., beakers) showed statistical differences, post-hoc tests revealed significant differences in the average values of the three groups (three comparisons, P < 0.001). The size of D. magna offspring also differed significantly between groups (Figure 2B and Table 2). As for adults, offspring from the STE group (1.14 ± 0.05 mm) were significantly larger than offspring from the other groups (CHL: 1.08 ± 0.06 mm; MIX: 1.08 ± 0.07 mm). The sizes of individuals in the CHL and MIX groups were similar. Post-hoc tests revealed significant differences between CHL and STE, and between MIX and STE (P < 0.001), but not between CHL and MIX (P > 0.05).
Differences in food algae also resulted in variations in offspring number (Figure 2C). Adult D. magna that consumed S. hantzschii produced more offspring than did those of the other groups (STE: 26.2 ± 1.8 ind. per adult; CHL: 11.4 ± 1.8 ind. per adult; MIX: 18.5 ± 2.67 ind. per adult). The three groups differed significantly from one another (one-way ANOVA; F = 120.76, P < 0.001, d.f. = 2), which was supported by the results of post-hoc tests (all three cases, P < 0.001). Gut content analysis showed that D. magna
Stable Isotope Analysis of Food Algae Assimilation
Stable isotope analysis revealed that D. magna depended more on S. hantzschii than on C. vulgaris when they fed on a mixture of these algae (Figure 3). The δ¹³C and δ¹⁵N ratios indicated the contribution of prey phytoplankton to D. magna. D. magna adults in the two groups fed only one species (CHL and STE) depended on either C. vulgaris or S. hantzschii, respectively. However, D. magna in the MIX group relied more on S. hantzschii. In addition, when the contribution rates of the two food algal species in the MIX group were calculated from the isotope analyses using the two-source mixing model, the contribution rate of S. hantzschii (92%) was higher than that of C. vulgaris (8%). Therefore, the diatom species (S. hantzschii) contributed most to the growth of D. magna individuals and populations.
Contribution of Different Algal Species to D. magna Growth
Of the two algal species studied, the diatom S. hantzschii appears to be the more suitable food item for D. magna; this was true for both population growth and for individual growth. Despite similar consumption rates of the two algae (see Table 1), the size of D. magna individuals increased much more when they utilized S. hantzschii. There are several explanations for why this may occur. Diatoms are commonly regarded as good sources of lipids and serve as a food source for zooplankton [31], [8]. They are known to contain large amounts of eicosapentaenoic acid [32], a fatty acid required in the diet of many animals that may not be able to synthesize the compound [33], [34]. However, green algae are known to contain relatively lower nutrient contents compared to diatoms.
However, a greater availability of food algae (quantity-wise) does not always guarantee increased growth of grazer populations. A second possibility involves the digestive capacity of D. magna. The digestion rate of D. magna differs depending on the phytoplankton species consumed. Van Donk et al. [35] suggested that the cell wall morphology of green algae might reduce their digestibility by Daphnia. Our stable isotope results suggest that the greater dependence of D. magna on S. hantzschii is attributable to more effective absorption of nutrients from the diatom species. Therefore, it may be assumed that a balance between quality and absorbability of food algae is important for individual and population growth in zooplankton. Further research should investigate the characteristics of this balance.
Offspring Characteristics
An interesting finding of this experiment was the changing pattern of offspring size and number according to the species of algae consumed. Both the abundance and the size of offspring in the STE group were greater than in the other groups. Although the size and number of offspring depend on the clutch size and reproductive capability of adults [36], we suggest that D. magna adults may respond flexibly to the quality of the energy sources they capture, resulting in changes in the size and number of their offspring in accordance with the algal species consumed. That D. magna actively responds to food algal quality was not definitively shown, but the response was clear. When algal resources are of low quality (less nutritious algae, such as C. vulgaris in the present study), female Daphnia may limit offspring size to ensure survival. Enhanced nutritional conditions (inclusion of S. hantzschii in the MIX group) resulted in a slight increase in offspring number. We maintained the supplied amount of carbon in all treatments at 2.5 mg C L⁻¹, but the algal species providing this carbon differed. Although the availability of carbon allowed survival and reproduction of D. magna, offspring characteristics were further improved when the proportion of S. hantzschii was increased. The provision of more nutritious food algae caused this pattern to emerge. We expected that a semi-restricted diet would result in moderate changes in size and fecundity (i.e., the MIX group) and that a nutritious diet fed to a growing individual would increase the size and fecundity of that individual, thereby also increasing the population (i.e., the STE group). In previous studies, offspring size was affected by the quantity of food algae and the presence of predators [37], [38]. Although we did not consider the presence of predators, the quality of food algae plays a key role in the population growth of D. magna, at least when food algae are sufficiently abundant.
The quality of food algae may be very important to filter-feeding zooplankton. Recent studies have found that other zooplankton groups (mainly copepods) are not affected by the quality of algal resources during population growth [39]. In one study, egg production was not significantly related to lipid content across six phytoplankton species; the authors suggested that slow transit time through the gut (i.e., increased opportunity to absorb nutrients) could explain this result. In contrast to copepods, Daphnia typically shows a relatively fast gut-passage time, which does not allow optimal absorption of nutrients [40]. Therefore, the quality of food algae, as well as its absorbability, is crucial for Daphnia population growth, and food algae that are fully assimilated will result in the maintenance of, or increases in, zooplankton population levels.
Appropriate Food Selection using Stable Isotope Analysis
Based on the results of this study, it is possible to quantify the energy channeled from primary producers to primary consumers, expanding on basic understandings of connectivity. The traditional method of investigating food web structure involves visual inspection of gut contents [41], [42], but recently, DNA barcoding has increased the resolution of prey identification to the species level [43]. From a functional perspective, prey consumption is related to the growth of grazers and predators [44], [45]. Despite providing such evidence, these methods are limited in their ability to quantify the contribution of prey to grazers. Assimilation indicates how grazers utilize prey for growth, and the results of the present study thus further elucidate microbial food web structure. Consequently, the consumption rate (including qualitative and quantitative aspects) and the contribution rate of food items should be examined simultaneously to elucidate food web functions more precisely. In addition, as more information on multi-species prey and grazer relationships becomes available, understanding of ecological integrity will be improved.
Conclusion
The growth of D. magna individuals and offspring was significantly improved by the consumption of S. hantzschii but not C. vulgaris. Stable isotope analysis revealed substantial assimilation of diatom-derived materials in D. magna, indicating that diatoms are important to the population growth of this species. These results confirm the applicability of stable isotope approaches for clarifying the contribution of different food algae and for elucidating the importance of food quality for D. magna population growth.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2015-09-25T00:00:00.000
|
6702249
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2015.00484/pdf",
"pdf_hash": "3d158dd011d8886cccd64f6dc6bd530ac915c795",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44137",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "3d158dd011d8886cccd64f6dc6bd530ac915c795",
"year": 2015
}
|
pes2o/s2orc
|
Mathematical Modeling of Early Cellular Innate and Adaptive Immune Responses to Ischemia/Reperfusion Injury and Solid Organ Allotransplantation
A mathematical model of the early inflammatory response in transplantation is formulated with ordinary differential equations. We first consider the inflammatory events associated only with the initial surgical procedure and the subsequent ischemia/reperfusion (I/R) events that cause tissue damage to the host as well as the donor graft. These events release damage-associated molecular pattern molecules (DAMPs), thereby initiating an acute inflammatory response. In simulations of this model, resolution of inflammation depends on the severity of the tissue damage caused by these events and the patient’s (co)-morbidities. We augment a portion of a previously published mathematical model of acute inflammation with the inflammatory effects of T cells in the absence of antigenic allograft mismatch (but with DAMP release proportional to the degree of graft damage prior to transplant). Finally, we include the antigenic mismatch of the graft, which leads to the stimulation of potent memory T cell responses, leading to further DAMP release from the graft and concomitant increase in allograft damage. Regulatory mechanisms are also included at the final stage. Our simulations suggest that surgical injury and I/R-induced graft damage can be well-tolerated by the recipient when each is present alone, but that their combination (along with antigenic mismatch) may lead to acute rejection, as seen clinically in a subset of patients. An emergent phenomenon from our simulations is that low-level DAMP release can tolerize the recipient to a mismatched allograft, whereas different restimulation regimens resulted in an exaggerated rejection response, in agreement with published studies. We suggest that mechanistic mathematical models might serve as an adjunct for patient- or sub-group-specific predictions, simulated clinical studies, and rational design of immunosuppression.
Keywords: DAMPs, allo-recognition, ischemia/reperfusion injury, transplant, equation-based model, ordinary differential equations
Introduction
Solid organ transplantation represents the treatment of choice for end-stage organ failure-associated diseases, and has proved effective at extending and improving the quality of life of patients. Approximately 22,000 patients receive solid organ transplants every year in the United States, according to the United Network for Organ Sharing (http://optn.transplant.hrsa.gov/). While 1-year outcomes after solid organ transplantation are excellent, the long-term outcomes are still mediocre, ranging from a 70% survival rate for kidney transplantation to 40-50% survival for heart/lung and intestine transplantation at 5 years (1)(2)(3). These poor long-term outcomes depend on multiple factors related to both donor and recipient, but are in their vast majority dictated by initial polyclonal, multimodal, and redundant innate and adaptive immune responses of the recipient directed against the allograft (4). These early immune responses occur both locally and systemically, in response to non-specific inflammatory damage-associated molecular pattern molecules (DAMPs) or to allo-antigen (allo-Ag)-specific major histocompatibility complex (MHC) mismatch. These responses may be triggered by (i) the transplant surgery procedure (5); (ii) the type and the quality of the graft, including the level of ischemia/reperfusion (I/R) injury (IRI) post-revascularization; and (iii) the level of pre-formed cellular (T cell) allogeneic and heterologous immunologic memory responses (4,6).
Inflammation and Immunity in Solid Organ Transplantation
While most work in the transplant field has focused on the antigen-driven immune processes that drive graft rejection, recent work has begun to focus on the interplay between early innate immune mechanisms and subsequent antigen-driven responses (7)(8)(9)(10). In this respect, the transplant community has begun to acknowledge the tightly woven interplay between innate and adaptive immunity that has been recognized in other fields (11)(12)(13)(14)(15)(16)(17)(18)(19)(20). These studies have pointed to multiple intersecting pathways by which early stress or injury leads to activation of innate and adaptive lymphoid pathways. Key among these pathways are those driven by DAMPs, which play intracellular housekeeping roles normally but which are released both locally and systemically upon stress, injury, or infection (21,22). DAMPs activate classical innate immune cells such as macrophages and polymorphonuclear cells (PMN; i.e., neutrophils), but also stimulate dendritic cells (DC) to drive cytotoxic (Tc) and helper (TH) T cell activation/polarization (23)(24)(25)(26). In addition, non-conventional γδ-T cells, natural killer (NK)-T cells, as well as TH1 and TH17 cells (along with innate cells), provide other points of intersection between innate and antigen-specific (adaptive) immune responses (6,27).
The transplantation procedure involves oxygen deprivation (ischemia) in the recipient host tissues as well as in the donor graft, owing to the time interval from donor organ removal to its placement in the recipient host. Once the transplant is complete, blood flow resumes, a process known as reperfusion. The I/R event is well known to cause injury (IRI) to tissues, in addition to any direct tissue damage from the surgical procedure. These injurious events further initiate the release of DAMPs, which abates as IRI resolves (28)(29)(30)(31). However, DAMPs initiate an acute inflammatory cascade involving the early expression of adhesion and co-stimulation molecules, chemokine release, and inflammatory cytokine production by innate immune cells as well as memory T cells. Briefly, neutrophils respond to DAMPs by extruding highly inflammatory DNA material [neutrophil extracellular traps (NETs)] that triggers monocytes and tissue macrophages to secrete interleukin (IL)-1β, IL-6, and tumor necrosis factor-α (TNF-α). In turn, these pro-inflammatory cytokines stimulate monocyte-derived DC to produce IL-12, a pivotal cytokine for the generation of type-1 immunity (6,27,32,33). In addition, activated monocytes can release IL-23, a cytokine critical for the recruitment of IL-17-producing γδ-T cells, responsible in turn for neutrophil chemotaxis and activation (34,35). As a result of this innate immune cell cytokine storm and the direct response to DAMPs, γδ-T cells and memory T cells further contribute to IRI through IL-17 and interferon-γ (IFN-γ) release and costimulatory molecule up-regulation in an allo-Ag-independent manner (27,36,37).
A second layer of effector and inflammatory molecules is released by pre-formed alloreactive memory Type-1 and Type-17 T cells in response to graft mismatched allo-Ag recognition. The levels of T cell pre-sensitization of the recipient to the donor correlate directly with early acute rejection episodes (38). The ensuing inflammation acts as a feedback loop, and may further cause tissue damage that drives additional release of DAMPs and allo-Ags. Resolution of cellular and tissue inflammation triggered by surgery, IRI, and subsequent DAMP release is mediated by innate regulatory macrophages (M2 and Mreg), intrinsic regulatory cytokines [IL-10, IL-4, and transforming growth factor-β1 (TGF-β1)] along with T regulatory cells (Tregs) in animal models of heart, kidney, and liver transplantation (27,(39)(40)(41)(42), while pre-formed alloreactive memory T cells seem less sensitive to regulation by Tregs (43).
These immunologic events may play a significant role in driving the diverse outcomes that accompany organ transplantation in various cases of apparent antigenic mismatch. We use the term "apparent antigenic mismatch" since the response to allo-Ag includes multiple factors, such as (1) actual allo-Ag differences; (2) individual, genetically predetermined thresholds of immune activation in response to a given degree of antigenic mismatch; (3) pre-existing levels of memory T cells; and (4) individual-specific response to immunosuppressive therapy.
Modern organ transplantation has utilized potent strategies to control these unwanted early immune responses. Specifically, thorough pre-transplant screening of the recipient's pre-formed donor-specific allo-antibody reactivity against the donor (cross-match screening for humoral sensitization) is combined with depleting or non-depleting induction therapy at organ implantation and with versatile maintenance immunosuppression (44)(45)(46). All of these methods seek to mitigate the deleterious effects of immunity while allowing regulatory molecules and cells to develop. Notably, these strategies target mostly adaptive immune cells such as T cells, leaving the innate immune players largely unchecked. Thus, patients with elevated DAMP release and inflammation due to significant IRI after reperfusion, and who carry undetected memory T cells against the donor MHC, may experience early rejection episodes despite proper pre-transplant screening, induction therapy, and maintenance immunosuppression. This contrasts with non-sensitized or minimally sensitized patients who experience minimal IRI owing to live donation and/or optimal MHC matching, resulting in either indolent subclinical inflammation or an uneventful clinical course with desirable quiescent outcomes. For example, acute cellular rejection (ACR) events in the first 3 months after kidney transplantation occur in 10-12% of patients, while biopsy-proven subclinical rejection occurs in an additional 15-18% of kidney recipients (47).
Deciphering the Complexity of Inflammation and Immunity with Mathematical Models
The foregoing discussion suggests an emerging paradigm in which context and timing matter more than semantic distinctions among immune/inflammatory responses: in essence, inflammation/innate immunity triggers early memory lymphoid pathways that can subsequently become more focused after exposure to specific antigens, while chronic inflammation might be thought of as the chronic restarting of acute inflammation (48). In this context, attempting to define and predict responses under particular circumstances, especially in individuals, becomes almost overwhelmingly complex.
Mathematical modeling provides a key tool by which to study the integrated innate/adaptive response or acute/chronic inflammatory response and thereby untangle some of this complexity (48)(49)(50). Therefore, such models provide a means to drive novel hypotheses with regard to complex immune processes like those involved in the transplantation procedure, and can assist in identifying viable, and possibly novel, points of control or diagnostic biomarkers. Multiple mathematical models that integrate innate and adaptive immune responses have been developed over the past decade to address diverse questions and disease states (51)(52)(53)(54). However, a comprehensive mathematical model of organ transplantation is as yet lacking, and the complexity of the immune events involved in the procedure reiterates the need for such an approach. Complex systems, especially biological ones, are notoriously sensitive to initial conditions (55,56). Thus, to address the solid organ transplant process comprehensively, we hypothesize the need to model not only the transplant and its antigenic properties, but also the initial conditions relating to the transplant surgery and subsequent IRI as drivers of innate immunity. Indeed, prior mathematical modeling studies have suggested the need to model the underlying process, for example, in the case of the role of underlying trauma in the setting of hemorrhagic shock (57).
The modeling simulations in this present study suggest that surgical injury and graft damage can be well-tolerated by the recipient when each is present alone, but that their combination (along with antigenic mismatch) may lead to acute rejection. An emergent phenomenon from our simulations is that low-level DAMP release can tolerize the recipient to a mismatched graft under specific restimulation settings, while other restimulation regimens lead to an exaggerated rejection response.
Results
To examine the early stages of inflammatory/immune responses to an organ transplant, including investigating the role of IRI in transplantation, we developed a mathematical model that includes the inflammatory hallmarks of IRI as well as the immune responses elicited by the apparent antigenic mismatch of the graft. As described above, we use the term "apparent antigenic mismatch" to comprise (1) actual antigenic differences; (2) individual, genetically predetermined thresholds of immune activation in response to a given degree of antigenic mismatch; (3) pre-existing levels of memory T cells; and (4) individual-specific response to immunosuppressive therapy.
The degree of this apparent antigenic mismatch is governed by a parameter, α, wherein a value of zero implies that the graft has 0% apparent mismatch with the host and a value of 1 implies complete (i.e., 100%) apparent mismatch. The model is initiated with a specified level of initial damage to the host and to the graft from the surgery and I/R, and thus the model simulations begin at approximately the time that transplant surgery is concluded (~8 h after the surgery begins), at which time reperfusion would occur.
In order to increase our ability to analyze qualitatively the driving forces behind diverse transplant outcomes, we simplify the number of components considered in the model and aim to create an abstract representation of the processes mentioned above. We focus on the following core scenarios and outcomes:
1. Clinical quiescence: the graft, following transplantation, shows no signs of inflammatory infiltrates. This is represented by model simulations showing little or no graft damage, corresponding to fully or almost fully recovered graft functionality.
2. Acute clinical rejection: the graft, following transplantation, sustains levels of damage from the host response that cause it to lose functionality, occurring in the first 3 months after transplant. This is represented by model simulations showing high graft damage and correspondingly poor graft functionality very early after the simulation is initiated (i.e., after the transplant is completed).
3. Subclinical inflammation: the allograft, following transplantation, shows no apparent clinical signs of organ damage, but subclinical levels of inflammation and cellular infiltrates are detected in the protocol biopsies in the first 3 months after surgery. This is represented by model simulations showing either stabilized but diminished graft functionality due to lingering inflammation, or non-stabilized, poor graft functionality due to oscillating inflammatory responses driven by T cells.
The Mathematical Model
The Section "Materials and Methods" provides a description of the dynamic model variables, and Table 3 there explains the auxiliary model variables. The dynamic model variables are those whose rates change over time and are modeled with an ordinary differential equation (ODE), whereas auxiliary variables are functions of dynamic variables. We first discuss the interactions that are pro-inflammatory and then discuss how these processes initiate and/or are inhibited by the anti-inflammatory components, all based on the immunology discussed in Section "Inflammation and Immunity in Solid Organ Transplantation." The model does not currently take into consideration explicitly the immunosuppressive therapies given before/during the transplantation procedure, though the effect of immunosuppression is in a sense contained in the concept of apparent antigenic mismatch. We envision testing specific immunosuppression mechanisms (e.g., killing of all inflammatory cells vs. specific killing of T cells) in future iterations of this model. The goal of this modeling exercise is to understand the dynamics of the transplant procedure from a more abstract perspective, in which we group multiple components into a single variable. While this level of abstraction will in no way allow a quantitative prediction of specific mediators and cells, this approach does allow for an examination of the overall qualitative dynamics of this system in which excitatory and inhibitory mechanisms interact. The early innate components of the model, denoted by the variable I, incorporate the general pro-inflammatory effects of cells such as tissue-resident M1 macrophages, circulating monocytes, neutrophils, and NK cells, as well as cytokines such as TNF-α, IL-6, IL-1β, IL-12, and IL-23. Pro-inflammatory T cells are represented by the variable TP, and incorporate the general properties of γδ-T cells and the TH1 and TH17 T cell subsets. Also included are anti-inflammatory components, denoted by A, which include M2 macrophages, IL-10, and TGF-β1. In addition, anti-inflammatory T cells are denoted by the variable TA and comprise T regulatory and TH2 T cells. There are also two dynamic variables that track the rate of change of tissue damage: one for host tissue, denoted by the variable D, and another for graft tissue, denoted by the variable DG. These six dynamic variables are modeled with ODEs that describe how the rates of these entities change over time as they interact with one another under different simulation scenarios. The variables have arbitrary units, as we are not aiming to match them with quantitative data but instead examine their dynamic behavior. The time scale is in hours. Whereas some parameters governing the various rates of the interactions are estimated from the literature when possible (e.g., from half-lives of cells and inflammatory mediators), the parameters are largely estimated to constrain the model to display basic biologically feasible behavior; see Section "Materials and Methods" for more information.
Figure 1 (legend) | Multiple arrows coalescing into a target variable at the same point indicate that all initiating variables are required to complete that particular induction/activation process; for instance, I and A are both needed to activate TA. Circulating/resting source populations of T cells and innate immune components, T0 and IR, respectively, are required for all processes that induce/activate these into the variables TP or TA and I, respectively. To keep the diagram uncluttered, the source populations are not shown in all of the processes in which they are required; instead, a representative example is given for each, as seen in the activation of IR into I by TP and in the activation of T0 into TP (alternatively, into TA) by TP (alternatively, by TA). The presence of allo-Ag of the graft is indicated with a red cross and represents another excitatory factor of the pro-inflammatory arms of the system, as is the DAMP release by damaged tissue. Some activation processes require the presence of allo-Ag, and these are represented by a red cross at the initiating (tail) end of an arrow.
Figure 1 shows that D and I interact in a positive feedback loop that is inhibited by A. This models the effect of DAMPs released by tissue damaged due to IRI. This process is driven by early innate immune components, resulting in the activation of pro-inflammatory components from a resting/circulating population, IR (5). These activated pro-inflammatory components cause further tissue damage, but the activation is inhibited by anti-inflammatory influences in a "checks-and-balances" manner. However, severe damage can cause an unabated positive feedback loop among these components, resulting in an unresolved response (31,57). In the absence of graft placement (i.e., considering the surgical procedure alone), the innate pro-inflammatory components can also induce pro-inflammatory memory T cell recruitment from a circulating T cell population, T0 (9,58).
In the presence of the anti-inflammatory components, A, the innate components, I, can induce Tregs and TH2 cells, represented by TA. Many of these activation/induction processes are inhibited by either A or TA (27,41,(59)(60)(61). This describes the interactions surrounding the surgical procedure and IRI of the host.
When a solid organ is transplanted, we considered that it would have some initial IRI due to the removal and transport procedures. In addition, the organ could subsequently be damaged by the pro-inflammatory components (both innate and T cell-mediated) present at the transplant site, even in the absence of allo-Ag (58). We model graft functionality (percent), G, as a function of this damage, DG (see Table 3 in "Materials and Methods"). Subsequently, we include a parameter (α) governing the mismatch factor to scale the response of innate and T cell pro-inflammatory components to an allograft. Figure 1 also shows that graft injury can release DAMPs, which in turn can activate innate immune components as discussed above. Furthermore, the presence of a graft with a positive antigenic mismatch factor, governed by the parameter α, will cause antigen-specific memory T cells to infiltrate and cause further injury to the graft. This process is modeled by a gain to DG. This damage will reduce graft function, G, as illustrated in the inset figure of Table 3, and consequently will reduce the percentage of graft tissue available to be harmed further.
With a positive graft mismatch factor, the early innate pro-inflammatory components, such as monocytes and M1 macrophages, through allo-recognition, will provide additional and specific activation via DC of the pro-inflammatory memory T cells, TP (10,58). This process is indicated in Figure 1 by the arrow coming from I into TP, with the apparent host-graft mismatch marker (red plus sign) present at the tail end of the arrow. In keeping with the abstract model representation of these processes, we do not include the DC component directly, yet the process is implicit in the interactions. Additionally, a positive graft mismatch factor will enhance further recruitment/activation of both pro- and anti-inflammatory T cells from the source T cell population, T0, by already activated components of these types. Again, various processes are inhibited by A and/or TA, as indicated in the legend of Figure 1 by an induction arrow that has a particular variable marker sitting atop it in the middle.
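Since the published equations (Eqs 1-6) and their parameters live in the Materials and Methods, the overall architecture can still be conveyed with a much smaller system. The Python sketch below is our own toy reduction, not the paper's model: a pro-inflammatory pool I, an anti-inflammatory pool A, host damage D and graft damage DG driven by a DAMP signal, an apparent-mismatch parameter alpha scaling the allo-specific drive, and graft function reported as an assumed decreasing function of DG. All rate constants and functional forms are illustrative assumptions.

```python
from scipy.integrate import solve_ivp

alpha = 0.3  # apparent antigenic mismatch: 0 (none) to 1 (complete)

def rhs(t, y):
    """Toy 4-variable reduction (illustrative rates, not the published Eqs 1-6)."""
    I, A, D, DG = y
    damp = D + DG                                      # DAMP signal from damaged tissue
    inhib = 1.0 + 2.0 * A                              # anti-inflammatory inhibition
    dI = 0.8 * damp / inhib + 0.5 * alpha * I - 0.3 * I
    dA = 0.2 * I - 0.1 * A                             # anti-inflammation induced by I
    dD = 0.4 * I / inhib - 0.5 * D                     # host damage fed by inflammation
    dDG = (0.4 + 0.6 * alpha) * I / inhib - 0.3 * DG   # graft damage, scaled by mismatch
    return [dI, dA, dD, dDG]

def graft_function(DG):
    """Assumed mapping from graft damage to percent of pre-transplant function."""
    return 100.0 / (1.0 + (DG / 1.5) ** 2)

# Start just after reperfusion: some host damage (D) and graft damage (DG) present.
y0 = [0.0, 0.125, 2.0, 1.0]                            # I, A, D, DG
sol = solve_ivp(rhs, (0.0, 1000.0), y0, max_step=1.0)  # time in hours
print(f"Graft function at t = 1000 h: {graft_function(sol.y[3, -1]):.1f}%")
```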
In the Section "Materials and Methods," the construction of the model is discussed and the full model is given by Eqs 1-6, with the model parameter descriptions and values used in the simulations given in Table 4. The equations are solved numerically to produce time courses of each of the system variables or states (see Materials and Methods). These resulting time courses are translated to clinical outcomes in the following manner. In general, we define a pre-surgery initial condition for the model variables as (I0, D0, A0, DG0, TP0, TA0) = (0, 0, 0.125, 0, 0, 0), which indicate that all system components are at their background values. This state is referred to as the baseline equilibrium. This setting assumes that there are no underlying immune conditions prior to transplant surgery, which is typically not realistic in the case of transplant recipients. Future iterations of the model could incorporate prior host health conditions. The system can be perturbed from this baseline state, for instance, by setting a Table 4. For D < 4, this outcome is possible. (e-h) Above a certain threshold, initial host tissue damage caused by IRI incites an inflammatory response that does not resolve and results in host health failure. Note that this scenario is not the one we would consider for transplant conditions, but demonstrate the scope of the model dynamics to produce theoretically possible outcomes of traumatic injury. Initial condition for this simulation was (I0, D0, A0, DG0, TP0, TA0) = (0, 4, 0, 0.125, 0, 0) with parameters as given in Table 4. For D ≥ 4, this outcome is possible.
Modeling the dynamics of allo-recognition in organ transplantation Frontiers in Immunology | www.frontiersin.org non-zero initial condition for D and/or DG, which indicates the presence of damaged tissue to host and/or graft, respectively, due to IRI. The rates at which system variables change as a function of time are governed by the Eqs 1-6. A simulation in which the variables' time courses return to the background levels, after a brief transient increase away from this state due to perturbation, is translated as a healthy outcome. Figures 2A-D display a basic healthy outcome scenario in terms of host health. On the other hand, an unhealthy outcome is presumed if the departure away from the healthy equilibrium is not transient but instead causes the variables to approach a different equilibrium that has elevated levels of the variable states. The unhealthy equilibrium implies host health failure and, when a graft is considered, graft failure as well. Alternatively, one could define a level of cumulative damage that could be considered as irreparable, rather than defining non-recovery only by the system's long-term behavior; we did not explore this possibility in the present study. simulation: ischemia/reperfusion injury Without graft Placement (i.e., G = 0) FigUre 3 | simulation results of the inflammatory cascade following transplant surgery and non-allo-ag graft placement (i.e., α = 0). Combined initial host and graft IRI can synergize to incite an inflammatory response that (a-D) cannot resolve, causing graft failure or (e-h) transiently decrease graft function significantly. (a-c) present a series of simulations in which (a) a moderate level of initial surgical IRI in the host is considered with no corresponding graft IRI associated with the placement, (B) no initial surgical IRI in the host is considered with a low level of initial graft IRI, or (c) the moderate level of initial surgical IRI in the host of simulation (a) is coupled with the low level of initial graft IRI of simulation (B). In (D), the graft functionality curves corresponding to simulations (a-c) are shown. The "Graft function for C" time course in (D) displays the synergy to severely affect graft function such that the graft fails, shown as functionality decreasing to and remaining at 12%. Similarly, panels (e-g) display outcomes for (e) a low/moderate level of initial surgical IRI in the host with no corresponding graft IRI associated with the placement, (F) no initial surgical IRI in the host with a corresponding moderate level of initial graft IRI, or (g) the combination of the low/moderate initial level of surgical IRI in the host from simulation (e) with the moderate level of initial graft IRI from simulation (F). In (h), the graft functionality curves corresponding with (e-g) are shown. The "Graft function for G" time course in (h) displays the synergy to significantly affect graft function, but only transiently after which the graft functionality fully recovers. Initial conditions for simulation: ischemia/reperfusion injury with graft Placement But with no apparent antigenic Mismatch (i.e., α = 0) The next iteration of simulations considers not only the IRI to the host from surgery but also the IRI associated with the graft due to the processes of harvest from donor and transportation to the recipient host. 
We assume that initial graft functionality starting at a percentage lower than 100% is a result of IRI due to the harvest and transport procedures, and not an indicator of the functionality that it had when still intact in the host from whom the graft was harvested. Thus, 100% in our model would mean 100% of the total functionality exhibited by a given organ pre-transplant. Presumably, organs harvested for transplant were functioning "normally," such that they did not have existing damage affecting this normal function. However, this value could be lower if an organ were harvested from an older or less healthy donor (a scenario we did not explore explicitly). For this simulation set, we assume that the graft and host are identical, and therefore do not consider any interactions that involve allo-recognition due to mismatch (i.e., the parameter governing mismatch intensity is set to zero: α = 0). The model also displays feasible qualitative behavior for possible outcomes when considering ranges of injury severity. In Figures 3A-D, we show that initial host damage combined with initial graft damage can synergize to result in graft failure, whereas each of these challenges separately did not. Figures 3E-H show synergy as well, but in a less extreme manner, wherein the graft does not fail and recovers fully. However, as seen in Figure 3H, the time course for "Graft Function for G" shows that the negative effects on graft function from IRI reduce graft function by 60% at one point in the simulation. This result suggests that the non-specific, detrimental effects of inflammatory processes initiated by IRI may make the graft that much more vulnerable in cases where host-graft mismatch is considered. We explore mismatch scenarios in the next two sections.
Simulation: Ischemia/Reperfusion Injury with Graft Placement and Varying Apparent Antigenic Mismatch Levels (i.e., α > 0)
In this next simulation set, we consider varying levels of host-graft mismatch, and thus the interactions shown in Figure 1 involving allo-recognition come into play. We use the initial condition (I0, D0, A0, DG0, TP0, TA0) = (0, 2, 1, 0.125, 0, 0) as in Figure 3G, and set α to different values within the interval [0,1] in the multiple simulation runs. Figures 4A-D display four qualitatively different outcome scenarios corresponding to ranges of the mismatch parameter, α. Each figure panel displays the graft functionality results of multiple simulation runs for values of α within the specified ranges. In these various scenarios, we observe outcomes corresponding to the clinical scenarios mentioned at the beginning of Section "Results." Clinical quiescence is represented in Figure 4A, where there is little or no graft damage and full or nearly full graft functionality is achieved and retained. Acute clinical rejection is represented in Figure 4D, where poor graft functionality is seen very early after the simulation is initiated (i.e., after the transplant is completed), and failure is predicted to occur within less than a month's time. The subclinical inflammation outcome is represented in Figures 4B,C. In Figure 4B, we interpret the smaller oscillations as subclinical chronic inflammation predicted to resolve on the order of 1-3 months (shown for up to 1000 h, ~42 days), since the recovery behavior is different from, and takes longer than, the graft tolerance recovery scenario of Figure 4A. Furthermore, since in Figure 4B the damped oscillations are such that (1) graft health does not decrease too often nor too greatly below the original graft health level, and (2) an acceptable recovery is seen eventually (i.e., graft health is greater than 95%), we interpret this behavior as subclinical. In other words, the graft is in comparable or better condition than when it was first transplanted, but it does not maintain optimal function until much later. Note that Figure 4A could also be classified as subclinical, but the length of time in which graft health is not ideal is much shorter relative to the scenarios in Figure 4B. Thus, we do not classify Figure 4A as a chronic scenario. In Figure 4C, the oscillations are larger and do not resolve as they do in Figure 4B. We equate this outcome with long-term rejection, since a high and steady level of graft function is never observed as T cells cause inflammation and subsequent damage to flare up and subside repeatedly. This prediction points to a scenario leading to graft failure, even though there are times when there is only subclinical inflammation and a good level of graft function is observed. Table 2 displays a summary of the minimal initial graft functionality percentages (corresponding to an initial value of DG) from which outright graft failure (i.e., ending graft functionality of 12%) is avoidable, given a particular value of α. For ~0.032 < α < ~0.3, the healthy stable equilibrium is replaced by a suboptimal healthy stable equilibrium. Higher α values outside this range give rise to oscillations that indicate worsening graft function, with the minimal graft functionality of the oscillatory range reaching 27% as α approaches 0.7. For α > ~0.75, outright graft failure is the only outcome, and the ending graft functionality equilibrium value is 12%.
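The qualitative read-out used in this section (quiescence, damped or sustained oscillations, outright failure at the 12% floor) can be automated once a graft-function time course is available. The sketch below is a hypothetical classifier with cut-offs of our own choosing that paraphrase the criteria in the text; it is illustrated on synthetic trajectories, not on output from Eqs 1-6.

```python
import numpy as np

def classify_outcome(graft_function: np.ndarray, settle_window: int = 200) -> str:
    """Classify a simulated graft-function time course (percent, sampled hourly).
    The thresholds are illustrative choices, not values from the paper."""
    tail = graft_function[-settle_window:]
    if tail.mean() <= 15.0:                  # pinned near the 12% failure level
        return "outright graft failure"
    if tail.max() - tail.min() > 5.0:        # unresolved oscillations
        return "oscillatory inflammation / long-term rejection risk"
    if tail.mean() >= 95.0:
        return "clinical quiescence"
    return "subclinical inflammation (suboptimal stable function)"

# Synthetic example trajectories, sampled hourly over ~42 days.
t = np.arange(1000)
trajectories = {
    "recovering":  100.0 - 40.0 * np.exp(-t / 100.0),   # dips, then recovers
    "oscillating": 70.0 + 20.0 * np.sin(t / 25.0),      # repeated flare-ups
    "failing":     12.0 + 80.0 * np.exp(-t / 50.0),     # settles at the 12% floor
}
for name, traj in trajectories.items():
    print(f"{name}: {classify_outcome(traj)}")
```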
Simulations of Preconditioning Scenarios
In some simulations, an initial level of host tissue damage can act as a preconditioning factor in promoting graft survival. While the release of DAMPs from injured tissue incites pro-inflammatory components, the cascade also involves induction of antiinflammatory mediators. If the pro-inflammatory levels from this initial surgical DAMP release are below some threshold, and the corresponding anti-inflammatory cell/mediator levels are above some threshold at the time the additional DAMP release happens from an IR-injured graft, then an attenuated damage response may be possible. We depict one such simulation experiment of this preconditioning phenomenon, shown in Figure 5. This type of preconditioning, in which the response to a second insult is lower than that for the first, is called "tolerance" and has been reported widely in multiple settings of acute inflammation (62,63). Indeed, a similar tolerance phenomenon was reproduced in a mathematical model of the host immune response to repeated endotoxin challenge (64). That study also demonstrated that repeated endotoxin challenges that were not timed carefully displayed potentiation of the inflammatory response, another manifestation of preconditioning typically known as priming (65). The analogous potentiation feature was seen in the present model in Figure 3D even with no mismatch factor present. We interpret this outcome to be similar to the scenario in which a graft is rejected, and the patient undergoes repeat transplantation. The outcomes in this setting are known to be poor (44,66). Thus, the timing of the excitatory and inhibitory mechanisms involved in the entire transplant process is important to understand in order for therapeutic strategies to positively synergize with these events.
Discussion
The integrated nature of inflammatory and antigen-specific immunity that underlie the response to organ transplantation has largely defied a synthetic understanding. This complexity can often be observed in the form of emergent phenomena that cannot be predicted based on an understanding of the component parts of the immune system, and may be at the root of the need for life-long immunosuppression post-transplantation. We suggest that the development of novel treatment strategies for organ transplantation can be aided greatly by mechanistic mathematical models such as the one presented here, because inevitably, independent mechanisms must be integrated in order to predict higher-order system properties in a clinically relevant manner. We regard a mechanistic model as one that describes "rules" for how the individual model components interact and evolve with time. We use the term "mechanistic" to distinguish this type of model from statistical or data-driven models, in which quantitative associations are defined, rather than abstracted mechanisms.
The past decade has witnessed such a synthesis in the form of simplified (reduced-order) computational models of acute inflammation, which have yielded useful insights into the mechanisms and pathophysiology of critical illness (64,(67)(68)(69). However, such models are at best only capable of general, high-level predictions, which are not sufficiently specific so as to be testable in individual patients or in in vitro/in vivo experiments. Alternatively, modeling biological systems in a realistic fashion often necessitates complex, large-scale models describing the underlying system dynamics (54,70,71). An important advantage of such mechanistic models is that they can allow for quantitative predictions (48,49,56,(72)(73)(74)(75)(76) and clinically translational connections of molecular mechanisms to pathophysiology (77), with the ultimate goal of improving the drug development process (78).
The unmet need for new treatments and diagnostic modalities allowing ultimately for long-term graft survival with low or no immunosuppression in organ transplantation is acute. While decades of work have led to many novel insights from the molecular to the physiological level, the net result has remained centered around life-long immunosuppression. We suggest that this is not because the effort has not been worthwhile or because promising candidate approaches were not pursued. Rather, it is our contention that what has not taken place is the process of synthesis of these insights into a larger whole. Computational modeling is a promising avenue for such synthesis; however, the current approach is based purely on statistical tools by which to associate multiple variables to outcomes.
In the present study, we created a mechanistic mathematical model based on ODEs that describe key mechanisms of innate and adaptive immunity and that span the full process of transplantation. This model focuses on the very early inflammatory events linked to the surgery, IRI, and memory T cell attack, events that cross-modulate each other and that translate into significant subclinical and clinical manifestations in only a subset of organ transplant recipients. However, these complex, early inflammatory events, when they do occur, may set the tone for either excellent or poor long-term allograft and patient outcomes. Thus, key outputs of our model include the prediction that surgical injury and I/R-induced graft damage can be well-tolerated by the recipient when each is present alone, but that their combination (along with antigenic mismatch) may lead to acute rejection, as seen clinically in a subset of patients (38,47). An emergent phenomenon from our simulations is that low-level DAMP release can tolerize the recipient to a mismatched allograft, whereas different restimulation regimens can drive an exaggerated rejection response. The former prediction is in agreement with published studies showing that preconditioning with the DAMP high-mobility group box 1 (HMGB1) can reduce the severity of inflammation and damage in the setting of graft IRI (99).
Limitations of this mechanistic mathematical model reside in the fact that induction therapy and maintenance immunosuppression are not considered in the model; this is an area for expansion and augmentation of our modeling work. Moreover, this mechanistic mathematical model of early innate and adaptive immune events is a generic one: each organ may have its own distinctive signature of early immune events. Thus, further augmentation of our model would involve making organ-specific variants. Additional limitations include the fact that this is a relatively abstract model, in which multiple mechanisms are lumped into single variables. As such, this model cannot be directly verified in a quantitative manner, other than with respect to the relative timing of various events. One key area where this limitation is apparent concerns the aforementioned emergent tolerization behavior as a function of prior exposure to damaged graft tissue, which we hypothesize as being due to DAMPs such as HMGB1 (99). Given that tolerization is a manifestation of similar mechanisms to those that drive injury, and that HMGB1 can drive hepatic injury through activation of DCs (100), it is tempting to speculate that DCs are a key cell type in this process. Thus, future modeling work focused on examining this tolerization mechanism (or alternative mechanisms) in the context of organ-specific environments is warranted. In addition, a more in-depth mathematical analysis could yield deeper insights into the dynamics, which would be especially helpful when the models are more closely tied to experimental and clinical data.
Despite these limitations, this model was capable of reproducing a rich set of biological and clinical behaviors. Simulations of this model under various initial conditions of IRI, graft injury, and degree of antigenic mismatch yielded a broad spectrum of outcomes from nearly complete graft function to outright (acute or chronic) rejection. Importantly, this model also yielded behaviors such as tolerization (durable unresponsiveness to donor-antigens) through preconditioning, as well as the harmful alternative outcome of more severe graft failure upon retransplantation. Future iterations of this model could address these limitations and additionally explore the effects of variability that would naturally exist from patient to patient with respect to host health and immune function (94). Consequently, mathematical/engineering control methodologies could be employed on the models to suggest early therapeutic intervention strategies for this complex immune system (101).
In conclusion, we suggest that this model is a stepping stone toward further insights, not only into the response to allotransplantation but also for other disease states. Several diseases with or without an immunologic trigger have been recently determined to have inflammation as a common fingerprint. Therefore, understanding diseases according to their common biological mechanism and using systems biology, mathematical modeling, and bioinformatics/data-driven modeling methods to interrogate the immune response before, during, and after perturbation will help not only to predict clinical outcomes but also guide prompt and precise targeting of new therapies (46,102).
Materials and Methods
We formulate the model by building upon the approach and principles of prior modeling work (64,69). In that prior work, an abstract, four-equation model of the acute inflammatory response to a bacterial pathogen and to Gram-negative bacterial endotoxin was developed. The approach considered various subsystems as a way to tractably analyze and calibrate the qualitative behavior of parts of the larger system, and thereby to understand which entities governed particular dynamic properties of the whole. We refer to this modeling process as a "subsystem modeling approach." The Reynolds et al. model displayed rich qualitative behavior that corresponded to multiple clinical outcomes seen in cases of severe systemic inflammation due to bacterial pathogens, as well as to experimental studies of endotoxemia and tolerance. The general dynamical components of this prior model, when considered without a pathogenic or endotoxin insult, also correspond well to an abstract representation of the immune response to traumatic insult. Thus, we adopted a similar strategy and mindset in developing the current model of immune responses in transplantation. All model simulations and analysis were performed with XPPAUT (103). To create Figures 2-5, the numerical data produced from the XPPAUT simulations were exported to MATLAB® (R2013b, The MathWorks Inc., Natick, MA, USA). Additional calculations were performed with MAPLE (2015, Maplesoft™, Waterloo, ON, Canada).

The complete mathematical model given by the ODE system (1)-(6) was analyzed using the subsystems approach mentioned above, wherein the dynamics of a few interacting variables are examined before all of the equations are combined. Parameter values used in this section can be found in Table 4. In the subsystems discussed throughout this section, of most interest are the number and stability properties of the equilibria and how these change as parameter values change. Equilibria of a system of differential equations occur at the intersections of the nullclines, which are obtained by setting each differential equation to zero and solving the resulting system of algebraic equations. The points that satisfy this are naturally the system states at which there is zero rate of change (e.g., dx/dt = 0), indicating an equilibrium state or fixed point. The dynamics of the ODE system are organized around these special points. For a system of two variables, the nullclines are especially useful for a geometric analysis of the system states and for observing how the shapes and positions of the nullclines change with changes to parameters or to the functional forms of the equation terms. Small perturbations of the system away from an equilibrium that cause the system solutions to return to the equilibrium as t→∞ define a locally asymptotically stable (or simply stable) equilibrium. If, on the other hand, the perturbation causes solutions to move away from said equilibrium, then we call the equilibrium unstable.
We only concern ourselves with biologically feasible equilibria which are those in the positive orthant. The variables of the system are necessarily formulated to remain positive for all time and all parameters are positive as well. For more details regarding the terminology and mathematical analysis used, consult for instance (104).
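For readers wanting to reproduce this kind of equilibrium and stability bookkeeping, the sketch below shows the standard recipe on a generic two-variable system: solve the nullcline equations simultaneously for fixed points, discard infeasible ones, and classify the rest by the eigenvalues of the Jacobian. The toy right-hand sides `f` and `g` and their parameters are hypothetical stand-ins, not the paper's equations.

```python
# Sketch: equilibria as nullcline intersections, stability from the Jacobian.
# The positive-feedback toy system below is hypothetical, chosen only because
# it is bi-stable like the DTotal/I subsystem discussed in the text.
import sympy as sp

x, y = sp.symbols("x y", real=True)
mu1, mu2, k2 = sp.Rational(2, 5), sp.Rational(1, 2), sp.Integer(1)

f = y - mu1 * x                        # dx/dt (hypothetical)
g = k2 * x**2 / (1 + x**2) - mu2 * y   # dy/dt (hypothetical)

J = sp.Matrix([f, g]).jacobian([x, y])

# Equilibria are the simultaneous zeros of the nullclines f = 0 and g = 0
for eq_pt in sp.solve([f, g], [x, y], dict=True):
    xv, yv = [complex(eq_pt[v]) for v in (x, y)]
    # Keep only biologically feasible equilibria (real and non-negative)
    if abs(xv.imag) > 1e-9 or abs(yv.imag) > 1e-9 or xv.real < 0 or yv.real < 0:
        continue
    eigs = J.subs(eq_pt).eigenvals()
    stable = all(sp.re(ev).evalf() < 0 for ev in eigs)
    print(f"({xv.real:.3f}, {yv.real:.3f}) -> {'stable' if stable else 'unstable'}")
```

For these toy parameters the script reports three feasible equilibria: a stable healthy point at the origin, an unstable saddle, and a second stable point at elevated levels, i.e., the bi-stable structure referred to repeatedly below.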
The DTotal/I Subsystem: Total Damage and Early Innate Components
We will first consider a subsystem that examines the dynamics of tissue damage and associated DAMP release with the early innate components of interest herein, as described in Table 1.
In (69), it was shown that a similar subsystem involving damage and early pro-inflammatory phagocytes contained a stable healthy equilibrium as well as another stable equilibrium corresponding to elevated damage and elevated immune components. We build upon the structure developed there to construct our subsystem here and discuss the resulting analysis afterward. We note that the terms contained within the ODEs that we formulate are based on the principle of mass action kinetics. For instance, Table 5 provides the system of reactions involving the resting/circulating innate components, IR, and the activated innate components, I. Table 3 then provides the details on how we use a quasi-steady-state assumption to reduce the IR/I system to a single equation, based on the rapid nature of the activation process.
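As a concrete illustration of the quasi-steady-state reduction referred to here (the specific reactions and rate constants are in Tables 5 and 3), a generic activation scheme with a resting pool \(I_R\), an activating signal \(D\), and activated components \(I\) reduces as follows; the symbols below are placeholders rather than the paper's exact parameter names.

\[
\frac{dI_R}{dt} = s_R - \mu_R I_R - k_a D\, I_R, \qquad
\frac{dI}{dt} = k_a D\, I_R - \mu_I I .
\]

If activation is fast relative to the dynamics of \(D\) and \(I\), setting \(dI_R/dt \approx 0\) gives \(I_R \approx s_R/(\mu_R + k_a D)\), so the pair collapses to the single equation

\[
\frac{dI}{dt} \approx \frac{k_a D\, s_R}{\mu_R + k_a D} - \mu_I I ,
\]

which is the kind of single-equation form obtained for the activated innate components in the model.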
For the analysis of the DTotal/I subsystem, we model the activation of resting/circulating pro-inflammatory innate components as described in Section "Deciphering the Complexity of Inflammation and Immunity with Mathematical Models" but ignore for now any inhibitory effects from anti-inflammatory components or additional activation by pro-inflammatory T cells and thus arrive at Eq.7.
(Table 3 excerpt: T0 denotes the population of inactivated memory T cells from which the T cell subsets, TP and TA, are produced; T0 is assumed to be in quasi-steady state, and the result is incorporated into the equations in which T0 appears [arbitrary units: T0-units]. G denotes graft health/functionality, measured with 0 indicating 0% and 1 indicating 100% functionality; graft health is defined as a function of the associated graft damage, DG.)
The total tissue damage can be modeled by combining tissue injury caused (a) to host tissues from the early innate components responding to DAMP release and (b) to the graft, G, by either early innate components, I, or by pro-inflammatory T cells, TP, the latter of which is ignored for the analysis of the DTotal/I subsystem. Thus, we formulate Eq. 8, where a decay term of the total damage is also incorporated to account for a combination of tissue repair and regeneration. Graft health, G, is a function of graft damage, DG as discussed in Table 3. Note that since Eq. 8 is for total damage and not just graft damage, the parameters kgdg and xgdg have a slightly different meaning in this subsystem than they will in the full system, where the DTotal equation is separated into two equations: one to represent the damage to the host, D, and another to represent the damage to the graft, DG. This separation is done later in order to distinguish between damage done in general and graft-specific damage. Additionally, the inhibitory effects of anti-inflammatory components, A and TA, are later incorporated as is the additional damage to graft tissue by activated pro-inflammatory T cell subsets, TP.
As in (69), we assume that the ability of the innate immune components to create damage saturates when these components are very large relative to their baseline levels. We also incorporate the Hill-type function given as f(x) under Eq. 8, with a Hill coefficient of 6. We note that the choice of Hill coefficient in Reynolds et al. was made to ensure that the healthy equilibrium of the subsystem has a reasonable basin of attraction. Using the parameter values given in Table 4, this modified system behaves as in the prior work, with the I and DTotal nullclines intersecting at (0,0) and at two additional points in the positive quadrant. The bi-stability reported in (69) is therefore present in the DTotal/I subsystem we developed here. This means that the system has the ability to display different outcomes, depending on the initial conditions of the variables that we test. These outcomes are then translated qualitatively into the clinical scenarios discussed in Section "Results." Additionally, we know from the prior results that incorporating the anti-inflammatory component as a constant yields a loss of this bi-stability when the level of the anti-inflammatory component exceeds a value of 0.6264, so that only the healthy equilibrium remains stable. Therefore, when we incorporate the analogous dynamic anti-inflammatory component, A, into the full model, we calibrate any additions to A so that the maximum level of A does not exceed the 0.6264 threshold, since exceeding it would produce unreasonable (i.e., non-biological) behavior. For instance, if this threshold were exceeded, the DTotal/I subsystem would be incapable of reaching an unhealthy equilibrium while other components of the model, such as activated pro-inflammatory T cells or graft damage (when separated from total damage), remained elevated. The conditions for bi-stability noted above are not changed when we combine the subsystems at the end.
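The saturating Hill-type term referred to here can be written compactly; the sketch below shows it with the Hill coefficient of 6 used in the text. The half-saturation parameter name `x_half` is a placeholder, since the exact symbol is garbled in this extraction.

```python
# Hill-type saturation used for the damage terms; n = 6 is the Hill
# coefficient quoted in the text, x_half is a placeholder parameter name.
import numpy as np

def hill(x, x_half, n=6):
    xn = np.power(x, n)
    return xn / (np.power(x_half, n) + xn)

# With n = 6 the response is switch-like: sub-threshold inflammation feeds
# back only weakly into damage, which keeps the healthy equilibrium's basin
# of attraction reasonably large.
print(hill(np.array([0.25, 0.5, 1.0, 2.0]), x_half=1.0))
```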
The I/TP system has one or two non-negative equilibria depending on the parameter values. If we fix the values for the parameters that appeared in the DTotal/I system, the following parameters govern the number and stability of the equilibria: kitp, st0, μt0, ktpi.
The point (I,TP) = (0,0) is always an equilibrium and is stable for kitp = 0.01, st0 = 1, μt0 = 0.05, and ktpi = 0.01. Since we have an estimate for the half-life of activated T cells (unpublished work), which translates to a rate of 0.03/h, we estimate the corresponding rate for inactivated memory T cells to be slightly larger, at 0.05/h. We also fix the source term, st0, to a value of 1 and then determine the values of kitp and ktpi such that the (0,0) equilibrium is stable and the rate at which trajectories approach this equilibrium is not unduly slow, which is related to the position of the nullclines. For simplicity, we let kitp = ktpi, since there is a lack of data with which to distinguish them. These values cannot be set too large, or (0,0) becomes unstable; we wish for this subsystem to have (0,0) stable under these parameters so that neither I nor TP alone will drive sustained TP or I levels, respectively. Therefore, when these equations are connected to the damage equations, sustained elevation of either component will depend on feedback from the damage they incite, rather than on each other alone.
The DG/TP (and G) Subsystem

For the DG/TP subsystem that includes the auxiliary variable G, the same type of functional form used in modeling damage to the host (D) via innate cells (I) is employed to model the graft damage, DG, created by pro-inflammatory T cells, TP. The parameter values are set according to Table 4. Bi-stability is not a feature of this system: when there are no TP cells, (0,0) is always stable, and for a low mismatch factor (i.e., α ≤ 0.074), (0,0) remains stable. As α increases through this value, (0,0) becomes unstable and a new stable equilibrium of interest (a spiral) is born. For values of α close to 0.075, the approach to this equilibrium is quite slow away from its stable manifolds. When α = 0.08, the positive equilibrium is a stable spiral, which establishes the presence of damped oscillations in this subsystem. Naturally, as T cells destroy graft tissue there is less tissue left to destroy, but as the tissue regenerates, the T cells can then destroy the regenerated tissue. Also, as T cell numbers increase, the source for new ones is depleted until the turnover/death of existing activated T cell subsets allows for the activation of more (literally the way the source/recruitment term is modeled); this could be interpreted as a wait time for replenishment of the T cell source from the bone marrow. Understanding the tissue repair process and its time scale relative to T cell behavior could help calibrate this aspect better. For instance, tissue repair/regeneration may be hindered significantly in disease states and may therefore depend on the existing level of damaged tissue.
The DG/TP/I (and G) Subsystem

The DG/TP/I subsystem, which includes the auxiliary variable G, is given by Eqs 13 and 14 and displays bi-stability for the parameters listed in Table 4 (with α = 0). Note that the DTotal/I subsystem is partially contained in this three-variable subsystem. Initial graft damage values with DG(0) > 0.095 lead to graft/host failure. Recall that this behavior is in the absence of any anti-inflammatory inhibition, so very little graft damage can lead to failure in this subsystem even without a positive mismatch factor. For very low initial graft damage [e.g., DG(0) = 0.08, or ~2% graft damage] and for low graft mismatch (e.g., α = 0.01), survival is possible. Although the range of initial graft damage and α value pairs that produce survival outcomes is limited, bi-stability is present, and the inhibitory components added later allow this range to increase. For some DG(0) and α value pairs [e.g., DG(0) = 0.08 and α = 0.02], graft functionality remains very high (~99%) for ~230 h (~1 week), after which it decreases rapidly to its ending steady-state functionality value of 13% by ~300 h. If activated memory T cells are present at time zero [i.e., TP(0) > 0], the time to graft failure greatly decreases. For example, with DG(0) = 0.08 and α = 0.02, when TP(0) = 1, functionality decreases to 13% by 50 h vs. 300 h without an initial population of activated memory T cells. A similar result occurs when there is an initial population of activated innate inflammatory components, I. For example, with DG(0) = 0.08, α = 0.02, and TP(0) = 0, when I(0) = 0.01, graft functionality decreases to 13% by 165 h vs. 300 h without an initial population of activated innate inflammatory components.
where R = kid·DG + kii·I + kitp·TP, and where f(x) = x^6/(xdi^6 + x^6).
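The "functionality drops to 13% by ~300 h" style of readout quoted above is simply a threshold-crossing time on the simulated graft-health trajectory. A minimal sketch of that calculation, assuming arrays `t` (hours) and `g` (percent functionality) have already been produced by an ODE solver for this subsystem:

```python
# Illustrative helper: first time graft functionality falls to a threshold.
import numpy as np

def time_to_failure(t, g, threshold=13.0):
    """Return the first time g(t) falls to `threshold` (percent), or None
    if it never does; t is in hours."""
    below = np.flatnonzero(g <= threshold)
    if below.size == 0:
        return None
    i = below[0]
    if i == 0:
        return t[0]
    # Linear interpolation between the bracketing samples
    frac = (g[i - 1] - threshold) / (g[i - 1] - g[i])
    return t[i - 1] + frac * (t[i] - t[i - 1])
```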
Anti-inflammatory Effects
The parameter values for the anti-inflammatory components, A, were set as in Reynolds et al. where applicable, and the additional parameters in this category were estimated to calibrate the baseline responses. For instance, the contribution of TA to A was calibrated such that maximum TA levels would not allow A to exceed its threshold value of 0.6264, as discussed previously. Additionally, in the case of severe initial tissue damage, it is possible that the positive feedback between DAMP release caused by tissue injury and inflammation causing further tissue injury may not resolve, and thus lead the way to multiple organ failure and death. In the current state of the art, the transplantation procedure and donor graft condition are such that the surgical procedure and associated I/R are typically not the cause of organ failure. However, this scenario is theoretically possible and helps to calibrate the extreme cases of the model, so that complete resolution is not the only outcome possible regardless of initial conditions and parameter values. Thus, the inhibitory effects of
|
v3-fos-license
|
2016-10-31T15:45:48.767Z
|
2014-12-04T00:00:00.000
|
13927839
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1152/jn.00347.2014",
"pdf_hash": "d78168dd9ea1ada8c02175cc2bf2b44bdfd3f67c",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44138",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "d78168dd9ea1ada8c02175cc2bf2b44bdfd3f67c",
"year": 2014
}
|
pes2o/s2orc
|
Blocking Central Pathways in the Primate Motor System Using High Frequency Sinusoidal Current
ABSTRACT

Electrical stimulation with high frequency (2-10 kHz) sinusoidal currents has previously been shown to produce a transient and complete nerve block in the peripheral nervous system. Modelling and in vitro studies suggest that this is due to a prolonged local depolarization across a broad section of membrane underlying the blocking electrode. Previous work has used cuff electrodes wrapped around the peripheral nerve to deliver the blocking stimulus. We extended this technique to central motor pathways, using a single metal microelectrode to deliver focal sinusoidal currents to the corticospinal tract at the cervical spinal cord in anaesthetized adult baboons. The extent of conduction block was assessed by stimulating a second electrode caudal to the blocking site, and recording the antidromic field potential over contralateral primary motor cortex. The maximal block achieved was 99.6%, similar to previous work in peripheral fibers, and the optimal frequency for blocking was 2 kHz. Block had a rapid onset, being complete as soon as the transient activation associated with the start of the sinusoidal current was over. High frequency block was also successfully applied to the pyramidal tract at the medulla, ascending sensory pathways in the dorsal columns and the descending systems of the medial longitudinal fasciculus. High frequency sinusoidal stimulation produces transient, reversible lesions in specific target locations and could therefore be a useful alternative to permanent tissue transection in some experimental paradigms. It could also help to control or prevent some of the hyperactivity associated with chronic neurological disorders.
This technique has been applied to pudendal nerve for the treatment of detrusor sphincter dyssynergia (Bhadra et al. 2006;Tai et al. 2004) and to vagus nerve for treatment of obesity (Camilleri et al. 2008).Optimal frequencies for cat pudendal nerve were in the 6-to 10-kHz range.Bhadra and Kilgore (2005) investigated blocking of sciatic nerve in rats; the optimal frequency was 10 kHz, the lowest tested.In macaque monkey median nerve, the most effective frequency was 20 -40 kHz (Ackermann et al. 2011b); frequencies around 10 kHz produced tetanic activation, not blocking.Across different subjects, Ackermann et al. (2011) described a positive correlation between nerve diameter and block threshold.The higher optimal frequencies found in their study may therefore be related to the larger nerve diameter in monkey (3-4.1 mm) compared with the smaller nerves in other species.
There is some debate over the mechanism of HF blocking; however, the effect is known to be restricted to the area close to the electrodes delivering the HF sinusoid, since stimulation of a position more distal on the nerve results in normal muscle activation (Kilgore and Bhadra 2004).Modeling studies suggest that the stimulus produces a steady-state depolarization of a broad section of membrane directly underneath the blocking electrode (Bhadra et al. 2007;Kilgore and Bhadra 2004;Williamson and Andrews 2005).Tai et al. (2005) proposed that the most important effect is the relatively high activation of potassium channels consequent on the depolarized membrane potential.
Thus far, most attention has been focused on HF blocking of peripheral nerves, but similar approaches could find widespread utility in central pathways. There is evidence that deep brain stimulation of the subthalamic nucleus for treatment of Parkinson's disease acts mainly by blocking endogenous activity, and not by augmenting activity via stimulation (Bellinger et al. 2008; Jensen and Durand 2009). However, the stimulus frequencies (~130 Hz) and waveforms (biphasic square pulses, 60- to 150-μs duration per phase) are very different from the sinusoidal stimuli in the kilohertz range used in work on peripheral nerve. Elbasiouny and Mushahwar (2007) used computer modeling to describe the effect on motoneuron firing of a HF sinusoidal current delivered through a nearby microwire electrode. Blocking was achieved with frequencies comparable to those used in peripheral nerve (5 kHz); the mechanism appeared to be sustained depolarization of the initial axon segment.
Spinal cord stimulation via a chronically implanted device is routinely used for the treatment of chronic pain.Such devices normally stimulate at low frequencies (between 30 and 100 Hz), but recently there has been interest in using kilohertz range stimuli.A large clinical trial has demonstrated promising effects by delivering 10-kHz stimuli to electrodes with tips in the epidural space of the thoracic segment (Al-Kaisy et al. 2014).The authors report a significant and sustained reduction in pain scores over a 24-mo period in the majority of patients.In contrast, however, results from basic science studies carried out in rats are conflicting (Shechter et al. 2013;Song et al. 2014).
To date, few studies have investigated the potential of HF sinusoidal stimuli to block conduction in central axon tracts.Aside from possible biophysical differences between central and peripheral axons, an important methodological difference is the type of electrode used to deliver the stimulus.All studies of peripheral nerve use cuff electrodes, which ensheath the nerve, focusing the current along the axon fascicles.By contrast, stimuli to central axon pathways are typically delivered via metal electrodes, insulated except for the tip.This allows stimuli to be delivered to deep target tracts, located via stereotaxic coordinates and electrophysiological response signatures, without damage to overlying neural structures.Activity block has been achieved with the use of tungsten microwire electrodes placed directly within the rat sciatic nerve (Ackermann et al. 2010b), but it remains unclear whether HF currents delivered through fine-tipped electrodes will be capable of producing sufficient blocking of central pathways to be useful, either clinically or experimentally.
In this study, we characterized the effect of HF sinusoidal stimuli on the primate corticospinal tract.Stimuli were delivered through a single sharp metal electrode and a distant reference.We found that near-complete blocking of fast corticospinal conduction could be obtained, with a rapid onset; the effect was reversible, also over a rapid timescale.We further demonstrate these findings in other primate central pathways and suggest that this method could have widespread uses, both to generate transient reversible lesions in animal studies and potentially to treat neurological disease caused by excess activity in a defined central tract.
METHODS
Anesthesia and surgical preparation. The main experimental series was performed in five anesthetized healthy adult male baboons (Papio anubis; 22.5-26 kg) as part of longer studies unrelated to the present report. All animal procedures were approved by the local ethics committee of the Institute of Primate Research, Nairobi, Kenya. Animals were initially anesthetized with intramuscular injection of ketamine (10-12 mg/kg) and xylazine (0.5-0.75 mg/kg). After intubation and insertion of an intravenous line, deep general anesthesia was maintained with inhaled halothane (1-2% in 100% O2) and continuous intravenous infusion of fentanyl (1-4 μg·kg⁻¹·h⁻¹). The animals were artificially ventilated using a positive pressure ventilator. Initial surgical preparation included a tracheotomy (which replaced the originally inserted endotracheal tube, providing more stable long-term airway protection) and insertion of a central arterial line for continuous blood pressure measurement via the carotid artery on one side. Methylprednisolone (initial loading dose of 30 mg/kg, followed by infusion of 1-7 mg·kg⁻¹·h⁻¹) was administered to reduce cerebral edema, and Hartmann's solution (1.5-6.5 ml·kg⁻¹·h⁻¹) was administered to ensure fluid balance. The urethra was catheterized to prevent urinary retention, and temperature was maintained using a heating blanket supplied with thermostatically controlled warm air. Anesthetic monitoring included heart rate, arterial blood pressure, pulse oximetry, end-tidal CO2, and core and peripheral temperatures.
The head was fixed in a stereotaxic frame, and a craniotomy over the left motor cortex (M1) was made to give access for epidural field potential recording.A laminectomy was performed to expose spinal segments T1-C5, and the spinal dura was removed to allow access to the cord.The vertebral column was clamped at the high thoracic level.In two animals a second craniotomy was opened on the right side and a small piece of M1 removed for an in vitro experiment unrelated to the present report.
Anesthesia was then switched to a combination of midazolam (0.4-2.4 mg·kg⁻¹·h⁻¹), ketamine (0.1-0.8 mg·kg⁻¹·h⁻¹), and fentanyl (2.9-11.2 μg·kg⁻¹·h⁻¹) for the electrophysiological recordings, because we have previously found that this yields stable anesthesia but leaves central nervous system circuits more excitable. Slowly rising trends in heart rate or blood pressure, or more rapid rises in response to noxious stimuli, were taken as evidence of lightening anesthesia; supplemental doses were then given and infusion rates adjusted accordingly. During spinal stimulation protocols, neuromuscular blockade was initiated by giving an intravenous bolus of atracurium (10-15 mg per injection); this was repeated approximately every hour as required to maintain block.
At the end of each experiment, the animals were killed by overdose of anesthetic; where tissue was to be harvested for histological analysis, the animal was perfused through the heart with phosphatebuffered saline followed by fixative.
Further experiments were carried out in one adult male macaque monkey (17.7 kg), with ethical approval from the Newcastle University Animal Welfare and Ethical Review Body and under appropriate licenses from the UK Home Office. This animal had previously been used for a chronic series of experiments on visual pathways, but his motor system remained undisturbed. The animal was initially anesthetized with an intramuscular injection of ketamine (10 mg/kg), and surgical procedures were carried out under sevoflurane (2-3.5%) with an additional infusion of alfentanil (12-15 μg·kg⁻¹·h⁻¹). A laminectomy was performed and the spinal dura removed to facilitate access to the T1-C5 segments. In addition, we made bilateral craniotomies over sensorimotor cortex for the purposes of recording cortical potentials. Physiological measures were monitored throughout the procedure as described in the experiments above. During recording, the anesthetic regimen was changed to an infusion of midazolam (0.9 mg·kg⁻¹·h⁻¹), ketamine (0.6 mg·kg⁻¹·h⁻¹), and alfentanil (12 μg·kg⁻¹·h⁻¹), and neuromuscular blockade was achieved with atracurium (0.7 mg·kg⁻¹·h⁻¹). At the end of this experiment, the animal was killed by overdose of anesthetic.
Electrophysiological recordings.In four animals, we investigated the properties of HF blocking stimuli delivered through electrodes positioned in the spinal cord.
"Hatpin" electrodes were made by joining a sharpened stainless steel electrode (MicroProbes, Gaithersburg, MD; order code MS501G; shaft diameter 256 m, tip diameter 3-4 m, estimated exposed surface area 310 m 2 , parylene-C insulated, tip impedance ϳ10 k⍀) to Teflon-coated seven-strand stainless steel wire.This was insulated with epoxy adhesive, leaving an ϳ3-mm insulated length of electrode protruding from the flat surface of the epoxy.One such electrode was inserted manually into a caudal section of the exposed spinal cord (approximate segmental level C6) on the right side, targeting the dorsolateral funiculus.Small adjustments were made using forceps to maximize the antidromic response observed over motor cortex following stimulation through this electrode, verifying its location within the dorsolateral funiculus, and it was then fixed in place using tissue glue.A standard stainless steel microelectrode (Microprobe SS30030.1A10;tip impedance Ͻ0.1 M⍀, tip diameter 2-3 m, estimated exposed surface area 180 m 2 ) was then inserted into a more rostral region of the spinal cord (around C5 segment) with the use of a three-axis stereotaxic manipulator, allowing us to examine how effects from this electrode depended on the tip location.
A schematic diagram of the experimental setup is shown in Fig. 1A. Stimuli were delivered to the caudal fixed electrode using an isolated constant-current stimulator at 2 Hz (AM Systems, Carlsborg, WA; biphasic pulses 0.2 ms per phase, intensity 500 μA or 1 mA). The blocking stimulus was provided by a device that converted a voltage command signal to a constant-current isolated output (DS4 stimulator; Digitimer, Welwyn Garden City, UK). This device was calibrated using a 20-kΩ load resistor across the two output contacts; output voltage was then measured on an oscilloscope and confirmed to be as expected. Sinusoidal currents had a frequency between 2 and 10 kHz, intensity of 200 μA to 1 mA, and duration of 0.53 s. Throughout this report, intensity is given as the peak amplitude, i.e., the maximal positive or negative excursion relative to baseline, which equates to half the peak-to-peak amplitude. An epidural cortical recording was made over M1 with silver ball electrodes resting lightly on the dura (gain 5,000, bandpass 300 Hz-10 kHz). Command waveforms and stimulus delivery were controlled by Spike2 software and a Micro1401 interface (CED, Cambridge, UK), which also sampled waveform data to disk at 25 kHz, together with markers indicating stimulus occurrence times.
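The blocking command waveform itself is straightforward to synthesize; the sketch below builds a constant-current sinusoid command at the parameters quoted above (2-10 kHz, 200 μA to 1 mA peak, 0.53 s). This is an illustrative reconstruction in Python rather than the authors' Spike2 script, and the DAC update rate `fs_hz` is an assumption (the 25 kHz figure in the text is the recording sample rate, not the command-generation rate).

```python
# Illustrative sinusoidal blocking command; fs_hz is an assumed DAC rate
# chosen well above the highest sinusoid frequency used (10 kHz).
import numpy as np

def sinusoid_command(freq_hz, peak_amp_ma, duration_s=0.53, fs_hz=100_000):
    """Constant-current sinusoid command. peak_amp_ma is the peak amplitude,
    i.e., half the peak-to-peak excursion, matching the convention in the text."""
    t = np.arange(0.0, duration_s, 1.0 / fs_hz)
    return t, peak_amp_ma * np.sin(2.0 * np.pi * freq_hz * t)

t, cmd = sinusoid_command(freq_hz=2_000, peak_amp_ma=1.0)   # 2 kHz, 1 mA peak
```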
Antidromic corticospinal responses in M1 were elicited following stimulation through the caudal spinal electrode (fixed "hatpin") at constant stimulus intensity.Stimulation through this electrode alone was interleaved with stimulation combined with HF sinusoidal stimulation through the rostral (movable) electrode.In most experiments, stimuli were given ϳ10, 250, and 500 ms after the onset of the sinusoidal current.We averaged together responses to the various stimuli given at 250 and 500 ms after verifying that both yielded similar results.Recordings were carried out with the rostral electrode at multiple depths to determine the spatial properties of HF block in the spinal cord.At each recording site, an intensity and frequency series for the HF block was recorded.We also measured the extent of overlap between corticospinal fibers that could be activated from the two electrodes by measuring responses to stimulation of each alone (biphasic square current pulses, as described above) and both together (occlusion).
In one baboon, we tested the ability of HF stimulation to provide information on pathways contributing to a response of unknown origin.Stainless steel electrodes as described above were inserted into the right-side medial longitudinal fasciculus (MLF) and left-side pyramidal tract (PT) in the medulla, using the double-angle stereotaxic method described by Soteropoulos and Baker (2006).Final electrode placement optimized the responses in field potential recordings from the surface of M1 and the cervical spinal cord.Responses in the spinal cord were then recorded following trains of three stimuli to the PT; the effect on these responses of delivering sinusoidal blocking currents to the MLF was tested.
We further tested parameters of HF block in one macaque monkey.These experiments used metal microelectrodes (as previously described) that were implanted into the pyramidal tract (positioned as above) and dorsal column of the spinal cord (hatpin electrode, positioned to optimize somatosensory evoked potential in sensorimotor cortex).In addition to HF sinusoidal stimuli, we also tested HF square-pulse stimuli delivered to the PT; these were produced using a standard isolated constant-current stimulator (model 2100; AM Systems).This animal also had a bipolar nerve cuff implanted around the median nerve in the upper arm to allow direct stimulation of the nerve.
Measures to reduce sinusoidal stimulus artifact.Delivering continuous sinusoidal stimulation posed a problem when making simultaneous recordings of electrical activity, as recorded signals were almost always contaminated by a large sinusoidal stimulus artifact.To reduce the impact of this, we designed the timing of test stimuli to be delivered at opposite phases of the sinusoidal current in successive trials (see Fig. 1B). Figure 1C shows an example of overlain single sweeps of recordings, which make clear the extent of the sinusoidal contamination and also illustrate the opposite phases of successive sweeps.Figure 1D shows an average of these waveforms; the artifact was much reduced by cancellation (note the scale bar is one-tenth of that in Fig. 1C).However, some residual contamination was still present.
Two signal processing methods were used to reduce the artifact further. First, we estimated a template for the sinusoidal contamination by constructing a cyclical average of the baseline (prestimulus) region. This averaged successive 1-ms-long sections of waveform; the cyclic average was then replicated over the entire average time course, both before and after the stimulus (red trace, Fig. 1D). We used a 1-ms cycle time for this process, because all sinusoidal frequencies tested were integer multiples of 1 kHz. The artifact template was then subtracted from the actual average (Fig. 1E). Part of the success of this method came from the fact that the same microprocessor-based system controlled the sinusoid generation and data capture, meaning that there was no drift between successive cycles of sinusoid and the data acquisition clock. Finally, we digitally low-pass filtered the response, using a cutoff frequency of 1.7 kHz, which was below the lowest stimulus frequency that we used (2 kHz). The filtered trace is shown in Fig. 1F.
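A compact reconstruction of the artifact-suppression pipeline described here is sketched below: average sweeps delivered at opposite sinusoid phases, build a 1-ms cyclic template from the pre-stimulus baseline, tile and subtract it, and apply a low-pass filter at 1.7 kHz. This is an illustration of the described processing in Python, not the authors' MATLAB code, and the function and argument names are placeholders.

```python
# Illustrative artifact-suppression pipeline (phase-alternated averaging,
# cyclic template subtraction, zero-phase low-pass filtering).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 25_000          # sampling rate, Hz (as in the recordings)
CYCLE_SAMPLES = 25   # 1 ms template cycle at 25 kHz (all frequencies were integer kHz)

def remove_sinusoid_artifact(sweeps, baseline_samples, fs=FS, cutoff_hz=1_700.0):
    """sweeps: (n_sweeps, n_samples) array of stimulus-aligned recordings,
    with successive sweeps timed at opposite sinusoid phases.
    baseline_samples: number of pre-stimulus samples used for the template."""
    avg = sweeps.mean(axis=0)                      # opposite phases largely cancel

    # Cyclic average of the pre-stimulus baseline -> 1 ms artifact template
    n_cycles = baseline_samples // CYCLE_SAMPLES
    baseline = avg[:n_cycles * CYCLE_SAMPLES].reshape(n_cycles, CYCLE_SAMPLES)
    template = baseline.mean(axis=0)

    # Tile the template over the whole average and subtract the residual artifact
    tiled = np.tile(template, int(np.ceil(avg.size / CYCLE_SAMPLES)))[:avg.size]
    cleaned = avg - tiled

    # Zero-phase low-pass below the lowest blocking frequency (2 kHz)
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, cleaned)
```

The template subtraction assumes, as the text notes, that sinusoid generation and data capture share one clock, so the 1-ms artifact cycle stays phase-locked across the whole record.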
By applying these processing methods, it was possible to extract clear recordings of responses whose amplitude could be reliably measured.The effectiveness of the processing was validated by the numerous instances when for whatever reason sinusoidal block was ineffective, and we recovered responses after artifact correction very similar to control waveforms (see for example Figs. 2, 3B, and 4D).All data analysis was performed in the MATLAB programming environment.
RESULTS
Effect of frequency and intensity of sinusoidal current applied to the dorsolateral funiculus. High-frequency sinusoidal stimulation was able to block conduction within the corticospinal tract of all animals tested. In each case, we found a frequency- and intensity-specific effect of blocking cortical field potentials recorded over M1. An example data set is shown in Fig. 2. The most effective stimulus frequency was 2 kHz, which was able to block the majority of the antidromic response at the highest intensity in three of the four animals (93.5, 99.6, 47.2, and 73.2% of response blocked at maximum intensity for baboons M, L, U, and N, respectively). There was a positive relationship between blocking efficacy and HF stimulus intensity; however, this grew weaker as frequency increased.
Spatial extent of block and comparison with stimulation.The depth profile illustrated in Fig. 3 shows how blocking changed as the movable electrode was advanced into the spinal cord of baboon L, keeping the intensity and frequency of the sinusoidal stimulus the same (1 mA, 2 kHz).A clear increase in conduction block was apparent as the electrode was advanced into the cord, although this appeared to have two distinct phases.The first phase produced a peak in block at a depth of 2 mm; this is consistent with the tip lying in a central region of the dorsolateral funiculus.As the movable electrode was advanced deeper into the cord, there was a second, more pronounced period of block at 3.5 mm.We suggest that this could reflect stimulation of a medial area of the white matter that borders the intermediate zone.This region has recently been shown to be densely populated with corticospinal fibers that are coursing into the gray matter to synapse onto interneurons (Rosenzweig et al. 2009).
Figure 3 makes clear that HF blocking is only effective within a limited distance of the electrode tip. We were therefore interested in comparing the ability of a sinusoidal current to block a population of axons with the ability of a square current pulse through the same electrode to stimulate them. We estimated the latter using an occlusion test, illustrated in Fig. 4A. The response to stimulation of the rostral electrode alone was subtracted from the response to simultaneous stimulation of both rostral and caudal spinal electrodes, yielding the additional fibers activated from the caudal electrode. This was compared with the response to stimulation of the caudal electrode alone. If there was no overlap between the two populations of stimulated fibers, these two traces would be the same. If there was complete overlap, the subtracted trace would show no response. Measuring the amplitude of the subtracted trace as a percentage of the amplitude of the response to the caudal electrode alone thus quantified the extent of overlap, i.e., how many of the fibers activated by the rostral electrode could also be activated by the caudal electrode.
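The occlusion measure reduces to simple arithmetic on the three averaged responses. A minimal sketch, assuming response amplitudes have already been measured over the marked window, and interpreting "overlap" as 100% minus the subtracted-trace fraction (consistent with how the percentages below are described):

```python
# Illustrative occlusion/overlap calculation from measured response amplitudes.
def percent_overlap(amp_caudal, amp_rostral, amp_both):
    """amp_* are response amplitudes (same units) to caudal alone, rostral
    alone, and both electrodes stimulated together."""
    if amp_caudal == 0:
        raise ValueError("caudal-alone response amplitude must be non-zero")
    extra_from_caudal = amp_both - amp_rostral    # fibers added by the caudal shock
    return 100.0 * (1.0 - extra_from_caudal / amp_caudal)

# Hypothetical amplitudes, chosen to give ~65% overlap, comparable to the
# 'substantial occlusion' case described in the next paragraph:
print(percent_overlap(amp_caudal=10.0, amp_rostral=8.0, amp_both=11.5))  # 65.0
```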
When comparing overlap of activation produced by stimulation with the extent of block, we found two distinct patterns (each seen in 2/4 animals). For the experiment illustrated in Fig. 4, C-F, there was limited occlusion. The rostral electrode seemed capable of activating only a small fraction of the same corticospinal axons as the caudal electrode (Fig. 4, C and D; 11.3% at 1-mA intensity). However, a 2-kHz sinusoidal current passed through the rostral electrode could block around 50% of the response activated by the caudal electrode, even with a sinusoidal amplitude as low as 400 μA (Fig. 4, E and F). By contrast, for the experiment illustrated in Fig. 4, G-J, there was substantial occlusion. Stimulation through the rostral electrode was capable of activating 65.4% of the same response as the caudal electrode at the maximum intensity tested of 800 μA (Fig. 4, G and H). Sinusoidal currents passed through the rostral electrode blocked the majority of conduction at this amplitude (Fig. 4, I and J). In all animals, we found that HF stimulation typically blocked a greater proportion of fibers than could be activated by stimulation at the same intensity (Fig. 4B).
Onset time course.We examined the time course of the onset of conduction block in one animal; results are shown in Fig. 5.For this experiment, the moveable electrode was placed at the site of maximal block and a 2-kHz, 1-mA sinusoid was used throughout.Stimuli were delivered at different intervals after the onset of the sinusoidal current.For each interval to be tested, stimuli were tested on alternate trials at this interval and an interval 0.25 ms longer (half a cycle at 2 kHz) to ensure maximal cancellation of the sinusoidal artifact in averages (Fig. 1).As well as the stimulus artifact, the onset of the sinusoidal current elicited a large physiological response in the M1 recording (Fig. 5A).The response to sinusoidal current alone was subtracted from each sweep before the antidromic response produced from the caudal electrode was measured.
There appeared to be two distinct phases to the time course of blocking, as demonstrated previously in peripheral nerve. The first phase had an immediate onset and was reduced even when going from the 2- to 4-ms interval. This is likely to reflect activation of corticospinal fibers around the onset of the HF sinusoid. The elicited volley will show occlusion with that produced by the caudal stimulating electrode and then leave axons in a refractory state and unable to conduct. It is therefore perhaps slightly misleading to refer to this phase as "block," since the fibers were activated by the sinusoidal current and simply could not be activated again. Intervals >6 ms produced a more sustained phase of reduced response that is likely to represent true conduction block of corticospinal fibers.
We did not examine the time course of recovery from block in detail.However, our protocol delivered the first "control" stimulus 500 ms after the offset of the HF sinusoid; the response to this stimulus was the same as that to later control stimuli, suggesting that recovery was already complete by this time, similar to previous findings in the peripheral nerve (Bhadra and Kilgore 2005).
HF sinusoidal stimulation of other central pathways.We examined whether HF stimulation is also useful for blockade of other central neural pathways.First, we tested whether it was possible to block the corticospinal tract over its intracranial course by applying the blocking stimulus directly to the pyramidal tract at the medulla.As before, we generated an antidromic response in M1 by stimulating the cervical cord.We then applied HF sinusoidal stimulation (Fig. 6A) to the pyramidal tract.At 1-mA, 2-kHz stimulation, this was capable of blocking 85.7% of the antidromic response in M1.
Fig. 2. Blocking effects of HF sinusoidal stimulation on antidromic corticospinal potentials recorded over M1. HF sinusoidal stimuli were applied to the spinal cord rostral electrode with different combinations of frequency (rows; 2-10 kHz) and intensity (columns; 200-1,000 μA) at the same time as stimulation of the caudal electrode (500 μA, biphasic pulses). In each panel, the red line shows the response in the M1 epidural recording to the caudal electrode alone and the black line shows the response obtained during delivery of the sinusoidal blocking stimulus. Arrows mark the time of caudal stimulus delivery; gray boxes indicate the region over which the amplitude of the antidromic response was measured. Plots at the end of each row and column show how amplitude varied with intensity or frequency for fixed frequency or intensity, respectively. Amplitudes are expressed relative to the response to the caudal electrode alone (100%; solid red line). Error bars and dotted red lines indicate means ± SE. Data were recorded in baboon M.

Some laboratories may lack isolated constant-current devices capable of delivering sinusoidal stimuli. We therefore also tested whether block could be induced using square pulses. These were generated with the use of a standard experimental stimulator, set to deliver biphasic pulses with the width of each phase equal to half the cycle time. As shown in Fig. 6B, such square-pulse HF stimulation was also able to generate substantial block. However, whereas for sinusoidal stimuli the most effective frequency was the lowest tested (2 kHz), this was not the case for square-pulse HF stimulation. Pulses delivered at 5 kHz appeared to yield greater block than those at either 4 or 7 kHz (Fig. 6B).
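Because each phase lasts half the cycle time, the square-pulse variant described above is simply a 50% duty-cycle square wave at the chosen frequency. A sketch of that command waveform, with the DAC rate again an assumed parameter:

```python
# Illustrative biphasic square-pulse HF command; phase width = half the cycle
# time, i.e., a 50% duty-cycle square wave at freq_hz.
import numpy as np
from scipy.signal import square

def biphasic_square_command(freq_hz, peak_amp_ma, duration_s=0.53, fs_hz=100_000):
    t = np.arange(0.0, duration_s, 1.0 / fs_hz)
    return t, peak_amp_ma * square(2.0 * np.pi * freq_hz * t, duty=0.5)

t, cmd = biphasic_square_command(freq_hz=5_000, peak_amp_ma=1.0)  # 5 kHz block
```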
We also tested whether HF stimulation was capable of blocking central sensory pathways.Stimuli delivered to the median nerve evoked a somatosensory evoked potential over sensorimotor cortex (Fig. 6C, inset, red).When HF sinusoidal stimuli were applied to the dorsal column of the spinal cord, a substantial reduction in the somatosensory evoked potential was seen (Fig. 6C).This was marked even at the lowest intensities, demonstrating a particular sensitivity of this pathway to disruption from HF stimulation.
As noted above, one frequent use of experimental lesions is to reveal which pathway underlies a neural response.To demonstrate that HF block is effective when used in this way, we recorded from the dorsal surface of the cervical spinal cord in one animal following stimulation through an electrode implanted in the PT at the medulla.As shown in Fig. 6D (red), when we delivered a train of three stimuli to the PT, each stimulus was followed by a large short-latency corticospinal volley on the cord dorsum.In addition, the second and third shocks elicited later responses, which grew from second to third shock.These responses are not likely to result from current spread to other central pathways, because we have previously verified that a 1-mA stimulus to the PT in a macaque only just spreads to the adjacent contralateral pyramid (Soteropoulos et al. 2011, Fig. 3B); spread is even less likely in the larger brain of a baboon.Rather, the late responses presumably reflect transsynaptic processes originating from stimulated corticospinal fibers, but there are many potential pathways.Possibilities include recurrent activation of corticospinal neurons in M1 by the antidromic volley (Jackson et al. 2002), activation by corticospinal collaterals of C3-C4 propriospinal interneurons (Isa et al. 2006) or reticulospinal neurons (Keizer and Kuypers 1989), and activation of segmental spinal circuits (Riddle and Baker 2010).We tested one of these possibilities by placing a second electrode within the MLF, which carries many reticulospinal axons, and applying HF block through this electrode.
Figure 6D shows results of this experiment, comparing results from trains of three stimuli to the PT electrode alone (red) and during HF block of the MLF (black).Whereas the early corticospinal volley was unchanged, there were clear reductions in components of the later potentials.This is made clearer in Fig. 6E, which presents the difference between the two traces of Fig. 6D.The results demonstrate the existence of a reticulospinal volley following stimulus trains to the PT in primate, in agreement with previous reports in cat (Edgley et al. 2004).
Long-duration block.The results presented above concerned brief HF stimuli lasting no more than 0.5 s.In a further experiment, we also tested whether block would continue during a longer stimulus and whether such stimulation would have effects that outlasted the application of HF current.Figure 7A shows somatosensory evoked potentials obtained before (left), during (middle), and after (right) 30 s of continuous HF sinusoidal stimulation to the dorsal column (1 mA, 2 kHz).Note that the somatosensory evoked potential had two phases: the initial, negative phase was very consistent between sweeps in the control period and likely reflects the earliest synaptic input to the cortex; the later, positive phase was more variable during the control period and probably reflects later cortical processing that depends on exact background state.Block of the first part of the response was maintained throughout the stimulation period, although there was a small decrease after 10 s had elapsed; this decrement did not seem to reflect any adverse effect, because the response returned to the control level rapidly after HF stimulation was terminated.However, during an even longer 5-min period of HF stimulation (Fig. 7B), we observed considerable variability that could reflect the onset of tissue damage.The first 10 s of prolonged HF stimulation produced a block equivalent to that shown in Fig. 7A, but this was reduced markedly thereafter.Furthermore, when HF stimulation ceased, the early part of the somatosensory evoked potential did not recover to baseline level, at least not over the 30-s recovery period that we monitored.
DISCUSSION
We have shown that transient near-complete block of pathways in the central nervous system can be achieved with the use of a HF sinusoidal stimulus delivered through a single sharp metal microelectrode.This is a significant development because it demonstrates that principles carefully identified in the peripheral nervous system also apply to central pathways.As in previous work, we term the phenomenon a "conduction block."Alternative explanations for the reduction in responses might involve synaptic depletion or antidromic collision.Synaptic depletion cannot explain the observed effects on antidromic potentials, and the fact that block could be sustained for 30 s but then recover rapidly suggests that the blocking electrode was not continually stimulating fibers.The technique has potential to be exploited for both experimental purposes and clinical benefit.Possible therapeutic applications include reducing symptoms such as pain and spasticity, which are otherwise notoriously difficult to treat and represent a high burden for both patients and society.
HF block of the corticospinal tract. Our results demonstrate a range of conduction block using HF sinusoidal stimulation. The maximum achieved was 99.6%, similar to the complete block reported in studies of peripheral nerves. However, in two of four animals we achieved <75% block of antidromic fast corticospinal conduction. Although under optimal conditions a complete block can be achieved, it is important to be aware that variation in electrode placement or inter-individual differences may lead to incomplete blocking in some cases.
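Throughout the paper, the degree of block is expressed as the percentage reduction of the evoked volley relative to its unblocked amplitude. The following minimal sketch illustrates that calculation; the amplitude values are hypothetical and are not data from the study.

```python
# Minimal sketch of how percentage conduction block can be quantified from
# evoked-volley amplitudes; the example values are hypothetical.

def percent_block(amplitude_control: float, amplitude_during_block: float) -> float:
    """Block expressed as the percentage reduction of the response amplitude
    relative to the unblocked (control) response."""
    return 100.0 * (1.0 - amplitude_during_block / amplitude_control)

if __name__ == "__main__":
    # e.g. a control volley of 50 µV reduced to 0.2 µV corresponds to 99.6% block
    print(f"{percent_block(50.0, 0.2):.1f}% block")
```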
Blocking depended on stimulus frequency; in all animals, we found that a 2-kHz sinusoid (the lowest frequency tested) was most effective. This contrasts with some previous work in peripheral nerve, which reported higher frequencies to be optimal (Tai et al. 2004); in particular, for primate peripheral nerve much higher frequencies (20-40 kHz) are required than for rodents and cats (Ackermann et al. 2011b), although the precise optimal frequency varies between individual animals (Bhadra et al. 2006). We were unable to test such high frequencies in the primate central nervous system because of the limited frequency response of our current delivery system. However, previous authors also have shown that block threshold increases with stimulus frequency between 1 and 30 kHz (Bhadra et al. 2006; Bhadra and Kilgore 2005; Joseph and Butera 2011; Kilgore and Bhadra 2004; Gaunt and Prochazka 2009), in agreement with our finding of reduced block as frequency increased. By using square rather than sinusoidal HF currents, we were also able to generate block, although the dependence on frequency appeared to show subtle differences, with 5-kHz stimuli being more effective than either 4 or 7 kHz; 2 kHz was also effective (Fig. 6B).
We did not test frequencies below 2 kHz because in preliminary recordings lower frequencies led to sustained activation, visible in animals not under neuromuscular blockade as repeated twitches. Even at 2 kHz there was a substantial transient activation of motor pathways (see Fig. 5A), leading to a large twitch if the animal was not maintained under neuromuscular block. Previous work in peripheral nerve has attempted to reduce the transient activation by using slowly increasing sinusoidal amplitudes, but without success (Miles et al. 2007). A paradigm that commences with a high-frequency (30 kHz) and high-current stimulus and then transitions to lower frequency with lower current (10 kHz) has been shown to minimize onset response in rat peripheral nerve (Gerges et al. 2010). Alternatively, HF stimulation can be combined with direct current (DC) to reduce the onset transient (Ackermann et al. 2011a; Franke et al. 2013), although in central pathways this would raise concerns about producing permanent lesions from the DC stimulus. It is possible that similar approaches could be used for onset transient suppression in central pathways, but there is likely to be a very sensitive dependence on the precise biophysics of the axons involved, and hence it would be necessary to tune the paradigm specifically for each application. In many cases, the existence of an onset response does not compromise the utility of the technique (e.g., Fig. 6D). Examination of the time course of blocking onset was made complex by the interaction with the onset transient response. However, it was clear that blocking was maximal as soon as this response started to decline (Fig. 5).
As well as testing central rather than peripheral axons, our study differed from previous work because HF current was delivered between a metal microelectrode insulated except for its tip and a distant reference, rather than a bipolar cuff electrode surrounding the nerve allowing focal current flow. Gaunt and Prochazka (2009) recently compared mono- and bipolar HF stimulation in cat pudendal nerve. Although bipolar stimulation required lower currents for block, blocking could still be achieved in the monopolar arrangement. Metal microelectrodes are typically used for experimental stimulation of the brain or spinal cord; the demonstration that HF block is possible using this configuration will allow straightforward integration of the technique into many studies.
Our main assay of conduction block was the antidromic field potential recorded over M1 following corticospinal stimulation in the cord. This will depend on only the fastest conducting fibers; we cannot comment on the impact of HF block on more slowly conducting axons, which are much more numerous (Humphrey and Corrie 1978). The somatosensory evoked potential that we also assessed is likewise dependent on fast-conducting fibers in the dorsal columns. In peripheral nerve, Liu et al. (2013) demonstrated that conduction in the largest fibers was blocked at lower threshold, and recovered later, than in smaller fibers. It is likely that a similar bias toward blocking fast fibers will occur in central pathways at frequencies ≤10 kHz. In addition, slowly conducting C-fibers in peripheral nerve show a nonmonotonic variation of block threshold with frequency, with a second blocking region at frequencies >40 kHz (Joseph and Butera 2011). Delivering currents at these higher frequencies may thus provide a means of selectively blocking only slow fibers, which could be valuable in some studies.
Future use of HF block. High-frequency conduction block has the potential to become a useful experimental method. Unlike surgical lesions, it is quickly reversible. Protocols can therefore be repeated numerous times within one animal, leading to a reduction in animal numbers required for a given study. Although we see no reason why the method should not work in any central axon tract, it would be important for future studies to confirm the optimal frequency and intensity in the targeted structure, rather than assuming that the parameters that we have found to work in primate corticospinal tract and ascending sensory pathways are universally applicable. Although sinusoidal waveforms have been best investigated in peripheral nerve, rectangular waveforms (Bhadra and Kilgore 2004) and square wave pulses may also be effective (Fig. 6B), possibly opening the method to laboratories lacking equipment for arbitrary isolated current waveform delivery.
It is important to consider the safety of this technique as it is developed for further applications. Damaging effects of long-term electrical stimulation on nerve fibers have been reported (McCreery et al. 1995), and Liu et al. (2013) recently cautioned that HF stimulation can have long-lasting effects on nerve conduction, with the potential for nerve damage if used inappropriately. However, stimulus durations used in that particular study were substantially longer (5-10 s) than the 0.5 s used in most experiments here. Often for experimental use as illustrated in Fig. 6, D and E, only the briefest period of blocking is required, thereby lessening the chances of long-term damage. During our extended periods of HF stimulation of the dorsal column, we found complete recovery of somatosensory evoked potentials from a 30-s application but not following 5 min of block, underlining the potential for damage to central pathways following excessive stimulation. Gaunt and Prochazka (2009) showed stable blocking thresholds measured over 230 days when recording from cats implanted with cuffs on the pudendal nerve, suggesting that so long as blocking is kept within limits it has no cumulative long-term effects.
High-frequency conduction block is useful in vitro or for anesthetized whole animal preparations, where movements associated with the powerful onset transient activity can be blocked pharmacologically and there is no conscious perception of potentially unpleasant sensory activation. It would also be desirable to use the technique in the conscious state, either to make experimental lesions in behaving animals or as a therapy in patients, but onset activation is likely to present a severe limitation. Such activity has been reported only once in the literature in an awake cat (Gaunt and Prochazka 2009); the authors described a "mild aversive response" to HF stimulation of the pudendal nerve. Although this may be tolerable, the consequences of onset responses associated with stimulating central pathways are likely to be much more unpleasant, precluding use in awake subjects unless approaches to minimize onset response can be shown to work effectively in central axons (Ackermann et al. 2010a, 2011a; Franke et al. 2013; Gerges et al. 2010). For long-term chronic use, Gaunt and Prochazka (2009) described a system capable of delivering the sinusoidal current to implanted electrodes without the need for transcutaneous connectors or indwelling electronics. With minimal implanted material leading to reduced chance of postsurgical infection and a low risk of technical failure consequent on the simple design, this could have a number of uses as a clinical intervention if the problem of unwanted onset activity can be solved.
Fig. 1. Experimental design. A: schematic showing the stimulation and recording sites used in the experimental protocol. HFS, high-frequency sinusoid; M1, motor cortex; S1, somatosensory cortex; C5 and C6, spinal segments. B: stimulus timing used to assist cancellation of stimulus artifact. Stimuli were delivered in phase (black) or out of phase (red) with the sinusoidal stimulus. C: overlain single sweeps (n = 8) of responses recorded from M1 following stimulation through the caudal electrode (500 µA) during sinusoidal stimulation through the rostral electrode (2 kHz, 1 mA). Note the large sinusoidal contamination, which is at opposite phases in successive sweeps. D: black line shows the average of traces in C; red line shows an estimate of the residual sinusoidal contamination produced by a cycle-triggered average from the prestimulus period. E: difference between black and red traces in D. F: trace shown in E after application of a digital low-pass filter (cutoff 1.7 kHz). Note the different scale bars in C and D-F.
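The artifact-cancellation procedure summarized in this caption (averaging sweeps triggered in and out of phase with the blocking sinusoid, subtracting a cycle-triggered estimate of the residual contamination, and low-pass filtering) can be illustrated with a brief signal-processing sketch. This is not the authors' code; the sampling rate, variable names and synthetic data below are assumptions made purely for illustration.

```python
# Illustrative sketch of the artifact-cancellation steps described in the Fig. 1
# caption; sampling rate, amplitudes and synthetic data are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20_000.0                        # assumed sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)       # 20-ms sweep
rng = np.random.default_rng(0)

sine = np.sin(2 * np.pi * 2000 * t)                      # 2-kHz blocking sinusoid
evoked = 5e-6 * np.exp(-((t - 0.004) / 0.0005) ** 2)     # toy evoked response

# Successive sweeps are triggered in phase / out of phase with the sinusoid,
# so the sinusoidal contamination flips sign from sweep to sweep (Fig. 1, B and C).
sweeps = np.array([
    evoked + ((-1) ** k) * 50e-6 * sine + 1e-6 * rng.standard_normal(t.size)
    for k in range(8)
])

avg = sweeps.mean(axis=0)            # step 1: contamination largely cancels in the average

# step 2: estimate residual contamination with a cycle-triggered average from the
# pre-stimulus period and subtract it (Fig. 1, D and E).
cycle = int(fs / 2000)               # samples per sinusoid cycle
template = avg[:4 * cycle].reshape(4, cycle).mean(axis=0)
residual = np.tile(template, t.size // cycle + 1)[:t.size]
cleaned = avg - residual

# step 3: digital low-pass filter, cutoff 1.7 kHz (Fig. 1F)
b, a = butter(4, 1700 / (fs / 2), btype="low")
filtered = filtfilt(b, a, cleaned)
```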
Fig. 4. Relationship between block and stimulation. A: illustration of the method used to calculate overlap between fiber populations stimulated by rostral and caudal spinal electrodes. The response to stimulation of the caudal electrode alone was subtracted from the response to stimulation of both rostral and caudal electrodes simultaneously; this was compared with the response to the rostral electrode alone. B: scatter plot showing the amount of blocking vs. the amount of occlusion. Each point represents data from a different stimulus intensity and electrode location; different symbols show data from each of the 4 animals tested. Data points below the solid line indicate locations where the proportion of fibers blocked was more than that of those occluded. C: variation of occlusion with stimulus intensity to the rostral electrode. Points show the amplitude of the antidromic potential in the subtracted average as a percentage of the amplitude of the response to the rostral electrode alone (100%; solid red line). D: example traces related to C; black trace is with 1-mA sinusoidal stimulation and red trace is control. E: variation in amplitude of antidromic potential elicited from stimulation of the caudal electrode with amplitude of sinusoidal current at the rostral electrode. Amplitudes are expressed as a percentage of those seen without sinusoidal stimulation (100%; solid red line). F: example traces related to E; black trace is during 1-mA sinusoidal stimulation and red trace is control. Amplitude of the caudal electrode stimulus was 500 µA, and sinusoidal block was 2 kHz throughout. Results were recorded from baboon U. G-J: same as C-F, but for baboon L. Error bars and dotted red lines indicate means ± SE.
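The occlusion measure described in panel A amounts to asking how much of the rostral-alone response is missing once the caudal-alone response has been subtracted from the combined response. A minimal sketch of that arithmetic follows; the function name and the example amplitudes are hypothetical, not values taken from the figure.

```python
# Sketch of the occlusion estimate described in the Fig. 4 legend; names and
# amplitudes are illustrative only.
def percent_occlusion(resp_both: float, resp_caudal: float, resp_rostral: float) -> float:
    """Occlusion as the shortfall of the rostral-attributable component
    (combined response minus caudal-alone response) relative to the
    rostral-alone response."""
    rostral_component = resp_both - resp_caudal
    return 100.0 * (1.0 - rostral_component / resp_rostral)

# Example: combined stimulation adds only 30 µV on top of the caudal response,
# while the rostral electrode alone evokes 40 µV, giving 25% occlusion.
print(percent_occlusion(resp_both=90.0, resp_caudal=60.0, resp_rostral=40.0))
```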
Fig. 3. Spatial profile of HF conduction block. A: the amplitude of the antidromic corticospinal potential recorded from M1 elicited by the caudal spinal electrode during HF stimulation (1 mA) through the rostral electrode at different depths below the surface of the spinal cord. B-E: example traces recorded at the depths indicated. Red lines show responses to caudal electrode alone and black lines to stimulation during HF block. Data were recorded in baboon L.
Fig. 5. Block time course. A: average of the M1 epidural response following onset of the HF sinusoid (arrow). B: amplitude of the antidromic potential elicited from the caudal spinal electrode as a function of the time of the stimulus after the onset of sinusoidal stimulation to the rostral electrode. Amplitudes are expressed as a percentage of the response to stimulation of the caudal electrode alone (100%; solid red). A 1-mA stimulus was given through the caudal electrode and a 1-mA, 2-kHz sine wave through the rostral electrode throughout. Data were recorded in baboon U.
Fig. 6. HF blockade of different pathways in the central nervous system. A: blocking protocol applied directly to the pyramidal tract (PT) through an implanted metal microelectrode. Stimulation was applied to the dorsolateral funiculus (DLF) at C5, and recordings were made via intracortical microwire electrodes. Inset plot shows averaged responses obtained with 2-kHz, 1-mA blocking stimuli; red trace shows stimulation of DLF alone and black trace shows stimulation of DLF during HF sinusoidal block of PT. B: repeat of PT block with square wave pulses at the same stimulus frequencies and intensities. C: blocking applied to the ascending dorsal column pathway within the spinal cord. Measurements were taken from somatosensory evoked potentials (SEPs) recorded in S1, elicited by stimulation of the contralateral median nerve (2-mA stimulus). Inset plot shows averaged responses obtained with 2-kHz, 1-mA blocking stimuli; red trace shows stimulation of median nerve alone and black trace shows stimulation of median nerve during HF block of dorsal columns. Data in A-C were recorded in macaque. D and E: use of HF blocking to reveal pathway contributing to a response of unknown origin. D: average cord dorsum response following stimulation of the PT at the medulla (1 mA, 3 shocks). art, stimulus artifact; cst, corticospinal volley; ?, later transsynaptic response of unknown origin. Red trace shows response to stimulation of PT alone, and black trace shows stimulation of PT delivered during HF sinusoidal block of the medial longitudinal fasciculus (MLF; 1.5 mA, 2 kHz). E: difference between the red and black traces in D. The late response is reduced during HF block of the MLF, indicating that fibers passing through the MLF contribute to it. Data in D and E were recorded in baboon P.
Fig. 7. Application of long durations of HF block. A: 30 s of continuous HF sinusoidal block applied to the dorsal columns during recording of SEPs from the sensorimotor cortex. Left, average SEPs elicited by a 2-mA stimulus to the median nerve in successive epochs of 10 s (biphasic pulses, 0.2 ms per phase, 4 Hz, n = 40 stimuli). Middle, averages of responses corresponding to 0-10 s (cyan), 10-20 s (red), and 20-30 s (green) after block onset. Right, averages of responses corresponding to 0-10 s, 10-20 s, and 20-30 s after block offset. Arrows indicate time of median nerve stimulus. B: 5-min period of HF sinusoidal block applied to the dorsal columns. Left, initial average SEPs in response to median nerve stimulation, presented in successive 10-s epochs. Middle, average SEPs recorded at the start (cyan), middle (red), and end (green) of the 5-min period of HF block (numbers indicate the time in seconds following the start of HF stimulation). Right, average SEPs in 10-s blocks recorded after the HF sinusoidal stimulation was stopped. Each average corresponds to 40 stimuli, delivered at 4 Hz. Data were recorded in macaque.
|
v3-fos-license
|
2021-08-27T17:08:18.972Z
|
2021-01-01T00:00:00.000
|
240610886
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://gexinonline.com/uploads/articles/article-jcnrc-167.pdf",
"pdf_hash": "5c1e52809c71895c1283d7861c0336882a6142fc",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44140",
"s2fieldsofstudy": [
"Education"
],
"sha1": "c3a787a8fb19c04a2e599cee6afc3eb0a559410d",
"year": 2021
}
|
pes2o/s2orc
|
Nursing and Social Work Students' Knowledge and Attitudes Toward Patients with HIV/AIDS
Abstract
Patients living with HIV/AIDS are marginalized in health care, despite the core values of the profession. The objective of this exploratory quantitative study was to examine the knowledge and attitudes of undergraduate nursing and social work students at a Historically Black College and University (HBCU) toward patients living with HIV/AIDS. A convenience sample of 142 undergraduate nursing and social work students at a public HBCU in the US Mid-Atlantic region completed a self-administered questionnaire. The data were analyzed with SPSS statistics. Results showed that matriculating students in the two majors had inadequate knowledge of HIV/AIDS, that there was no significant difference in the level of knowledge between the majors, and that attitudes regarding the chronic disease were generally positive, with no observable change across academic levels. These findings indicate a critical gap in the undergraduate curriculum for nursing and social work students. Developing an elective course in HIV/AIDS for both disciplines would help to close the knowledge gap.
Introduction
AIDS, the advanced stage of HIV infection, is a global health problem with no effective cure. According to the Centers for Disease Control and Prevention (CDC) HIV Surveillance Report [1], there were about 1.2 million new cases of HIV in 2017 worldwide. About 36.9 million people are living with HIV around the world [2]. Nurses and social workers are an essential part of the patient care team, caring for a variety of patient populations. Social work and nursing are competency-based practice professions that play major roles in the care of people living with HIV/AIDS (PLWHA). The curriculum of both disciplines integrates theoretical and clinical training processes. The theoretical knowledge and clinical skills are progressive at different levels of education, based on course-level objectives.
Through the situated cognition approach of clinical experiential learning, students in both disciplines are actively involved in learning experiences in simulated laboratory and real-world patient care settings, a community of practice that focuses on learning by doing. In this way, learning occurs from the interaction of the learner with an environment specifically constructed to facilitate the experiences [3]. Similarly, [4] affirmed that situated learning occurs with social interactions and iterative critique by instructors and peers.
Theoretical and situated clinical learning experiences in both disciplines must equip students with the knowledge, skills, and abilities that entry-level baccalaureate graduates will need to deliver high-quality, safe, effective, holistic, and focused patient care. Upon graduation, nurses provide direct care to restore patients to health, teach health promotion, and participate in disease-prevention activities [5]. The NASW Policy on HIV and AIDS clearly states that "the social work profession should take an active stand to mitigate the overwhelming psychological and social effects, including the inequality of access to health and mental health care and the lack of education and prevention in the United States and internationally" (NASW, 2009-2011) [6]. Nurses are positioned to contribute to and lead the transformative changes that are occurring in healthcare by being fully contributing members of the inter-professional team [7]. Today's nursing and social work students are among the future health care professionals of PLWHA and, consequently, their knowledge about the disease and their attitudes toward PLWHA can have an impact on their effectiveness in the delivery of care and services. In its fourth decade, HIV/AIDS has shifted from a death sentence to a chronic and manageable disease due to remarkable advancement in medication but remains a persistent health problem around the world [8]. Forsythe et al. [9] noted that since the introduction of antiretroviral therapy (ART) in 1987 there have been significant improvements in treatment for people living with HIV, and emphasized the importance of reinforcing both treatment and prevention.
Health care professionals, including nurses and social workers in health care settings, have essential roles in the care of PLWHA. Consequently, both social work and nursing curricular efforts, as well as practicum settings, must focus on the development of knowledge, skills, and unbiased attitudes for the best patient outcomes.
Social workers have an ethical responsibility to each client and are often the first line of defense for marginalized individuals who face physical, psychosocial, or socioeconomic challenges [10,11]. Additionally, Chandran et al. [12] noted that social work education is grounded in values of service, social justice, dignity and worth of the individual, the importance of human relationships, integrity, and competence. Social workers are professionally obligated to live these values in their face-to-face interactions in their practice with marginalized populations (Chandran, p. 340) [12], such as PLWHA. Thus, it is paramount that social work students and social workers are informed, knowledgeable and skillful in their approach when working with individuals diagnosed with HIV/AIDS.
Nursing students are educated to provide quality health care to all patients. Graduates of both professions are generally the first among health care providers to encounter a patient living with HIV/AIDS. Social workers have had significant involvement in the HIV response [13]. Similarly, Edmonds et al. [14] noted that social workers have helped people cope with HIV diagnoses since the earliest days. At every stage of the HIV care continuum, social worker involvement with HIV-positive patients is central to achieving viral suppression. In collaboration with social workers, nurses' roles in HIV management involve health promotion and disease prevention [13]. The performance of these roles requires knowledge regarding HIV disease so that nurses can offer effective and compassionate care to patients, alleviating physical, emotional, social, and spiritual suffering at all stages of HIV disease (p. 353). Additionally, Eustace [15], emphasizing HIV/AIDS family-focused prevention and management strategies, noted that nurses should be proactive in advocating for HIV/AIDS family interventions and HIV/AIDS family policies to improve outcomes in family functioning, processes, and relationships.
As members of the helping professions, nurses and social workers both pledge to protect the public in providing healthcare services. Possessing a highly positive attitude toward PLWHA is, therefore, essential for professionally meeting the healthcare needs of patients. Nevertheless, such an attitude is not always achieved. Dharmalingan et al. [16] found that negative attitudes about HIV/AIDS among nursing students and other health care providers likely contribute to the prevalence of healthcare disparities among PLWHA. Notably, clinical exposures introduce nursing students to the art of caring for patients with various disease processes and allow opportunities to improve their attitudes toward those patients [15]. Baytner-Zamir et al. [17] shared that students who had more practice or personal experience with HIV/AIDS had better attitudes regarding the care of the patients and were more willing to care for PLWHA than those without experience. For example, through clinical experiences and contact with PLWHA, Baytner-Zamir et al. [17] noted that there were persistent misconceptions among medical students, mostly regarding HIV transmission via breastfeeding and knowledge of HIV prevention following exposure to the virus. Many of the students' attitudes included stigmatization, shame, and fear [18]. Overall, no positive changes in students' attitudes were observed during the preclinical years of medical school; these results from the medical students were similar to those of social work and nursing students [18]. As future health care professionals, nursing and social work students must embrace a well-informed and positive approach to the care of PLWHA [16,17].
Stigma and discrimination have long been associated with HIV/AIDS and experienced by PLWHA [19]. HIV/AIDS stigma, discrimination, and resulting fears around disclosure remain key barriers to effective care and disease prevention [20][21][22][23][24]. Stigmatization and discrimination themes from the aforementioned studies include: (a) negative attitudes, fear of contagion, and misperceptions about transmission; (b) acts of discrimination in the workplace and by families, friends, and health care providers; and, (c) participants' use of self-isolation as a coping mechanism for responding to the stigma. Social workers and nurses can challenge such stigma and support PLWHA by focusing on therapeutic relationships, building rapport, and creating non-judgmental therapeutic environments [23,25].
As noted, adequate HIV/AIDS knowledge among health care providers, including nursing and social work students, is one of the leading focal points for improved positive attitudes, reduced stigma, and better health outcomes for PLWHA. This study compares social work and nursing students' knowledge and attitudes toward PLWHA while they study at a public HBCU in the State of Maryland.
The significance of this study lies in informing all helping professionals, including social workers and nurses, about how health care outcomes for PLWHA are affected by the knowledge acquired and the attitudes displayed within their professions. This study also examines how expected improvements in knowledge development and attitudes can affect the outcomes of patient and client care of PLWHA, using the Cognitive Dissonance Theory (CDT) framework, which measures the students' cognitive discrepancies between attitudes, knowledge and expected outcomes, according to Festinger's theory [26], as cited by Hinojosa et al. [27]: The core framework of CDT implies a four-step process of dissonance arousal and reduction. First, a cognitive discrepancy occurs; second, individuals respond with psychological discomfort (dissonance); third, they become motivated to reduce the dissonance; and fourth, they engage in discrepancy reduction to reduce dissonance. It is important to note that to effectively resolve dissonance, individuals must engage in some form of discrepancy reduction; however, some people may not successfully resolve dissonance and, as such, may remain in a negative affective state (p. 173).
Timmins and De Vires [28] noted that cognitive dissonance theory adds considerable insight into an issue that haunts contemporary health care. Social workers do experience cognitive dissonance; they are adversely impacted by the conflicts, and they find ways to reconcile the cognitive dissonance [29]. Similarly, Daniel [30] stated that the feeling of estrangement in psychiatric nursing practice expressed by some nursing staff could be explained theoretically through the concept of cognitive dissonance.
Timmins and De Vires [28] concluded that understanding the dissonance cognitions and the reduction mechanism might help prevent care erosion and its devastating impact. The authors emphasized that ignoring, denying, trivializing and justifying substandard practice are not appropriate ways of reducing dissonance.
Problem and Hypotheses
The researchers wanted to determine if there were changes in the adequacy of HIV/AIDS knowledge, negative attitudes, and elements of stigmatization through time. This has profound curriculum implications; to that effect, they formulated the following hypotheses:
1. College students' attitudes toward the care of HIV/AIDS patients significantly improve according to their educational attainment.
2. Students' knowledge of HIV/AIDS significantly improves according to their educational attainment.
3. Social Work students are more knowledgeable about HIV/AIDS than nursing students are.
4. Social Work students have better attitudes toward HIV/AIDS patients' care than nursing students do.
Method
Research Design
Data were collected using a cross-sectional survey to examine social work and nursing students' knowledge and attitudes about HIV/AIDS patients. As in a typical exploratory quantitative study, researchers took concurrent samples of students at different academic levels instead of following the evolution of individual students. Then, they performed a descriptive analysis and comparison of means for knowledge and attitude indicators between discipline groups and academic level groups. An assumption was that the formation of knowledge and attitudes (the dependent variables) had already occurred; therefore, the researchers could measure the differences between groups according to the independent variables (major and attainment level). Researchers also assumed that senior-level nursing and social work students would demonstrate expert or quasi-expert knowledge and strong positive attitudes toward providing care to HIV/AIDS patients. The participants for this study were recruited from Bowie State University, College of Professional Studies.
Sampling Procedures
The participants included a random sample of 142 students, ages 18-49. Researchers received IRB (Institutional Review Board) approval. Students were selected at random from pre-existing classes. They gave their informed consent and agreed that participation was voluntary and confidential. Additionally, 49 social work students
Data Collection Procedures
Researchers measured the dependent variables through self-assessment scales that were completed by each subject over four weeks. The researchers trained and oriented undergraduate students on how to conduct and administer the research instruments; the surveys were distributed and completed with pencils. The Knowledge Scale, HIV-KQ-45, was previously used by Carey and Schroder [31] with reliability in different samples of Cronbach alpha = .75 to .89. The Attitude Scale was a modified version of the one used by Aggarwal et al. [20], covering aspects of (a) blame for getting AIDS; (b) type of hospital care for patients who are HIV positive; and (c) equal/unequal rights of AIDS patients. The determined Cronbach alpha is .82. Eleven items on demographics included: age, date of birth, gender, race, ethnicity, academic classification, residency status, employment status, student status, and whether one had children and how many. The researchers were granted permission from four professors to survey students in their courses. The participants were informed that their participation was voluntary and confidential and that they would receive an incentive for participation (a raffle). Only participants who met the eligibility criteria of being at least 18 years old were allowed to participate in this study. The researchers used alphanumeric identification codes in the instruments to ensure tracking and the protection of the privacy of the participants; data were pre-coded in Excel and then transferred to SPSS for analysis.
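The reliability figures quoted above (Cronbach alpha of .75 to .89 for the knowledge scale and .82 for the attitude scale) come from Cronbach's alpha. A minimal sketch of that computation is shown below; the small response matrix is hypothetical and is not study data.

```python
# Sketch of the Cronbach's alpha computation behind the quoted reliability values;
# the response matrix below is hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy example: 5 respondents answering 4 Likert items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))
```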
Data Analysis Procedures
The analysis occurred in two stages as delineated below:
1. Descriptive analysis in the form of tables of frequencies, percentages, and means. These data allowed the researchers to observe contrasting and converging patterns in variables within, across, and among subgroups.
2. Hypothesis testing, with alpha set at 0.01. Two contrasts of means were carried out, one for the academic level of students (freshman, sophomore, junior, senior) and one for students majoring in Social Work and Nursing; a minimal sketch of this step is given below.
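The sketch below illustrates one such contrast of means, assuming an independent-samples t-test evaluated at alpha = 0.01 as described above; the score arrays are placeholders rather than the study data, which were analyzed in SPSS.

```python
# Sketch of one contrast of means (independent-samples t-test, alpha = 0.01);
# the score arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

alpha = 0.01
social_work_scores = np.array([36, 33, 35, 31, 38, 34])  # hypothetical knowledge totals
nursing_scores = np.array([34, 32, 36, 33, 35, 34])      # hypothetical knowledge totals

t_stat, p_value = stats.ttest_ind(social_work_scores, nursing_scores)
if p_value < alpha:
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}: reject equality of means")
else:
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}: the two means are assumed equal")
```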
Sample Demographics
The study sample consisted of 142 HBCU students majoring in social work and nursing, ages 17 to 49 with a mean of 24. The proportions were 49 (48.5%) social work and 78 (54.9%) nursing. There were 50 students, aged 17-21, 13 students, aged 22-17, and 13 students among the age group of 34-39 and 40-49. The gender split was 88% females and 12.2% males. The racial composition of the sample was 83.3% Black; Asian and Whites were 4% each group; 7% were American Indian or Alaskan Native and Native Hawaiian or Pacific Islanders, and 9.1% did not identify their race. Based on long-term experience with these programs, the researchers can state that the sampling was comprehensive and included typical students in these programs. In identifying academic classifications, 33.8% of the participants were sophomores and seniors, while 8.3% of the participants were freshman and 21.8% were juniors.
Dependent Variables
HIV knowledge and attitudes about AIDS patients are two important aspects of social work and nursing programs of study, as they are included in several courses, although taught separately and by different professors. Nursing includes courses such as Health Assessment and Care of Complex Clients; Social Work includes the topic in Social Research and Community Service courses. The typical methods of assessing outcomes about HIV/AIDS in both programs are tests for knowledge review and case studies. Students are expected to excel in both areas by the end of their period of formation, as they may encounter many cases in their jobs. Descriptive results, summarized in Table 2, indicated that social work (SW) students had a better knowledge score than nursing (NU) students in a 60-point test, SW mean = 34.61 vs. NU mean = 34.10, but this difference is not significant. The test used is True/False and combines scientific and stereotypical statements about HIV; it must be noted that in both groups the mean is just above half the scale, meaning they are far from scoring at the expert level. The Likert attitude scale contained 57 statements about AIDS patients and AIDS patient care, and students indicated agreement or disagreement on 5 points, the highest indicating a positive attitude and the lowest a negative attitude. Results show no significant difference between the mean score of social work, 4.58 points, and nursing, 4.61 points; both groups evidenced mostly positive attitudes toward AIDS patients. In addition, the variable distributions are similar to the normal distribution, not skewed.
3. Social Work students are more knowledgeable about HIV/AIDS than nursing students are at Bowie State University. An independent-samples t-test was used for this hypothesis. The means of knowledge for Social Work was 4.58 and for Nursing 4.61. The t(48) = -0.305 and it is nonsignificant; therefore, the two means are assumed equal.
4. Social work students have better attitudes toward HIV/AIDS patients than nursing students do. An independent-samples t-test was used for this hypothesis. The mean of knowledge for Social Work was 34.61 and for Nursing 34.13. The t(48) = 0.466 is non-significant; therefore, the two means are assumed equal.
In a further analysis, the researchers considered whether the two dependent variables might be highly correlated, which would generate confounding results because it would be difficult to distinguish the effects of the independent variables from any of the dependent variables. It was found that the Pearson Correlation Coefficient between attitude and knowledge is 0.3 (low) and Sig. < 0.01. There is just a small likelihood of confounding results, as the correlation between dependent variables is low.
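The confounding check described above can be reproduced with a Pearson correlation between the two dependent variables. The sketch below uses placeholder score vectors, not the study data.

```python
# Sketch of the confounding check: Pearson correlation between knowledge and
# attitude scores; the vectors are hypothetical placeholders.
import numpy as np
from scipy import stats

knowledge = np.array([34, 28, 40, 31, 36, 30])          # hypothetical totals (max 60)
attitude = np.array([4.6, 4.2, 4.8, 4.5, 4.7, 4.3])     # hypothetical mean Likert scores

r, p = stats.pearsonr(knowledge, attitude)
print(f"r = {r:.2f}, p = {p:.3f}")  # a low r suggests little risk of confounded effects
```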
Discussion
Nursing and Social Work students have similar knowledge about HIV and attitudes toward the treatment of AIDS despite their assumedly differentiated training. Attitudes are more on the positive than negative side and they do not change through the years of study. Knowledge about HIV improves significantly through the years of study (Table 4) but does not reach high standards of competency as would be expected of nursing and social work graduates. The expected mean would be at least 56 out of a maximum of 70, which is equivalent to a grade of B in the scale of Bowie State University. These outcomes suggest a potential cognitive dissonance when students face real cases of HIV/AIDS in their work; they are trained to react positively but do not have sufficient knowledge to respond with best practices of care toward the patients. This would generate an attraction-avoidance conundrum with negative consequences for the treatment of patients. The expected professional behavior is just the opposite: a well-trained professional that applies best practices of care and safety measures, mediated by a highly positive attitude to the patient.
Limitations of the Study
There are limitations to this study. Since the framework established for providing care to HIV/AIDS patients centered on an examination of the knowledge and attitudes of nursing and social work majors, the results of the study may not be generalizable to other students in other disciplines. A larger and more inclusive sample with more males and students of other majors would be needed. A small sample size limits the generalizability of the study. A second limitation of the study is the use of closed-ended questions. There should be open-ended questions to explore different aspects of knowledge and attitude.
Implications for Future Research
The findings from this investigation indicate that knowledge and attitudes about HIV/AIDS reveal a critical gap in the curriculum of nursing and social work undergraduate students. It would be worthwhile to study how widespread this gap is in other colleges and universities. Such an assessment would provide guidelines for curriculum design and practice involving the training of students in this area. In addition, it would be worthwhile to analyze whether other independent variables, such as student gender, socio-economic status and financial aid designation (recipient/non-recipient), have an impact.
Regional and national studies are needed, specifically conducted by social work and nursing educators, to investigate the need to infuse content into the curriculum aimed at training nursing and social work students in HBCUs to become culturally competent and sensitive in providing care to HIV/AIDS patients.
In summary, the researchers, namely social work and nursing educators in HBCUs, must develop a course in HIV/AIDS which addresses the disease across the life span. This is especially important since the CDC acknowledged that at the end of 2016, 478,100 Black/African Americans had been given an HIV diagnosis (CDC, 2016) [32]. Additionally, the CDC reported that in 2018, adult and adolescent Blacks/African Americans accounted for more than 42% of all newly diagnosed HIV cases in the United States. Of those cases, 31% were Black/African American men and 11% were Black/African American women (CDC, 2016) [32].
Implications for Policy
According to the CDC, African Americans comprised up to 42% of all reported HIV cases in the United States in 2017 (CDC, 2017) [32], even though African Americans comprised only 12.6% of the United States population (US Census, 2010). According to the CDC (2017), there were 1.1 million PLWHA in the United States at the end of 2017. HIV remains a significant cause of death in target populations. HIV was reported as the 9th leading cause of death for persons aged 25 to 34 and the 9th leading cause of death for persons aged 35 to 44 (CDC, 2017) [32]. However, HIV cases continue to decrease among all African-Americans in the United States (CDC, 2017) [32]. The CDC (2017) continues to maintain, however, that African-American men, women, and adolescents continue to be disproportionately affected by HIV/AIDS. For aspiring students in the helping professions, like nursing and social work, there must be comprehensive curriculum policies developed at the university level to address this issue.
HBCU presidents and administrators must be open to discussing HIV/AIDS in general and be receptive to allowing faculty to develop and implement curricula aimed at addressing this pandemic as an educational goal for all students. There is urgency to include this educational goal in the curriculum, given how HIV/AIDS is decimating African American communities.
There must be state appropriation of funds to develop and collaborate with local and state agencies to implement evidence-based behavioral and educational models that will provide African-Americans and all students with basic facts and information on how HIV/AIDS is and is not transmitted. Additionally, the state must appropriate funds to develop/implement a required general education course for all students. This required general education course will prepare nursing and social work students with the skills and knowledge needed to provide culturally competent care to those persons infected and affected with HIV/AIDS.
Contributions to Social Work Education
Results from this study support incorporating more HIV/AIDS content in social work education. Social work and nursing students were equally knowledgeable about HIV/AIDS, and their attitudes toward patients with the disease were mostly positive and did not change through the years of study. The research findings from this investigation will support the development of culturally competent interventions. It is anticipated that the findings from this investigation will assist with the development of a practice model and a course on HIV/AIDS required for all students, and especially for nursing and social work majors, at HBCUs.
Given the pervasiveness of the disease, the infusion of HIV/ AIDS information throughout the nursing and social work curricula should be mandatory for accredited schools of nursing and social work. It is important for schools of nursing and social work to fully operationalize and make a commitment and obligation to infuse HIV/AIDS content throughout the curriculum to enhance student knowledge and attitudes toward PLWHA.
|
v3-fos-license
|
2018-12-05T05:40:08.083Z
|
2015-03-01T00:00:00.000
|
80723795
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://spp.slu.cz/doi/10.25142/spp.2015.002.pdf",
"pdf_hash": "2e4553ad8f817589fa9aa358013edd38a74e78ed",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44141",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "2e4553ad8f817589fa9aa358013edd38a74e78ed",
"year": 2015
}
|
pes2o/s2orc
|
THREE PERSPECTIVES ON SOCIAL PATHOLOGY
The issue of social pathology is a multilayered topic which cuts across a series of disciplines. It can be perceived mainly as a sociological topic. The paper presents the contexts of other disciplines which define the issue as social deviation or, from the viewpoint of psychology and pedagogy, as risk behaviour. The paper points to three possible views on social pathology. The first view is social pathology as a set of phenomena which are perceived as problematic by society (deviating from behaviour according to the norms of the society). The second view is social pathology as a field of study preparing graduates for the prophylaxis and treatment of socially pathological phenomena. The third view is social pathology as a study subject relevant not only to the students of socially oriented fields but mainly to students who are to become teachers. They often tend to be important sources of social support and possible actors in the process of searching for and eliminating pathological phenomena in the period of childhood and adolescence.
Unemployment, homelessness, homosexuality, divorce rate, injuries, illnesses and others may be listed as examples. 2. Asocial phenomena are more serious because they undermine the political, economic and moral foundations of society. Their common denominator is aggression. They include all violence, such as vandalism, domestic violence, bullying, spectator violence and extremism, but also cruelty towards animals. 3. We consider socially pathological phenomena the most dangerous. These phenomena have the most negative consequences for the society. They include crime, addiction, prostitution or suicide. Fischer and Škoda (2009) also define socially pathological phenomena as unhealthy, abnormal and generally undesirable social phenomena. Social pathology also denotes the sociological discipline that focuses on patterns of such behaviour. It deals with the study and analysis of the causes and factors that led to the emergence of specific socially pathological phenomena. It also deals with the possibilities of prevention and therapy of these phenomena. Social pathology can be seen as a particular problem area in the society which includes phenomena that we mark as socially pathological, or socially deviant in sociological terms. In the school environment, these phenomena are defined by the National Strategy of Prevention of Risk Behaviour in Schooling Environments 2013-2018, which operates with the notion of risk behaviour, the most widely used term denoting social pathologies in the educational reality to date. The following phenomena are identified as risk behaviour (National Strategy, 2013):
- interpersonal aggression: aggression, bullying, cyberbullying and other risk forms of communication through multimedia, violence, intolerance, anti-Semitism, extremism, racism and xenophobia, homophobia,
- delinquent behaviour in relation to material goods: vandalism, theft, graffiti and other,
- criminal offences and misdemeanours,
- truancy and non-fulfillment of school duties,
- addictive behaviour: the abuse of addictive substances, internet addiction, gambling,
- risk sports activities, injury prevention,
- risk behaviour in traffic, accident prevention,
- the spectrum of eating disorders,
- the negative influence of sects,
- hazardous sexual behaviour.
So how should we define the behaviour that we call risk behaviour?
The very concept comes from a psychological perspective on the problem and refers to behaviour through which an individual or a group endangers itself, but which can also pose a threat to the society. Sobotková et al. (2014, p. 40) define the concept of risk behaviour thus: "we understand the concept of risk behaviour as superior to the concepts of problem, delinquent, antisocial and dissocial behaviour etc. These are often the subject of concepts and theories that try to explain their essence through either biological, psychological or social causes or their combinations." This term encompasses behaviour which results in a demonstrable increase in health, social, educational and other risks to individuals or to the society. It may endanger the health, life or social integrity of the individual. It is behaviour aimed against the interests of the society (Kikalová and Kopecký, 2014). Risk is a permanent and natural part of life, and risk-taking is not an exceptional part of behaviour; it even has a relationship to personal or collective benefit and it develops from both emotional and rational components. Skopalová (2010, p. 9) states that "its causes are considered to be those factors which influence its existence. These are phenomena or even entire processes that, to some extent, lead to and help committing unlawful activities. Risk behaviour is the result of mutually effective forces and factors. However, it cannot always be determined which to blame for the greatest share. Despite this, the first commonly mentioned factor is the family, followed by the influence of the peer group." Most young people cope successfully with the radical changes in the body, but in recent decades increasing groups of the youth have accepted risky ways of life, often with negative consequences for their future lives. This is a worldwide trend. These facts are monitored and analyzed the most in the USA, especially since the late 1990s. In this context, we speak of the "new morbidity of the youth". Research shows that these phenomena occur together and often have the same causes and risk factors. Therefore, they actually constitute a syndrome (Hamanová, 2000). In connection with risk behaviour, we speak of the so-called syndrome of adolescent risk behaviour. This denotes experimenting with various kinds of risky behaviour. It can be considered part of the development of adolescents and in most cases it disappears during this period. This does not, however, rule out that an individual may be endangered by this behaviour later in life, in the sense of risk behaviour associated with crossing the line into criminal activity. According to the World Health Organization (WHO), adolescents are a separate risk group. The syndrome of adolescent risk behaviour is divided into three components:
- substance abuse (decreasing age of users, a growing share of the young female population),
- negative effects on psychosocial development (maladjustment, conduct disorders, aggression, delinquency, criminality, social phobia, self-mutilation, suicidal tendencies),
- risk behaviour in the reproductive area (early sex, early parenthood, frequent change of partners, venereal diseases) (Sobotková et al., 2014).
The media can also have a negative impact on the development of a young person, especially TV shows where violence, inappropriate entertainment or aggression appear. This also applies to computer games, which are popular today. The development of the child may be further influenced by family factors, e.g. unemployment of one or both parents (Vašutová, Panáček, 2013).
A second perspective on social pathology
can be social pathology as a field of study which belongs to the spectrum of helping professions (along with social education, social work, special education, addictology and others). This study programme, called Social Pathology and Prevention, can currently be studied at two university-type tertiary schools in the Czech Republic: the Department of Social Pathology and Sociology, Faculty of Education, University of Hradec Králové, and the Institute of Pedagogical and Psychological Sciences, Faculty of Public Policies, Silesian University in Opava. Graduates of Social Pathology and Prevention are ready to perform the following professions:
- Worker in educational facilities for institutional and protective education (diagnostic institute, orphanage, orphanage with school, educational institution for the youth). In this context, they are close to the study and profiling of special pedagogy, which prepares specialists for educational work in ethopedic institutions (child behavioural specialists).
- Worker in educational institutions for the performance of preventive care (counselling centres).
- Worker in non-profit organizations focusing on prevention and treatment of socially pathological phenomena, social counselling and social rehabilitation (senior homes, shelters, halfway houses, contact centres, drop-in centres, low-threshold facilities for children and the youth, after-care services, social services for families with children, citizens' advice bureaux). From the perspective of profiling, the graduates of this programme are close to social pedagogy or social work.
- Methodologist of prevention in schools (students or graduates of teaching acting as prevention methodologists in schools choose the programme as an appropriate extension of their qualifications).
In the Czech Republic, school prevention methodologists and educational counsellors are trained using function education according to Act no. 563/2004 Coll. on Pedagogical Staff. In this context, the programme is akin to study programmes of teaching and special pedagogy. The special pedagogy programme in its pedagogical form includes in the graduate's profile a focus on the work of a school prevention methodologist and educational counsellor.
- Worker of the Department of Social Affairs (social curator for adults, social curator for youth, Roma counsellor, consultant for national minorities, public guardian, a community planning worker, foster care worker, worker for domestic violence, maltreated and abused children).
According to the amendment to Act no. 206/2009 Coll., the new branch of social pathology is included among the branches of university studies which are a condition of the professional competence of a social worker, which enables graduates to find wide application in the labour market.
- Worker of the Police of the Czech Republic within all police services (law enforcement, railway, traffic, immigration and border police), officer of the municipal police in the Czech Republic, police specialist (an expert in all police services).
- Worker in penitentiary and post-penitentiary care (prisons for custody enforcement, for service of incarceration sentences, and for security detention).
- Worker of the Probation and Mediation Service of the Czech Republic.
The graduates of Social Pathology and Prevention bear a number of skills acquired during their study through a system of pedagogical and other internships. These competences include, in particular:
Informational competence, which is characterized by the fact that the graduates have gone through the entire spectrum of basic pedagogical and psychological subjects, subjects oriented to the social fields (including medicine), as well as the legislation related to their future profession and practice. The study is also interwoven with other subjects that help the graduates to gain a range of knowledge not only in specially (narrowly) targeted fields, but especially in general purview (e.g. philosophy, sociology, anthropology, and others). They also have knowledge of the causes of deviant behaviour, including crime, and are able to apply it; they have the knowledge needed for the identification of effective strategies to protect public order and law enforcement, and the necessary pedagogical and psychological knowledge to be involved in education on the prevention of socially pathological phenomena as well as issues of security and the law.
Altruistic competence, which bears the principles of working for others, not only developing oneself, and a deep inner motivation for the development of individuals and groups. The graduates know the legal and ethical standards that form the framework of professional conduct in the field of security, prevention and the solution of all socially pathological phenomena; they can actively participate in or coordinate projects to support health (the healthy schools, the healthy city, the healthy social climate and the environment in workplaces).
Egoistic competence, which entails self-care, self-education, and relaxation techniques preventing possible burnout syndrome.
Organizational and co-operational competence: the graduates know the procedures and methods of work both with people who need help and protection and with those who act in a deviant way; they can organize social-pedagogical intervention with specific target groups in an interdisciplinary (inter-ministerial) manner and have the necessary knowledge to stimulate and coordinate programmes of prevention of socially pathological phenomena.
Empathetic and communicative competence and problem-solving: the graduates master basic procedures in resolving conflicts between the individual and the society and in life-crisis situations of clients, including techniques for coping with these difficult and critical life situations; they manage assertive behaviour and are equipped with creative skills.
The study programme Social Pathology and Prevention can be described as inter- and trans-disciplinary. From the above it is clear that it permeates numerous scientific and academic fields from which it chooses the areas needed, but it also enters them retroactively, whether by describing the problems of social deviation, by searching for the causes of these phenomena, or by designing procedures to prevent and ultimately eliminate these phenomena.
The third perspective on social pathology is the definition of social pathology as a study course occurring in the professional training of future teachers. It is therefore a course which, in terms of undergraduate training at the University of Hradec Králové, is included in the curricula of students of both socially oriented and instructionally oriented pedagogy. While it is compulsory and perceived as a profile course for the students of social pedagogy programmes, it is an elective course for students of the subject teaching programmes of pedagogy. Therefore, the course becomes marginal for these students in terms of their interest. It enters competition with courses whose demands are disproportionately lower. However, the usefulness of these courses for teachers is indisputable, especially for those full-time students who are just preparing for the career of a pedagogue and urgently need to be educated in this area, which they are going to encounter often in practice. Hoferková (2013) points out that neither the prevention nor any possible intervention of risk behaviour in the population of university students has yet received sufficient attention. As Dulovics (2014) reports, sciences dealing with the field of assistance require people formed by very good undergraduate training who can be the bearers of a number of competences, including the competences relating to the issues of social pathology. However, the reality is often different, as shown by the numbers of pedagogy students at the Faculty of Education, University of Hradec Králové, who do enter the course of social pathology (it should be noted that the total number of pedagogy students varies in thousands): The challenge for undergraduate training remains in the effort to shift the Social Pathology course among the compulsory courses, which is not easy in the context of requirements for courses which should be contained in the common base of pedagogical studies. The instruction of the Social Pathology course focuses predominantly on the following topics:
- Theories of social deviance. General characteristics of social deviations and their classification.
-The causes of social deviance in sociological, biological, psychological, and multi-factorial approaches.
-Socially negative phenomena (unemployment, divorce, population imbalance, homelessness) and their characteristics.
-Asocial phenomena and their characteristics (bullying, extremist movements, sects, behavioural disorders).
-Crime (the concept, development and current status in the Czech Republic, kinds, children's crime and juvenile delinquency).
-Prostitution (characteristics of the phenomenon, current situation, STDs).
-Suicide (characteristics of the problem, development and current status in the Czech Republic, forms of suicide of children and youth).
-Addiction (general characteristics, stages of formation, typology).
-Alcoholism, nicotine addiction (characteristics of the problems, current state in the Czech Republic, the emergence and progress of these addictions).
-Non-alcoholic addiction (drugs, their classification, characteristics of physical and psychological addictions).
-Other addictions and their characteristics (pathological gambling, workaholism).
-The issues of prevention and therapy of social deviance (types of prevention, forms and possibilities of prevention and therapy).
To illustrate the situation, it is worth recalling the results of the surveys on these issues carried out by the author of the present paper. The first is a content analysis of study programme curricula at teacher training faculties. This topic was mapped for 2006-2012 and the results of the investigation were described in detail in the author's monograph "Rizikové chování a jeho prevence v terciárním vzdělávání [Risk Behaviour and Its Prevention in Tertiary Education]" (Bělík, 2012). At this point, let us mention at least some of the results of this mapping. The content analysis was performed on the curricula of 29 faculties and institutions that provide undergraduate teacher training. The following table lists concrete outcomes for the area of primary and secondary courses; it should be noted that this is a list of courses that were current in 2007. Detailed results for individual faculties, including the five-year development until 2012, can be obtained from the author. It should also be noted that the dynamics of these issues in undergraduate education is very small and the results are very similar. The economic aspect of universities has a great influence on this situation, as they are forced to economize and limit the number of courses offered as compulsory elective and elective. Results of the content analysis:
Conclusions on Research Findings - Content Analysis
The results presented provide interesting information about undergraduate teacher training. We found that the preparation at faculties of education is incomparably superior, in terms of the numbers of primary and secondary courses, to that at other faculties. Based on the results, it can be generalized that the courses we define as primary do not occur more frequently than secondary courses in any type of tertiary school. From the table above it is also clear that theological faculties focus primarily on working with and influencing values and standards. Science faculties and philosophical faculties devote very little curriculum to educational and psychological issues, while issues of social deviance are not studied at all. A risk of these results lies in the possible discrepancy between the frequencies we obtained by studying the course syllabi and the actual content and quality of teaching, which is difficult to detect. Another risk is the possible discrepancy between the materials that the faculties provided for the research and the actual number of students who take the courses.
A Separate Investigation Focusing on Students' Knowledge
In 2007, later in 2012, and most recently in 2015, the author carried out research to map the knowledge of students of teaching (teaching for the 1st and 2nd stage of primary school and for secondary schools) and other educational staff, including students of socially oriented fields.
In terms of content, this was a didactic test thematically based on the currently valid document of the Ministry of Education, Youth and Sports, "Strategy of the Prevention of Risk Behaviour", and formally created on the basis of the so-called Nimierko taxonomy for the creation and classification of didactic tests, which includes questions on knowledge, understanding of knowledge, standardized tasks and problem-solving situations. The formal requirements of the test in 2007 and 2012, including the modified and upgraded version of 2015, are available in the author's work. The interesting fact about the research is that, according to the chosen classification, the results of the students of teaching are completely opposite to those of the students of socially oriented disciplines.
Results of the students of teaching: From the table above, it is clear that the vast majority of the students of teaching who participated in the test would not be successful; according to the chosen classification, their grade would be 3 or worse (equivalent to "C" and below).
Students of socio-educational fields had different results. The testing was carried out in both 2007 and 2012, and most recently in 2015 with a modified test which takes into account other phenomena belonging to the risk behaviour syndrome. The results show that only two respondents (i.e. 2.15 %) would receive grade 1 ("A"), which corresponds to 33-36 points. Grade 2 ("B"), which corresponds to 29-32 points, would be given to 44 respondents (i.e. 47.32 %). 30 respondents (i.e. 32.25 %) would be classified with grade 3 ("C"), which corresponds to 26-28 points. Grade 4 ("D") would be given to 14 respondents (i.e. 15.05 %); this result corresponds to 22-25 points. Only 3 respondents (i.e. 3.23 %) gained less than 21 points, which corresponds to classification grade 5 ("F").
Conclusion
This paper has covered three perspectives on the issues of social pathology which the author encounters in the context of his professional activity at the Department of Social Pathology and Sociology of the Faculty of Education, University of Hradec Králové: social pathology as a problematic part of society, social pathology as a field of study, and social pathology as a course which is important for pedagogues. The need for the discipline of social pathology is no longer in doubt today. This is demonstrated, among other things, by the significant amount of scientific literature that has been published (e.g. Fischer, Škoda - Social Pathology; Kraus, Hroncová - Social Pathology, and others), by the conferences held on these issues - for example Socialia 1997-2015 and others - and by the research that is undertaken, e.g. the research projects carried out in the past at the author's workplace entitled "Comprehensive Analysis of the Youth in the Eastern Bohemian Region" and "Analysis of Educational Reality of the Prevention of Social Deviance in Tertiary Education". From the above it is clear that a number of problems occur in practice which social pathology needs to address; they should, however, be seen as invitations to improve the theory and practice in this field.
= elective course, CE = compulsory elective course
|
v3-fos-license
|
2021-06-16T20:02:48.424Z
|
2021-01-01T00:00:00.000
|
235445068
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/766/1/012092/pdf",
"pdf_hash": "afece977e35f959eadffb70e6fd2a6108740b840",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44142",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "afece977e35f959eadffb70e6fd2a6108740b840",
"year": 2021
}
|
pes2o/s2orc
|
Day-ahead sharing model of multi-integrated energy service providers based on source-load matching degree
The limited regulation capacity in the integrated energy community is a key factor restricting the development of the community-based integrated (distributed) energy system. On this basis, a model for the optimal day-ahead sharing of energy among multiple communities, considering the matching degree of the source-load curves between communities, is proposed. Firstly, the relationship between energy supply and multi-energy flow is analysed concretely for a single community. A mathematical model that includes electric vehicles and multiple energy conversion devices is built, and an objective function minimizing the cost of energy purchase, equipment operation and maintenance, and EV battery loss is established. Secondly, a comprehensive matching index based on the Spearman correlation coefficient and the Euclidean distance of the photovoltaic and load data among the communities is used to optimize the multi-community operation efficiency with the goal of minimizing the energy interaction cost. Finally, simulation results for a 3-community test system show that the multi-energy sharing mode can effectively improve the overall economy of the system and the photovoltaic consumption capacity. The introduction of the matching index also improves the energy transmission efficiency, verifying the rationality of the proposed model.
Introduction
The construction of the energy Internet is an important approach to achieving sustainable, resource-saving and environmentally friendly development, as it makes possible the coordinated growth of heating, cooling and electricity supply and the collaborative optimization of production and supply within a region [1]. The park-level integrated energy service provider is one of its typical applications.
At present, the penetration rate of distributed energy resources (DERs) in the modern energy system continues to increase owing to their flexibility, efficiency, environmental friendliness and capacity for multiple interactions. The model of power production and consumption on the demand side has also changed: information interaction and energy interconnection are used to achieve intelligent power consumption and a two-way interaction between energy and information [2]. On the other hand, the reform separating distribution and retail has brought more emerging entities into the market, and integrated energy service providers are changing from suppliers to service providers [3]-[5]. As community energy managers, they can maintain the balance of supply and demand according to the information released by the market, but the maximum energy utilization rate cannot be achieved by internal coordination of a single community alone. Therefore, how to carry out multi-user and multi-regional energy interconnection, realize the reasonable sharing of internal resources, and improve economic benefits has become a hot topic.
This paper proposes a form of energy interaction that considers the matching degree of source and load among different communities. The integrated energy service provider is responsible for day-ahead energy scheduling and describes the energy structure of the community. A community with photovoltaics, EVs and a micro gas turbine unit is modeled and analyzed. On the basis of meeting the matching index, the minimum daily cost is obtained by energy scheduling, and the optimal utilization of resources is also realized. Finally, an example is given to compare the cost of each community under different scenarios; the results verify that communities that are successfully matched can effectively absorb clean energy and reduce their own electricity purchase costs.

Multi-energy Community Structure

Fig.1 shows the structure of the community energy network. The integrated energy service provider (IES) collects and forecasts the regional load, power output and weather information using the energy management system (EMS), and plans unit output in advance, so as to coordinate energy production and supply. The communities are connected by physical and information infrastructure, and service providers exploit the complementary consumption behaviour of the communities to further utilize energy and keep the balance between supply and demand.
If there is no energy interaction between communities, the service provider directly purchases electricity from the grid when a power supply shortage occurs. In the inter-community interaction mode, service providers make energy interaction plans with multiple communities according to the matching information, achieve mutual assistance through energy dispatching, and obtain economic benefits. When the interactive energy still cannot meet the load demand, the service provider again purchases electricity from the grid.
Matching Index
Considering the trends of the photovoltaic and daily load curves, the greater the difference between the community curves, the more efficient the energy complementarity and consumption. To measure the matching degree and realize the effective utilization of energy, a matching degree index between communities based on the Spearman correlation coefficient and the Euclidean distance is proposed. The Spearman correlation coefficient evaluates the correlation between two statistical variables using a monotone relation, and represents the direction of correlation of two independent variables. Its definition is as follows:

rho = 1 - 6 * sum_{j=1}^{J} d_j^2 / [ J (J^2 - 1) ]

where 6 is the constant of the standard Spearman formula; rho is the correlation coefficient between any two vectors; J is the vector dimension (in this paper, a day is divided into 24 periods, which gives 24 pairs of independent and identically distributed data); and d_j is the difference between the ranks of the corresponding elements of the two vectors in ascending order.
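For illustration, a minimal sketch (not the authors' code) of how such a matching score could be computed from two communities' 24-period net-load curves; the way the Spearman coefficient and the Euclidean distance are combined into a single score here is an assumption for demonstration only.

```python
# Illustrative sketch of the source-load matching index described above.
# The combination rule (-rho weighted by distance) is a demonstration
# assumption, not the paper's exact formulation.
import numpy as np
from scipy.stats import spearmanr

def matching_index(net_load_a, net_load_b):
    """Matching degree between two 24-period net-load (load minus PV) profiles."""
    a = np.asarray(net_load_a, dtype=float)
    b = np.asarray(net_load_b, dtype=float)
    assert a.shape == b.shape == (24,), "one value per hourly period"

    rho, _ = spearmanr(a, b)          # monotone (rank) correlation, in [-1, 1]
    dist = np.linalg.norm(a - b)      # Euclidean distance between the profiles

    # Strongly anti-correlated, well-separated curves complement each other best:
    # negative rho and a large distance give a higher score (hypothetical weighting).
    return (-rho) * dist / (1.0 + dist)

# Example: community A has a midday PV surplus, community B a midday peak load.
hours = np.arange(24)
net_a = 50 - 80 * np.exp(-((hours - 13) ** 2) / 8.0)   # net export around noon
net_b = 40 + 60 * np.exp(-((hours - 13) ** 2) / 8.0)   # net demand around noon
print(f"matching score A-B: {matching_index(net_a, net_b):.3f}")
```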
Communities that are successfully matched for energy sharing are characterized by a surplus in some communities and a shortage in others. When an overall shortage occurs, the communities purchase electricity from the grid.
Objective Function
The integrated energy service providers carry out energy sharing among the communities according to the matching information; when an overall power shortage occurs, the transaction with the power grid is arranged so that the lowest day-ahead cost is achieved. The operation and maintenance cost of the micro gas turbine and the battery loss of the EVs are also considered in the objective function, which minimizes the total day-ahead cost summed over the M communities that were successfully matched and over the T time periods of a day (the dispatching interval is 1 h), covering the electricity and gas purchase costs, the equipment operation and maintenance cost, and the EV battery loss cost.
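As a rough sketch of the bookkeeping behind this objective (an illustration only; the term names, prices and coefficients below are assumptions, not the paper's data):

```python
# Minimal sketch of evaluating the day-ahead cost of a candidate schedule.
# All term names, prices and coefficients are illustrative assumptions.
import numpy as np

def day_ahead_cost(p_grid, p_gas, p_mgt, p_ev_dis,
                   price_elec, price_gas, c_om=0.025, c_ev=0.05):
    """
    p_grid   : (M, T) electricity bought from the grid by each community [kWh]
    p_gas    : (M, T) gas consumed by each micro gas turbine [m^3]
    p_mgt    : (M, T) micro gas turbine electrical output [kWh] (drives O&M cost)
    p_ev_dis : (M, T) EV discharge energy [kWh] (drives battery-loss cost)
    price_elec, price_gas : (T,) time-of-use prices
    """
    purchase = np.sum(p_grid * price_elec) + np.sum(p_gas * price_gas)
    om       = c_om * np.sum(p_mgt)        # operation & maintenance
    ev_loss  = c_ev * np.sum(p_ev_dis)     # EV battery degradation
    return purchase + om + ev_loss

# Toy example: 3 communities, 24 hourly periods.
M, T = 3, 24
rng = np.random.default_rng(0)
cost = day_ahead_cost(rng.uniform(0, 100, (M, T)), rng.uniform(0, 20, (M, T)),
                      rng.uniform(0, 300, (M, T)), rng.uniform(0, 30, (M, T)),
                      price_elec=np.full(T, 0.6), price_gas=np.full(T, 2.5))
print(f"total day-ahead cost: {cost:.1f}")
```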
Constraints
The constraints include the operation constraints of the micro gas turbine and waste heat boiler, the EV operation constraints, the power balance within each community, the balance of power purchase and sale between communities, and the matching constraints.
Simulation Parameters
The simulation system in this paper consists of: (1) three communities in the same area, comprising 5, 4 and 6 buildings respectively, where the roof of each building is equipped with photovoltaic panels with an installed capacity ranging from 60 kWp to 120 kWp; (2) 20 EVs in each community, of the make BYD Song Pro; (3) one micro gas turbine (500 kW), one waste heat boiler and one refrigeration unit in each community. The actual load data of a typical summer day are selected for the example analysis, as shown in Fig.2-4. Tab.1 and 2 give the specific parameters of the micro gas turbine and the EVs, and the electricity price.
Results Analysis
In scenario 1, when a power shortage occurs, the community service providers purchase electricity from the grid, but the energy utilization rate is low and the curtailment of solar power cannot be resolved. The energy interaction in scenario 2 is shown in Fig.5: each community needs to receive real-time energy information from the others, and larger-scale information interaction easily causes congestion of the information channels. The interaction power between communities 2 and 3 is small, so the effect of this implementation is not significant.
According to the matching values, the energy transmission between communities 1 and 2 and between communities 1 and 3 can be obtained. As shown in Fig.6, scenario 3 uses the matching indicators to guide this energy transmission. The analysis shows that during 0:00-6:00 the output of the micro gas turbine in community 3 has a surplus, which can provide part of the cooling and heating energy to the other communities. During 12:00-20:00 there is a shortage of power supply in community 1; the information is first released to the other community service providers in order to purchase photovoltaic energy, and if the photovoltaic output cannot meet the demand, electricity is then purchased from the grid. The EVs realize intelligent charging and discharging according to the optimization target, discharging in the peak-load period and charging at night. The fluctuation amplitude is smoothed because the transferable load, the photovoltaics and the micro gas turbine are reasonably scheduled. Fig.9 shows the combined commercial and residential community. Owing to its features, community 3 has more photovoltaic output, but its energy consumption is also relatively higher. In order to maintain the temperature of the commercial area, the cooling load exists throughout the day and accounts for a high proportion of demand. During 0:00-6:00, due to the large cooling and heating load, the output of the micro gas turbine is higher and part of the heating and cooling energy can be sold, while the EVs start charging under the guidance of the electricity price.
Conclusion
This paper takes the integrated energy community as its object, focuses on the economy of community operation and energy consumption, develops a form of energy sharing from the perspective of load matching, and formulates a community day-ahead cost minimization strategy. The results are as follows: 1) The energy consumption cost of each community can be reduced under the guidance of the matching index, and a good interaction mode can realize energy coordination and mutual assistance to a certain extent.
2) Compared with the two-way flow mode, the multi-directional sharing mode is more flexible. Through energy sharing, it fully taps the community energy response potential and improves energy utilization.
|
v3-fos-license
|
2020-09-07T01:01:01.045Z
|
2020-09-04T00:00:00.000
|
221507492
|
{
"extfieldsofstudy": [
"Physics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1364/oe.409532",
"pdf_hash": "43879fabf977a483d1a20c21b009ee0b214e5e07",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44143",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "43879fabf977a483d1a20c21b009ee0b214e5e07",
"year": 2020
}
|
pes2o/s2orc
|
A wide-range wavelength-tunable photon-pair source for characterizing single-photon detectors
The temporal response of single-photon detectors is usually obtained by measuring their impulse response to short-pulsed laser sources. In this work, we present an alternative approach using time-correlated photon pairs generated in spontaneous parametric down-conversion (SPDC). By measuring the cross-correlation between the detection times recorded with an unknown and a reference photodetector, the temporal response function of the unknown detector can be extracted. Changing the critical phase-matching conditions of the SPDC process provides a wavelength-tunable source of photon pairs. We demonstrate a continuous wavelength-tunability from 526 nm to 661 nm for one photon of the pair, and 1050 nm to 1760 nm for the other photon. The source allows, in principle, access to an even wider wavelength range by simply changing the pump laser of the SPDC-based source. As an initial demonstration, we characterize single-photon avalanche detectors sensitive to the two distinct wavelength bands, one based on Silicon, the other based on Indium Gallium Arsenide.
Introduction
Characterizing the temporal response function of single-photon detectors is crucial in time-resolved measurements, e.g. determining the lifetime of fluorescence markers [1], characterizing the spontaneous decay of single-photon emitters [2] and the photon statistics of astronomical sources [3], and measuring the joint spectrum of photon-pair sources [4], so that the timing uncertainty contributed by the detection process can be taken into account. Typically, the temporal response of a detector is obtained from the arrival time distribution of photons collected from a pulsed laser. In this work, we present an alternative approach that leverages the tight timing correlation [5] of photon pairs generated in spontaneous parametric down-conversion [6,7] (SPDC): the coincidence signature corresponding to the detection of two photons of the same pair is used to infer the temporal response function of the photodetectors. Compared to a pulsed laser, an SPDC source is easier to align, and is wavelength-tunable by changing the critical phase-matching condition of the SPDC process [8]. In addition, one can address two wavelength bands with the same source by choosing a non-degenerate phase-matching condition.
For an initial demonstration, we generate photon pairs with a tunable wavelength range of over 100 nm in the visible band, and over 700 nm in the telecommunication band - a tunability at least comparable to existing femtosecond pulsed lasers - and use them to characterize both Silicon (Si-APD) and Indium Gallium Arsenide (InGaAs-APD) avalanche photodiodes. In particular, we characterize the timing behaviour of a fast commercial Si-APD (Micro Photon Devices PD-050-CTC-FC) over a continuous wavelength range, for which we previously assumed an approximately uniform temporal response in the wavelength range from 570 nm to 810 nm [3]. With the measurement reported in this work, we observe a significant variation of the timing jitter even over a relatively small wavelength interval of ≈10 nm. Better knowledge of the timing response of this particular Si-APD contributes to a better understanding of the coherence properties of light in such experiments. Similarly, a better characterization of the timing response over a wide wavelength range helps to better model fluorescence measurements regularly carried out with such detectors [1].

Fig. 1. Wavelength-tunable photon pair source based on Type-II SPDC. The critical phase-matching condition is changed by varying the angle of incidence θ_i of the pump at the crystal, in order to generate photon pairs at the desired wavelength in the visible and telecommunications bands. A Silicon (Si) plate separates the photons in each pair. Tight timing correlations between photons in each pair, and a characterized detector SPD2, allow measuring the jitter of a single-photon detector (SPD1). A calibrated color glass filter (CGF) can be inserted to infer the wavelength of the photons sent to SPD1 using a transmission measurement. LD: laser diode, BBO: β-Barium Borate, SMF: single-mode fiber, GG495, BG39: color glass filters.
Correlated photon pair source
The basic configuration of the spontaneous parametric down-conversion source is shown in Fig. 1. The output of a laser diode (central wavelength λ_p = 405 nm, output power 10 mW) is coupled to a single-mode optical fiber for spatial mode filtering, and focused to a Gaussian beam waist of 70 µm into a 2 mm thick β-Barium Borate crystal as the nonlinear optical element, cut for Type-II phase matching (θ_0 = 43.6°, φ = 30°).
For this cut, SPDC generates photon pairs in the visible and telecommunications band, respectively. We collect the photons in a collinear geometry, with collection modes (beam waists ≈ 50 µm) defined by two single-mode fibers: one fiber (SMF450: single mode from 488 nm to 633 nm) collects signal photons and delivers them to the single-photon detector SPD1, while the other fiber (standard SMF28e, single transverse mode from 1260 nm to 1625 nm) collects idler photons and delivers them to SPD2. The signal and idler photons are separated to their respective fibers using a 100 µm-thick, polished Silicon (Si) plate as a dichroic element. The plate acts as a longpass filter (cut-off wavelength ≈ 1.05 µm), transmitting only the idler photons while reflecting approximately half of the signal photons.
To suppress uncorrelated visible and infrared photons detected by our SPDs, we insert a blue color glass bandpass filter (BG39) in the pump path, attenuating parasitic emission from the pump laser diode and broadband fluorescence from the mode cleaning fiber, and a green color glass longpass filter (GG495) in the path of the signal photons to suppress pump light at SPD1. For the idler path, the silicon dichroic is sufficient. To tune the wavelength of the down-converted photons, we change the critical phase-matching condition of the SPDC process by varying the angle of incidence θ_i of the pump beam at the crystal [9,10]. Figure 2 (red dots) shows the signal and idler wavelengths, λ_s and λ_i, measured for our source for θ_i = 12.7° to 26.7°. To measure the signal wavelength λ_s, we insert different standardized color glass longpass filters (CGF in Fig. 1) for different angles θ_i, and measure the transmission of the signal photons in order to infer their wavelength. The inset of Fig. 2 shows an example where a filter OG570 is used to infer λ_s close to the cut-off wavelength of the filter. The corresponding idler wavelength is calculated through energy conservation in SPDC, λ_i^(-1) = λ_p^(-1) − λ_s^(-1). Our measured SPDC wavelengths are well described by a numerical phase-matching model based on the optical dispersion properties of BBO [11,12] (blue line).
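As a quick numerical check of the energy-conservation relation quoted above (an illustrative sketch, not code from the paper):

```python
# Idler wavelength from SPDC energy conservation: 1/λ_i = 1/λ_p − 1/λ_s.
def idler_wavelength(pump_nm: float, signal_nm: float) -> float:
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Example with the 405 nm pump used for this source:
for signal in (526.0, 548.0, 661.0):
    print(f"signal {signal:6.1f} nm -> idler {idler_wavelength(405.0, signal):7.1f} nm")
```

The printed values reproduce the tuning range stated in the text (idler photons from roughly 1050 nm to 1760 nm).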
This simple pair source provides photons in a wavelength range of λ_s = 526 nm to 661 nm and λ_i = 1050 nm to 1760 nm, comparable with existing dye and solid-state femtosecond pulsed lasers [13,14]. In the following section, we demonstrate how the tight timing correlations of each photon pair can be utilized to characterize the temporal response of single-photon detectors.
Characterizing the temporal response of single-photon detectors
The time response function f(t) of a single photon detector characterizes the distribution of signal events at a time t after a photon (of a sufficiently short duration) is absorbed by the detector. It characterizes the physical mechanism that converts a single excitation into a macroscopic signal, and can be measured e.g. by recording the average response to attenuated optical pulses from a femtosecond laser [15]. In this paper, we use the timing correlation in a photon pair, which emerges at an unpredictable point in time. This requires two single photon detectors registering a photon. As the photon pair is correlated on a time scale of femtoseconds, and the relevant time scales for detector responses are orders of magnitude larger, the correlation function c_12(Δt) of time differences Δt between the macroscopic photodetector signals is a convolution of the two detector response functions,

c_12(Δt) = N ∫ f_1(t) f_2(t − Δt) dt + C_0,    (1)

where N is the total number of recorded coincidence events and C_0 accounts for accidental coincidences. Obtaining the detector response function f_1(t) from a measured correlation function c_12(Δt) requires the known response function f_2(t) of a reference detector. For a device under test, f_1(t) can then be either reconstructed by fitting c_12(Δt) in Eqn. 1 with a reasonable model for f_1(t; P) (with a parameter set P) to a measured correlation function, or obtained from it via deconvolution.

Fig. 3. Biasing and readout circuit for the superconducting nanowire single-photon detector (SNSPD). The SNSPD is current-biased using a constant voltage source and a series resistor R. When a photon is absorbed by the SNSPD, it changes temporarily from a superconducting to a conducting state. The resulting current change reaches a signal amplifier, which provides the photodetection signal.
To measure c_12(Δt), we evaluate the detection time at single photon detector SPD1 by recording the analog detector signal with an oscilloscope, and interpolating the time at which it crosses a threshold of around half the average signal height, measured with respect to a trigger event caused by a signal from single photon detector SPD2. The histogram of all time differences Δt for many pair events is then a good representation of c_12(Δt).
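A minimal sketch (not the authors' analysis code) of turning the two detectors' timestamp lists into the histogram of time differences; the coincidence window and bin width are assumed values.

```python
# Sketch: build the time-difference histogram c_12(Δt) from two timestamp lists.
# The window and bin width below are illustrative assumptions.
import numpy as np

def coincidence_histogram(t_spd1, t_spd2, window_ps=500.0, bin_ps=5.0):
    """t_spd1, t_spd2: sorted detection timestamps (in ps) of the two detectors."""
    diffs = []
    j = 0
    for t1 in t_spd1:
        # advance the SPD2 pointer to the first event inside the window
        while j < len(t_spd2) and t_spd2[j] < t1 - window_ps:
            j += 1
        k = j
        while k < len(t_spd2) and t_spd2[k] <= t1 + window_ps:
            diffs.append(t1 - t_spd2[k])   # Δt = t_SPD1 - t_SPD2
            k += 1
    bins = np.arange(-window_ps, window_ps + bin_ps, bin_ps)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, 0.5 * (edges[:-1] + edges[1:])   # c_12(Δt) and bin centres
```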
Reference detector characterization
We use a superconducting nanowire single-photon detector (SNSPD) with a design wavelength of 1550 nm as the reference detector SPD2, because an SNSPD has an intrinsically wide-band sensitivity and a fast temporal response. To determine its response function f_2(t), we measure the correlation function c_12 from photon pairs with two detectors of the same model (Single Quantum SSPD-1550Ag). Figure 3 shows the biasing and readout circuit of a single SNSPD. The SNSPD is kept at a temperature of 2.7 K in a cryostat, and is current-biased using a constant voltage source (V_bias = 1.75 V) and a series resistor (R = 100 kΩ) through a bias-tee at room temperature. The signal is further amplified by 40 dB at room temperature to a peak amplitude of about 350 mV.
We first expose both detectors to photons at a wavelength of 810 nm using a degenerate PPKTP-based photon pair source pumped with a 405 nm laser diode (Fig. 4 (a)). The choice of this source instead of the BBO-based source shown in Fig. 1 was borne out of convenience rather than any limitation of the BBO-based source described before, as the PPKTP-based Type-II SPDC source was readily available [16]. Figure 4 (b) shows the cross-correlation c_12(Δt) for the two SNSPDs, normalized to background coincidences (red dots).
The histogram closely follows a Gaussian distribution (blue line) with standard deviation σ_12 = 23.6(1) ps. This suggests that the two responses f_1(t), f_2(t) are also Gaussian distributions, and Eqn. 1 can be simplified to

c_12(Δt) = N G(σ_12, Δt) + C_0,

where N is the total number of correlated photon pairs detected, G(σ, Δt) = e^(−Δt²/(2σ²)) / √(2πσ²) is a normalized Gaussian distribution, and C_0 is associated with the accidental coincidence rate. The standard deviation of the correlation is then simply related to those of the individual detectors by σ_12² = σ_1² + σ_2². Assuming the same response for both detectors, we can infer σ_1 = σ_2 = σ_12/√2 ≈ 16.7 ps at a wavelength of 810 nm, corresponding to a full-width at half-maximum (FWHM) of 39.2(2) ps.
Next, we calibrate the SNSPD at 1550 nm using photon pairs at 810 nm and 1550 nm generated from the same PPKTP-based SPDC source pumped with a 532 nm laser diode [Fig. 4 (a)]. The non-degenerate photon pairs are separated by a Si plate acting as a dichroic element. Figure 4 (c) shows the cross-correlation (red dots) of the photodetection times at the two SNSPDs, and a fit of a Gaussian distribution (blue line) with a standard deviation σ_12,810/1550 = 23.8(2) ps. With the value σ ≈ 16.7 ps obtained above for the detector registering the 810 nm photons, this yields σ_1550 = 16.9(2) ps for the SNSPD detecting the 1550 nm photons. Finally, to determine the temporal response function of a SNSPD at 548 nm, we used the BBO-based pair source [Fig. 1] to prepare non-degenerate photon pairs at 548 nm and 1550 nm. Figure 4 (d) shows the cross-correlation obtained with our detectors. The fit to a Gaussian distribution (blue line) leads to a standard deviation σ_12,548/1550 = 23.7(1) ps. With the same argument as before, and using σ_1550 = 16.9(2) ps, we obtain a timing jitter of 38.9(7) ps (FWHM) at 548 nm. In summary, the timing jitter of the SNSPD shows no statistically significant dependence on wavelength in our measurements. The timing jitter partially originates from the threshold detection mechanism: for a photodetection signal V(t), the timing uncertainty for crossing a threshold, contributed by the electrical noise σ_V, is given by σ_t;noise = σ_V/(dV/dt) at the threshold [17,18]. For our SNSPDs, we estimate σ_t;noise ≈ 15 ps, corresponding to a contribution of about 35 ps to the timing jitter of the combined SNSPD and electronic readout system, i.e., we are dominated by this electrical noise. The jitter of the oscilloscope is specified as a few ps, which suggests that the intrinsic jitter of these SNSPDs is about 10-20 ps (FWHM) [19].
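As a small numerical sketch of the quadrature relation used above, reusing only the values quoted in the text:

```python
# Sketch: extract an unknown detector's Gaussian jitter from the measured
# cross-correlation width and a reference detector's known jitter,
# using σ_12² = σ_1² + σ_2².  For a Gaussian, FWHM = 2·sqrt(2·ln 2)·σ.
import math

FWHM_FACTOR = 2.0 * math.sqrt(2.0 * math.log(2.0))

def unknown_sigma(sigma_12_ps: float, sigma_ref_ps: float) -> float:
    return math.sqrt(sigma_12_ps**2 - sigma_ref_ps**2)

# Two identical SNSPDs at 810 nm: σ_12 = 23.6 ps  ->  σ_1 = σ_2 = σ_12/√2
sigma_810 = 23.6 / math.sqrt(2.0)
print(f"per-detector jitter at 810 nm: {FWHM_FACTOR * sigma_810:.1f} ps FWHM")

# SNSPD pair at 810 nm / 1550 nm: σ_12 = 23.8 ps with the 810 nm reference above
sigma_1550 = unknown_sigma(23.8, sigma_810)
print(f"SNSPD jitter at 1550 nm: {FWHM_FACTOR * sigma_1550:.1f} ps FWHM")
```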
In the following section, we use the standard deviation σ_2 obtained at these wavelengths to define the temporal response function of the reference detector, f_2 = G(σ_2), in Eqn. 1, and use the method outlined in Sec. 3 to characterize f_1 of an unknown detector.
Avalanche photodetector characterization
First, we characterize the temporal response function f_Si of a thin Silicon avalanche photodiode (Si-APD) from Micro Photon Devices (PD-050-CTC-FC). Although thin Si-APDs have been characterized in previous works at a few discrete wavelengths [20][21][22], there has not yet been a characterization performed over a continuous wavelength range.
Following Refs. [3,23], we describe the temporal response function with a heuristic model: a combination of a Gaussian component of mean μ and standard deviation σ, and an exponential term with a characteristic decay constant τ, convoluted with the same Gaussian; the weights of the two contributions are described by A and B. The Gaussian component is associated with an avalanche that occurs due to the absorption of a photon in the depletion region. The exponential component, convoluted with a Gaussian distribution, is associated with an avalanche initiated by a photoelectron that diffused into the depletion region after photon absorption elsewhere. We characterize the Si-APD over a wavelength range from λ_1 = 542 nm to 647 nm in steps of about 10 nm. The photon wavelength is tuned by rotating the crystal, changing the angle of incidence θ_i of the pump from 13.7° to 24.7° in steps of 1°. For each θ_i, we obtain the cross-correlation c_12(Δt) similarly as in section 4. Figure 5 (red dots) shows g^(2), the cross-correlation normalized to background coincidences, obtained when the signal and idler wavelengths are λ_1 = 555 nm and λ_2 = 1500 nm, respectively. For every (θ_i, λ_1, λ_2), we deduce f_Si by fitting the measured c_12 to the model in Eqn. 1 with f_1 = f_Si, and f_2 a Gaussian distribution with a full-width at half-maximum of 39.9 ps corresponding to the SNSPD jitter at 1550 nm. For the SNSPD, we assume that its jitter remains constant over the wavelength range λ_2 = 1082 nm to 1602 nm, motivated by the observation that it does not differ significantly between λ_2 = 810 nm and 1550 nm. The fit results in parameters σ and τ which characterize f_Si at the corresponding wavelength λ_1. Figure 5 (inset) shows f_Si(Δt) for λ_1 = 555 nm. Two figures of merit are of interest for characterizing the thin Si-APD: the duration τ of the exponential tail, and the ratio R of the coincidences attributed to the Gaussian component to those attributed to the exponential component. Both values determine whether the full-width at half-maximum (FWHM), the value typically quoted for the detector jitter, serves as a good figure of merit for the temporal response of a detector. For example, the jitter of a detector with R ≪ 0.5 and σ ≪ τ is better described by τ than by the FWHM of the temporal response function. Figure 6 (a) shows that R decreases while τ increases with increasing wavelength. The detector jitter (FWHM) is shown in Figure 6 (b). The observation that τ changes significantly with wavelength is especially relevant for fluorescence lifetime measurements, where the exponential tail in the temporal response function can be easily misattributed to fluorescence when the detector is not characterized at the wavelength of interest [1].

Fig. 7. Cross-correlation function normalized to background coincidences g^(2)(Δt) of the InGaAs-APD and the reference SNSPD. The cross-correlation approximates the InGaAs-APD temporal response well since the latter is much slower than the SNSPD. We fit the measured g^(2)(Δt) (red dots) with a model consisting of two Gaussian distributions (solid line) with an overall width of 196 ps (FWHM). Dashed lines: individual Gaussian components, Δt: time difference between the photodetection times.
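As an illustration of how such a Gaussian-plus-diffusion-tail response can be written down and fitted (a sketch under the stated model; the function names, synthetic data and starting values are assumptions, not the authors' fitting code):

```python
# Sketch: heuristic Si-APD response model -- a Gaussian peak plus an exponential
# diffusion tail convolved with the same Gaussian (exponentially modified Gaussian).
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def gauss(t, mu, sigma):
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def exp_gauss(t, mu, sigma, tau):
    """Exponential decay (constant tau) convolved with a Gaussian (mu, sigma)."""
    arg = (mu + sigma ** 2 / tau - t) / (np.sqrt(2) * sigma)
    return (0.5 / tau) * np.exp((2 * mu + sigma ** 2 / tau - 2 * t) / (2 * tau)) * erfc(arg)

def si_apd_response(t, A, B, mu, sigma, tau, C0):
    return A * gauss(t, mu, sigma) + B * exp_gauss(t, mu, sigma, tau) + C0

# Fit a background-normalized histogram g2 over bin centres t_ps
# (here synthetic placeholder data, in place of a measured correlation):
t_ps = np.linspace(-500, 1500, 400)
g2 = si_apd_response(t_ps, 30, 20, 0, 40, 200, 1.0)
p0 = (10.0, 10.0, 0.0, 50.0, 150.0, 1.0)          # assumed starting values
popt, _ = curve_fit(si_apd_response, t_ps, g2, p0=p0)
print("fitted sigma = %.1f ps, tau = %.1f ps" % (popt[3], popt[4]))
```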
Next, we characterize the temporal response function f_InGaAs of an InGaAs avalanche photodiode (S-Fifteen Instruments ISPD1), which is sensitive in the telecommunication band. We extract f_InGaAs by measuring the cross-correlation c_12 of the detection times between the InGaAs-APD and our reference SNSPD. We note that since the expected jitter of the InGaAs-APD (≈ 200 ps) is significantly larger than that of the SNSPD (≈ 40 ps), f_InGaAs is well approximated by c_12.
Again, we fit c_12(Δt) to a heuristic model [24], here comprising a linear combination of two Gaussian distributions,

c_12(Δt) = A G(σ_1, Δt − μ_1) + B G(σ_2, Δt − μ_2) + C_0,

where A and B are the weights of each distribution, μ_1 (μ_2) and σ_1 (σ_2) are the mean and standard deviation characterizing the Gaussian distribution G, and C_0 is associated with the accidental coincidence rate. Figure 7 shows the measured cross-correlation c_12 (red dots) and the fit result (blue line) when the InGaAs-APD detected photons with a wavelength of 1200 nm. We tune the wavelength of the photons sent to the InGaAs-APD from 1200 nm to 1600 nm in steps of 100 nm, and obtain c_12 for each wavelength. Figure 8 shows the parameters describing the temporal response of the InGaAs-APD: its jitter, the ratio R = A/B of the two Gaussian distributions contributing to f_InGaAs, the temporal separation between the two Gaussian distributions (μ_1 − μ_2), and the standard deviations of the two Gaussian distributions (σ_1, σ_2). We find no significant variation of any parameter over the entire wavelength range.
Conclusion
We have presented a widely-tunable, non-degenerate photon-pair source that produces signal photons in the visible band, and idler photons in the telecommunications band. With the source, we demonstrate how the tight-timing correlations within each photon pair can be utilized to characterize single-photon detectors. This is achieved by measuring the cross-correlation of the detection times registered by the device-under-test (DUT), and a reference detector -an SNSPD, which has a relatively low and constant jitter over the wavelength range of interest. By taking into account the jitter introduced by the reference detector, we are able to extract the temporal response function of the DUT. As the source is based on SPDC in a BBO crystal, its output wavelengths are continuously tunable by varying the angle of incidence of the pump at the crystal. We experimentally demonstrated wavelength-tunability of over 100 nm in the visible band, and over 700 nm in the telecommunications band -a similar tunability compared to existing femtosecond pulsed laser systems. With our source, we measured the temporal response functions of two single-photon detectors, an Si-APD and an InGaAs-APD, over a continuous wavelength range centered at the visible and telecommunications band, respectively. For the InGaAs-APD, we observed no significant variation of its jitter over a wide wavelength range. For the Si-APD, we observed that the exponential component of its temporal response increases with wavelength. This observation emphasizes the need for an accurate accounting of Si-APD jitter in precision measurements, e.g. characterizing fluorescence markers at the wavelength of interest [1], or measuring the photon statistics of narrowband astronomical sources [3].
|
v3-fos-license
|
2021-03-29T05:19:53.210Z
|
2021-03-01T00:00:00.000
|
232382648
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/s21061992",
"pdf_hash": "b34fc408d7748d92516250a66430925574878592",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44144",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "b34fc408d7748d92516250a66430925574878592",
"year": 2021
}
|
pes2o/s2orc
|
Recent Developments in Graphene-Based Toxic Gas Sensors: A Theoretical Overview
Detecting and monitoring air-polluting gases such as carbon monoxide (CO), nitrogen oxides (NOx), and sulfur oxides (SOx) is critical, as these gases are toxic and harm the ecosystem and human health. Therefore, it is necessary to design high-performance gas sensors for toxic gas detection. In this sense, graphene-based materials are promising for use as toxic gas sensors. In addition to experimental investigations, first-principle methods have enabled graphene-based sensor design to progress by leaps and bounds. This review presents a detailed analysis of graphene-based toxic gas sensors studied using first-principle methods. The modifications made to graphene, such as decorated, defective, and doped graphene, to improve the detection of NOx, SOx, and CO toxic gases are reviewed and analyzed. In general, graphene decorated with transition metals, defective graphene, and doped graphene have a higher sensitivity toward the toxic gases than pristine graphene. This review shows the relevance of first-principle studies for the design of novel and efficient toxic gas sensors. The theoretical results obtained to date can greatly help experimental groups to design novel and efficient graphene-based toxic gas sensors.
Introduction
The conversion of energy from one form to another often affects the air composition in several ways. It is well known that fossil fuels have been powering industrial development and the amenities of modern life that we enjoy. However, the combustion of fossil fuels contributes to a great extent to variations in the composition of the atmosphere, mainly through harmful gas emissions.
Harmful gases include, for instance, aliphatic hydrocarbons, carbon monoxide (CO), nitrogen oxides (NOx), and sulfur oxides (SOx), among others. In this context, health expenditures have increased due to air pollution, which is mainly associated with the rapid industrialization of many countries. Consequently, the disruption of the ecological balance and the serious public health issues caused by harmful gases are raising global concerns [1,2]. Table 1 summarizes the harmful effects of these toxic gases on human health.

Table 1. Harmful effects of toxic gases on human health.
Carbon monoxide | No | No | Tissue hypoxia, hypoxic cardiac dysfunction, subtle cardiovascular effects, unconsciousness, and death after prolonged exposures or after acute exposures to high concentrations of CO. | Refs. [3][4][5]
Nitrogen oxides | Yes | No | Nausea, headache, respiratory illness (cough and irritation of the respiratory tract), asthma, pneumonia, possibly tuberculosis, and Parkinson's disease. | Refs. [6][7][8][9]
Sulfur oxides | Yes | Yes | Neurological damage, bronchitis, bronchial asthma, emphysema, bronchoconstriction and mucus. | Refs. [3,4,10,11]

On the other hand, to this day a sizable number of countries depend on oil for energy use and development. Consequently, harmful effects on the environment such as global warming, ozone depletion, acid rain and climate change may result from the gases emanating from fossil fuel combustion. Harmful gases therefore not only affect human health, but also have an undesirable impact on the environment.
According to Springer, the greenhouse effect of the troposphere is beneficial because it makes the Earth habitable at an overall average temperature of about 15 °C [12] (see Figure 1). However, higher concentrations of CO2, methane, water vapor, chlorofluorocarbons (CFCs), ozone, and nitrous oxide in the upper atmosphere could result in global warming, accompanied by economic and environmental implications [12][13][14][15][16].
Figure 1. Model of the Earth's gaseous protective shield [12,16].
Against this backdrop, effectively sensing and capturing these harmful gases, such as CO, NOx, and SOx, can greatly help protect the environment and human health [9,17]. Nowadays, many materials, such as metal oxide semiconductors, conducting polymers, and carbon-based materials, have been investigated and utilized as toxic gas sensors [18][19][20]. However, these gas sensors face one or more of the following challenges: cost, limited sensitivity (e.g., ppb-level detection is rare), and poor selectivity, among others [19]. Therefore, it is necessary to design high-performance gas sensors for detecting these toxic gases. As an alternative, among the carbon materials, graphene, a 2D monolayer of sp2-hybridized carbon atoms, could prove to be a key material for sensing applications due to its exceptional thermal conductivity, high electron mobility, excellent mechanical properties, and high specific surface area [21][22][23][24]. Owing to these remarkable properties, graphene opens up a wide range of promising applications in the sensor field, from fundamental science to industrial applications [25][26][27][28][29]. However, the main issue is that gas molecules are weakly adsorbed on graphene due to its low reactivity [30][31][32]. For this reason, at both the theoretical and experimental levels, several strategies have been developed to modify the electronic and structural properties of graphene and, consequently, improve its reactivity toward the toxic gases. Such strategies include doped, decorated, defective, and functionalized graphene [33][34][35].
Experimentally, the conception of novel graphene sensor materials and the required performance improvements are largely limited by the lack of rapid and economical synthesis routes and of post-testing strategies to ensure their functionality. Currently, different synthesis techniques such as chemical vapor deposition, sputtering, drop casting, spin coating, and inkjet printing have been used to fabricate high-quality graphene for the detection of toxic gases [36,37]. However, many of these methods are expensive and not easily scalable for mass production. In addition, it is difficult to control the doping concentration and the number of graphene layers. Undoubtedly, such limitations can be overcome if sensor materials are designed, modeled, and evaluated from a theoretical point of view (e.g., with first-principle methods). The first-principle or ab initio methods are based on quantum mechanics. Specifically, density functional theory (DFT) is primarily a formalism of the electronic ground-state structure, couched in terms of the electronic density distribution [38]. DFT-based simulations are essential for explaining and understanding experimental results at the molecular level, and serve as a predictive tool for the rational design of novel gas sensors [39]. DFT calculations provide important information, such as the adsorption mechanisms, the adsorption energy, the charge transfer, the electronic modification after gas adsorption, and feasible approaches to enhance adsorption or desorption, which is critical for designing novel gas sensors [40]. Due to the critical role that theoretical calculations play in the design of toxic gas sensors, numerous DFT studies have been conducted to investigate novel graphene-based gas sensors. However, to date there are no detailed and critical reviews of the current progress in the theoretical design of graphene-based toxic gas sensors; state-of-the-art reviews mainly focus on experimental evidence [28,29,41,42]. Therefore, this review presents a detailed and critical analysis of the progress of graphene-based toxic gas sensors using first-principle methods. The modifications made to graphene, such as defective, doped, and decorated graphene, to improve the detection of CO, NOx, and SOx toxic gases, are reviewed and analyzed in detail.
Pristine Graphene
Different approaches have been used for theoretical studies of pristine graphene, such as aromatic molecules (finite systems) and periodic systems (supercells) (see Figure 2). Several theoretical studies have been conducted on the use of pristine graphene as a toxic gas sensor [43][44][45][46]. One of the first DFT-based studies on the use of pristine graphene as a toxic gas sensor was performed by Leenaerts et al. [43]. They investigated the adsorption of CO, NO2, and NO on pristine graphene using a 4 × 4 graphene supercell with the generalized gradient approximation (GGA), specifically the Perdew-Burke-Ernzerhof (PBE) functional; adsorption energies of −14, −67, and −29 meV were found for the CO, NO2, and NO molecules, respectively [43]. At the same time, Wehling et al. conducted the first joint experimental and theoretical investigation of NO2 adsorption on graphene, using the local density approximation (LDA) and the GGA for their calculations [44]. The NO2 adsorption energy computed with the GGA method was similar to that reported by Leenaerts et al. [43]; however, the NO2 adsorption energy calculated using the LDA method was higher than that obtained with the GGA method [44]. In another investigation, Lin et al. studied the CO and NO2 adsorption on graphene using a 4 × 4 graphene supercell with the van der Waals density functional (vdW-DF2) and LDA methods [45]. The CO and NO2 adsorption energies calculated by vdW-DF2 were larger than those obtained by LDA [45]. Regarding the adsorption mechanism of the toxic gases on pristine graphene, for the CO and NO molecules the most stable interaction occurs when the CO [43,45] and NO [43] molecules are parallel to the graphene surface, whereas for the NO2 molecule the most stable interaction is with the O atoms of the N-O bonds pointing toward the graphene surface [43][44][45]. Although the adsorption energies of the gases on graphene are notably affected by the methods employed [44][45][46], the interaction between the gases and pristine graphene is weak [43]. This could limit the sensitivity of pristine graphene for detecting toxic gases.
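As an illustrative bookkeeping sketch (not taken from the cited works), the adsorption energy reported in these DFT studies is the total-energy difference between the combined and the isolated systems; the example below merely reuses the physisorption energies quoted above.

```python
# Sketch of the adsorption-energy bookkeeping used in the DFT studies discussed above:
#   E_ads = E(graphene + gas) - E(graphene) - E(gas)
# A negative E_ads means adsorption is energetically favourable.
def adsorption_energy(e_complex_eV: float, e_surface_eV: float, e_gas_eV: float) -> float:
    return e_complex_eV - e_surface_eV - e_gas_eV

# Example: the weak physisorption energies on pristine graphene quoted above (in meV, Ref. [43])
reported_mev = {"CO": -14.0, "NO2": -67.0, "NO": -29.0}
for gas, e_ads in reported_mev.items():
    label = "favourable (weak physisorption)" if e_ads < 0 else "unfavourable"
    print(f"{gas:>4}: E_ads = {e_ads:6.1f} meV -> {label}")
```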
Pristine Graphene Decorated with Transition Metals
To date, researchers have employed different strategies for improving the reactivity of pristine graphene toward the detection of toxic gases. One of these strategies is the use of pristine graphene decorated with transition metals, which involves the deposition of transition metal atoms onto the pristine graphene. Various DFT studies on toxic gas adsorption on pristine graphene decorated with transition metals are available in the literature [47-51]. In the first instance, the CO, NO, and SO2 adsorption on Co-decorated graphene was studied [47]. Later, the NO [48] and SO2 [49] adsorption on Pt-decorated graphene was investigated. In another study, the CO and NO adsorption on Li-decorated graphene was calculated [50]. Finally, the NO2 adsorption on Ni-, Pd-, and Pt-decorated graphene was computed [51]. Regarding the interaction mechanism between the toxic gases and graphene decorated with transition metals, for the CO and NO molecules the most stable interaction occurs when the CO [47,50] and NO [47,48,50] molecules are vertical to the decorated graphene. Furthermore, it has been reported that the type of atom used to decorate the graphene can influence the adsorption mechanism between the toxic gas and the graphene [51]. For instance, the mode of NO2 adsorption on graphene decorated with Ni differs from that on graphene decorated with Pd and Pt (see Figure 3).
The adsorption energies of toxic gases on graphene decorated with transition metals are much higher than those on pristine graphene (see Table 2). Such increments in the adsorption energy can be attributed to the modification of the electronic properties of transition metal-decorated graphene compared to undecorated pristine graphene. For example, a high charge transfer from the metallic atoms to the graphene has been observed, which improves the reactivity of pristine graphene [48,49]. All previous results demonstrated that the toxic gas adsorption energies were enhanced on graphene decorated with transition metals compared to the adsorption energies on pristine graphene. This shows that pristine graphene decorated with transition metals is a promising material for use in toxic gas sensors. However, to date, DFT studies on the selectivity of graphene decorated with transition metals toward toxic gases have scarcely been reported in the literature. Therefore, more theoretical studies on the selectivity of pristine graphene decorated with transition metals should be carried out.
Defective Graphene
Another strategy employed to modify the reactivity of pristine graphene is the introduction of defects. As has been reported in the literature, nanoscale defects bring new functionalities that could be useful for different applications; for instance, structural defects notably modify the mechanical, chemical, and electronic properties of graphene [52]. At the theoretical level, structural defects have become very important for modifying the graphene reactivity because they can be introduced into graphene during synthesis, by chemical treatment, or by irradiation [52,53]. To date, various theoretical studies have been conducted on the use of defective graphene as a toxic gas sensor [54][55][56][57][58][59][60][61][62][63][64][65]. For instance, Huang et al. investigated the adsorption of CO, NO, and NO2 on armchair graphene nanoribbons (AGNRs) with edge dangling bond defects using the PW91 functional (see Figure 4). The CO, NO, and NO2 adsorption energies were −1.34, −2.29, and −2.70 eV, respectively. These results indicate that the toxic gas adsorption at AGNR edges is stronger than on the graphene surface [56]. To date, different defects have been introduced into the graphene surface to improve its reactivity toward the toxic gases (see Table 3). The single-vacancy and Stone-Wales defects have been used to modify the graphene surface, with the single-vacancy being the most studied defect. The single-vacancy defects in graphene have been found to have stronger interactions with toxic gases compared to pristine graphene, which shows that graphene with single-vacancy defects is a promising material for use in toxic gas sensors. The good sensitivity of graphene with single-vacancy defects is attributed to its modified electronic properties compared to those of pristine graphene: the removed C atom leaves the three neighboring C atoms with dangling bonds, which produce localized states at the Fermi level [59,63]. Regarding the adsorption mechanism of the toxic gases on graphene with a single vacancy, in the case of the CO and NO molecules the most stable adsorption occurs when the C and N atoms of the CO [57][58][59][60][61] and NO [57,58,60,63] molecules sit in the vacancy of graphene, respectively, whereas for the NO2 and SO2 molecules the most stable interaction occurs when the NO2 [58] and SO2 [64] molecules are vertical to the defective graphene with the N and S atoms pointing toward the vacancy, respectively.
It has also been shown that an external electric field serves as a good strategy to enhance the reactivity of defective graphene toward the toxic gases, as it positively affects the material's electronic properties [61]. Recently, the CO adsorption on a graphene sheet with single-vacancy defects under different electric fields was investigated [61]. The calculated adsorption energy of CO on the single-vacancy defective graphene under an applied electric field of −0.016 a.u. was 62.6% higher than without the electric field [61], which shows that an external electric field offers a good way to enhance the reactivity of defective graphene toward the toxic gases.
When toxic gas sensors are exposed to aerobic environments, interference from other gases will cause false alarms [57]. Therefore, it is essential to explore the selectivity of graphene-based sensors toward the toxic gases. In this direction, Ma et al. demonstrated that both CO and O2 molecules are chemisorbed on graphene with single-vacancy defects, which limits the selectivity toward CO since O2 chemisorption would lead to a false alarm [57]. In another study, the selectivity of graphene with a single-vacancy defect toward various gases (H2, N2, O2, CO, CO2, H2O, H2S, and NH3) was investigated. Five gases (H2, O2, CO, CO2, and NH3) exhibited chemisorption, whereas the remaining gases (N2, H2O, and H2S) showed physisorption. For H2, O2, CO2, and NH3, chemisorption involves dissociation of the molecules (e.g., O2 → O + O); remarkably, only the CO molecule remains undissociated. Therefore, graphene with a single vacancy would be more selective toward CO detection. For instance, it is observed that the O2 molecule requires about 5.11 eV to dissociate and then bind to the vacancy, whereas the CO molecule avoids paying that huge dissociation energy cost. This fact shows that graphene with a single-vacancy defect has a higher selectivity toward CO detection [59].
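The selectivity argument above boils down to comparing adsorption energies (and, where relevant, dissociation barriers) across the gases present in air. The snippet below is a minimal sketch of that screening step; the −0.8 eV chemisorption cutoff and the listed energies are illustrative assumptions rather than values taken from the cited works, and real studies additionally inspect charge transfer, geometry, and bond formation.

```python
CHEMISORPTION_CUTOFF_EV = -0.8  # assumed threshold, for illustration only

# Hypothetical adsorption energies (eV) on single-vacancy graphene.
adsorption_ev = {"CO": -2.1, "O2": -1.9, "N2": -0.3, "H2O": -0.25, "H2S": -0.4}

def classify(energies, cutoff=CHEMISORPTION_CUTOFF_EV):
    """Crude label: 'chemisorbed' if binding is stronger (more negative) than the cutoff."""
    return {gas: ("chemisorbed" if e < cutoff else "physisorbed")
            for gas, e in energies.items()}

print(classify(adsorption_ev))
# Selectivity toward CO requires that interfering gases such as O2 either remain
# physisorbed or, as argued above, face a large dissociation barrier before binding.
```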
Doped Graphene
Another approach widely used to modify the reactivity of pristine graphene is through doping. Doping atoms have been proven to substantially modify the electronic, chemical, and structural properties of pristine graphene [66,67]. At the theoretical level, there are various routes to dope a graphene sheet; a widely used method is to replace a carbon atom in the sheet with the doping atom. Currently, different doped-graphene sheets, in which a carbon atom is replaced by a dopant atom, have been explored as toxic gas sensors [55,. Around 30 elements of the periodic table have been explored for use as dopants, with N being the most studied because its atomic radius is similar to that of C (see Table 4). Among the toxic gases reviewed, CO is the most investigated due to its high toxicity in humans [101]. It is also observed that the GGA (specifically PBE) method and the supercell approach are the most widely used approaches for studying doped graphene for use in toxic gas sensors. Interestingly, several studies consider dispersion corrections in the calculations to better describe the interaction between the toxic gases and doped graphene. According to the adsorption energies of the toxic gases, in most cases the toxic gases are adsorbed more strongly on doped graphene than on pristine graphene, which shows that doped-graphene sheets are good candidates as toxic gas sensors. The increase in the adsorption energy can be attributed to the modification of the structural and electronic properties of doped graphene compared to pristine graphene. For instance, a high charge transfer from the metallic atoms to the graphene has been observed, which improves the reactivity of doped graphene toward the toxic gases [71,85,99]. However, in some cases the interaction between the toxic gases and the doped-graphene sheet has been reported to be low, as for CO on N-doped graphene. Regarding the adsorption mechanism of the toxic gases on doped graphene, it has been reported that the type of atom used to dope the graphene can influence the adsorption mechanism between the toxic gas and the graphene [69,71]. Finally, although all calculations are conducted at the DFT level, there are discrepancies between the reported results (e.g., the NO2 adsorption energies on N-doped graphene) in Table 4. These can be attributed to various factors, such as the functional and dispersion corrections employed in the calculations and the type and site of gas adsorption for which the adsorption energy was calculated, among others.
An external electric field could also be a good strategy to enhance the reactivity of graphene-based gas sensors [61]. In this sense, the CO adsorption on Al-doped graphene under different electric fields was investigated [83]. The calculated adsorption energies of CO on the Al-doped graphene under an applied electric field of −0.03 a.u. were higher than those without the electric field [83], indicating that an external electric field is a good way to enhance the reactivity of doped graphene toward the toxic gases [83,95]. It can also be used for the desorption of toxic gases from the sensor surface, simply by modifying the direction of the electric field. In this context, the NO and NO2 adsorption on Fe-doped graphene under different electric fields (0.01-0.05 a.u.) was investigated [79]. Electric fields above 0.03 a.u. were found to cause NO and NO2 desorption from the surface of Fe-doped graphene [79]. CO desorption from the Al-doped-graphene surface has also been demonstrated under the application of an electric field ≥0.03 a.u. [83]. Therefore, an electric field can be employed to reactivate doped-graphene toxic gas sensors for repeated applications. On the other hand, the selectivity of doped graphene toward the toxic gases has been investigated [77,79]. In this sense, Cortés-Arriagada et al. investigated the selectivity of Fe-doped graphene toward the CO and SO2 molecules in O2 environments [77]. They computed an O2 adsorption energy of −1.68 eV, which is similar to or higher than the adsorption energies of the CO and SO2 molecules; this limits the selectivity toward the CO and SO2 molecules in aerobic environments [77]. Another study examined the selectivity of Fe-doped graphene toward the NO and NO2 gases in O2 environments [79], showing that Fe-doped graphene is selective toward the NO and NO2 molecules in O2 environments [79].
Another strategy for doping graphene has been to substitute several carbon atoms with the doping atoms. Figure 5a shows a graphene sheet with three N atoms and a vacancy; this type of doping is known as pyridinic-type doping. Currently, there are some detailed studies on the use of pyridinic-type N-doped graphene (PNG) as a toxic gas sensor [57,102]. Ma et al. investigated toxic gas adsorption on a PNG sheet using the GGA method [57] and demonstrated that PNG is a good candidate for selectively sensing CO from air [57]. Recently, the NO and SO2 adsorption on PNG was investigated using the B3LYP approximation [102]. It was shown that the NO molecule is weakly adsorbed on the PNG sheet [57,102], which suggests that PNG may not be a good candidate as a NO sensor. However, the SO2 gas is strongly adsorbed (−2.58 eV); thus, PNG may be a good candidate as a SO2 sensor.
Another strategy employed to dope graphene sheets is inserting the doping atom into a double vacancy (divacancy), see Figure 5b. These structures are interesting because they show better reactivity toward the toxic gases than defective graphene [60]. Consequently, there have been various studies on the use of doped vacancy-defected graphene as a toxic gas sensor [60,62,75,80,89]. Jia et al. investigated the CO adsorption on Mn-doped vacancy-defected graphene using the PBE functional [62]; the CO adsorption energy on Mn-doped vacancy-defected graphene was higher than on defective or pristine graphene [62]. In another study, the CO and NO adsorption on Fe-doped vacancy-defected graphene was investigated using the PBE approximation [75], and adsorption energies of −1.10 and −2.41 eV were computed for CO and NO, respectively [75]. At the same time, Gao et al. computed the NO2 and SO3 interaction with Fe-doped vacancy-defected graphene employing the PBE functional (see Figure 6) [80], obtaining adsorption energies of −1.59 eV for NO2 and −1.39 eV for SO3 [80]. Recently, Ni-doped vacancy-defected graphene sheets were studied as toxic gas sensors using the PBE functional [89]. The computed results indicate that NO (−1.87 eV) and NO2 (−1.30 eV) were strongly adsorbed on Ni-doped vacancy-defected graphene, while the SO2 (−0.36 eV) and SO3 (−0.38 eV) gases were weakly adsorbed [89]. Finally, the CO and NO adsorption on Pd-doped vacancy-defected graphene was computed using the PBE functional [60]; the computed adsorption energies of the CO and NO molecules on Pd-doped vacancy-defected graphene were higher than on single-vacancy and pristine graphene [60].
Many theoretical studies have been conducted on the use of doped graphene as a toxic gas sensor, and the results provide evidence that doped-graphene sheets are good candidate gas-sensing materials. To experimentally confirm some of the above-mentioned theoretical predictions, various doped-graphene materials have been synthesized and evaluated as toxic gas sensors [103][104][105][106][107][108]. Based on the experimental evidence, the sensitivity and selectivity of doped graphene were higher than those of pristine graphene [103][104][105][106][107]. However, it is difficult to control the doping concentration and the number of graphene layers. Hence, future work should focus on improving doped-graphene gas sensors through novel, low-cost, industrially scalable techniques that allow the doping concentration and dopant type in graphene to be controlled.
Conclusions and Perspectives
This review presents a detailed and critical analysis of the current progress of graphene-based toxic gas sensors studied using first-principles methods. Since its development as a gas-sensing material, graphene has gained considerable interest from both a theoretical and a technological viewpoint. Therefore, modifications made to graphene to improve the detection of CO, NOx, and SOx toxic gases were reviewed and analyzed in detail. Based on this review, we concluded the following:
(a) The interaction between toxic gases and pristine graphene is weak, which reduces the sensitivity and selectivity of pristine graphene toward the toxic gases.
(b) Pristine graphene decorated with transition metals is a promising material for use in a toxic gas sensor. However, such studies are still scarce; therefore, more theoretical studies on the sensitivity and selectivity of pristine graphene decorated with transition metals toward the toxic gases should be carried out.
(c) Graphene with single-vacancy defects interacts more strongly with the toxic gases than pristine graphene does and is therefore a promising material for use in toxic gas sensors. In addition to point defects, line or multivacancy defects should be investigated at the DFT level to enrich graphene functionalities.
(d) Bilayer and multilayer graphene have a higher dimensionality than single-layer graphene, which can increase the number of possible defect types, namely point defects, line defects, and so on. At the theoretical level, more attention should be paid to understanding stable bilayer and multilayer graphene with randomly distributed defects.
(e) A large number of theoretical studies have addressed the use of doped graphene as a toxic gas sensor. The evidence indicates that doped-graphene sheets are good candidate materials. However, to date, DFT studies on the selectivity of doped graphene toward the toxic gases are limited; therefore, more theoretical studies on this selectivity should be carried out.
In addition, feasible approaches to facilitate the desorption of toxic gases from the doped-graphene surface should be investigated.
(f) Pyridinic-type N-doped graphene and doped vacancy-defected graphene are good materials for use in toxic gas sensors. However, more DFT-based studies on pyridinic-type N-doped graphene and doped vacancy-defected graphene as toxic gas sensors are needed.
(g) The reasons for the differences in adsorption energies obtained with different functionals (e.g., GGA, LDA, PBE, and vdW-DF2) should be compared and analyzed.
(h) This review shows the importance of theoretical studies for the design of novel and efficient toxic gas sensors. The theoretical results obtained so far can help and motivate experimental groups to design novel and efficient graphene-based toxic gas sensors.
|
v3-fos-license
|
2019-01-31T20:16:45.104Z
|
2019-01-31T00:00:00.000
|
59610572
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s12893-019-0481-0",
"pdf_hash": "0998e09f0292b85d0dac1dc9b15dad9e797f88ac",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44145",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0998e09f0292b85d0dac1dc9b15dad9e797f88ac",
"year": 2019
}
|
pes2o/s2orc
|
Bronchial angiolipoma successfully treated by sleeve resection of the right bronchus intermedius: a case report
Background Angiolipoma is a rare, benign tumor that primarily develops in the limbs and trunk. The occurrence of angiolipoma in the lungs is extremely rare; to date, only two cases of primary bronchial angiolipoma have been reported. Here, we report a case of angiolipoma of the right bronchus intermedius that was successfully treated with sleeve resection and reconstructive surgery. Case presentation This report presents a case of angiolipoma that developed in the right bronchus intermedius of a 68-year-old man. A chest CT revealed a 10-mm endobronchial mass that was clearly visible as a high-attenuation area of contrast enhancement. Bronchoscopy revealed a submucosal tumor on the anterior wall of the entrance to the right bronchus intermedius that was constricting the airway lumen. The tumor surface was covered with numerous engorged blood vessels, and the middle and inferior pulmonary lobes were intact. Bronchial sleeve resection of the right bronchus intermedius was performed. Histologically, a mixture of proliferating blood vessels and adipocytes were observed within the bronchus wall. Therefore, the pathological diagnosis was angiolipoma. Lung function was preserved, and complete resection of the tumor was achieved. At present (2 years and 7 months after surgery), the patient is recurrence-free. Conclusion Accordingly, using bronchial sleeve resection and end-to-end anastomosis techniques, we accomplished complete tumor excision and avoided the need to resect additional lung parenchyma. Our procedure preserved pulmonary function and yielded a curative result. Bronchoscopic intervention or minimal parenchymal resection should be considered as treatments for bronchial angiolipoma. Given the small number of reports of bronchial angiolipoma, the collection of additional data is important to elucidate the clinical characteristics of this rare tumor.
Background
To the best of our knowledge, only two cases of angiolipoma that developed in the lung have been reported [1,2]. Here, we report a case of angiolipoma of the right bronchus intermedius that was successfully treated with sleeve resection and reconstructive surgery.
Case presentation
A 68-year-old man was referred to our department with infraclavicular lymphadenopathy and an endobronchial tumor that was incidentally discovered on computed tomography (CT) at another hospital. An excision of the left infraclavicular nodes was performed, but no malignant findings were observed. A chest CT revealed a 10-mm endobronchial mass that was clearly visible as a high-attenuation area of contrast enhancement (Fig. 1a, b). An 18-fluorodeoxyglucose positron emission tomographic whole-body scan revealed no significant uptake in the lesion. Bronchoscopy revealed a submucosal tumor on the anterior wall of the entrance to the right bronchus intermedius that was constricting the airway lumen (Fig. 2a, b). The tumor surface was covered with numerous engorged blood vessels, and the middle and inferior pulmonary lobes were intact. Although a biopsy of the mass was performed, no definitive diagnosis was achieved.
A posterolateral thoracotomy was performed through the fifth intercostal space under general anesthesia. The bronchus intermedius was dissected, and the membranous portion was opened to expose the lumen. The distal end of the tumor was transected first followed by the proximal end, providing adequate tumor-free margins. Because the tumor had clearly defined borders, the resection line was determined by macroscopically securing the margin from the tumor. Subsequently, the tumor and bronchus intermedius were removed en bloc. The tumor measured 13 × 6 mm in size and was hemispherical in morphology. Examination of frozen tumor sections suggested angioma with no malignant findings. The presence of tumor-free margins at both the proximal and distal ends of the bronchus was also confirmed by examination of frozen sections. The excised segment of the bronchus measured 1 cm in length; thus, the bronchus was reconstructed by end-to-end anastomosis using 3-0 PDS (polydioxanone) sutures without excessive tension. The anastomosis was then wrapped in a pedicled intercostal muscle flap to isolate it from the pulmonary artery.
Histologically, a mixture of proliferating blood vessels and adipocytes was observed within the bronchus wall (Fig. 3a-c). Therefore, the pathological diagnosis was angiolipoma. The patient experienced no postoperative complications and was discharged on postoperative day 15. Two years and 7 months postsurgery, the patient has experienced no recurrence.
Discussion and conclusions
Angiolipoma is a rare benign neoplasm. The first reported case of angiolipoma, which was described by Bowen in 1912 [3], was in a patient with subcutaneous lesions in all four limbs. The most common sites of angiolipoma occurrence include the upper and lower limbs, abdomen, precordium, and back. In contrast, the occurrence of angiolipoma in the lung is extremely rare, and only two cases have been reported to date [1,2]. Similar to the present case, the tumor developed in the right bronchus intermedius in one of these cases. In the other case, the tumor developed in the bronchus of the right inferior lobe [1,2]. In each of these cases, angiolipoma developed in the central bronchus and protruded into the airway. Due to the indolent nature of this neoplasm, symptoms of bronchial obstruction, such as coughing, wheezing, dyspnea, sputum production, hemoptysis, atelectasis, and pneumonia, may be observed in these patients.
Histologically, angiolipoma consists of mature adipose and vascular tissues in varying proportions [3,4]. The vascular tissue is predominantly located in the tumor periphery and may include capillaries with an occasional fibrin thrombus. In the present case, we observed a mixture of proliferating capillaries and small blood vessels as well as proliferating adipocytes between the mucosal epithelium and bronchial cartilage. In addition, the tumor surface was lined with normal bronchial epithelium. This finding suggests that endobronchial angiolipoma arises in bronchial submucosal adipose tissue, similarly to endobronchial lipoma. Angiolipomas are histologically classified into two types: infiltrating and non-infiltrating. Infiltrating angiolipomas are not capsulated and have the tendency to invade and spread to the surrounding tissues [5]. The present case did not exhibit such a tendency and was, therefore, considered to be a non-infiltrating angiolipoma.
With respect to treatment, bronchoscopic intervention should be considered as the first choice of therapy for benign endobronchial tumors. Previously reported cases of angiolipoma have been treated with either (a) surgical excision or (b) bronchoscopic resection with a high-frequency electric snare together with argon plasma coagulation under general anesthesia. Due to the risk of recurrence, the authors of the former case study recommended surgical resection for patients in whom surgery could be tolerated [1]. In the latter case, the authors preferred a bronchoscopic approach due to the low risk of malignant transformation of this rare, benign tumor [2]. In the present case, the lesion was located in the right bronchus intermedius and was covered with numerous engorged blood vessels. We decided to perform open surgery rather than endoscopic resection because (a) the endoscopic resection procedure may increase the risk of tumor bleeding; (b) complete resection with a high-frequency electric snare is potentially challenging due to the hemispherical, rather than polypoid, tumor morphology; (c) the presence of a malignant tumor (e.g., a carcinoid tumor) should not be discounted; and (d) bronchial sleeve resection and reconstruction could be performed safely given that the length of the longitudinal tumor axis was relatively short. Notably, a lung-sparing technique should be considered when benign tumors are treated surgically. Accordingly, using bronchial sleeve resection and end-to-end anastomosis techniques, we accomplished complete tumor excision and avoided the need to resect additional lung parenchyma. Our procedure preserved pulmonary function and yielded a curative result. As a treatment option for angiolipomas other than surgical resection or bronchoscopic intervention, radiotherapy can be considered. In the case of incomplete resection of an infiltrating angiolipoma, postoperative radiotherapy should be recommended. However, in the present case, because complete resection was achieved, postoperative irradiation was not performed.
Fig. 3 a Formalin-fixed resected specimen. The surface of this hemispherical lesion was covered with normal bronchial mucosa. b A histological image of the specimen demonstrates an endobronchial tumor. Hematoxylin and eosin stain; original magnification: ×12.5. c The surface of the tumor was covered with normal bronchial epithelium, and was composed of adipose tissue and numerous blood vessels. Hematoxylin and eosin stain; original magnification: ×200
In conclusion, bronchoscopic intervention or minimal parenchymal resection should be considered as treatments for bronchial angiolipoma. Given the small number of reports of bronchial angiolipoma, the collection of additional data is important to elucidate the clinical characteristics of this rare tumor.
Abbreviations CT: Computed tomography
|
v3-fos-license
|
2022-12-14T16:18:53.297Z
|
2022-12-01T00:00:00.000
|
254618859
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1999-4915/14/12/2752/pdf?version=1670599761",
"pdf_hash": "23bba6b05ca249d4eb43a2d08be4358563baaf67",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44146",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "8ac4842c8465368780ff19443ef8d89836589539",
"year": 2022
}
|
pes2o/s2orc
|
Development and Visualization Improvement for the Rapid Detection of Decapod Iridescent Virus 1 (DIV1) in Penaeus vannamei Based on an Isothermal Recombinase Polymerase Amplification Assay
Viral diseases have seriously restricted the healthy development of aquaculture, and decapod iridescent virus 1 (DIV1) has led to heavy losses in the global shrimp aquaculture industry. Due to the lack of effective treatment, early detection and regular monitoring are the most effective ways to avoid infection with DIV1. In this study, a novel real-time quantitative recombinase polymerase amplification (qRPA) assay and its instrument-free visualization improvement were described for the rapid detection of DIV1. Optimum primer pairs, suitable reaction temperatures, and probe concentrations of a DIV1-qRPA assay were screened to determine optimal reaction conditions. Then, its ability to detect DIV1 was evaluated and compared with real-time quantitative polymerase chain reactions (qPCRs). The sensitivity tests demonstrated that the limit of detection (LOD) of the DIV1-qRPA assay was 1.0 copies μL−1. Additionally, the presentation of the detection results was improved with SYBR Green I, and the LOD of the DIV1-RPA-SYBR Green I assay was 1.0 × 10 3 copies μL−1. Both the DIV1-qRPA and DIV1-RPA-SYBR Green I assays could be performed at 42 °C within 20 min and without cross-reactivity with the following: white spot syndrome virus (WSSV), Vibrio parahaemolyticus associated with acute hepatopancreatic necrosis disease (VpAHPND), Enterocytozoon hepatopenaei (EHP), and infectious hypodermal and hematopoietic necrosis virus (IHHNV). In conclusion, this approach yields rapid, straightforward, and simple DIV1 diagnoses, making it potentially valuable as a reliable tool for the detection and prevention of DIV1, especially where there is a paucity of laboratory equipment.
Introduction
With a growing global population and improved nutritional awareness, the aquaculture industry has been developing rapidly [1]. Penaeus vannamei is presently the most important cultured shrimp in the world [2,3]. However, serious diseases affecting this species, especially the ones caused by viral infections, pose a severe threat to the global shrimp industry. Decapod iridescent virus 1 (DIV1) is a novel pathogen discovered in 2016 that has a substantial impact on the global aquaculture industry and has drawn public attention in recent years [4][5][6][7]. DIV1 is an icosahedral symmetric virus with approximately 166 kbp of double-stranded DNA and has a wide host range. Susceptible species that have been reported include the following: Fenneropenaeus chinensis; Macrobrachium rosenbergii; Procambarus clarkii; Penaeus monodon; Macrobrachium nipponense; Exopalaemon carinicauda; and two species of crab, Eriocheir sinensis and Pachygrapsus crassipes [4,5,[8][9][10][11]. The infected shrimp generally show an empty stomach and intestinal tract, pale hepatopancreas, and a soft shell [5]. Additionally, diseased shrimp sink to the bottom of the aquaculture pool due to their weakened swimming ability. Furthermore, dead individuals accumulate at the bottom of the aquaculture pool, and the cumulative mortality may reach 80% [12]. Because effective treatment is still unavailable for shrimp infected with DIV1, rapid and effective early pathogen detection plays a crucial role in controlling the spread of the virus in farms and reducing production losses [13].
The target sequences for the detection of DIV1 include the MCP, ATPase, ribonucleotide reductase (RNR), and DNA methyltransferase genes [5,13]. Focusing on these targets, various methods have been described for the detection of DIV1, including nested polymerase chain reaction (nested PCR), real-time quantitative PCR (qPCR), quantitative loop-mediated isothermal amplification (qLAMP), and in situ hybridization assays [12,14,15]. However, most approaches require expensive laboratory equipment, professional operation, and a relatively long amplification time. These disadvantages limit their role in rapid field detection. As a novel isothermal amplification method, recombinase polymerase amplification (RPA) applies the complex formed by UvsX recombinase, UvsY protein, single-stranded oligonucleotides (30-35 nt primers), and single-strand binding proteins (SSBs) to assist the site-specific D-loop strand invasion of dsDNA, and then amplifies the target DNA fragments rapidly and efficiently at ambient temperatures in less than 30 min [16]. Compared to other DNA amplification methods, RPA exhibits high efficiency, rapid detection speed, simple operation, and an affordable price in basic laboratory and field applications. Additionally, RPA has now been widely used to detect bacteria, viruses, parasites, genetically modified crops, cancer, and so on [17][18][19][20][21][22][23]. Real-time quantitative recombinase polymerase amplification (qRPA), developed from basic RPA, does not require post-amplification purification or gel electrophoresis. qRPA can analyze products quantitatively in a completely closed tube, preventing cross-contamination and false positives caused by aerosolized products [24].
In the present study, an effective real-time quantitative recombinase polymerase amplification (qRPA) assay (DIV1-qRPA assay) was described to address the current lack of rapid DIV1 field detection methods. Compared with basic RPA, the sensitivity and specificity of qRPA are higher, and qRPA does not need product purification and gel electrophoresis. The amplified products could be quantitatively analyzed in real-time with a simple fluorescence detector, which is more suitable for on-site detection [16]. Additionally, an equipment-free visual optimization of DIV1-detection results was developed in this study (DIV1-RPA-SYBR Green I assay), which is also expected to serve as novel technical guidance for the prevention and rapid on-site diagnosis of DIV1 infection.
Pathogen Samples and Recombinant Plasmid Construction
White spot syndrome virus (WSSV), Vibrio parahaemolyticus associated with acute hepatopancreatic necrosis disease (Vp AHPND ), and Enterocytozoon hepatopenaei (EHP) were supplied by the Institute of Oceanology, Chinese Academy of Sciences. Additionally, the pathogenic liquid of infectious hypodermal and hematopoietic necrosis virus (IHHNV) was provided by the Guangxi Academy of Fishery Sciences, Nanning, China. The full length of the DIV1 ATPase gene (GenBank Accession Number: KY681040.1) was synthesized and cloned into the pUC-57 vector; then, the insert sequence was confirmed by sequencing. The recombinant plasmids were extracted by a TIANprep Mini Plasmid Kit (DP103, Tiangen, Beijing, China), and the concentration of the plasmid was then determined with a NanoDrop One spectrophotometer (Thermo Scientific, Waltham, MA, USA). The DNA copy number of the DIV1 recombinant plasmid was calculated using the following equation:
DNA copy number (copies/µL) = [concentration (ng/µL) × 10^−9 × 6.022 × 10^23 (copies/mol)] / [clone size (bp) × 660 (g/mol/bp)]
The recombinant plasmid was 434 ng µL −1 , equal to 1.0 × 10 11 copies µL −1 , and was then diluted serially tenfold from 1.0 × 10 11 to 1.0 copies µL −1 .
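The copy-number formula above can be evaluated directly as a cross-check of the dilution series. The sketch below assumes a total construct length of about 3,960 bp (pUC-57 backbone plus the ATPase insert); the exact clone size is not stated in the text, and this value is chosen only because it reproduces the reported ~1.0 × 10 11 copies µL −1 for the 434 ng µL −1 stock.

```python
AVOGADRO = 6.022e23   # copies per mol
BP_MASS  = 660.0      # average g/mol per base pair of double-stranded DNA

def copies_per_ul(conc_ng_per_ul, clone_size_bp):
    """Plasmid copy number (copies/uL) from concentration and construct length."""
    return (conc_ng_per_ul * 1e-9) * AVOGADRO / (clone_size_bp * BP_MASS)

# Assumed construct size (bp); not given in the text.
CLONE_SIZE_BP = 3960
print(f"{copies_per_ul(434, CLONE_SIZE_BP):.2e} copies/uL")   # ~1.00e+11

# Ten-fold serial dilution of the stock down to 1.0 copies/uL, as used later.
dilution_series = [1.0e11 / 10**i for i in range(12)]
```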
Primer and Probe Design
Five primer pairs and probes for the DIV1-qRPA assay were designed based on the conserved regions of the ATPase gene of DIV1. These primer pairs and a probe were blasted against the NCBI nucleotide database (https://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 18 June 2021) to ensure no homology with other organism sequences. The DIV1-qPCR assay was carried out using the previously designed primer pairs and probe [14]. All primers and probes above are listed in Table 1.
DIV1-RPA-SYBR Green I Assay
The DIV1-RPA-SYBR Green I assay was performed using GenDx Basic kits (KS101, GenDx Biotech, Suzhou, China). The 50 µL reaction system contained 20 µL of rehydration buffer, 5 µL of the selected primer pair, 2 µL of DIV1 recombinant plasmid, 21 µL of ddH 2 O, and 2 µL of magnesium acetate. The reactions were performed in a VeritiPro Thermal Cycler (Thermo Fisher Scientific, Waltham, MA, USA) for 20 min at the optimum reaction temperature determined above. Then, 2 µL of the obtained product was verified by 1.5% gel electrophoresis, and the remaining products were mixed with 2 µL of SYBR Green I nucleic acid dye (1:10 dilution of a 10,000× stock solution, Solarbio, Beijing, China). Their fluorescence intensities were then observed under 302 nm UV light. To optimize the visualization conditions, five different final primer concentrations (0.3 µM, 0.2 µM, 0.1 µM, 0.05 µM, and 0.025 µM) were tested to avoid false positives caused by primer dimers.
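Because the 50 µL reaction is assembled from fixed component volumes, it can be convenient to encode the recipe as data and sanity-check the totals before scaling to a master mix. The helper below simply mirrors the volumes listed above; the 10% pipetting overage is an assumption and not part of the published protocol.

```python
# Component volumes (uL) of the 50-uL DIV1-RPA-SYBR Green I reaction described above.
rpa_mix_ul = {
    "rehydration buffer":           20.0,
    "primer pair":                   5.0,
    "template (plasmid or ddH2O)":   2.0,
    "ddH2O":                        21.0,
    "magnesium acetate":             2.0,
}
assert abs(sum(rpa_mix_ul.values()) - 50.0) < 1e-9  # should total 50 uL

def master_mix(mix, n_reactions, overage=1.10):
    """Scale per-reaction volumes to n reactions with an assumed 10% overage."""
    return {name: round(vol * n_reactions * overage, 2) for name, vol in mix.items()}

print(master_mix(rpa_mix_ul, 8))
```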
Evaluation of Sensitivity and Specificity
To compare the sensitivity among DIV1-qPCR, DIV1-qRPA, and DIV1-RPA-SYBR Green I assay, the gradient diluted DIV1 plasmid sample with a concentration from 1.0 × 10 5 to 1.0 copies µL −1 was used as a template in the positive control group and the equivalent volume of ddH 2 O was used as template in the no-template control (NTC) group to estimate the LOD of the three assays. Standard curves of DIV1-qPCR and DIV1-qRPA were created versus the concentration gradient of the diluted plasmid, and their Ct values were calculated and expressed as mean ± SD.
To determine the specificities of DIV1-qRPA and DIV1-RPA-SYBR Green I assay, the DIV1 plasmid sample with a concentration of 1.0 × 10 5 copies µL −1 and the other DNA samples for the specificity assays containing WSSV, IHHNV, EHP, and Vp AHPND were used as templates in the positive control group, and the equivalent volume of ddH 2 O was used as a template in the NTC group for experiments.
Primer Screening
To optimize subsequent reaction conditions, five different sets of primers were screened based on the threshold time. According to the amplification curves after reacting at 35 °C for 20 min, all five sets of primers showed good amplification efficiency. All primer pairs reached the detection threshold within 7 min and showed a plateau period after that, and even the lowest end-point relative fluorescence unit (End-RFU) was close to 1000 (Figure 1A). Among all the primer pairs, the fifth group of primers (FR5 and RR5) first reached the detection threshold less than 4 min after the reaction began, and its End-RFU was significantly higher than that of the other groups, at 2385.5 ± 64.6 (Figure 1A,B). Therefore, the fifth set of primers was selected for use in the following assays.
Optimizing the Reaction Temperature of DIV1-qRPA Assay
To improve the conditions for subsequent assays, seven different temperatures (30 °C, 32 °C, 35 °C, 37 °C, 40 °C, 42 °C, and 45 °C) were screened based on the End-RFU and the threshold time. For all the tested temperatures, the target fragments were effectively amplified, reached the detection threshold within seven minutes, and showed a subsequent plateau period (Figure 2A). In particular, the amplification curve at 42 °C first reached the detection threshold at 1.14 ± 0.146 min, p < 0.05, and arrived at the plateau stage at the 8th minute (Figure 2B). Meanwhile, a relatively high average End-RFU of 3277.2 ± 143.9, p < 0.05, was observed at 42 °C, close to the maximum End-RFU of 3557.6 ± 137.8, p < 0.05, at 40 °C (Figure 2C). Thus, 20 min and 42 °C were chosen as the subsequent testing reaction conditions. In the range of 37-45 °C, the threshold time was relatively shorter than in the other groups (Figure 2B). When the set temperature was lower than 42 °C, the threshold time was shortened as the reaction temperature gradually increased. However, when the reaction temperature reached 45 °C, the detection threshold time suddenly lagged behind that at 40 °C. An interesting finding is that the End-RFU obtained at 37-42 °C was comparatively higher than in the other sets, and the End-RFU at 45 °C (2006.2 ± 131.7) was the lowest among all the set temperatures (Figure 2C). Hence, by taking the intersection of these two temperature ranges, we concluded that the best temperature range for the DIV1-qRPA assay is 37-42 °C.
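The selection logic used above, keeping the temperatures that combine a short threshold time with a high End-RFU and taking their intersection, can be written out explicitly. In the sketch below only the three End-RFU values quoted in the text are real; the remaining metrics and both cutoffs are placeholders standing in for the full Figure 2 data.

```python
# Per-temperature metrics; the End-RFU values at 40, 42 and 45 C are quoted above,
# everything else is a placeholder.
end_rfu = {30: 2500, 32: 2600, 35: 2700, 37: 3100, 40: 3557.6, 42: 3277.2, 45: 2006.2}
thr_min = {30: 5.0,  32: 4.2,  35: 3.5,  37: 2.2,  40: 1.4,    42: 1.14,   45: 1.9}

RFU_CUTOFF  = 3000.0  # assumed
TIME_CUTOFF = 2.5     # minutes, assumed

fast   = {t for t, m in thr_min.items() if m <= TIME_CUTOFF}
bright = {t for t, rfu in end_rfu.items() if rfu >= RFU_CUTOFF}

print(sorted(fast & bright))   # -> [37, 40, 42], matching the 37-42 C window above
```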
Optimizing the Probe Consumption of the DIV1-qRPA Assay
To optimize the probe consumption, four different volumes (0.6 µL, 0.9 µL, 1.2 µL, and 1.8 µL) of the 10 µM probe were respectively added to the reaction system. All groups were successfully amplified except the NTC group (Figure 3A). When the probe volume was no more than 1.2 µL, the corresponding End-RFU increased with the increase in probe volume, and the End-RFU at 1.8 µL (3038 ± 245.0) was similar to that at 1.2 µL (3267.5 ± 159.0), p < 0.05 (Figure 3B). Hence, we finally utilized 1.2 µL of the 10 µM probe in the subsequent sensitivity experiments.
Sensitivity Evaluation of qPCR and qRPA Assays
To test the sensitivity, six sets of gradient-diluted DIV1 plasmid samples with concentrations from 1.0 × 10 5 to 1.0 copies µL −1 and the equivalent volume of ddH 2 O were used to analyze the DIV1-qPCR, DIV1-qRPA, and DIV1-RPA-SYBR Green I assays. Analyzing the amplification curves in the DIV1-qRPA assay, we observed that the six sets were all successfully amplified except for the NTC, and all sets reached the detection threshold within 10 min (Figure 4A). The standard curve showed that the DIV1-qRPA assay had a high correlation coefficient (R 2 = 0.9891) within the range of 1.0 × 10 5 -1.0 DNA copies µL −1 (Figure 4B). The regression equation was Ct = −3.517 log n + 21.34 (n = DIV1 DNA copies). In the meantime, the standard curve of the DIV1-qPCR assay also showed a high correlation coefficient (R 2 = 0.9985) within the range of 1.0 × 10 7 -1.0 × 10 1 DNA copies µL −1 (Figure 5A), and the regression equation of the qPCR was Ct = −3.221 log n + 39.78 (n = DIV1 DNA copies). Compared with the DIV1-qPCR assay, the DIV1-qRPA assay had a more stable amplification effect at low input template concentrations (Figure 5B), and the LOD of the DIV1-qRPA assay (1.0 copies µL −1 ) was lower than that of the qPCR (10 copies µL −1 ).
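The two regression equations reported above can be inverted to estimate the template copy number from an observed Ct (or, for qRPA, threshold time), and the slope gives the usual amplification-efficiency estimate for the qPCR curve. The helper below uses only the coefficients quoted in the text; applying it to the qRPA curve treats the threshold time like a qPCR Ct, which is a simplification.

```python
# Standard-curve coefficients quoted above: Ct = slope * log10(copies) + intercept
CURVES = {
    "qRPA": {"slope": -3.517, "intercept": 21.34},
    "qPCR": {"slope": -3.221, "intercept": 39.78},
}

def copies_from_ct(ct, assay):
    """Invert the standard curve to estimate input copies per uL."""
    c = CURVES[assay]
    return 10 ** ((ct - c["intercept"]) / c["slope"])

def qpcr_efficiency():
    """Amplification efficiency from the qPCR slope: E = 10^(-1/slope) - 1."""
    return 10 ** (-1.0 / CURVES["qPCR"]["slope"]) - 1.0

print(f"qPCR Ct = 30   -> ~{copies_from_ct(30, 'qPCR'):.1e} copies/uL")
print(f"qPCR efficiency ~ {qpcr_efficiency():.1%}")
```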
Optimizing the Primer Concentrations of DIV1-RPA-SYBR Green I Assay
To prevent false positives caused by dimers, five alternative final primer concentrations (0.3 µM, 0.2 µM, 0.1 µM, 0.05 µM, and 0.025 µM) were set to select the optimum concentration based on the fluorescence results and electrophoresis images. The fluorescence results indicated that when the primer concentration was higher than 0.025 µM, the difference between the NTC and the positive sample was too small to distinguish between them, while the NTC had the lowest background value at 0.025 µM and its dimer was also the lightest in the corresponding electrophoresis image ( Figure 6A). Therefore, 0.025 µM was selected as the optimal primer concentration for DIV1-RPA-SYBR Green I.
Sensitivity Evaluation of RPA-SYBR Green I Assay
The sensitivity of the DIV1-RPA-SYBR Green I assay was determined by observing the fluorescence results of each tube after the reaction was completed. The results show that reducing the primer concentration indeed affected the detection limit, and the sensitivity of the DIV1-RPA-SYBR Green I assay was still high. When the input template concentration was lower than 1.0 × 10 3 copies µL −1 , there was little difference between the positive and NTC samples to the naked eye under UV light, indicating the LOD of the DIV1-RPA-SYBR Green I assay was 1.0 × 10 3 copies µL −1 ( Figure 6B).
Specificity Evaluation of qRPA and RPA-SYBR Green I Assay
To test the specificity, the DIV1 plasmid sample with a concentration of 1.0 × 10 5 copies µL −1 , the ddH 2 O, and the DNA templates of WSSV, IHHNV, EHP, and Vp AHPND were used to determine the DIV1-qRPA assay and the DIV1-RPA-SYBR Green I assay. The amplification curves or green fluorescence were only observed from positive plasmid samples, suggesting that the DIV1-qRPA assay and DIV1-RPA-SYBR Green I assay both had good specificity (Figures 4C and 6C).
Discussion
DIV1 is a recently identified pathogen in crustaceans and poses a severe threat to the aquatic industry in China and around the world [8]. Because the onset of DIV1 is currently difficult to treat effectively, it is, therefore, urgent to develop a rapid and sensitive field detection method to prevent this virus in advance [13,15]. RPA, as a newly emerged isothermal molecular detection method, has gained popularity in pathogen detection, especially since it is rapid, simple, sensitive, specific, and affordable to use. In the present study, the novel DIV1-qRPA assay and the DIV1-RPA-SYBR Green I assay we developed could rapidly detect DIV1 with high sensitivity and specificity and are suitable for field detection.
Previous studies have shown that the RPA reaction could operate at temperatures ranging from 22 °C to 45 °C [25]. In this study, we established that the optimal temperature range for the RPA reaction was 37-42 °C based on the End-RFU and the threshold time.
The amplification efficiency at all temperatures of this range was good. This result was consistent with previous studies showing that the RPA reaction does not require precise temperature control in this range [25]. Moreover, from the abnormally low End-RFU of 45 °C (2006.2 ± 131.7) (Figure 2C), we inferred that a high reaction temperature might lead to the inactivation of enzymes in the system and thus greatly reduce the amplification efficiency of RPA. In addition, it was also proven that the RPA reaction could be carried out at body temperature, so the requirement for external heating equipment might be reduced [19]. According to a previous report, the threshold fluorescence value could be reached within 5-8 min with agitation beginning at the fourth minute of the reaction; otherwise, the time it takes to reach a detectable level without agitation is between 8 and 14 min [26]. In our operation, a stable amplification curve was more likely to form when agitation was fully performed at the beginning of the process and the third or fourth minute of the reaction. Moreover, when the concentration of the input template was low, the LOD might be improved by taking out the reaction mixture and fully mixing it several times in the fluorescence measurement gap. In addition, the LOD could also be improved by appropriately increasing probe consumption when the template input concentration was low.
Compared with the DIV1-qPCR assay, the DIV1-qRPA assay saved almost half of the DIV1-qPCR's reaction time and obtained a higher sensitivity of 1.0 copies µL −1 . The sensitivity of the DIV1-qRPA assay was also higher than that of all current detection methods for DIV1. Although the qLAMP assay can also be performed at a constant temperature, it requires six intricate oligonucleotide primers, a higher reaction temperature of 60 °C, and a lengthy reaction period of 45 min [27]. In addition, it has been reported that the repeatability of the qLAMP assay could be poor when the template concentration was lower than 10 3 DNA copies µL −1 [14]. Compared with qLAMP, the DIV1-qRPA assay required a lower reaction temperature and a shorter amplification time and showed a more stable amplification effect at a low input template concentration. Therefore, the DIV1-qRPA assay could not only be used as an alternative method for detection in the laboratory, but it is also suitable for rapid detection in the field with simple equipment.
RPA products can be equipment-free and visually analyzed using a variety of methods. In addition to the qRPA method mentioned in this study, the recombinase polymerase amplification and a lateral flow dipstick (RPA-LFD) method are also frequently used to detect pathogens [28,29]. However, a drawback of using the RPA-LFD test for the identification of pathogens is the potential post-amplification contamination of samples in field settings [24]. Moreover, the use of gold nanoparticles, fluorescence-labeled probes, biotin, biotin-ligand complexes, and antibodies has resulted in a large increase in the cost of evaluating high-throughput clinical samples. Hence, we employed the SYBR Green I to optimize an affordable visual analysis method for DIV1-qRPA detection results that eliminated the risk of potential sample contamination. The DIV1-RPA-SYBR Green I assay could also maintain a high level of specificity and sensitivity that achieved a LOD of 1.0 × 10 3 copies µL −1 . This is the first study to develop a DIV1-RPA-SYBR Green I assay for rapid and sensitive DIV1 detection. The potential flaw of the RPA-SYBR Green I assay for DIV1 is that SYBR Green I dye can nonspecifically bind to any double-stranded DNA. Therefore, when there is a lot of template DNA or primers present, the specificity of DIV1 detection would be decreased. Previous studies have attempted to limit the amount of DNA templates or primers in the reaction system to ensure the specificity of RPA-SYBR Green I assay [30,31]. In this study, we optimized the concentration of primers in the reaction system with the other conditions fixed and also assisted gel electrophoresis results in avoiding subjective judgment. False positives might also occur with high DNA content, and false negatives could conversely occur with low DNA content. A total amount of 300 ng to 2 µg of DNA template in a 50 µL reaction system has been recommended to avoid false negative and positive detection results in clinical testing in the field [30]. In this study, false positives due to primer dimers were not observed when 2 µL templates with concentrations of 1.0 × 10 5 copies µL −1 or less were added to the mixture system at a final primer concentration of 0.025 µM. Moreover, low-cost commercial nucleic acid extraction methods for field samples, such as magnetic bead-based technology and heated NaOH method, could be used to further reduce the cost of RPA detection [32].
Conclusions
We developed a highly sensitive and specific real-time quantitative RPA assay and improved its instrument-free visualization for rapidly detecting DIV1. The LOD of the DIV1-qRPA assay reached 1.0 copies µL −1 , lower than the LODs of qPCR and qLAMP, and the visual detection limit of the instrument-free DIV1-RPA-SYBR Green I assay was 1.0 × 10 3 copies µL −1 . Both assays could be performed at 42 °C within 20 min and had no cross-reactivity with WSSV, Vp AHPND , EHP, or IHHNV. These two methods offer straightforward, easy-to-read, and equipment-free approaches for DIV1 detection in shrimp farms, quarantine stations, and basic laboratories with limited resources, especially in remote and rural regions; the most appropriate method can be chosen based on the practical conditions of the testing site. Furthermore, the results of this study may promote the wide application of DIV1 detection methods based on nucleic acid amplification technology and provide a reference for monitoring and controlling this new virus in the aquaculture industry.
|
v3-fos-license
|
2019-04-10T13:03:27.754Z
|
2019-04-01T00:00:00.000
|
104296485
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.15252/msb.20188462",
"pdf_hash": "5e62bde0ae7bb8713519b8fcaa374f55fc15b6f7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44147",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "9cb51c9548de8427cd2adb15e58b1da21a45a88d",
"year": 2019
}
|
pes2o/s2orc
|
Enzyme promiscuity shapes adaptation to novel growth substrates
Abstract Evidence suggests that novel enzyme functions evolved from low‐level promiscuous activities in ancestral enzymes. Yet, the evolutionary dynamics and physiological mechanisms of how such side activities contribute to systems‐level adaptations are not well characterized. Furthermore, it remains untested whether knowledge of an organism's promiscuous reaction set, or underground metabolism, can aid in forecasting the genetic basis of metabolic adaptations. Here, we employ a computational model of underground metabolism and laboratory evolution experiments to examine the role of enzyme promiscuity in the acquisition and optimization of growth on predicted non‐native substrates in Escherichia coli K‐12 MG1655. After as few as approximately 20 generations, evolved populations repeatedly acquired the capacity to grow on five predicted non‐native substrates—D‐lyxose, D‐2‐deoxyribose, D‐arabinose, m‐tartrate, and monomethyl succinate. Altered promiscuous activities were shown to be directly involved in establishing high‐efficiency pathways. Structural mutations shifted enzyme substrate turnover rates toward the new substrate while retaining a preference for the primary substrate. Finally, genes underlying the phenotypic innovations were accurately predicted by genome‐scale model simulations of metabolism with enzyme promiscuity.
Introduction
Understanding how novel metabolic pathways arise during adaptation to environmental changes remains a central issue in evolutionary biology. The prevailing view is that enzymes often display promiscuous (i.e., side or secondary) activities and that evolution takes advantage of such pre-existing weak activities to generate metabolic novelties (Jensen, 1976; Copley, 2000; Schmidt et al, 2003; Khersonsky & Tawfik, 2010; Huang et al, 2012; Nam et al, 2012; Näsvall et al, 2012; Voordeckers et al, 2012; Notebaart et al, 2014). However, it remains to be fully explored how these metabolic novelties are achieved via mutation events during periods of adaptation in short-term evolution experiments. Do genetic elements associated with promiscuous activities mutate mostly early on in adaptation, when the initial innovative phenotype of growth on a new nutrient source is observed (Copley, 2000; Barrick & Lenski, 2013; Mortlock, 2013), or do promiscuous activities continue to play a role throughout the optimization process of continued fitness improvement on a non-native nutrient source (Barrick & Lenski, 2013)? In this work, mutational events that resulted in the ability of an organism to grow on a new, non-native carbon source were examined. These types of innovations have previously been linked to beneficial mutations that endow an organism with novel capabilities and allow it to expand into a new ecological niche (Wagner, 2011; Barrick & Lenski, 2013). Further, mutational events that were associated with more gradual enhancements of growth fitness (Barrick & Lenski, 2013) on the non-native carbon source were also examined. Such gradual improvements may stem from mutational events leading to regulatory improvements that fine-tune the expression of desirable or undesirable pathways, or possibly from the fine-tuning of enzyme kinetics or substrate specificity of enzymes involved in key metabolic pathways (Copley, 2000; Barrick & Lenski, 2013). Enzyme promiscuity has been prominently linked to early mutation events, where mutations enhancing secondary activities may result in dramatic phenotypic improvements or new capabilities (Khersonsky & Tawfik, 2010; Barrick & Lenski, 2013). Therefore, in this work, we explored a diverse range of evolutionary routes taken during adaptation to new carbon sources. Specifically, we examined the role of enzyme promiscuity in both early mutations linked to innovative phenotypes and growth-optimizing mutations throughout various short-term laboratory evolution experiments.
A second open question in understanding the role of enzyme promiscuity in adaptation concerns our ability to predict the future evolution of broad genetic and phenotypic changes (Papp et al, 2011; Lässig et al, 2017). While there has been an increasing interest in studying empirical fitness landscapes to assess the predictability of evolutionary routes (de Visser & Krug, 2014; Notebaart et al, 2018), these approaches assess predictability only in retrospect. There is a need for computational frameworks that forecast the specific genes that accumulate mutations based on mechanistic knowledge of the evolving trait. A recent study suggested that a detailed knowledge of an organism's promiscuous reaction set (the so-called "underground metabolism"; D'Ari & Casadesús, 1998) enables the computational prediction of genes that confer new metabolic capabilities when artificially overexpressed (Notebaart et al, 2014). However, it remains unclear whether this approach could predict evolution in a population of cells adapting to a new nutrient environment through spontaneous mutations. First, phenotypes conferred by artificial overexpression might not be accessible through single mutations arising spontaneously. Second, and more fundamentally, mutations in distinct genes may lead to the same phenotype. Such alternative mutational trajectories may render genetic evolution largely unpredictable. Furthermore, computational approaches can aid in predicting and discovering overlapping physiological functions of enzymes (Guzmán et al, 2015; Notebaart et al, 2018), but these have also yet to be explored in the context of adaptation. In this study, we address these issues by performing controlled laboratory evolution experiments to adapt Escherichia coli to predicted novel carbon sources and by monitoring the temporal dynamics of adaptive mutations.
Computational prediction and experimental evolution of non-native carbon source utilizations
Based on our knowledge of underground metabolism, we utilized a genome-scale model of E. coli metabolism that includes a comprehensive network reconstruction of underground metabolism (Notebaart et al, 2014) to test our ability to predict evolutionary adaptation to novel (non-native) carbon sources. This model was previously shown to correctly predict growth on non-native carbon sources if a given enabling gene was artificially overexpressed in a growth screen (Notebaart et al, 2014). This previous work identified a list of ten carbon sources that the native E. coli metabolic network is not able to utilize for growth in simulations but that can be utilized for growth in silico with the addition of a single underground reaction (Appendix Table S1). Based on this list, as well as on substrate cost, availability, and solubility properties to maximize compatibility with our laboratory evolution procedures, we selected seven carbon sources (D-lyxose, D-tartrate, D-2-deoxyribose, D-arabinose, ethylene glycol, m-tartrate, monomethyl succinate) that cannot be utilized by wild-type E. coli MG1655 but are predicted to be growth-sustaining carbon sources after adaptive laboratory evolution.
Next, we initiated laboratory evolution experiments to adapt E. coli to these non-native carbon sources. Adaptive laboratory evolution experiments were conducted in two distinct phases: first, a "weaning/dynamic environment" (Copley, 2000;Mortlock, 2013) stage during which cells acquired the ability to grow solely on the non-native carbon sources and, second, a "static environment" (Barrick & Lenski, 2013) stage during which a strong selection pressure was placed to select for the fastest growing cells on the novel carbon sources (Fig 1A).
During the "weaning/dynamic environment" stage of laboratory evolution experiments (Fig 1A, see Materials and Methods), E. coli was successfully adapted to grow on five non-native substrates individually in separate experiments. Duplicate laboratory evolution experiments were conducted in batch growth conditions for each individual substrate and in parallel on an automated adaptive laboratory evolution (ALE) platform using a protocol that uniquely selected for adaptation to conditions where the ancestor (i.e., wild type) was unable to grow (Fig 1A;LaCroix et al, 2015). In the weaning phase, E. coli was dynamically weaned off of a growthsupporting nutrient (glycerol) onto the novel substrates individually (Fig 1A, Appendix Table S2). A description of the complex passage protocol is given in the Fig 1 legend and expanded in the methods for both phases of the evolution. This procedure successfully adapted E. coli to grow on five out of seven non-native substrates, specifically, D-lyxose, D-2-deoxyribose, D-arabinose, m-tartrate, and monomethyl succinate. Unsuccessful cases could be attributed to various experimental and biological factors such as experimental duration limitations, the requirement of multiple mutation events, or stepwise adaptation events, as observed in an experiment evolving E. coli to utilize ethylene glycol (Szappanos et al, 2016).
The "static environment" stage of the evolution experiments consisted of serially passing cultures in the early exponential phase of growth in order to select for cells with the highest growth rates ( Fig 1A). Cultures were grown in a static media composition environment containing a single non-native carbon source. Marked and repeatable increases in growth rates on the non-native carbon sources were observed in as few as 180-420 generations (Appendix Table S1). Whole-genome sequencing of clones was performed at each distinct growth rate "jump" or plateau during the static environment phase (see arrows in Fig 1B, Appendix Fig S1). Such plateaus represent regions where a causal mutation has fixed in a population and it was assumed that the mutation(s) enabling the jump in growth rate were stable and maintained throughout the plateau region (LaCroix et al, 2015). Thus, clones were isolated at any point within this plateau region where frozen stock samples were available (LaCroix et al, 2015).
Modeling with underground metabolism accurately predicted key genes mutated during laboratory evolution experiments

To analyze genotypic changes underlying the nutrient utilizations, clones were isolated and sequenced shortly after an innovative growth phenotype was achieved; mutations were identified (see Materials and Methods) and analyzed for their associated causality (Fig 1B, Appendix Fig S1, Dataset EV1). Strong signs of parallel evolution were observed at the level of mutated genes across the replicate experiments (Table 1, Dataset EV1). Such parallelism provided evidence of the beneficial nature of the observed mutations and is a prerequisite for predicting the genetic basis of adaptation (Bailey et al, 2015). Mutations detected in the evolved isolated clones for each experiment demonstrated a striking agreement with such predicted "underground" utilization pathways (Notebaart et al, 2014). Specifically, for four out of the five different substrate conditions, key mutations were linked to the predicted enzyme with promiscuous activity, which would be highly unlikely by chance (P < 10⁻⁸, Fisher's exact test; Table 1, Appendix Fig S2). Not only were the specific genes (or their direct regulatory elements) mutated in four out of five cases, but few additional mutations (0-2 per strain, Dataset EV1) were observed directly following the weaning phase, indicating that the innovative phenotypes observed required a small number of mutational steps and that the method utilized was highly selective. For the one case where the prediction and the observed mutations did not align, D-arabinose, a detailed inspection of the literature revealed existing evidence that three fuc operon-associated enzymes can metabolize D-arabinose: FucI, FucK, and FucA (LeBlanc & Mortlock, 1971). The mutations observed in the D-arabinose evolution experiments after the weaning stage were in the fucR gene (Table 1), a DNA-transcriptional activator associated with regulating the expression of the transcription units fucAO and fucPIK (Podolny et al, 1999). Thus, it was inferred that the strains evolved to grow on D-arabinose in our experiments were utilizing the fuc operon-associated enzymes to metabolize D-arabinose, in agreement with prior work (LeBlanc & Mortlock, 1971).
[Figure 1. (A) Schematic of the two-part adaptive laboratory evolution (ALE) experiments. The "weaning/dynamic environment" stage involved growing cells in supplemented flasks containing the non-native substrate and a growth-promoting supplement; as cultures were serially passed, they were split into another supplemented flask as well as a test flask containing only the non-native nutrient (no supplement) to test for the desired evolved growth phenotype. The "static environment" stage consisted of selecting for the fastest growing cells and passing cultures in mid-log phase. (B) Growth rate trajectories for duplicate experiments (n = 2 evolution experiments per substrate condition) for the example case of D-lyxose. Population growth rates are plotted against cumulative cell divisions. Clones were isolated for whole-genome sequencing at notable growth rate plateaus, as indicated by arrows in the original figure; mutations gained at each plateau are noted there, and mutations arising earlier along the trajectory persisted in later sequenced clones.]
In this case, the genome-scale model did not identify the promiscuous reactions responsible for growth on D-arabinose because the promiscuous (underground) reaction database was incomplete (see section "Mutations in regulatory elements linked to increased expression of underground activities: D-arabinose evolution" for more details on D-arabinose metabolism). In general, key mutations observed shortly after strains achieved reproducible growth on the non-native substrate could be categorized as regulatory (R) or structural (S) (Table 1). Of the fifteen mutation events outlined in Table 1, eleven were categorized as regulatory (observed in all five successful substrate conditions) and four were categorized as structural (three of five successful substrate conditions). For the D-lyxose, D-2-deoxyribose, and m-tartrate evolution experiments, mutations were observed within the coding regions of the predicted genes, namely yihS, rbsK, and dmlA (Table 1, Appendix Fig S1). Regulatory mutations occurring in transcriptional regulators or within intergenic regions, likely affecting sigma factor binding and transcription of the predicted gene target, were observed for D-lyxose, D-2-deoxyribose, m-tartrate, and monomethyl succinate (Table 1). Observing more regulatory mutations is broadly consistent with previous reports (Mortlock, 2013; Toll-Riera et al, 2016). The regulatory mutations were believed to increase the expression of the target enzyme, thereby increasing the dose of the typically low-level side activity (Guzmán et al, 2015). This observation is consistent with "gene sharing" models of promiscuity and adaptation, in which diverging mutations that alter enzyme specificity are not necessary to acquire the growth innovation (Piatigorsky et al, 1988; Guzmán et al, 2015). Furthermore, although enzyme dosage could also be increased through duplication of genomic segments, this scenario was not commonly observed shortly after the weaning phase of our experiments. The one exception was observed in the D-2-deoxyribose evolution experiment, where two large duplication events (containing 165 genes (yqiG-yhcE) and 262 genes (yhiS-rbsK), respectively) were observed (Appendix Fig S3). Notably, one of these regions did include the rbsK gene with the underground activity predicted to support growth on D-2-deoxyribose (Table 1).
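To make the enrichment argument concrete, the calculation behind a statement such as "P < 10⁻⁸, Fisher's exact test" can be sketched as below; the 2 × 2 counts shown are illustrative placeholders, not the paper's actual contingency table, which is not reproduced in this excerpt.

```python
# Sketch of the Fisher's exact test logic: are mutations after weaning enriched
# among the genes predicted by the underground-metabolism model?
# The counts below are hypothetical placeholders, NOT the study's real numbers.
from scipy.stats import fisher_exact

table = [
    [4, 1],      # predicted genes: mutated vs. not mutated (illustrative)
    [3, 4400],   # all other genes: mutated vs. not mutated (illustrative)
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.1f}, one-sided P = {p_value:.2e}")
```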
To identify the causal mutation events relevant to the observed innovative nutrient utilization phenotypes, each key mutation (Table 1) was introduced into the ancestral wild-type strain using the genome engineering method pORTMAGE (Nyerges et al, 2016). This genome editing approach was performed to screen for mutation causality (Herring et al, 2006) on all novel substrate conditions, except for monomethyl succinate, which only contained a single mutation (Table 1). Individual mutants were isolated after pORTMAGE reconstruction, and their growth was monitored in a binary fashion on the growth medium containing the non-native substrate over the course of 1 week. These growth tests revealed that single mutations were sufficient for growth on D-lyxose, D-arabinose, and m-tartrate (Appendix Table S3). Interestingly, in the case of D-2-deoxyribose, an individual mutation (either the RbsK N20Y or the rbsR insertion mutation) was not sufficient for growth, thereby suggesting that the mechanism of adaptation to this substrate was more complex. To address this, a pORTMAGE library containing the RbsK N20Y and rbsR insertion mutations individually and in combination was grown on three M9 minimal medium + 2 g l⁻¹ D-2-deoxyribose agar plates alongside a wild-type MG1655 ancestral strain control. The large duplications in the D-2-deoxyribose strain (Table 1) could not be reconstructed due to the limitations of the pORTMAGE method. After 10 days of incubation, visible colonies could be seen resulting from the reverse engineered library, but not from the wild-type strain (Appendix Fig S4A). Subsequently, 16 colonies were chosen and colony PCR was performed to sequence the regions of rbsK and rbsR where the mutations were introduced (Appendix Fig S4B). All 16 colonies sequenced contained both the RbsK N20Y and rbsR insertion mutations. Fifteen of the 16 colonies showed an additional mutation at RbsK residue Asn14: 7 colonies showed an AAT to GAT codon change resulting in an RbsK N14D mutation, and 8 colonies showed an AAT to AGT codon change resulting in an RbsK N14S mutation. The Asn14 residue has been previously associated with ribose substrate binding of the ribokinase RbsK enzyme (Sigrell et al, 1999). Only one of the 16 colonies sequenced did not acquire the residue 14 mutation, but instead acquired a GCA to ACA codon change at residue Ala4, resulting in an RbsK A4T mutation. It is unclear if the additional mutations occurred spontaneously during growth prior to plating, but it is possible that these Asn14 and Ala4 residue mutations were introduced at a low frequency during MAGE-oligonucleotide DNA synthesis (< 0.1% error rate at each nucleotide position) (Isaacs et al, 2011; Nyerges et al, 2018). In either case, these results suggested that the observed mutations in rbsK and rbsR enabled growth on the non-native D-2-deoxyribose substrate and that there was a strong selection pressure on the ribokinase underground activity. Further, there were multiple ways to impact rbsK, as both duplication events and structural mutations (Table 1) or multiple structural mutations were separately observed in strains which grew solely on D-2-deoxyribose. Overall, these causality assessments support the notion that underground activities can open short adaptive paths toward novel phenotypes and may play prominent roles in innovation events.
Examination of growth-optimizing evolutionary routes
Once the causality of the observed mutations was established, adaptive mechanisms required for further optimizing or fine-tuning growth on the novel carbon sources were explored. Discovery of these growth-optimizing activities was driven by a systems-level analysis consisting of mutation, enzyme activity, and transcriptome analyses coupled with computational modeling of optimized growth states on the novel carbon sources. Out of the total set of 41 mutations identified in the static phase of the evolution experiments (Datasets EV1 and EV2), a subset (Table 2) was explored. This subset consisted of genes that were repeatedly mutated in replicate experiments or across all endpoint sequencing data on a given non-native carbon source. To unveil the potential mechanisms for improving growth on the non-native substrates, the transcriptome of initial and endpoint populations (i.e., right after the end of the weaning phase and at the end of the static environment phase, respectively) was analyzed using RNA-seq. Differentially expressed genes were compared to genes containing optimizing mutations (or their direct targets), and targeted gene deletion studies were performed. Additionally, for the D-lyxose experiments, enzyme activity was analyzed to determine the effect of a structural mutation acquired in a key enzyme during growth optimization on the single non-native carbon source. Analysis of mutations in the static growth-optimizing phase led to the identification of additional promiscuous enzyme activities above and beyond the causal mutation mechanisms identified shortly after the weaning phase. Enzyme promiscuity appeared to play a role in the adaptive routes utilized to optimize growth in at least three of the five nutrient conditions (Table 2). Detailed analyses of these results are described in the following sections as case studies for the D-lyxose, D-arabinose, and D-2-deoxyribose evolution experiments.
Structural mutations linked to shifting substrate affinities: D-lyxose evolution

A clear example of mutations involved in optimization was provided by those acquired during the D-lyxose experiments, which were linked to enhancing the secondary activity of YihS (Tables 1 and 2). Structural mutations were hypothesized to improve the enzyme side activity to support the growth optimization state, and this effect was experimentally verified. The effects of structural mutations on enzyme activity were examined for the YihS isomerase enzyme that was mutated during the D-lyxose evolution (Fig 1B, Table 1). The activities of the wild-type YihS and three mutant YihS enzymes (YihS R315S, YihS V314L + R315C, and YihS V314L + R315S) were tested in vitro. A cell-free in vitro transcription and translation system (Shimizu et al, 2001; de Raad et al, 2017) was used to express the enzymes and examine conversions of D-mannose to D-fructose (a primary activity; Itoh et al, 2008) and D-lyxose to D-xylulose (a side activity) (Fig 2A, Appendix Fig S5). The ratios of the turnover rates of D-lyxose to the turnover rates of D-mannose were calculated and compared (Fig 2B). Although the single-mutant YihS enzyme did not show a significant change compared to wild type, the double-mutant YihS enzymes showed approximately a 10-fold increase in the turnover ratio of D-lyxose to D-mannose compared to wild type (P < 0.0003, ANCOVA). These results suggest that the mutations shifted the affinity toward the innovative substrate (the enzyme side activity), while still retaining an overall preference for the primary substrate, D-mannose (ratio < 1). This is in agreement with "weak trade-off" theories of the evolvability of promiscuous functions (Khersonsky & Tawfik, 2010) in that only a small number of mutations are sufficient to significantly improve the promiscuous activity of an enzyme without greatly affecting the primary activity.
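The turnover-ratio comparison described above can be illustrated with a short calculation; the time points and peak areas below are invented placeholder values, and the regression-slope ratio simply mirrors the type of analysis reported in Fig 2 rather than reproducing its data.

```python
# Illustrative turnover-ratio calculation: fit product formation vs. time for each
# substrate by least-squares regression and take the ratio of the slopes.
import numpy as np
from scipy.stats import linregress

time_min = np.array([0, 15, 30, 60, 120, 240])
lyxose_peak = np.array([0.0, 2.1, 4.8, 9.5, 19.7, 40.6])          # hypothetical LC-MS peak areas
mannose_peak = np.array([0.0, 31.0, 60.9, 121.2, 244.1, 480.3])   # hypothetical LC-MS peak areas

slope_lyx = linregress(time_min, lyxose_peak).slope
slope_man = linregress(time_min, mannose_peak).slope
ratio = slope_lyx / slope_man      # ratio < 1 indicates a retained preference for D-mannose
print(f"turnover ratio (D-lyxose / D-mannose) = {ratio:.3f}")
```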
Mutations in regulatory elements linked to increased expression of underground activities: D-arabinose evolution
An important growth rate optimizing mutation found in the D-arabinose experiments occurred in the araC gene, which encodes a DNA-binding transcriptional regulator of the araBAD operon, whose genes are associated with L-arabinose metabolism (Bustos & Schleif, 1993). Based on structural analysis of AraC (Fig 3A), the mutations observed in the two independent parallel experiments likely affect substrate binding regions given their proximity to a bound L-arabinose molecule (RCSB Protein Data Bank entry 2ARC; Soisson et al, 1997), possibly increasing its affinity for D-arabinose. Expression analysis revealed that the araBAD transcription unit associated with AraC regulation (Gama-Castro et al, 2016) was the most highly upregulated set of genes (expression fold increase ranging from approximately 45-65× for Exp 1 and 140-200× for Exp 2, q < 10⁻⁴, FDR-adjusted P-value) in both experiments (Fig 3B). Further examination of these upregulated genes revealed that the ribulokinase (AraB) has a similar kcat on four 2-ketopentoses (D/L-ribulose and D/L-xylulose) (Lee et al, 2001), despite the fact that araB is consistently annotated to act only on L-ribulose (EcoCyc) (Keseler et al, 2013) or L-ribulose and D/L-xylulose (BiGG Models; King et al, 2016). It was thus reasoned that AraB was catalyzing the conversion of D-ribulose to D-ribulose 5-phosphate in an alternate pathway for metabolizing D-arabinose (Fig 3C), and this was further explored. The role of the AraB pathway in optimizing growth on D-arabinose was analyzed both computationally and experimentally. Parsimonious flux balance analysis (pFBA; Feist & Palsson, 2010; Lewis et al, 2010) simulations demonstrated that cell growth with AraB had a higher overall metabolic yield than growth with FucK (in simulations where only one of the two pathways was active, Appendix Fig S6). This supported the hypothesis that mutants with active AraB can achieve higher growth rates than those in which it is not expressed. This simulation result signaled the possibility of a growth advantage for using the AraB pathway, which was therefore explored experimentally. Experimental growth rate measurements of clones carrying either a fucK knockout or araBAD gene knockouts showed that the FucK enzyme activity was essential for growth on D-arabinose for all strains analyzed (strains isolated after initial growth on the single non-native carbon source and strains isolated at the end of the static environment phase) (Fig 3D, Appendix Table S4). However, removal of araB from endpoint strains reduced the growth rate to the approximate growth rate of the initially adapted strain (Fig 3D). This finding suggested that the proposed AraB pathway (Fig 3C) was responsible for enhancing the growth rate and therefore qualified as fitness optimization.
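The computational side of this comparison can be sketched with COBRApy by disabling each route in turn and re-optimizing growth; the model file name, the exchange-reaction identifier, and the gene lookup below are assumptions made for illustration and may differ from the identifiers used in the published iJO1366 + underground model.

```python
# Hedged COBRApy sketch: compare predicted growth on D-arabinose with either the
# FucK-dependent or the AraB-dependent route disabled by gene knockout.
import cobra

model = cobra.io.read_sbml_model("iJO1366_underground.xml")   # assumed local model file
model.reactions.get_by_id("EX_glc__D_e").lower_bound = 0       # remove the default glucose source
model.reactions.get_by_id("EX_arab__D_e").lower_bound = -10    # assumed D-arabinose exchange ID

def growth_without(gene_name):
    """Knock out all genes with the given name and return the predicted growth rate."""
    with model:  # all changes are reverted when the context exits
        for gene in model.genes.query(lambda g: g.name == gene_name):
            gene.knock_out()
        return model.slim_optimize(error_value=0.0)

print("growth without fucK:", growth_without("fucK"))
print("growth without araB:", growth_without("araB"))
```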
Putting these computational and experimental results in the context of previous work, a similar pathway has been described in mutant Klebsiella aerogenes W70 strains (St Martin & Mortlock, 1977). It was suggested that the D-ribulose-5-phosphate pathway (i.e., the AraB pathway) is more efficient for metabolizing D-arabinose than the D-ribulose-1-phosphate pathway (i.e., the FucK pathway), because the FucK pathway requires that three enzymes (FucI, FucK, and FucA) recognize secondary substrates (St Martin & Mortlock, 1977).

[Figure 2. (A) YihS V314L + R315S mutant enzyme activity on D-mannose and D-lyxose. LC-MS was used to analyze YihS activity at saturating substrate concentrations to compare turnover rates on each substrate; product formation was followed over time at a constant enzyme concentration, and turnover rates were calculated using linear regression (n = 3 replicates for each enzyme, Dataset EV4). Error bars represent the standard deviation (n = 3) of the peak area. (B) Turnover ratios of substrate conversion (D-lyxose/D-mannose) for the wild-type YihS and mutant YihS enzymes; a ratio < 1 indicates a higher turnover rate on D-mannose than on D-lyxose. Error bars represent the standard error (n = 3) calculated from the linear regression analysis.]
The conclusion of St Martin and Mortlock supports the role of the mutations observed here in araC. In summary, enzymatic side activities of enzymes encoded by both the fuc operon (innovative mutations) and the ara operon (optimizing mutations) were important for the adaptation to efficiently metabolize D-arabinose. Computational and expression analyses suggested that a similar mechanism of amplification of growth-enhancing promiscuous activities played a role in the m-tartrate optimization regime. Similar to the D-arabinose experiments, both independent evolutions on m-tartrate possessed a mutation in the predicted transcription factor, ygbI. This mutation was associated with the overexpression of a set of genes (ygbI, ygbJ, ygbK, ygbL, ygbM, and ygbN) with likely promiscuous activity (Appendix Supplementary Text and Appendix Fig S7). Further experiments, however, are required to better elucidate the mechanism and involvement of ygb operon-associated enzymes in the metabolism of m-tartrate.

[Figure 3. (A) Structural mutations observed in the sequencing data of Experiments (Exp.) 1 and 2, together with residues previously identified as important for binding L-arabinose, highlighted on one chain of the AraC homodimer protein structure; the six base pair deletion observed in Exp. 1 appears to be most clearly linked to affecting substrate binding. (B) Expression data (RNA-seq) for significantly differentially expressed genes (q-value < 0.05, FDR-adjusted P-value, n = 2 biological replicates per condition): log2(fold change) of gene expression comparing endpoint to initial populations for Exp. 1 and Exp. 2, plotted against gene location in the reference genome, with genes associated with AraC transcription units highlighted; transcription units are marked according to whether AraC activates their expression (in the presence of arabinose) or represses it. (C) The two proposed pathways for metabolizing D-arabinose; the alternative pathway is enabled by the optimizing mutations observed in araC. (D) Growth rate analysis of weaned (starting point of the static phase) and optimized (endpoint of the static phase) strains with or without fucK or araB knocked out. Strains were grown in triplicate (n = 3) on M9 minimal media with D-arabinose as the sole carbon source; bars represent the calculated mean growth rate, error bars the standard deviation, and P-values were calculated using a two-sided Welch's t-test.]
Genome-scale modeling suggests a role of segmental genome duplication and deletion in adaptation: D-2-deoxyribose and D-lyxose evolutions
Large genome duplications and deletions were observed in the D-lyxose and D-2-deoxyribose evolution experiments. These events were examined using a genome-scale metabolic model to understand their potential impact on strain fitness. First, we considered whether the large deletion event in the D-2-deoxyribose evolution Exp. 1 (Table 2, Appendix Fig S8) contained genes involved in metabolism. The 171 deleted genes were compared to those genes included in the genome-scale model of metabolism used in this study. It was found that 44 metabolic genes were located in this region of deletion (Dataset EV3). Flux variability analysis (FVA; Mahadevan & Schilling, 2003) simulations revealed that none of the 44 genes are individually necessary for optimal growth under these conditions (Dataset EV3). In fact, all 44 genes can be deleted at once from the genome-scale model without affecting the simulated growth rate. It was also interesting to note that 18 of the 44 genes were highly expressed in the initially evolved population after weaning and thus significantly down-regulated (log2(fold change) < −1, q-value < 0.025) by the large deletion event in the endpoint evolution population (Dataset EV3 and Appendix Fig S9). These observations are in agreement with previously reported findings that cells acquire mutations that reduce the expression of genes not required for growth during evolution and thus allow the cell to redirect resources from production of unnecessary proteins to increasing growth functions (Utrilla et al, 2016). Furthermore, there was an additional mutation observed at the same time as the large deletion, namely a smaller 902 bp deletion spanning a major part of the rbsB gene and extending into the intergenic region upstream of rbsK (Table 2, Dataset EV1, and Dataset EV2). The perceived impact of this deletion was to further increase the expression of rbsK, the gene associated with the underground activity required for growth on D-2-deoxyribose. The concept of removing enzymatic activities, potentially multiple at once, to increase fitness is an interesting avenue that, in this case, would require a significant number of additional experiments to confirm, given the multiple genes affected. While the YihS structural mutations appeared to be the primary mutations responsible for optimizing growth on D-lyxose (Fig 2), a genome duplication event observed in Exp. 2 could play a role in improving the growth rate (Fig 1B, Table 2). The genome duplication event spanned a 131 kilobase pair region (Appendix Fig S10A), resulting in significant up-regulation of 76 genes (Appendix Fig S10B). Included in this gene set were pyrE and xylB, two genes identified by modeling as important for metabolizing D-lyxose. The first gene, pyrE, could enhance growth by increasing nucleotide biosynthesis (Conrad et al, 2009), and this gene is important for achieving optimal growth in genome-scale model simulations (Appendix Fig S10C). The pyrE gene might have also played a role in improving growth fitness in the m-tartrate evolution experiments, where intergenic mutations upstream of the pyrE gene were observed in both replicate evolving endpoint populations (Table 2, Appendix Supplementary Text and Fig S7C, and Dataset EV2). Another gene in the large duplication event was xylB, encoding a xylulokinase, which might be catalyzing the second step in the metabolism of D-lyxose (Appendix Fig S10D).
Simulating increased flux through the xylulokinase reaction in an approach similar to a phenotypic phase plane analysis (Ibarra et al, 2003) improved the growth rate on D-lyxose (Appendix Fig S10C). Thus, increased expression of xylB and pyrE as a result of the duplication event in the Exp. 2 endpoint strains could be important for enhancing growth on the non-native substrate D-lyxose. While follow-up experiments over-expressing these genes individually are necessary to conclusively establish the causal role of increased pyrE and xylB expression, this study provides a high-level picture of the complex mechanisms at work in adaptation to new carbon sources, from structural and regulatory mutations to large-scale deletions and duplications.
Discussion
The results of this combined computational analysis and laboratory evolution study show that enzyme promiscuity can play a major role in an organism's adaptation to novel growth environments. It was demonstrated that enzyme side activities can confer a fitness benefit and open routes for achieving innovative growth states. Further, it was observed that mutation events that enabled growth on non-native carbon sources could be structural or regulatory in nature and that, in four out of the five substrate conditions examined, a single innovative mutation event related to a promiscuous activity was sufficient to support growth. Strikingly, it was demonstrated that network analysis of underground activities could be used to predict these evolutionary outcomes. Furthermore, beyond providing an evolutionary path for innovation, it was demonstrated that enzyme promiscuity aided in the optimization of growth in multiple, distinct ways. It was shown that structural mutations in an enzyme whose secondary activity carried a selective advantage could improve the substrate affinity for the non-native carbon source, as was observed in the D-lyxose evolution experiments. Finally, it was observed that enzyme promiscuity beyond the enzyme activity initially selected for could open secondary novel metabolic pathways to more efficiently metabolize the new carbon source. This was most clearly observed in the D-arabinose evolutions, in which fuc operon-associated enzyme activities were required for the initial innovative growth phenotype and the ara operon activities were then associated with further growth optimization.
While this study showcases the prominent role of enzyme promiscuity in evolutionary adaptations, there is room for follow-up work to strengthen the claims and broaden implications. One strength of this study was examining multiple short-term laboratory evolution experiment conditions (i.e., multiple non-native substrates) in duplicate; however, the number of non-native substrates explored was still on a relatively small scale and the results were a collection of case studies. Next steps could include broadening the number of non-native substrates as well as conducting laboratory evolution experiments with many more replicates and over longer periods of time. Furthermore, there were many mutations, particularly acquired during the static environment phase of experiments, that were not thoroughly examined for causality. This is evident in the case of the small and large deletion in the D-2-deoxyribose evolution experiment found in the clone isolated after the final fitness jump. With hundreds of genes removed from the genome, a deep dive into this event is necessary to unravel the impact, and modeling along with transcriptomics was suggested as a tool to aid in this process. Finally, further studies could examine the trade-offs of enhancing secondary enzyme activities while maintaining a primary activity. This was touched upon while examining the influence of mutations on YihS enzyme activities; however, a more thorough look at enzyme kinetics for multiple cases (such as those observed in DmlA and RbsK (Table 1)) could provide a clearer picture of mutation trade-offs.
The results of this study are relevant to our understanding of the role of promiscuous enzymatic activities in evolution and for utilizing computational models to predict the trajectory and outcome of molecular evolution (Papp et al, 2011;Lässig et al, 2017). Here, we demonstrated that genome-scale metabolic models that include the repertoire of enzyme side activities can be used to predict the genetic basis of adaptation to novel carbon sources. As such, genome-scale models and systems-level analyses are likely to contribute significantly toward representing the complex implications of promiscuity in theoretical models of molecular evolution (Lässig et al, 2017).
Materials and Methods

Genome-scale model simulations
The iJO1366 (Orth et al, 2011; model accessible for download at: http://bigg.ucsd.edu/models/iJO1366) version of the genome-scale model of Escherichia coli K-12 MG1655 was utilized in this study as the wild-type model before adding underground reactions related to five carbon substrates (D-lyxose, D-2-deoxyribose, D-arabinose, m-tartrate, monomethyl succinate) as previously reported (Notebaart et al, 2014). The underground reactions previously reported were added to iJO1366 using the constraint-based modeling package COBRApy (Ebrahim et al, 2013). The version of the iJO1366 model with the added underground reactions explored in this study is provided in Model EV1. All growth simulations used parsimonious flux balance analysis (pFBA) (Lewis et al, 2010). Growth simulations were performed by maximizing flux through the default biomass objective function (a representation of essential biomass compounds in stoichiometric amounts) (Feist & Palsson, 2010). To simulate aerobic growth on a given substrate, the exchange reaction lower bound for that substrate was adjusted to −10 mmol gDW⁻¹ h⁻¹. Predictions of positive growth phenotypes have been demonstrated to be robust against the exact value of the uptake rate given that it is in a physiological range (Edwards & Palsson, 2000). We note that the metabolic network without the underground reactions is completely incapable of providing growth on any of the carbon sources examined and, as such, the predictions can be considered qualitative predictions that are only dependent on the network structure.
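As a concrete illustration of this setup, a minimal COBRApy sketch is shown below; the substrate exchange-reaction identifier is a placeholder, the underground reactions are assumed to be already present in the loaded model file, and the exact commands are not taken from the study's code.

```python
# Minimal sketch of an aerobic growth simulation on a non-native substrate:
# open the substrate's uptake to 10 mmol gDW-1 h-1, close glucose uptake,
# and maximize the biomass objective (with pFBA for the flux distribution).
import cobra
from cobra.flux_analysis import pfba

model = cobra.io.read_sbml_model("iJO1366_underground.xml")    # assumed model file with underground reactions
model.reactions.get_by_id("EX_glc__D_e").lower_bound = 0       # remove the default carbon source
model.reactions.get_by_id("EX_lyx__D_e").lower_bound = -10     # placeholder exchange ID for D-lyxose

growth_rate = model.slim_optimize()        # maximal biomass flux (1/h)
fluxes = pfba(model).fluxes                # parsimonious flux distribution at that optimum
print(f"predicted growth rate: {growth_rate:.3f} 1/h")
```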
For the pFBA results shown in Appendix Fig S6, Appendix Fig S7C, and Appendix Fig S10C, the effect of changing flux through a reaction of interest on growth rate was examined by sampling through a range of flux values (changing the upper and lower flux bounds of the reaction) and then optimizing the biomass objective function. This resulted in a set of flux value and growth rate pairs that were then plotted in the provided figures. Flux variability analysis (FVA) simulations (Mahadevan & Schilling, 2003) were implemented in COBRApy (Ebrahim et al, 2013) with a growth rate cutoff of 99% of the maximum biomass flux. FVA was used to analyze the potential growth impact of the large deletion event from the D-2-deoxyribose evolution. For the D-2-deoxyribose simulations, a glyceraldehyde demand reaction (Orth & Palsson, 2012) was added to prevent a false-negative gene knockout result with the removal of aldA, as previously described. Additionally, aldA isozyme activity for the reaction ALDD2x was also added to the model for the D-2-deoxyribose simulations. This isozyme addition was based on literature findings (Rodríguez-Zavala et al, 2006).
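Both analyses can be sketched in a few lines of COBRApy, continuing from a model loaded as above; the reaction identifier is a placeholder for whichever underground or xylulokinase reaction is being examined.

```python
# (i) Scan growth rate as a function of the flux forced through a reaction of interest;
# (ii) run flux variability analysis (FVA) at 99% of the optimal biomass flux.
import numpy as np
from cobra.flux_analysis import flux_variability_analysis

rxn = model.reactions.get_by_id("XYLK_underground")   # placeholder reaction ID

growth_vs_flux = []
for v in np.linspace(0.0, 10.0, 21):                  # candidate flux values (mmol gDW-1 h-1)
    with model:                                        # bounds are restored on exiting the context
        rxn.lower_bound = rxn.upper_bound = v
        growth_vs_flux.append((v, model.slim_optimize(error_value=0.0)))

fva = flux_variability_analysis(model, reaction_list=[rxn], fraction_of_optimum=0.99)
print(fva)                                             # minimum/maximum feasible flux for the reaction
```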
Laboratory evolution experiments
The bacterial strain utilized in this study as the starting strain for all evolutions and MAGE manipulations was E. coli K-12 MG1655 (ATCC 4706). Laboratory evolution experiments were conducted on an automated platform using a liquid handling robot as previously described (Sandberg et al, 2014; LaCroix et al, 2015). As described above, the experiments were conducted in two phases, a "weaning/dynamic environment" phase and a "static environment" phase. At the start of the weaning phase, cultures were serially passaged after reaching stationary phase in a supplemented flask containing the non-native carbon source at a concentration of 2 g l⁻¹ and the growth-supporting supplement (glycerol) at a concentration of 0.2%. Cultures were passaged in stationary phase and split into another supplemented flask and a test flask containing only the non-native carbon source at a concentration of 2 g l⁻¹. As the weaning phase progressed, the concentration of the growth-supporting nutrient was adjusted to maintain a target maximum OD600 (optical density at 600 nm) of 0.5, as measured on a Tecan Sunrise plate reader with 100 µl of sample. This ensured that glycerol was always the growth-limiting nutrient. If growth was not observed in the test flask within 3 days, the culture was discarded; however, once growth was observed in the test flask, this culture was serially passaged to another test flask. Once growth was maintained for three test flasks, the second phase of the evolution experiments commenced: the static environment phase. The static environment phase was conducted as in previous studies (Sandberg et al, 2014; LaCroix et al, 2015). The culture was serially passaged during mid-exponential phase so as to select for the fastest growing cells on the innovative carbon source. Growth was monitored for a given flask by taking OD600 measurements at four time points, targeted to span an OD600 range of 0.05-0.3, with sampling time based on the most recently measured growth rate and the starting OD. Samples were also periodically taken and stored in 25% glycerol stocks at −80°C for reference and for later sequencing analysis. The evolution experiments were concluded once increases in the growth rate were no longer observed for several passages.
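The passage rules of the weaning phase can be summarized schematically as below; this is simply a restatement of the decision logic in code form, with hypothetical argument names, not an excerpt from the ALE platform's software.

```python
# Schematic encoding of the weaning-phase passage rules described above.
def next_action(test_flask_grew: bool, consecutive_test_flasks_with_growth: int,
                days_without_growth_in_test_flask: int) -> str:
    """Decide how a culture is handled at the next passage during the weaning phase."""
    if consecutive_test_flasks_with_growth >= 3:
        return "proceed to the static-environment phase"
    if test_flask_grew:
        return "serially passage the test flask (non-native substrate only)"
    if days_without_growth_in_test_flask >= 3:
        return "discard the test-flask culture"
    return "passage the supplemented flask and split into supplemented + test flasks"
```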
Growth data from the evolution experiments were analyzed with an in-house MATLAB package. Growth rates were calculated for each flask during the "static environment" phase of the evolution experiments by taking the slope of a least-squares linear regression fit to the logarithm of the OD measurements vs. time. Calculated growth rates were rejected if fewer than three OD measurements were sampled, if the range of OD measurements was < 0.2 or > 0.4, or if the R² correlation for the linear regression was < 0.98. Generations of growth for each flask were calculated by taking log([flask final OD]/[flask initial OD])/log(2), and the cumulative number of cell divisions (CCD) was calculated based on these generations as described previously (Lee et al, 2011). Growth rate trajectory curves (Fig 1B, Appendix Fig S1) were produced in MATLAB by fitting a monotonically increasing piecewise cubic spline to the data as reported previously (Sandberg et al, 2014; LaCroix et al, 2015).
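A Python equivalent of these per-flask calculations is sketched below for clarity (the study itself used an in-house MATLAB package); the quality-control thresholds are those stated above.

```python
# Per-flask growth rate = slope of ln(OD600) vs. time, with the stated QC criteria;
# generations per flask = log2(final OD / initial OD), summed into CCD over flasks.
import numpy as np
from scipy.stats import linregress

def flask_growth_rate(t_hours, od600):
    """Return the growth rate (1/h), or None if the QC criteria are not met."""
    t, od = np.asarray(t_hours, float), np.asarray(od600, float)
    if len(od) < 3 or not (0.2 <= od.max() - od.min() <= 0.4):
        return None
    fit = linregress(t, np.log(od))
    return fit.slope if fit.rvalue ** 2 >= 0.98 else None

def flask_generations(od_initial, od_final):
    """Number of doublings between inoculation and passage."""
    return np.log(od_final / od_initial) / np.log(2)
```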
Whole-genome sequencing and mutation analysis
Colonies were isolated and selected on Lysogeny Broth (LB) agar plates and grown in M9 minimal media + the corresponding non-native carbon source prior to genomic DNA isolation. For population sequencing conducted for endpoint strains (Dataset EV2), samples were taken directly from glycerol frozen stocks and grown in M9 minimal media + the corresponding non-native carbon source prior to genomic DNA isolation. Genomic DNA was isolated using the Macherey-Nagel Nucleospin Tissue Kit using the support protocol for bacteria provided in the manufacturer's user manual. The quality of the isolated genomic DNA was assessed using Nanodrop UV absorbance ratios. DNA was quantified using the Qubit dsDNA high-sensitivity assay. Paired-end whole-genome DNA sequencing libraries were generated utilizing either a Nextera XT kit (Illumina) or a KAPA HyperPlus kit (Kapa Biosystems). DNA sequencing libraries were run on an Illumina MiSeq platform with a paired-end 600 cycle v3 kit. DNA sequencing fastq files were processed utilizing the computational pipeline tool breseq (Deatherage & Barrick, 2014) version 0.30.0 with bowtie2 (Langmead & Salzberg, 2012) version 2.2.6, aligning reads to the E. coli K-12 MG1655 genome (NC_000913.3; Datasets EV1 and EV2). For the clone and population samples sequenced in this study, the average percentage of mapped reads was > 90%, the average mean coverage was 106 reads, the average total read count was 2.08E6 reads, and the average read length was 271. When running the breseq tool, the input parameters for clonal samples were options -j 8, and the input parameters for population samples were options -p -j 8 --polymorphism-frequency-cutoff 0.0. For further information regarding breseq mutation call/read alignment methods, please refer to the breseq methods publication (Deatherage & Barrick, 2014) and documentation. Additionally, large regions of genome amplification were identified using a custom python script that utilizes aligned files to identify regions with more than 2× (minus standard deviation) of mean read depth coverage. DNA-seq mutation datasets are also available on the public database ALEdb 1.0.2 (http://aledb.org; Phaneuf et al, 2019).
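The amplification-detection heuristic can be sketched as below; the windowing, the source of the depth array, and the exact reading of "2× (minus standard deviation) of mean read depth" are assumptions made for illustration, since the custom script itself is not shown here.

```python
# Flag genomic windows whose mean read depth exceeds (2 x genome-wide mean - 1 SD),
# one plausible reading of the threshold described above.
import numpy as np

def find_amplified_regions(per_base_depth, window=1000):
    """Return (start, end) windows whose coverage suggests a segmental amplification."""
    depth = np.asarray(per_base_depth, dtype=float)   # e.g., from samtools depth output
    threshold = 2.0 * depth.mean() - depth.std()
    hits = []
    for start in range(0, len(depth) - window + 1, window):
        if depth[start:start + window].mean() > threshold:
            hits.append((start, start + window))
    return hits
```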
Enzyme activity characterization
All enzymes used in this study were generated by cell-free in vitro transcription and translation using the PURExpress in vitro Protein Synthesis Kit (New England Biolabs). Linear DNA templates utilized in all cell-free in vitro transcription and translation reactions were generated by PCR from dsDNA blocks encoding the enzymes with transcription and translation elements, synthesized by Integrated DNA Technologies. Linear DNA templates were purified and concentrated using phenol/chloroform extraction and ethanol precipitation. The encoded enzymes were produced using PURExpress according to the manufacturer's protocol with linear DNA template concentrations of 25 ng per 1 µl of reaction.
The activities of the wild-type YihS and three mutant YihS enzymes toward D-mannose and D-lyxose over time were determined using LC/MS. Substrate (10 mM) was added to 7.5 µl of PURExpress reaction in a buffered solution (50 mM Tris, 100 mM KCl, 10 mM MgCl₂, pH 8) for a total volume of 250 µl and incubated at 37°C. At different time points (0, 15, 30, 60, 120, 240, and 1,320 min), 10 µl of sample was taken and quenched with 90 µl of LC/MS grade ethanol. Next, samples were dried under vacuum (Savant SpeedVac Plus SC110A) and resuspended in 50 µl of LC/MS grade methanol/water (50/50 v/v). The samples were filtered through 0.22-µm microcentrifugal filtration devices and transferred to a 384-well plate for LC/MS analysis. An Agilent 1290 LC system equipped with a SeQuant® ZIC®-HILIC column (100 mm × 2.1 mm, 3.5 µm, 200 Å, EMD Millipore) was used for separation with the following LC conditions: solvent A, H₂O with 5 mM ammonium acetate; solvent B, 19:1 acetonitrile:H₂O with 5 mM ammonium acetate; timetable: 0 min at 100% B, 1.5 min at 100% B, 6 min at 65% B, 8 min at 0% B, 11 min at 0% B, 12.5 min at 100% B, and 15.5 min at 100% B; flow rate 0.25 ml min⁻¹; column compartment temperature of 40°C. Mass spectrometry analyses were performed using an Agilent 6550 quadrupole time-of-flight mass spectrometer. The Agilent software MassHunter Qualitative Analysis (Santa Clara, CA) was used for naïve peak finding and data alignment. Analysis of covariance (ANCOVA) was used to determine whether the slopes of the mutants for both lyxose and mannose were significantly different from the wild-type slopes. Detailed instrument information and data are provided in Appendix Table S6 and Dataset EV4.
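The slope comparison can be illustrated with a standard ANCOVA-style linear model in which a time × enzyme interaction term tests whether a mutant's product-formation slope differs from wild type; the data-frame layout and values below are assumptions made purely for illustration.

```python
# ANCOVA-style slope comparison: the time_min:enzyme interaction term in the model
# summary tests for a difference in product-formation slopes (illustrative data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "time_min":  [0, 30, 60, 120, 0, 30, 60, 120],
    "peak_area": [0.0, 5.2, 10.9, 21.8, 0.0, 14.1, 29.3, 60.7],   # made-up values
    "enzyme":    ["WT", "WT", "WT", "WT", "mut", "mut", "mut", "mut"],
})
fit = smf.ols("peak_area ~ time_min * enzyme", data=df).fit()
print(fit.summary())   # the time_min:enzyme interaction row gives the slope-difference P-value
```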
pORTMAGE Library Construction/Isolation of individual mutants
Mutations observed during the laboratory evolution experiments, and their corresponding combinations, were introduced into the ancestral E. coli strain using pORTMAGE recombineering technology (Nyerges et al, 2016). ssDNA oligonucleotides carrying the mutation or mutations of interest were designed using MODEST for E. coli K-12 MG1655 (ATCC 4706). To isolate individual mutants, a single pORTMAGE cycle was performed separately with each of the 15 oligos in E. coli K-12 MG1655 (ATCC 4706) + pORTMAGE3 (Addgene ID: 72678) according to a previously described pORTMAGE protocol (Nyerges et al, 2016). Following transformation, cells were allowed to recover overnight at 30°C and were plated on Luria-Bertani (LB) agar plates to form single colonies. The presence of each mutation or mutation combination was verified by High-Resolution Melting (HRM) colony PCRs with Luminaris HRM Master Mix (Thermo Scientific) in a Bio-Rad CFX96 qPCR machine according to the manufacturer's guidelines. Mutations were confirmed by capillary sequencing. pORTMAGE oligonucleotides, HRM PCR primers, and sequencing primers are listed in Dataset EV5.
D-2-deoxyribose pORTMAGE library agar plate growth experiments
The pORTMAGE library containing rbsR and rbsK mutations separately and in combination was used to conduct growth experiments on M9 minimal medium + 2 g l⁻¹ D-2-deoxyribose agar plates. The pORTMAGE library frozen glycerol stock, composed of the library grown on LB medium, as well as the wild-type E. coli MG1655 frozen glycerol stock, also an LB-grown stock, was used to inoculate M9 minimal medium + 2 g l⁻¹ D-2-deoxyribose or M9 minimal medium + 2 g l⁻¹ glycerol and grown at 37°C overnight. The overnight cultures, which contained some residual LB medium and glycerol from the frozen stock, underwent several generations each and were visibly dense (OD600 ≈ 0.5-1.0). The next day, 1 ml of the overnight cultures was pelleted by centrifugation at 5,000 g for 5 min. After pelleting, cells were washed and resuspended in 1 ml of M9 minimal medium without a carbon source. Pelleting and washing were repeated two more times to remove any residual glycerol carbon source or LB media components, and the final resuspension was used for plating. Both the pORTMAGE library and wild-type cells (either from the glycerol or D-2-deoxyribose pre-culture, as specified in Appendix Fig S4A) were plated using a 10-µl inoculation loop on either half of three M9 minimal medium + 2 g l⁻¹ D-2-deoxyribose agar plates (Appendix Fig S4A). The agar plates were made by mixing a 2× solution of D-2-deoxyribose M9 minimal medium with a 2× autoclaved solution of agar (18 g agar in 0.5 l of Milli-Q water). The plates were incubated at 37°C for a total of 9-10 days.
After 9-10 days of incubation, 16 pORTMAGE library colonies were picked from the three D-2-deoxyribose plates for colony PCR and sequencing (Appendix Fig S4A). Colony PCR was conducted (Qiagen HotStarTaq Master Mix Kit) with the primer sequences listed in Appendix Table S5 for rbsK and rbsR. DNA sequencing of PCR products was conducted by Eton Bioscience Inc using their SeqRegular services. The sequencing results are summarized in the main text and in Appendix Fig S4B. Sequencing alignments were conducted using the multiple sequence alignment tool Clustal Omega (Sievers et al, 2011;Appendix Fig S4B).
RbsK comparison to DeoK/kinases in other Enterobacteriaceae
Protein sequence alignment was conducted for the E. coli MG1655 RbsK N20Y mutant sequence from this study and DeoK sequences reported for E. coli strains (Bernier-Febreau et al, 2004;Monk et al, 2013), three pathogenic (AL862, 55989, and CFT073) and one commensal (EC185), as well as the DeoK sequence reported for S. enterica serovar Typhi (Tourneux et al, 2000). The sequence alignments were performed using the multiple sequence alignment package, T-Coffee (Notredame et al, 2000) (Appendix Supplementary Text and Fig S11).
Individual mutant growth test
Isolated mutants were tested for growth over the course of 1 week (Appendix Table S3). Individual colonies were isolated on LB agar plates and used to inoculate pre-cultures grown overnight in 2 ml of glucose M9 minimal liquid media in 10-ml tubes. The following morning, pre-cultures were pelleted at 2,000 g and gently resuspended (by pipetting) in M9 minimal medium without a carbon source and this spinning and resuspension was repeated twice to wash the cells of residual glucose. The final resuspension was in 2 ml of M9 minimal medium without a carbon source. The growth test tubes consisting of 2 ml of M9 minimal medium plus the corresponding innovative carbon source were inoculated with the washed cells at a dilution factor of 1:200. Growth was monitored over the course of 1 week by visually inspecting for increased cellular density, noting that the cultures had become opaque from cell growth. Once growth was observed, colony PCR was conducted (Qiagen HotStarTaq Master Mix Kit) with the primer sequences listed in Appendix Table S5. DNA sequencing of PCR products was conducted by Eton Bioscience Inc using their SeqRegular services. DNA sequencing was utilized to confirm the designed mutations were as expected and to confirm that no other mutations had been acquired in the regions of interest during the growth test.
RNA sequencing
RNA sequencing data were generated under conditions of aerobic, exponential growth on M9 minimal medium plus the corresponding non-native carbon source (D-lyxose, D-2-deoxyribose, D-arabinose, or m-tartrate). Cells were harvested using the Qiagen RNAprotect bacteria reagent according to the manufacturer's specifications. Prior to RNA extraction, pelleted cells were stored at −80°C. Cell pellets were thawed and incubated with lysozyme, SUPERase-In, proteinase K, and 20% sodium dodecyl sulfate for 20 min at 37°C. Total RNA was isolated and purified using Qiagen's RNeasy mini kit column according to the manufacturer's specifications. Ribosomal RNA (rRNA) was removed utilizing the Ribo-Zero rRNA removal kit (Epicentre) for Gram-negative bacteria. The KAPA Stranded RNA-seq kit (Kapa Biosystems) was used for generation of paired-end, strand-specific RNA sequencing libraries. RNA sequencing libraries were then run on an Illumina HiSeq 2500 using the "rapid-run mode" with 2 × 35 bp paired-end reads.
Reads were mapped to the E. coli K-12 genome (NC_000913.2) using bowtie (Langmead et al, 2009). Cufflinks (Trapnell et al, 2010) was utilized to calculate the expression level of each gene in fragments per kilobase of transcript per million fragments mapped (FPKM). This information was then utilized to run cuffdiff (Trapnell et al, 2013) to calculate gene expression fold change between endpoint and initial growth populations (n = 2 biological replicates for each condition tested) using a geometric normalization and setting a maximum false discovery rate of 0.05. Gene expression fold change was considered significant if the calculated q-value (FDR-adjusted P-value of the test statistic, obtained from the cuffdiff analysis after a Benjamini-Hochberg correction for multiple testing) was smaller than 0.025. The RNA-seq data are available in the Gene Expression Omnibus (GEO) database under the accession number GSE114358.
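Downstream of cuffdiff, the significance filter described above amounts to a simple table operation; the sketch below assumes the standard cuffdiff "gene_exp.diff" output columns, which should be verified against the actual files.

```python
# Select significantly differentially expressed genes from a cuffdiff output table
# using the criteria described above (q < 0.025; |log2 fold change| > 1 where relevant).
import pandas as pd

diff = pd.read_csv("gene_exp.diff", sep="\t")     # standard cuffdiff differential-expression table
hits = diff[(diff["q_value"] < 0.025) & (diff["log2(fold_change)"].abs() > 1)]
print(hits[["gene", "log2(fold_change)", "q_value"]].sort_values("q_value").head())
```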
Metabolic map generation and data superimposition
All metabolic pathway maps generated in Fig 3 and Appendix Fig S7, Appendix Fig S9, and Appendix Fig S10 were generated using the pathway visualization tool Escher (King et al, 2015).
Bioscreen growth test of mutants
Individual sequenced clones (Dataset EV1) from the D-arabinose evolution experiments (Exp. 1 and Exp. 2) along with the wild-type E. coli K-12 MG1655 strain were utilized for bioscreen growth tests and gene knockout manipulations. A P1-phage transduction mutagenesis protocol based on a previously reported method (Donath et al, 2011) was followed to replace the fucK gene in the evolution and wild-type strains with a Kanamycin resistance cassette from the fucK Keio strain (Baba et al, 2006). The BW25113 Keio collection strain is effectively missing the araBAD genes, so the yabI Keio strain was utilized for the P1-phage transduction of all strains to transfer this neighboring araBAD deletion along with the yabI-replaced Kanamycin resistance cassette. It was deemed that a yabI deletion would not significantly affect the results of the growth experiments since yabI is a non-essential inner membrane protein that is a member of the DedA family (Doerrler et al, 2013). Escherichia coli K-12 contains seven other DedA proteins, and it is only collectively that they are essential (Boughner & Doerrler, 2012). The growth screens were conducted in a Bioscreen-C system machine. Pre-cultures were started from frozen stocks of previously isolated clones and grown overnight in M9 minimal medium + 0.2% glycerol. These pre-cultures were used to inoculate the triplicate bioscreen culture wells at a 1:100 dilution into M9 minimal medium supplemented with either 2 g/l D-arabinose or 0.2% glycerol. The final volume for each well was 200 µl. The growth screen was conducted under continuous shaking conditions at 37°C. OD600 (optical density at 600 nm) readings were taken every 30 min over the course of 48 h. Growth rates were calculated using the tool Croissance (Schöning, 2017). The mean growth rates and standard deviation for each condition (n = 3) were calculated and reported in Appendix Table S4 and Fig 3D. The P-values reported in Fig 3D were calculated using a two-sided Welch's t-test.
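The statistical comparison described above can be reproduced with a short script; the sketch below uses hypothetical growth-rate values (not the study's actual data) to show how per-condition means, standard deviations, and a two-sided Welch's t-test would be computed with SciPy.

```python
import numpy as np
from scipy import stats

# hypothetical growth rates (1/h) for triplicate wells of two strains
wild_type = np.array([0.21, 0.23, 0.22])
evolved = np.array([0.35, 0.33, 0.36])

# mean and standard deviation per condition (n = 3), analogous to Appendix Table S4
for name, rates in [("wild type", wild_type), ("evolved clone", evolved)]:
    print(f"{name}: mean = {rates.mean():.3f}, sd = {rates.std(ddof=1):.3f}")

# two-sided Welch's t-test (unequal variances), as used for Fig 3D
t_stat, p_value = stats.ttest_ind(evolved, wild_type, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```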
• The genome-scale metabolic model is provided as Model EV1.
Expanded View for this article is available online.
|
v3-fos-license
|
2022-05-04T13:47:02.864Z
|
2022-05-04T00:00:00.000
|
248508759
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11883-022-01028-4.pdf",
"pdf_hash": "341a467a9be5432f4f05d2908bab28427cebb9a6",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44150",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "91037ef134f979667fd997fbd73ae8e250cfa3ed",
"year": 2022
}
|
pes2o/s2orc
|
Management of Dyslipidemia in Patients with Non-Alcoholic Fatty Liver Disease
Purpose of Review Patients with non-alcoholic fatty liver disease (NAFLD), often considered as the hepatic manifestation of the metabolic syndrome, represent a population at high cardiovascular risk and frequently suffer from atherogenic dyslipidemia. This article reviews the pathogenic interrelationship between NAFLD and dyslipidemia, elucidates underlying pathophysiological mechanisms and focuses on management approaches for dyslipidemic patients with NAFLD. Recent Findings Atherogenic dyslipidemia in patients with NAFLD results from hepatic and peripheral insulin resistance along with associated alterations of hepatic glucose and lipoprotein metabolism, gut dysbiosis, and genetic factors. Summary Since atherogenic dyslipidemia and NAFLD share a bi-directional relationship and are both major driving forces of atherosclerotic cardiovascular disease (ASCVD) development, early detection and adequate treatment are warranted. Thus, integrative screening and management programs are urgently needed. A stepwise approach for dyslipidemic patients with NAFLD includes (i) characterization of dyslipidemia phenotype, (ii) individual risk stratification, (iii) definition of treatment targets, (iv) lifestyle modification, and (v) pharmacotherapy if indicated.
Introduction
Non-alcoholic fatty liver disease (NAFLD) has become the most common chronic liver disease in Western countries and currently affects up to 25% of the general adult population. Due to its close association with type 2 diabetes mellitus (T2DM), hypertension, and dyslipidemia, it is often considered as the hepatic manifestation of the metabolic syndrome [1][2][3]. NAFLD encompasses a spectrum of liver disorders ranging from simple hepatocellular steatosis (non-alcoholic fatty liver, NAFL) to inflammatory non-alcoholic steatohepatitis (NASH) with or without concomitant fibrosis [4][5][6].
All-cause mortality is higher in patients with NAFLD compared to subjects without NAFLD, with the most common causes of death being cardiovascular disease (CVD), extrahepatic cancers, and liver-related complications [7][8][9].
In particular, atherosclerotic cardiovascular disease (ASCVD) represents a major disease burden in the NAFLD population and accumulating evidence indicates that NAFLD has to be considered as a significant risk factor for fatal or non-fatal CVD events [10-13, 14•].
Another important risk factor in this context is atherogenic dyslipidemia, characterized by plasma hypertriglyceridemia, increased triglyceride-rich lipoproteins including very-low-density lipoproteins (VLDL), and their remnants intermediate-density lipoproteins (IDL), a predominance of small dense low-density lipoprotein (sd-LDL) particles together with low high-density lipoprotein (HDL) cholesterol levels [15,16].
Results from imaging studies demonstrate signs of subclinical atherosclerosis to be present in approximately 70% and 45% of middle-aged men and women, respectively, and low-density lipoprotein (LDL) has globally been recognized as the major driving force in the development of ASCVD and its clinical manifestations [17,18].
Atherogenic dyslipidemia can be observed in a large majority of patients with NAFLD and results from hepatic and peripheral insulin resistance along with associated alterations of the hepatic glucose and lipoprotein metabolism, gut dysbiosis, and genetic factors [4,16].
Since atherogenic dyslipidemia may be at least partially responsible for the increased subclinical and clinical ASCVD burden in patients with NAFLD, it represents a central treatment target in this population with high cardiometabolic risk [10,11,13,19].
In 2020, a change of the terminology from NAFLD to metabolic associated fatty liver disease (MAFLD) has been proposed [20][21][22] and the diagnosis can be made in the presence of hepatic steatosis and at least one of the following criteria: (i) overweight or obesity, (ii) T2DM, and (iii) metabolic dysregulation (at least two factors among increased waist circumference, hypertension, hypertriglyceridemia, low serum HDL-cholesterol levels, impaired fasting plasma glucose, insulin resistance, or subclinical inflammation evaluated by high-sensitivity C-reactive protein (CRP) levels). While there is still an ongoing debate, the term NAFLD is maintained in this narrative review that describes the relationship between NAFLD and dyslipidemia, delineates interrelated pathophysiological mechanisms in the development of ASCVD, and focuses on up-to-date management and treatment approaches for dyslipidemic patients with NAFLD.
Prevalence of Dyslipidemia Among Patients with NAFLD
Information about the frequency of lipid disorders among patients with NAFLD involving changes in serum cholesterol (hypercholesterolemia), triglycerides (hypertriglyceridemia), or both (combined dyslipidemia) varies. In a recent comprehensive meta-analysis of Younossi et al. including 86 studies with information about 8,515,431 patients with NAFLD from 22 countries, the overall prevalence of combined dyslipidemia in patients with NAFLD or NASH was estimated to be 69% and 72%, respectively [3]. This is in line with findings from the REGENERATE study that investigated the effects of obeticholic acid as potential treatment option in NAFLD, where 68-70% of a total of 931 recruited patients with NAFLD displayed combined dyslipidemia at baseline [23].
Hypercholesterolemia, as a specific subtype, was found in 44% of patients with NAFLD in a recent large prospective cohort study including middle-aged US adults [24]. In patients with NASH, prevalence rates of hypercholesterolemia are even higher and range from 60 up to 90% [25,26].
Focusing on hypertriglyceridemia, the overall prevalence among patients with NAFLD or NASH has been reported in a recent meta-analysis to be 41 and 83%, respectively. These rates are confirmed by other studies and it has been shown that in particular patients with NASH as well as patients with NAFLD with concomitant T2DM are more frequently affected [27][28][29][30].
Pathogenesis of Dyslipidemia in NAFLD
Alterations in lipid metabolism are centrally involved in both NAFLD and ASCVD development. In NAFLD pathogenesis, accumulation of liver fat results from a dysbalance between different pathways: inadequate uptake of circulating lipids, increased hepatic de novo lipogenesis (DNL), insufficient enhancement of fatty acid oxidation, and altered export of lipids as components of VLDL [4,31]. An elevated hepatic uptake of lipids in combination with enhanced DNL leads to increased triglyceride synthesis and elevated secretion of VLDL [16]. This overproduction of VLDL initiates an atherogenic dyslipidemic milieu including plasma hypertriglyceridemia, accumulation of triglyceride-rich lipoproteins, increased number of sd-LDL particles, and low HDL-cholesterol levels [4,17,32]. Apolipoprotein B-containing triglyceride-rich lipoproteins and their remnants (IDL) as well as the genetically determined cholesterol-rich Lp(a), composed of apolipoprotein (B) and (a), are an essential part of the development of atherosclerosis. Following the infiltration of the subendothelial space of the vascular wall, lipoproteins act as damage-associated molecular patterns (DAMPs) and lead to an activation of Toll-like receptors (TLR), which play a crucial role in the innate immune response.
Especially triglyceride-rich lipoproteins containing apolipoprotein C3 (ApoC3) are important activators of TLRs 2 and 4, which leads to activation of the NLRP3 (NOD-like receptor family, pyrin domain-containing protein 3) inflammasome [32,33]. Subsequently, NLRP3 inflammasome activation causes an activation of the enzyme caspase-1 (Interleukin-1 (IL) converting enzyme), followed by cleavage and thus activation of the IL-1ß family with subsequent induction of the IL-1 to IL-6 to CRP pathway. This inflammatory pathway plays a central role in ASCVD and vascular inflammation and is frequently activated in patients with NAFLD [4,34]. Interestingly, certain metabolic environments such as an increased hepatic triglyceride pool or genetic alterations, which are both associated with NAFLD, modulate the formation of atherogenic ApoB lipoproteins, which promotes a more proinflammatory phenotype [32].
Insulin resistance is another important driver for the lipoprotein abnormalities in NAFLD. It affects several processes such as dyslipidemia, hyperglycemia, activation of oxidative stress and inflammation, endothelial dysfunction, and ectopic lipid accumulation, altogether stimulating ASCVD development [16,35]. A recent study showed that alterations in lipid metabolism were mainly related to measures of insulin sensitivity rather than to obesity or the presence of NASH [36].
In conclusion, accumulating evidence indicates that NAFLD-driven dyslipidemia is substantially involved in the development of ASCVD in this at risk population [16].
Management Approaches of Dyslipidemia in Patients with NAFLD
Since patients with both NAFLD and dyslipidemia are at substantial ASCVD risk, integrative screening programs are urgently needed and forced management of dyslipidemia plays a pivotal role in the primary and secondary prevention of ASCVD. The management of dyslipidemia in patients with NAFLD should be performed according to recent guideline recommendations, e.g., the European Society of Cardiology (ESC) / European Atherosclerosis Society (EAS) guidelines for the management of dyslipidemia or the recently published ESC guidelines on cardiovascular disease prevention in clinical practice [37][38][39].
General principles of managing dyslipidemia in patients with NAFLD include the following: (i) diagnosis and characterization of dyslipidemia, (ii) ASCVD risk stratification of patients based on recommended criteria and modern scoring algorithms (e.g., SCORE2, ACC/AHA ASCVD risk estimator), (iii) definition of treatment targets for serum lipids, (iv) lifestyle modification, and (v) pharmacotherapy if indicated.
Diagnosis and Characterization of Dyslipidemia
An individualized diagnosis of dyslipidemia involves a comprehensive baseline assessment of the lipid profile including the determination of the following parameters: total cholesterol, LDL-cholesterol, non-HDL-cholesterol, HDL-cholesterol, and triglycerides (the latter in the fasting state when using the Friedewald formula to estimate LDL-cholesterol). ApoB analysis, if available, is recommended for risk assessment, particularly in people with high triglyceride levels, DM, obesity, metabolic syndrome, or very low LDL-cholesterol levels. Lp(a) measurement should be considered at least once in each adult person's lifetime to identify those with very high inherited Lp(a) levels > 180 mg/dL (> 430 nmol/L) who may have a lifetime risk of ASCVD equivalent to the risk associated with heterozygous familial hypercholesterolemia.
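Since the paragraph above mentions the Friedewald formula, a minimal sketch of that calculation is shown below; the numeric values are a hypothetical example, and the estimate assumes a fasting sample with triglycerides below 400 mg/dL, above which a direct LDL measurement is required.

```python
def friedewald_ldl(total_chol_mg_dl: float, hdl_mg_dl: float, triglycerides_mg_dl: float) -> float:
    """Estimate LDL-cholesterol (mg/dL) from a fasting lipid panel via the Friedewald formula.

    LDL-C = total cholesterol - HDL-C - triglycerides / 5 (all values in mg/dL).
    Unreliable for non-fasting samples or triglycerides >= 400 mg/dL.
    """
    if triglycerides_mg_dl >= 400:
        raise ValueError("Friedewald estimate is not valid for triglycerides >= 400 mg/dL")
    return total_chol_mg_dl - hdl_mg_dl - triglycerides_mg_dl / 5.0


def non_hdl_cholesterol(total_chol_mg_dl: float, hdl_mg_dl: float) -> float:
    """Non-HDL-cholesterol (mg/dL) = total cholesterol minus HDL-cholesterol."""
    return total_chol_mg_dl - hdl_mg_dl


# hypothetical fasting lipid panel (mg/dL)
tc, hdl, tg = 230.0, 42.0, 180.0
print(f"Estimated LDL-C: {friedewald_ldl(tc, hdl, tg):.0f} mg/dL")
print(f"Non-HDL-C: {non_hdl_cholesterol(tc, hdl):.0f} mg/dL")
```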
Individual Risk Stratification
After the assessment of an abnormal lipid profile, an individual risk stratification of the total ASCVD risk should be performed in each patient. This includes a systematic global assessment of CVD risk factors, including documented manifestations of ASCVD (e.g., coronary or carotid artery plaques), blood pressure, history of cigarette smoking, type 1 or type 2 DM, overweight or obesity as measured by the body mass index (BMI), and waist circumference, chronic kidney disease (CKD), family history of premature CVD (men < 55 years and women < 60 years), and genetic lipid disorders (e.g., familial hypercholesterolemia) [40].
Furthermore, it is useful to evaluate additional factors modifying an individual's ASCVD risk such as physical inactivity, alcohol intake, psychosocial stress (incl. vital exhaustion or social deprivation), chronic immune-mediated inflammatory disorders, psychiatric disorders, cardiac arrhythmias (e.g., atrial fibrillation), and obstructive sleep apnea syndrome [40].
In addition, an estimation of the 10-year CVD risk can be performed [41]. For this purpose, current European guidelines recommend the recently published SCORE2 algorithm for people < 70 years of age and the SCORE2-OP (older people) algorithm for individuals ≥ 70 years of age [39,42]. The SCORE2 algorithm estimates the individual 10-year risk of fatal and non-fatal CVD events in apparently healthy people aged 40-69 years with CVD risk factors that are untreated or have been stable for several years [39].
Although there are no specific ASCVD risk prediction tools that take into account the presence or severity of NAFLD, using a risk prediction algorithm can help to identify patients with NAFLD at higher risk of CVD who should benefit most from preventive action [43•, 44].
In a recent study by Golabi et al., it has been shown that an elevated ASCVD risk score ≥ 7.5% (defining an intermediate 10-year risk) among patients with NAFLD is associated with a higher risk of overall and cardiovascular mortality, confirming the usefulness of such scoring systems to identify patients at risk in this population [43•].
Validated cardiovascular risk calculators, such as SCORE2, typically divide the individual 10-year CVD risk in different categories (e.g., low, moderate, high, and very high risk).
While, for example, patients with well-controlled short-standing DM (e.g., < 10 years) without evidence of target organ damage or additional ASCVD risk factors are classified as patients at moderate risk, patients with additional risk factors such as CKD, genetic lipid disorders, elevated blood pressure, or already established ASCVD are considered as patients at high or very high CVD risk [39].
Due to the fact that NAFLD has been recognized as a significant risk factor for CVD morbidity and mortality with an approximate 10-year CVD risk ranging from 5 to 22% in recent trials [45,46], it seems reasonable to classify the risk for patients with NAFLD at least according to that of patients with DM [47]. This means that patients with NAFLD without any other additional ASCVD risk factor (e.g., arterial hypertension, smoking, DM, obesity, CKD) should be classified as low to moderate CVD risk, whereas patients with NAFLD and at least one additional ASCVD risk factor should be classified as high CVD risk. Similarly, patients with NAFLD with markedly elevated single risk factors, in particular total cholesterol > 310 mg/dL (8 mmol/L) or LDL-cholesterol > 190 mg/dL (4.9 mmol/L) or familial hypercholesterolemia, should be classified as high CVD risk.
Patients with NAFLD and documented clinical manifestations of ASCVD, including previous acute myocardial infarction, coronary revascularization or other arterial revascularization procedures, stroke or transient ischemic attack, aortic aneurysm or peripheral artery disease, and patients with NAFLD and documented ASCVD on imaging, including plaques on coronary angiography, carotid ultrasound, or on computed tomography angiography (CTA), should be classified as very high CVD risk individuals.
In addition, patients with NAFLD and advanced CKD (e.g., eGFR 45-59 mL/min/1.73 m 2 and microalbuminuria or eGFR < 45 mL/min/1.73 m 2 irrespective of albuminuria) should also be classified as very high CVD risk.
A possible risk stratification algorithm for patients with NAFLD is shown in Fig. 1.
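To make the tiers described above concrete, the sketch below encodes them as a simple lookup function; the input flags and thresholds are a simplification of the prose (and of Fig. 1), written for illustration only and not as a validated clinical tool.

```python
def nafld_cvd_risk_category(additional_risk_factors: int,
                            established_ascvd: bool,
                            advanced_ckd: bool,
                            total_chol_mg_dl: float,
                            ldl_mg_dl: float,
                            familial_hypercholesterolemia: bool) -> str:
    """Rough CVD risk tier for a patient with NAFLD, following the prose above."""
    # documented ASCVD (clinically or on imaging) or advanced CKD -> very high risk
    if established_ascvd or advanced_ckd:
        return "very high"
    # at least one additional risk factor, or a markedly elevated single risk factor
    if (additional_risk_factors >= 1
            or total_chol_mg_dl > 310
            or ldl_mg_dl > 190
            or familial_hypercholesterolemia):
        return "high"
    return "low to moderate"


# hypothetical patients
print(nafld_cvd_risk_category(0, False, False, 220, 130, False))  # low to moderate
print(nafld_cvd_risk_category(2, False, False, 220, 130, False))  # high
print(nafld_cvd_risk_category(0, True, False, 220, 130, False))   # very high
```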
Definition of Treatment Targets for Serum Lipids
Based on the risk stratification, individual target values for lipids can be determined.
In patients with NAFLD at moderate risk, an LDL-cholesterol goal of < 100 mg/dL (2.6 mmol/L) should be considered. In patients with NAFLD who are at high CVD risk, lipid-lowering treatment with an ultimate goal of ≥ 50% LDL-cholesterol reduction and an LDL-cholesterol of < 70 mg/dL (1.8 mmol/L) should be recommended. In patients with NAFLD at very high risk (e.g., with established ASCVD), intensive stepwise lipid-lowering therapy should be sought, ultimately aiming at a ≥ 50% LDL-cholesterol reduction and a target LDL-cholesterol level of < 55 mg/dL (1.4 mmol/L).
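A compact way to express these targets is a lookup table plus a goal check, as in the hedged sketch below; the tiers and cut-offs mirror the paragraph above, while the function name and structure are illustrative.

```python
# LDL-cholesterol treatment goals by risk tier, as summarized above
LDL_GOALS_MG_DL = {
    "moderate":  {"absolute": 100, "relative_reduction": None},
    "high":      {"absolute": 70,  "relative_reduction": 0.50},
    "very high": {"absolute": 55,  "relative_reduction": 0.50},
}


def ldl_goal_met(risk_tier: str, baseline_ldl_mg_dl: float, current_ldl_mg_dl: float) -> bool:
    """True if the current LDL-C meets both the absolute and (if any) relative goal."""
    goal = LDL_GOALS_MG_DL[risk_tier]
    absolute_ok = current_ldl_mg_dl < goal["absolute"]
    relative_ok = (goal["relative_reduction"] is None
                   or current_ldl_mg_dl <= baseline_ldl_mg_dl * (1 - goal["relative_reduction"]))
    return absolute_ok and relative_ok


# hypothetical very-high-risk patient: baseline 140 mg/dL, on treatment 60 mg/dL;
# meets the >= 50% reduction (60 <= 70) but not the < 55 mg/dL target -> False
print(ldl_goal_met("very high", baseline_ldl_mg_dl=140, current_ldl_mg_dl=60))
```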
Lifestyle Modification
Body Weight In patients with NAFLD and elevated BMI and dyslipidemia, guidelines recommend weight loss to achieve and maintain a long-term healthy weight (target BMI 20-25 kg/m², and waist circumference < 94 cm (men) and < 80 cm (women)) as a key lifestyle-modifying measure to improve their CVD risk profile [37][38][39][48]. In patients with NAFLD, a total body weight loss of ≥ 5% is required to substantially improve hepatic steatosis, ≥ 7% to improve hepatic inflammation or NASH resolution, and ≥ 10% to improve hepatic fibrosis [49][50][51][52].
Focusing on dyslipidemia, body weight loss also reduces total cholesterol and LDL-cholesterol levels, and a decrease in LDL-cholesterol concentrations of 8 mg/dL (0.2 mmol/L) is observed for every 10 kg of weight loss in obese patients [40]. Serum triglyceride levels are much more responsive to weight changes than serum cholesterol. Even a modest weight loss of 5% has been shown to reduce serum triglyceride levels by around 10% [53]. However, weight loss always needs to be accompanied by further lifestyle modifications such as dietary changes and physical activity interventions to reduce CVD risk significantly.
A beneficial role of physical activity has been demonstrated in numerous studies and is established in all guidelines for both NAFLD and ASCVD [37-39, 41, 52, 54, 55]. Current European guidelines for NAFLD treatment recommend 150-200 min/week of moderate-intensity aerobic physical activities in three to five sessions (e.g., brisk walking, stationary cycling) [38]. Guidelines on ASCVD prevention propose 150-300 min/week of moderate-intensity or 75-150 min/week of vigorous-intensity aerobic exercise to improve cardiometabolic health and reduce CVD morbidity [36,39]. In addition, resistance exercise training is recommended on 2 or more days per week as well as reduced sedentary behavior, as sedentary time is associated with greater risk for several major metabolic chronic diseases and increased mortality [54,56].
Diet Several dietary habits such as a high caloric diet, excess of (saturated) fat, sugar-sweetened beverages, highfructose intake, and refined carbohydrates are associated with weight gain, NAFLD, and dyslipidemia [4,38,57]. Especially high-fructose consumption has adverse effects on metabolism leading to the development of intrahepatic insulin resistance, intrahepatic fat accumulation, and the aggravation of an atherogenic lipid profile [58,59].
Therefore, a healthy diet (e.g., low in saturated fat with a focus on whole grain products, vegetables, fruit, and marine fish) is recommended as a cornerstone of CVD prevention in all NAFLD individuals with dyslipidemia.
The macronutrient composition should be adjusted according to a Mediterranean diet (MD) or a similar dietary pattern [40]. MD is traditionally plant based (whole grains, legumes, fruit, vegetables), low in carbohydrates (limited simple sugars and refined carbohydrates), and rich in monounsaturated (mostly olive oil) and omega-3 fatty acids, and incorporates limited amounts of red meat and low-fat dairy. Although earlier studies have found that calorie restriction rather than the composition has beneficial effects in patients with NAFLD and dyslipidemia, a recent meta-analysis has shown that a MD significantly improves both NAFLD and NAFLD-related CVD risk factors such as hypertension and serum levels of total cholesterol [60].
Daily alcohol intake should be drastically reduced (< 10 g/day for men and women). In patients with hypertriglyceridemia or advanced fibrosis (≥ F2), alcohol consumption should be completely avoided [40].
In conclusion, weight loss in overweight or obese patients with NAFLD and dyslipidemia is urgently needed but insufficient as a single intervention and should therefore always be accompanied by dietary and physical activity interventions.
Pharmacological Management of Hypercholesterolemia
The causal role of LDL-cholesterol has been demonstrated beyond any doubt in genetic, observational and interventional studies [17,[61][62][63][64]. Therefore, lowering LDL-cholesterol is crucial for the treatment of dyslipidemia in patients with NAFLD.
Statins play a well-established role in the primary and secondary prevention of CVD and are safe among patients with chronic liver disease such as NAFLD, including those with mild baseline elevation in transaminases (< 3 × upper limit of normal [ULN]) or compensated cirrhosis [65][66][67][68][69][70][71]. Similarly, there is evidence that statins may attenuate NASH and reduce the risk of liver fibrosis [72].
Although accumulating data show that statins can safely be used in patients with NAFLD, statins are still under-prescribed among these patients due to concerns about hepatotoxicity [73]. However, it should be noted that statins are contraindicated in patients with decompensated cirrhosis or acute liver failure [74,75].
In addition to their lipid-lowering effects, statins may have beneficial biologic effects, including amelioration of endothelial dysfunction, increased nitric oxide bioavailability, antioxidant properties, and inhibition of inflammation ultimately resulting in improved vascular function and stabilization of atherosclerotic plaques [75,76].
Treatment with the maximally tolerated dose of high-intensity statins (e.g., rosuvastatin or atorvastatin) is recommended as first-line treatment of dyslipidemia in patients with NAFLD to reach the LDL-cholesterol goals set for the determined risk group [39]. High-intensity statin therapy reduces LDL-cholesterol by 50% on average. For patients with NAFLD and transaminase elevations > 3 × ULN, a lower initial dose of statins and monitoring of transaminases in 4- to 12-week intervals during cautious uptitration may be reasonable [74,75].
If the therapeutic goal cannot be achieved with the maximum tolerated statin dose, combination treatment with ezetimibe is recommended. Ezetimibe inhibits the intestinal cholesterol absorption through binding to the Niemann-Pick C1-Like 1 (NPC1L1) sterol receptor, which subsequently decreases the transportation of free fatty acids and nonesterified cholesterol to the liver [77]. The average LDL-cholesterol reduction using a combined treatment with a high-intensity statin together with ezetimibe is approximately 65%.
While the use of ezetimibe is safe in patients with NAFLD, there are controversial data on whether ezetimibe can also directly improve biochemical and histological markers of NAFLD [78][79][80][81][82][83]. For patients not achieving the LDL-cholesterol goals with a maximum tolerated dose of a statin and ezetimibe, combination with a PCSK9 inhibitor is recommended for secondary prevention and should be considered as primary prevention for patients at very high CVD risk. Moreover, a statin combination with a bile acid sequestrant (e.g., cholestyramine) may be considered if the LDL-cholesterol goal cannot be reached. If a statin-based regimen is not tolerated at any dose (e.g., due to severe myopathy), a monotherapy with ezetimibe or combination therapy with ezetimibe and a PCSK9 inhibitor should be considered [39]. The general principle of managing elevated cholesterol levels follows the rule "the lower the better," with no adverse effects seen with even the lowest values of LDL-cholesterol [74,84,85]. Therefore, there is no need to de-intensify treatment in those who attain very low LDL-cholesterol levels during treatment [74]. Figure 2 proposes a possible algorithm for the management and treatment of dyslipidemia in patients with NAFLD.
Pharmacological Management of Hypertriglyceridemia
Although ASCVD risk is already increased at fasting triglycerides higher than 150 mg/dL (1.7 mmol/L), the use of drugs to lower triglyceride levels may only be considered in high-risk patients if triglycerides are higher than 200 mg/dL (2.3 mmol/L) and triglycerides cannot be lowered by lifestyle measures [40,86]. In addition to the effects on the ASCVD risk, therapeutic lowering of severe hypertriglyceridemia is also useful to reduce the pancreatitis risk, which is clinically significant if triglycerides are > 880 mg/dL (> 10 mmol/L) [40].
In individuals with hypertriglyceridemia [triglycerides > 200 mg/dL (2.3 mmol/L)] and high ASCVD risk, statin treatment is recommended as the first drug of choice.
Statin treatment reduces serum triglycerides by 15-30%, and more importantly VLDL as well as other apolipoprotein B-containing atherogenic remnant particles that are typically increased in hypertriglyceridemic patients [53].
In high-risk (or above) patients with triglycerides > 1.5 mmol/L (135 mg/dL) despite statin treatment and lifestyle interventions, the administration of long chain n-3 polyunsaturated fatty acids (PUFAs, e.g., icosapent ethyl, 2 × 2 g/day) may be considered in combination with a statin [39]. Depending on the baseline triglyceride concentration, therapeutic administration of n-3 PUFAs can result in a 30-50% reduction in serum triglycerides [53].
In patients with combined dyslipidemia already taking statins, who achieved their LDL-cholesterol goal, but still display hypertriglyceridemia > 200 mg/dL (2.3 mmol/L), additional treatment with fenofibrate or bezafibrate may be considered. In patients with NASH and dyslipidemia, previous studies investigating the effects of the peroxisome proliferator-activated receptor (PPAR) agonists bezafibrate and fenofibrate have demonstrated beneficial effects on both lipid metabolism and liver function. Fenofibrate treatment in patients with NASH led to a decrease in elevated aminotransferases (ALT, AST) and gamma-glutamyltranspeptidase (GGT) activity as well as hepatocellular ballooning evaluated by biopsy, while short-term treatment with bezafibrate (2-8 weeks) has been found to reduce microvesicular steatosis [87]. However, no significant changes in inflammation and fibrosis could be observed. While the effect of bezafibrate and fenofibrate should therefore be considered as minor, results from studies investigating clinical effects of the new pan-agonist lanifibranor on NAFLD severity and concomitant dyslipidemia will be interesting [88,89]. Recently published data from a phase 2b study have already indicated promising beneficial effects on improving both NAFLD and dyslipidemia [90].
When focusing on PUFAs as an additional treatment option, it should be recognized that there are controversial results regarding the use of n-3 fatty acid supplementation and their effects on CVD outcomes [91][92][93][94]. While the REDUCE-IT trial showed that the use of icosapent ethyl 2 g twice daily was superior to mineral oil in reducing triglycerides, CVD events, and CVD death among patients with high triglycerides [92], the recently published STRENGTH trial, which analyzed the effect of high-dose omega-3 fatty acids (combined formulation of eicosapentaenoic acid and docosahexaenoic acid) in patients at high CVD risk, observed no beneficial effects of omega-3 fatty acid supplementation on the reduction of major adverse CV events [94]. However, it could be observed in both studies that high-dose omega-3 fatty acid supplementation was associated with an increased risk of developing new-onset atrial fibrillation [92,94]. As NAFLD represents another emerging risk factor for cardiac arrhythmias [95][96][97], these potential side effects should always be critically considered when PUFAs are prescribed. In addition, while hepatic de novo lipogenesis is suppressed and fat oxidation is increased by omega-3 fatty acid supplementation, fasting and postprandial glucose concentrations are increased with questionable long-term effects [98]. Figure 3 proposes a possible algorithm for the management and treatment of hypertriglyceridemia in patients with NAFLD.
Treatment of Cardiometabolic Comorbidities
Adequate management of dyslipidemia in patients with NAFLD can only succeed, if all cardiometabolic comorbidities are considered and holistic approaches are needed [99].
Hypertension, obesity, NAFLD, and T2DM commonly coexist with dyslipidemia and act synergistically to increase the individuals' ASCVD risk. In particular, optimal treatment of DM is important in patients with NAFLD and dyslipidemia, as insulin resistance is the primary mechanism leading to lipid derangements [100,101].
The prevalence of NAFLD in patients with T2DM is approximately 56% and a recent meta-analysis of 11 studies including 8346 patients observed that patients with NAFLD with concomitant T2DM had a twofold increased risk for the manifestation of ASCVD when compared to patients without NAFLD (OR 2.20; 95% CI 1.67-2.90) [2,102]. Recently, some anti-diabetic drugs, especially glucagon-like peptide-1 receptor agonists (GLP-1RA) and sodium-glucose cotransporter-2 inhibitors (SGLT-2i), have shown promising results in patients with NAFLD and in ASCVD, with partly beneficial effects on underlying lipid disorders [103][104][105][106]. In a recent phase IIb trial in patients with NASH, administration of semaglutide led to higher rates of NASH resolution and no worsening of fibrosis compared with placebo, which was accompanied by favorable effects on triglyceride levels at the highest administered dose [104].
Studies on SGLT-2i in patients with NAFLD showed a decrease in hepatic fat, improved liver function tests, and decreased triglyceride levels, while results on LDL-cholesterol levels were inconsistent [107,108].
When treating arterial hypertension in patients with NAFLD and dyslipidemia, it should be taken into account that some antihypertensive drugs may have an adverse effect on plasma lipid levels (e.g., thiazide diuretics, beta blockers), whereas others have neutral or beneficial effects (e.g., angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, calcium channel blockers, selective alpha-1 blockers) [109].
Emerging Drugs for Treating Dyslipidemia in Patients with NAFLD
In recent years, new classes of lipid-lowering agents have been developed and approved that will be of increasing importance in everyday clinical practice in the future.
Bempedoic Acid
Bempedoic acid decreases LDL-cholesterol levels by the inhibition of adenosine-triphosphate (ATP) citrate lyase in the liver. ATP-citrate lyase is a cytosolic enzyme upstream of the HMG-CoA reductase in the cholesterol biosynthesis pathway. Unlike statins, bempedoic acid is administered as a prodrug and is converted to its active form by enzymes found only in the liver and not in skeletal muscles. The lack of active metabolites of bempedoic acid in skeletal muscles makes it a promising alternative for patients with statin-associated myopathy. Decreasing LDL-cholesterol synthesis with bempedoic acid leads to an attenuation of atherosclerosis [110]. Bempedoic acid monotherapy and the combination therapy of bempedoic acid with ezetimibe are approved for the treatment of adults with familial hypercholesterolemia or patients with established ASCVD who require additional LDL-cholesterol lowering after maximally tolerated statin therapy or statin intolerance [111]. Bempedoic acid in a dose of 180 mg daily reduces LDL-cholesterol by up to 20% from baseline either as monotherapy or in combination with statin use. With combination treatment of 180 mg bempedoic acid and 10 mg ezetimibe daily, LDL-cholesterol can be reduced by 50% [74,112,113]. However, there are no data from studies investigating bempedoic acid in patients with NAFLD. In addition, trials investigating major cardiovascular endpoints are missing and need to be performed in the future.
PCSK9 Inhibitors
PCSK9 protein is an important regulator of circulating LDL-cholesterol levels. Secreted PCSK9 binds to the LDL receptor on hepatocytes, leading to an internalization and degradation of the receptor in lysosomes. This leads to a reduction of the LDL receptor numbers on the hepatocyte cell surface and a reduced uptake of low-density lipoproteins. Inhibition of PCSK9 in turn increases the number of LDL receptors and the uptake of LDL-cholesterol into cells [114]. The average reduction in LDL-cholesterol with PCSK9 inhibitor therapy is approximately 60% [39,40]. In combination with high-intensity or maximum tolerated statins, PCSK9 inhibitors reduce LDL-cholesterol up to 75%, and up to 85% when ezetimibe is also added [39,115,116]. Further, PCSK9 inhibitors can additionally lower triglycerides [115,116]. Currently, there are two PCSK9 inhibitors available in clinical practice, alirocumab and evolocumab. Both are administered subcutaneously and lower LDL-cholesterol levels in patients who are at high or very high CVD risk, including those with DM, and induce a substantial reduction in ASCVD events. However, they are currently indicated as rescue therapy for otherwise untreatable patients with severe hypercholesterolemia [117,118].
Recently presented results from the HUYGENS trial could also demonstrate that treatment with evolocumab had incremental benefits on high-risk features of coronary artery plaques and significantly improved plaque stability by increasing the fibrous cap thickness as measured by optical coherence tomography [119].
Since patients with NAFLD frequently suffer from coronary heart disease with vulnerable coronary plaques [120], the reported positive effects of PCSK9 inhibitors on plaque stabilization seen in the HUYGENS trial may also be of particular relevance in this ASCVD-risk population [121].
While comprehensive studies with long-term follow-up on the effect of PCSK9 inhibitors on the clinical course of patients with NAFLD are lacking, preliminary data suggest beneficial effects and indicate that PCSK9 inhibitors may ameliorate NAFLD via different mechanisms [122]. In a small retrospective study, PCSK9 inhibitors led to an amelioration of hepatic steatosis in patients with NAFLD as measured by computed tomography [123]. However, prospective studies are needed to validate these results.
Small Interfering RNA (siRNA) Molecules
Inclisiran, a small interfering RNA (siRNA) molecule, is a novel promising agent for the management of hypercholesterolemia which increases the number of LDL receptors in the hepatocyte membranes by blocking the translation of PCSK9 mRNA. It provides advantages because of an infrequent dosing interval of only twice a year to reduce LDL-cholesterol by 50 to 60% [124,125]. However, currently, no data in patients with NAFLD have been published.
Conclusion
The concept of NAFLD as a cardiovascular risk factor on its own has been challenged in a study of the Danish general population using a Mendelian randomization design as well as a large matched cohort study including 18 million European adults where an increased risk did not persist after adjusting for established cardiovascular risk factors [126,127]. As recently pointed out, increasing rates of antihypertensive and lipid-lowering drug treatment in large published cohort studies of patients with NAFLD have been associated with substantially lower cardiovascular mortality [46]. A holistic approach including practical recommendations for lifestyle interventions, diabetes control, blood pressure treatment goals, and risk-related LDL-cholesterol targets seems to be promising. There are no prospective randomized studies looking for effects of lipid-lowering treatments on hard cardiovascular endpoints in NAFLD populations. However, based on the large body of evidence for a significant reduction in cardiovascular morbidity and mortality, it is reasonable to follow the treatment algorithms laid down in current guidelines on lipid modification to reduce the cardiovascular risk in dyslipidemic patients with NAFLD [39,40].
Author Contributions A.M., P.K., and H.M.S. drafted the manuscript. A.M. and P.K. performed literature review. T.G., S.L., and M.D. critically revised the manuscript for important intellectual content. All authors critically reviewed and approved the manuscript in its final form.
Funding Open Access funding enabled and organized by Projekt DEAL.
Conflict of Interest
The authors declare that they have no conflict of interest.
Human and Animal Rights and Informed Consent
This article is a review article in the field of the management of combined dyslipidemia in patients with non-alcoholic fatty liver disease. The review contains previous publications based on human and animal studies. This article does not contain any studies with human or animal subjects performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2019-05-12T14:24:23.117Z
|
2018-12-01T00:00:00.000
|
149981882
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.hrpub.org/download/20181130/UJER3-19512441.pdf",
"pdf_hash": "52e7ea08ad6ba25a49fa28c35459fdc7a38b4108",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44152",
"s2fieldsofstudy": [
"Education"
],
"sha1": "2f7b6e434b7f711b8dd2662e825bd3f4e2ae79d3",
"year": 2018
}
|
pes2o/s2orc
|
Comparison of the Metacognitive Awareness Levels between Successful and Unsuccessful Teams in the Turkish Men's Second Volleyball League
This study aimed to compare the metacognitive skill levels between successful and unsuccessful teams in the Turkish Men's Second Volleyball League in the season of 2017 – 2018. Volunteer participants consisted of 133 volleyball players from eight clubs' teams. The metacognitive awareness inventory was used for data collection. The two components of this inventory are knowledge about cognition and regulation of cognition. The first includes the sub-dimensions of declarative knowledge, procedural knowledge and conditional knowledge, while the second consists of the sub-dimensions of planning, information management strategies, comprehension monitoring, debugging strategies (DS) and evaluation. Mann-Whitney U tests were used for two-group comparisons, while Spearman rank-order correlations were performed to analyze the relations between team success and metacognitive awareness skills. This study showed that there was no significant difference in metacognitive skills between successful and unsuccessful teams' mean values except for DS. The mean DS of the top three ranking teams was 10.58% lower than that of the unsuccessful teams. This difference indicates that the top three ranking teams made fewer mistakes and thus had lower DS, while unsuccessful teams experienced more errors and had higher DS. Executing volleyball skills without errors is a critical factor for high performance.
Introduction
Implementation of tactical skills in sporting activities is made possible by consciously regulating decision-making processes. The concept of metacognition is defined as the ability of the individual to regulate, control and guide cognitive processes at the highest level [1].
Metacognitive awareness has two components [2]. These are knowledge about cognition and regulation of cognition. The first component includes the sub-dimensions of declarative knowledge, procedural knowledge and conditional knowledge. The second component includes the sub-dimensions of planning, information management strategies, comprehension monitoring, debugging strategies and evaluation.
In most sporting activities, perceptual motor skills, knowledge and decision making are important components of intellectual processes. These components are at the center of sport performance through successful technical and tactical applications. This is of particular importance in sports involving open motor skills, where the volleyball player must act according to changing game positions and situations.
A closed skill is performed automatically in a relatively unchanging environment, while open skills are performed in a changing, dynamic environment. While open skills are usually performed against a competing athlete or team, closed skills usually involve competitors taking turns in sport environments. Allard and Starkes [3] stated that for closed skills it is critical for the player to reproduce motor patterns in a consistent and reliable, defined and standardized manner. Open skills involve the effective production of skill under particular environmental conditions. The cognitive demands for closed skills are generally lower due to less external monitoring and relatively constant environmental conditions. In general, the demands are essentially internal and the athletes are trying to execute and reproduce the technical skill and movement structure in a perfect way. On the other hand, open skills demand the involvement of a broader range of processes due to the changing spatial-temporal requirements of the key stimuli in the environment. In this situation, the athletes must deal with both external and internal information. The performers are still trying to perfect and perform consistent movement skills but must now appropriately apply this movement pattern within an ever-changing environment. In this performance, they must be able to be aware of their capabilities to effectively accomplish this selection. Volleyball is known as a fast game, and efficient players need fast reactions, quick ball detection, a good anticipation of the correct cues, and also an understanding of game tactics [3].
Inexperienced players often make many errors in volleyball. They may have developed the physical skills properly but become frustrated when they cannot perform these same skills in a game. As their declarative knowledge increases, these frustrations may soon turn into success. First they begin to recognize certain events and then get to know them quickly. Consequently, they understand where to move on the court and how to receive the ball, and they know their responsibilities and their teammates' roles. The coach must also help players learn how to focus on the ball better and not to be easily distracted by other players' actions. Increasing declarative knowledge through the coach is an essential part of skill development in volleyball or any other sport. As Wall states, a rich declarative base in a given sport might enhance the learning of specific skills simply because such knowledge might provide a better context for learning and problem solving [4].
From a motor learning perspective, sport knowledge can be divided into three information structures: declarative, procedural and strategic [5]. Declarative knowledge refers to structural information related to rules, facts and definitions. Procedural knowledge refers to how an athlete performs something in its individual stages, and strategic knowledge is information about how to learn and recall in the correct context [6]. Declarative knowledge is a prerequisite for increasing procedural knowledge in sport because, during the game, an athlete must have sufficient declarative knowledge during the problem-solving and decision-making process [5][6][7]. On the other hand, regulation of cognition is necessary to control actions in improving problem-solving skills and efficient performance.
During matches and training, a volleyball player asks others for help when he or she doesn't understand something. A player changes his or her strategies when he or she fails to understand, and it becomes necessary to re-evaluate assumptions when the player gets confused. Finally, a player stops and goes back over new information that is not clear, and stops and re-reads the position or match when he or she gets confused.
Ultimately, evaluating volleyball players from a knowledge-based perspective may help the coach understand more about how their athletes learn and the effectiveness of their own teaching. The coach may also be able to use the questionnaire as a tool to help adapt their teaching methodology to meet the different needs of all players. Self-reported evaluation of metacognitive awareness could potentially be a very valuable teaching and learning tool for coaches and volleyball players during matches and training. This approach makes it possible to develop, implement and evaluate the self-evaluation of metacognitive awareness in volleyball players. Thus, this study aims to compare the metacognitive skill levels between the top three ranking teams and unsuccessful teams in the Turkish Men's Second Volleyball League in the season of 2017 - 2018.
Material and Method
This descriptive study focuses on comparing the metacognitive awareness levels between the top three ranking teams and unsuccessful teams in the Turkish Men's Second Volleyball League in the season of 2017-2018.
Participants
Data were collected from 133 voluntary volleyball players during their camps before matches in the Turkish Men's Second Volleyball League in the season of 2017-2018.
Data Collection Instrument
The Metacognitive Awareness Inventory (MAI) was used to compare the metacognitive skill levels between the top three ranking teams and unsuccessful teams in the Turkish Men's Second Volleyball League. This inventory was developed by Schraw and Dennison (1994) and translated into Turkish by Akın, Abacı and Çetin [7,8]. The MAI is a self-report inventory and has two components: knowledge about cognition and regulation of cognition. Knowledge about cognition, the first component of the MAI, has 17 questions and includes the sub-dimensions of declarative knowledge, procedural knowledge and conditional knowledge. The second component of the MAI includes the sub-dimensions of planning, information management strategies, comprehension monitoring, debugging strategies and evaluation. The MAI consists of a total of 52 questions, 17 questions for knowledge about cognition and 35 questions for regulation of cognition. The highest possible score is 260 points and the lowest is 52 points; the MAI uses a 5-point Likert-type scale ranging from 1 (always false) to 5 (always true). High scores indicate strong metacognitive awareness, while low scores indicate weak metacognitive awareness.
No reverse coding was used in the MAI. In the reliability study of the inventory, the Cronbach alpha coefficient was found to be 0.95, and test-retest reliability was also 0.95 [8].
Data Analysis
Kolmogorov-Smirnov tests showed that the data were not normally distributed in this study. Therefore, nonparametric Mann-Whitney U tests were used for comparisons between the two groups, and Spearman rank-order correlations were performed to test the relations between team ranking and metacognitive skill levels.
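As an illustration of this analysis pipeline, the sketch below runs a Mann-Whitney U test and a Spearman rank-order correlation in Python with SciPy; the scores and rankings are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# hypothetical debugging-strategy (DS) scores for two groups of players
top_three = np.array([18, 20, 17, 19, 21, 18])
eliminated = np.array([22, 24, 21, 23, 25, 22])

# two-group comparison with the nonparametric Mann-Whitney U test
u_stat, p_value = stats.mannwhitneyu(top_three, eliminated, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")

# Spearman rank-order correlation between team ranking and a metacognitive score
team_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8])
ds_score = np.array([17, 18, 18, 20, 21, 22, 23, 24])
rho, p_corr = stats.spearmanr(team_rank, ds_score)
print(f"rho = {rho:.3f}, p = {p_corr:.4f}")
```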
Results
This research aims to compare the level of metacognitive skills of male volleyball players between the top three ranking teams and eliminated teams in the Turkish Men's Second Volleyball League. In this context, the classification of the top three teams and unsuccessful teams in the Turkish Men's Second Volleyball League in the season of 2017 - 2018 is presented in Table 1. In addition, metacognitive awareness of the top three teams and unsuccessful teams is shown in Table 2, while Table 3 displays the Spearman rank-order correlation coefficients between success level and metacognitive variables in the Turkish Men's Second Volleyball League.
Discussion
The findings of the present investigation indicate that metacognitive processes may be fundamental to effective cognitive control during matches and training in elite volleyball players. Metacognitive processes, such as planning, monitoring, reviewing and evaluating, and metacognitive experiences were central to the adoption and initiation of cognitive strategies during playing [9]. The present study highlights the contribution of metacognitive monitoring and control functions to cognitive regulation in the context of the Turkish Men's Second Volleyball League.
Schraw and Dennison reported that metacognitive awareness has two components, knowledge about cognition and regulation of cognition [7]. The first component consists of the sub-dimensions of declarative knowledge, procedural knowledge and conditional knowledge. The second component includes the sub-dimensions of planning, information management strategies, comprehension monitoring, debugging strategies and evaluation.
Kallio, Virta and Kallio assumed that self-evaluation is a link between the knowledge of cognition and the regulation of cognition. They largely confirmed that planning and knowledge of conditions predict success through the learning process [9].
So, the aim of this study was to compare the metacognitive skill levels between successful and unsuccessful teams in the Turkish Men's Second Volleyball League in the season of 2017 - 2018. According to Brown [10], declarative knowledge refers to knowledge about the self and about personal strategies, procedural knowledge refers to knowledge about how to use these strategies, whereas conditional knowledge refers to knowledge about when and why to use strategies [10]. Regulation of cognition includes activities that are aimed at regulating or controlling learning, such as planning, information management strategies, comprehension monitoring, debugging strategies and evaluation of the learning process [11,12]. Results of this study showed that there were no significant differences between the top three ranking teams and eliminated teams with respect to all sub-dimensions except debugging strategies in the Turkish Men's Second Volleyball League. Also, Spearman rank-order correlation analysis showed that there was a significant negative correlation between team success and metacognitive awareness skills at the 0.01 significance level (r = -0.272). The level of debugging strategies increased from successful to unsuccessful team performance. Although the top three ranking teams had higher means in knowledge about cognition, the first component of the MAI, and in the sub-dimensions of declarative knowledge, procedural knowledge and evaluation, these differences were not significant at the 0.05 level. Also, unsuccessful teams had higher means in regulation of cognition, the second component of the MAI, and in the sub-dimensions of conditional knowledge, planning, information management strategies, and comprehension monitoring; these differences were not significant at the 0.05 level either.
The only significant difference between successful and unsuccessful teams was observed in the variable of debugging strategies. This means that especially lower-expertise volleyball players ask others for help when they don't understand something, change their strategies when they fail to understand, and need to re-evaluate assumptions when they get confused. Finally, such a player stops and goes back over new information that is not clear, and stops and re-reads the position or match when he or she gets confused.
Unsuccessful volleyball players may need to develop debugging strategies due to the large number of errors that occur during their performance. On the other hand, successful volleyball players, demonstrating higher technical and tactical efficiency in training and matches, do not need to develop debugging strategies to be effective in the game because they play with few errors during the match.
The present study indicates that only the average of debugging strategies differed depending on success level. Unsuccessful teams had a higher mean value of debugging strategies than the top three ranking teams. This can be explained by the superiority of the top three ranking teams in physical, technical and tactical performance compared to the eliminated teams in the Turkish Men's Second Volleyball League. This advantage means they do not need to develop skills for debugging strategies. The successful performances of the top three teams do not require consciously detecting, selecting and correcting errors during the game. Clearly, elite players' skill levels are at the autonomous phase. In this final stage of learning, the skill has become almost automatic or habitual [13]. Players at this stage do not consciously think through what they are doing while performing the skill because they can perform it without conscious thought [13,14]. MacIntyre, Igou, Campbell, Moran and Matthews claimed that expertise in any field makes metacognitive inference possible. They also suggested that expertise itself may be composed of metacognitive inference among a variety of non-metacognitive processes, including working memory and motivation [15].
An efficient, extensive and well-organized procedural knowledge base is developed through participation in extensive, high-quality practice. A skilled person performing a large number of automatized skills will have to use deliberate attentional control much less often and so will execute actions more readily and efficiently than a novice. Therefore, experts are able to handle a wider variety of challenges with more proficiency than novices [16].
It is not possible to explain expert sports performance solely through the relation between automaticity and procedural knowledge, because declarative knowledge and metacognitive skills can also play a role in the acquisition of expertise [17,18]. It should be remembered that while procedural knowledge is inherently linked to optimum sport performance, declarative knowledge may play both a negative [19] and a positive role [20][21][22].
After a great deal of practice, at the end of the learning process, there is a striking change in athletic behavior. What was controlled and slow at an earlier stage becomes more automatic, fluent and fast. This is accepted as a higher level of proficiency and expertise. Information processing at this expert stage is usually referred to as automatic processing [23]. There is a qualitative difference between automatic information processing and controlled processing. Contrary to controlled processing, it is fast, not attention-demanding, parallel in nature, and not "volitional" in that processing is often unavoidable [24]. It is clear that the number of errors decreases as performance increases. Elite athletes who make fewer errors will not need debugging skills. Increasing metacognitive awareness may enhance training and match performance and better prepare volleyball players for developing their technical and tactical skills.
Further research is required to test the relationship between volleyball success and metacognitive awareness skills in male and female volleyball players in different leagues. In sports requiring intense decision-making abilities, such as martial arts, soccer, volleyball, hockey, wrestling and boxing, the effect of metacognitive awareness education on athletes' technical and tactical skills should be investigated. This study should be repeated on a larger scale and may confirm that raising metacognitive awareness levels among volleyball players is desirable.
|
v3-fos-license
|
2022-05-20T15:05:09.573Z
|
2022-05-01T00:00:00.000
|
248902848
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cureus.com/articles/97214-acute-hydronephrosis-secondary-to-methadone-induced-constipation.pdf",
"pdf_hash": "ebc7f2272e521ae82239425932eabcd3ed9c4e65",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44153",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d5a82f6c2a9af2ebc5dfbb777bbda43563aac900",
"year": 2022
}
|
pes2o/s2orc
|
Acute Hydronephrosis Secondary to Methadone-Induced Constipation
Opioid-induced constipation is a significant medical problem, affecting 40% to 60% of patients without cancer who receive opioids. We report a unique case of a 71-year-old male with a history of opioid use disorder, now on methadone maintenance, presenting with severe opioid-induced constipation and fecal impaction causing extrinsic compression of the right ureter, resulting in right hydronephrosis and hydroureter that improved with an aggressive bowel regimen of stool softeners, laxatives and enemas. Methadone alone can predispose to hydroureter with hydronephrosis due to external compression from the severe intestinal dilation caused by opioid-induced constipation.
Introduction
Opioids are commonly prescribed analgesics for acute or chronic pain. Their use has increased in recent years, and efforts to reduce their consumption stem from their addictive nature. Their side effects include lethargy, nausea, CNS depression, pruritus, and constipation [1]. While opioid effectiveness in treating pain is high, 18.9% of patients discontinued their use because opioid side effects significantly worsened their quality of life [2].
Addiction to these medications has been a burden on the healthcare and criminal justice systems of the United States [3]. The de-escalation of opioid therapy is a challenge because of the pain control these drugs provide and the need to minimize withdrawal symptoms [4]. Ceasing use requires both medical and psychological treatment. Research has found that medications for opioid use disorder, such as an opioid receptor agonist (methadone), a partial agonist (buprenorphine), or an opioid antagonist (extended-release naltrexone), can facilitate recovery [5]. Methadone, a synthetic opioid, is a mu-opioid receptor agonist and NMDA receptor antagonist. Unlike other opioids, it possesses a longer half-life and causes fewer withdrawal symptoms [4]. This long-acting medication is used for moderate to severe pain nonresponsive to non-narcotic drugs, detoxification, treatment of opioid use disorder, and treatment of neonatal abstinence syndrome [6]. Multiple challenges to the discontinuation of methadone therapy exist. The first is the limited evidence on optimal treatment duration, although studies show that tapering and discontinuation of this therapy lead to high rates of relapse and an increased risk of death [7].
Here we present a case of a patient on methadone who presented with abdominal pain and severe constipation and was found to have hydronephrosis and hydroureter due to fecal compression.
Case Presentation
We report the case of a 71-year-old male with a past medical history of opiate dependence, on methadone maintenance (85 mg daily) for the past year, who presented to our emergency room with persistently progressive, diffuse abdominal pain for three days, of 6/10 intensity, associated with constipation over the same period and one episode of fecal incontinence. On further inquiry, the patient reported altered bowel habits, with episodes of diarrhea alternating with severe constipation, for the last two to three months. The patient also stated that he had had an episode of fecal impaction requiring recurrent visits to multiple emergency departments. His last bowel movement (BM) was three days prior. He denied fever, melena, hematochezia, nausea, vomiting, weight loss, lack of appetite, burning micturition, changes in urinary frequency/urgency, hematuria, or other changes in urinary habits. The patient denied any other significant medical co-morbidities, past surgical or family history, tobacco or alcohol use, or current use of recreational drugs. His only home medication was methadone 85 mg, with the last dose taken on the day of admission.
In the emergency department, the patient was found to be afebrile and vitally stable. Physical examination was significant for a distended abdomen with a tympanic note to percussion, generalized tenderness in all quadrants, and decreased bowel sounds on auscultation, without palpable masses. His initial labs were significant only for mild normocytic normochromic anemia, with a normal thyroid function level (Table 1). Computed tomography (CT) of the abdomen and pelvis showed severe fecal retention with associated significant bowel dilation and a large right-sided hydroureter with hydronephrosis (Figures 1-3).
Table 1: Lab tests on admission.
The right-sided hydronephrosis and hydroureter were secondary to the extensive stool burden in the distal colon causing extrinsic compression of the ureter. The patient underwent manual fecal disimpaction along with soap suds and fleet enemas while still in the emergency department and was then admitted to the medical floor for management of his severe constipation. On the floor, the patient refused to be weaned off his methadone dose; he was started on an aggressive bowel regimen of polyethylene glycol 17 g qd, senna 30 mg hs, docusate 200 mg q12h, and serial mineral oil enemas. The patient then had a large BM and continued to have 2-3 BMs a day. On day 3, the patient's abdominal pain and distention had improved. A kidney and bladder ultrasound showed a normal right kidney and ureter with no hydronephrosis or hydroureter (Figure 4). The patient was discharged asymptomatic with regular BMs, to be followed by his primary doctor.
Discussion
The Rome IV criteria define opioid-induced constipation (OIC) as new-onset or worsening constipation at the initiation of, or a change in, opioid therapy. In addition, the patient must present two or more of the following: straining, hard stools, tenesmus, anorectal blockage, the need for manual maneuvers to defecate, or fewer than three BMs per week [7]. The incidence of OIC varies substantially, from 15% to 81% [3]. It also affects 40% to 60% of patients without cancer receiving opioids. Risk factors that predispose patients to developing OIC are advanced age, female gender, reduced mobility, hypercalcemia, altered nutritional intake, and anal fissures [8].
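As a minimal illustrative sketch only (hypothetical field names, not a validated clinical tool and not part of the original report), the Rome IV elements listed above can be expressed as a simple rule check:

```python
# Minimal sketch of the Rome IV opioid-induced constipation (OIC) elements
# described above; field names are hypothetical and this is not a clinical tool.
from dataclasses import dataclass

@dataclass
class OicAssessment:
    new_or_worsening_with_opioids: bool  # onset/worsening at opioid start or dose change
    straining: bool
    hard_stools: bool
    tenesmus: bool
    anorectal_blockage: bool
    manual_maneuvers_needed: bool
    fewer_than_three_bms_per_week: bool

    def meets_rome_iv_oic(self) -> bool:
        symptoms = [
            self.straining,
            self.hard_stools,
            self.tenesmus,
            self.anorectal_blockage,
            self.manual_maneuvers_needed,
            self.fewer_than_three_bms_per_week,
        ]
        return self.new_or_worsening_with_opioids and sum(symptoms) >= 2

# Hypothetical example: hard stools plus fewer than three BMs per week.
patient = OicAssessment(True, False, True, False, False, False, True)
print(patient.meets_rome_iv_oic())  # True
```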
The most common symptoms of opioid-induced constipation are abdominal discomfort, nausea, gas, decreased appetite, reflux, bloating, straining, incomplete evacuation, and hard BMs [9]. In addition, it is essential to know that patients do not develop tolerance to constipation [8]. These side effects are caused by the binding of opioids to the μ and δ receptors located throughout the gastrointestinal system in the myenteric and submucosal neurons, which results in reduced neuronal activity and neurotransmitter release [2]. This leads to increased anal sphincter tone, inhibition of water and electrolyte secretion, decreased peristalsis, increased non-propulsive contractions, and decreased rectal sensitivity [9]. Opioids also increase water absorption via prolonged stasis, which causes dried-up feces and difficulty straining [10]. Moreover, the increased anal sphincter tone further contributes to difficulty in initiating defecation [2].
In the study by Lugoboni et al. from 2016, a high prevalence of constipation and reduced quality of life (QoL) was found among patients treated with methadone. Similarly, Mattick et al. reported that the prevalence of constipation in patients taking methadone was 14% [11]. Furthermore, in the 2017 study by Habe et al., the primary symptoms reported with methadone were constipation, sweating, nausea or vomiting, insomnia, drowsiness, and sexual difficulties in about 8%-20% of patients [12]. Compared to non-opioid users, methadone-dependent patients also had a significantly higher rate of retained solid stool in the study by Verma et al. in 2012 [13].
A significant complication of OIC is fecaloma, a mass of inspissated stool that results from the accumulation of feces in the rectum or rectosigmoid colon. Fecalomas may cause distension and increased pressure on the colonic wall. The increased intraluminal pressure may lead to ulceration, localized ischemic necrosis of the colonic wall, and eventual perforation [14]. A fecaloma can lead to acute urinary tract obstruction with extrinsic ureteral compression due to the anterior displacement of the bladder base induced by dilatation of the rectosigmoid colon [15]. Acute urinary tract obstruction by a giant fecaloma is rare, but cases have been reported.
Other complications of chronic constipation include abdominal compartment syndrome (ACS), defined as organ dysfunction caused by an increase in intra-abdominal pressure greater than 20 millimeters of mercury. The World Society of the Abdominal Compartment Syndrome (WSACS) classifies this condition by its underlying cause: decreased abdominal compliance (e.g., burns); increased intra-abdominal contents (e.g., hemoperitoneum, ascites); increased intraluminal contents (e.g., intestinal volvulus, ileus, and constipation), capillary leak/fluid resuscitation, and miscellaneous causes such as obesity and peritonitis [16].
The first step in the treatment of OIC involves preventive measures such as increasing water and fiber intake [17]. However, severe symptomatic cases of OIC requiring medical attention are usually managed with a laxative. The choice of agent and the dosing are empiric, with most cases responding to osmotic agents (polyethylene glycol) or stimulant laxatives (bisacodyl/senna) rather than stool softeners (docusate). However, there are no adequately powered randomized trials comparing sennosides, docusate, lactulose, or PEG. Patients with suspected fecaloma or fecal impaction secondary to severe constipation have improved significantly with manual fecal disimpaction and rectal enemas (mineral oil enema, irritant enema). Other approved therapies for OIC include methylnaltrexone, naloxone, naldemedine, and lubiprostone. These agents are reserved for severe refractory cases of OIC that have not responded to initial laxative/enema therapy or manual disimpaction [18-20].
We did not find reported literature on hydronephrosis caused by fecalomas secondary to methadone use. Our patient is a unique case of reversible overflow hydronephrosis with hydroureter due to methadone-induced constipation with fecal impaction.
Conclusions
Severe constipation secondary to a maintenance methadone dose alone can predispose to hydroureter with hydronephrosis. This external obstructive disorder could lead to renal failure, or the severe intestinal dilation could progress to bowel rupture, which is why an early approach is fundamental to avoid potential complications. These patients need close follow-up and intensive, reinforced education about the side effects of opioids and methadone.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
v3-fos-license
|
2019-05-12T14:02:50.468Z
|
2019-04-24T00:00:00.000
|
149637877
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/amse/2019/4239486.pdf",
"pdf_hash": "d633f2f6c2cd36d84586b4fdc6e2a4e5b0d0da92",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44154",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "d633f2f6c2cd36d84586b4fdc6e2a4e5b0d0da92",
"year": 2019
}
|
pes2o/s2orc
|
Development, Performance, and Microscopic Analysis of a New Anchorage Agent with Heat Resistance, High Strength, and Full Length
To solve the difficult problems of failure of pretensioned bolt supports under high ground pressure and temperature, a new kind of anchorage agent with excellent performance is developed. First, the selection and compounding of raw materials were conducted. The new anchorage agent was obtained by modifying the PET resin by mixing it with a phenolic epoxy vinyl ester resin (FX-470 resin) and adding a KH-570 silane coupling agent. Then, the viscosity, thermal stability, compressive strength at different temperatures, and anchorage capacity of the new anchorage agent were tested. Moreover, the best proportion ratio of the anchorage agent, resin : coarse stone powder : fine stone powder : accelerator : curing agent : KH-570 = 100 : 275 : 275 : 1 : 32.5 : 1, is obtained. The test results showed that, with the addition of the KH-570 silane coupling agent, the viscosity decreased significantly, thereby solving the difficult technical problems of pretensioned bolt supports in full-length anchorage support. Compared with the conventional anchorage agent, the compressive strength of the new anchorage agent increased by 20.4, 82.5, 118.2, and 237.5% at 10, 50, 80, and 110°C, respectively, and the anchorage capacity increased by 4.7, 8.7, 40.2, and 62.9% at 30, 50, 80, and 110°C, respectively. Finally, the enhancement in compressive strength and the heat-resistance mechanism are revealed through microanalysis.
Introduction
With the increase in mining depth and crustal stress every year, the quantity of broken rock mass increases, which leads to a series of problems such as difficulty in roadway maintenance, cost increases, and safety issues [1,2]. Full-length anchorage support can maintain the roadways well [3-5]. However, the emerging technology of full-length anchorage support is not yet popular, and with an increase in mining depth, the ground temperature increases and the anchoring force of the resin anchor becomes lower than the theoretical value [6]; therefore, anchoring safety decreases.
Many scholars have studied the effect of full-length anchor support and temperature on the anchoring force. Zhou et al. studied the shear stress distribution and load transfer at the interface of a full-length bonded bolt and deduced a double-exponential constitutive model and the exponential form of the load-displacement curve at the top of the bolt. He established that the load-displacement distribution at the interface of the bolt and anchor could be solved [7]. In Nemcik's study, a nonlinear bond-slip constitutive model was combined with FLAC2D to simulate the failure transmission law of a full-length anchored rock under tension [8]. Hu reported a low-viscosity and high-strength anchoring agent using fine stone powder as an aggregate, increasing the amount of resin to improve the consistency of the anchoring agent and compensating for the decrease in resin strength due to a lack of coarse aggregates by increasing the degree of polymerization of the resin [9]. Fan studied the feasibility of introducing a bisphenol A structural reinforcement resin as an anchorage agent into an unsaturated polyester resin to induce high-temperature stability [10]. Lin carried out optimization tests and resin simulation research on the shape and size of a full-length anchor bolt and optimized the height, width, and spacing of the cross-ribs of the bolt [11]. Zhang et al. simulated the resin anchor-hold at different temperatures and found that the temperature around the drill hole can influence the resin anchor-hold [12]. Hu used a combination of laboratory tests and numerical simulation to study the effect of temperature on the anchoring properties of resin anchors [13]. To solve the problem of adaptability of the full-length anchorage agent technology and ensure the strength of the anchoring support at high ground temperatures, it is urgent to develop a new type of heat-resistant, high-strength, full-length anchoring agent.
Based on the commonly used full-length anchoring agent comprising unsaturated polyester resins as a binder, the optimum raw materials were selected and samples were prepared. The new type of heat-resistant, high-strength, full-length anchoring agent was developed by mixing a phenolic epoxy vinyl ester resin (FX-470) with modified PET and adding a silane coupling agent (KH-570). Finally, the physical and mechanical properties of the new anchorage agent were tested.
Resin.
The unsaturated polyester resins used in an anchoring agent can be classified into the following types: o-phthalate, phenylene, m-phthalate, and PET. According to Hu and Wang [14], PET resin is widely used in the production of mine-support materials because of its excellent performance, inexpensiveness, and short gelling time. The liquid index of PET unsaturated polyester resin is shown in Table 1.
In this study, polyblend modification was used to improve the poor temperature resistance of the PET resin, and the blend system was solidified under the existing solidifying system of unsaturated polyester resin. Vinyl ester resin combines the advantages of unsaturated polyester resin and epoxy resin. Its epoxy framework imparts excellent heat resistance and corrosion resistance to the resin, which can be solidified using the peroxide curing system [15]. Therefore, a phenol epoxy vinyl ester resin was used to modify the PET resin. The chemical formula of the phenol epoxy vinyl ester resin is shown in Figure 1, and the liquid index is shown in Table 1.
Aggregate.
The aggregate in an anchorage agent significantly affects the consistency, strength, and thermal stability of the anchoring agent. In this study, river sand, quartz sand, cement, and stone powder were chosen to carry out the proportion test. It was found that river sand and quartz sand fillers decrease the polymer strength and cause fragmentation, while cement fillers rapidly increase the viscosity of the polymer and lower the thermal stability. However, stone powder fillers imparted better strength and viscosity to the anchorage agent. The main components and grain composition of the selected stone powder are shown in Tables 2 and 3. Because wet aggregates can destroy the bonding between the binder and aggregate and reduce the strength of the anchoring agent, the aggregate must be dried to 0.1% or less water content [16].
Curing Agent and Accelerator.
As per the requirements of the existing anchoring agent preparation technology, the curing agent used in this study was a mixture of benzoyl peroxide (BPO), calcium carbonate, and ethylene glycol. The content of BPO in the mixture was fixed at 6%, and the amount of curing agent used in the experiment was 5% of the total weight of the anchoring agent cement. The accelerator used was N,N-dimethylaniline (DMA), and the amount was 1% of the resin mass in the anchoring agent cement.
Silane Coupling Agents.
The silane coupling agent is a kind of organosilicon compound that contains both carbon- and silicon-functional groups. It is an organic polymer composite that plays the role of an auxiliary for reinforcing, increasing the viscosity, compatibilizing, and imparting moisture resistance. In this paper, the polymer is unsaturated polyester and a silane coupling agent (KH-570) is selected [17]. The compatibility and reinforcement of this silane coupling agent are mainly used to increase the amount of stone powder in the anchoring cement and reduce the consistency of the anchoring cement. However, there is an ideal amount of coupling agent required to produce the ideal effect [17,19].
The amount of silane coupling agent (KH-570) in this test ranges from 0.5 to 2% by weight of the resin.
Test Method.
The existing conventional full-length anchoring agent is composed of the resin, stone powder, accelerator, and curing agent, mixed in a fixed ratio of PET resin, coarse stone powder, fine stone powder, accelerator, and curing agent. In order to adapt to the full-length anchorage construction technology and reduce the pushing resistance of the anchor during construction, the content of stone powder is usually reduced to lower the consistency of the anchoring agent. On the other hand, with the increase in the amount of resin, the compressive strength of the anchoring agent decreases after solidification. In addition, the heat resistance of the PET resin in the anchoring agent is generally poor, and it deteriorates with the increase in the amount of resin.
In view of the existing problems of the conventional full-length anchoring agent, the FX-470 resin and the silane coupling agent KH-570 are introduced to improve its performance. The design and test scheme are shown in Table 4. Based on the China coal industry standard MT 146.1-2011 [20] and GB/T 2567-2008 [21], the viscosity, thermal stability, compressive strength, and anchorage capacity of the capsule resin are tested.
Experimental Analysis of Physical and Mechanical Properties of Anchoring Agent
3.1. Viscosity Test. Full-length anchoring support technology requires that the anchorage agent fill the whole anchor hole.
If the anchorage agent has a high viscosity, the anchorage resistance will be large, which will prevent the anchor from reaching the required depth and thus greatly reduce the support. Therefore, a reasonable viscosity of the anchorage agent is necessary.
Test Method. The hollow cone for determining the standard consistency of cement and the circular mould for determining the setting time of cement according to MT 146.1-2011 are used [20]. The sinking depth of the hollow cone within 1 min is recorded. The experimental results are shown in Figure 2. As shown in Figure 2, A1 is the conventional full-length anchoring agent. The proportion of stone powder in group B and group C is increased, and KH-570 is added at the same time. Comparing the viscosity of B1-B3 or B5-B7 in group B, it can be concluded that, when the ratio of resin to stone powder is fixed, the consistency of the anchoring agent decreases with the increase in the amount of KH-570. Comparing B1 and B5, which have the same amount of coupling agent, the higher the content of stone powder, the smaller the consistency value. Comparing the viscosity of C1-C6 in group C, it can be concluded that, with the increase in the proportion of FX-470 resin in the mixed resin, the consistency of the anchoring agent decreases, because the FX-470 resin has a lower viscosity than the PET resin; therefore, with increasing FX-470 resin content at the same resin-to-stone-powder ratio, the consistency decreases. By adjusting the amount of the silane coupling agent KH-570, the proportion of stone powder in the anchoring agent can be increased while the consistency of the anchorage agent is reduced at the same time. The consistency value of a full-length anchoring agent should be 50-60 mm. The test results show that B2, B6, and B7 in group B and C1-C4 in group C are suitable for full-length anchoring support.
3.2. Thermal Stability Test. As a supporting material, the storage period of the anchorage agent should not be less than three months.
The thermal stability directly affects the storage time of the anchoring agent at room temperature, so it is an important index of anchoring agent performance. The test method is as follows [20]: the anchorage agent is heated in a 101A-2 electric drying oven with forced convection and maintained for 20 h at (80 ± 2) °C. After removal, it is placed for 4 h at (22 ± 1) °C. A consistency value of the observed specimens greater than 16 mm is considered normal.
Because the influence of the aggregate on the thermal stability was considered and determined at the material selection stage, the tests on the new anchoring agent mainly examine the influence of the FX-470 resin and KH-570 on the thermal stability of the anchorage agent. In this paper, B6, C3, and C6 are selected for the thermal stability test. As shown in Table 5, the consistency value is greater than 16 mm and the thermal stability performance is qualified. It can be seen that the introduction of the FX-470 resin and KH-570 does not affect the thermal stability of the anchoring agent.
Compressive Strength Test.
In the full-length anchoring support system, the anchoring agent acts as the bond between the bolt and the rock, and its own strength affects the stability of the anchor. The stability of the anchorage agent's strength at different temperatures is verified by performing the compressive strength test. The test method is as follows: three 40 mm cube blocks were prepared using a standard mould, as shown in Figure 3. After curing at a standard temperature for more than 24 h, the specimens were placed in a 101A-2 electric drying oven with forced convection. The test blocks were heated at different temperatures for more than 6 h to ensure that the temperatures inside and outside the specimen were the same. The compressive strength was measured on a universal material testing machine immediately after removing the specimen from the oven, so that the specimen temperature did not change by more than 3 °C. The temperature of the test piece was measured using an F8380 infrared thermometer. The test results are shown in Figures 4 and 5.
As shown in Figure 4, when the coupling agent KH-570 is added to the group B anchoring agents, the proportion of stone powder becomes higher than that of the conventional full-length anchorage agent A1 and the compressive strength significantly increases. Comparing the compressive strength of B1-B4 or B5-B8, with the increase in the amount of the silane coupling agent KH-570 in the anchoring agent, the compressive strength first increased slightly, at a rate of 3-4%. Then, when the content of the silane coupling agent reached 1.0% of the resin mass, the strength of the anchoring agent no longer increased. It can be seen that adding 0.5%-1.0% coupling agent can improve the bonding between the resin and stone powder in the anchoring agent, whereas adding an excessive amount will not further improve the compressive strength of the anchoring agent. Therefore, adding an appropriate coupling agent to the anchoring agent can not only adjust the consistency of the cement and optimize the resin ratio but also improve the compressive strength of the anchoring agent itself.
From Figure 5, it can be concluded that the compressive strength of A1 is 64.4 MPa at 10 °C. The compressive strength decreases by 38.8% at 50 °C, 57.3% at 80 °C, and 76.7% at 110 °C. It can be seen that the compressive strength of the A1 anchoring agent is strongly affected by temperature. The B6 anchoring agent is based on the A1 ratio, with an increased proportion of stone powder and added KH-570 silane coupling agent; thus, the consistency of the B6 anchoring agent meets the requirements of full-length anchoring and its strength at room temperature is improved. However, owing to the poor temperature resistance of the PET resin, the performance of the B6 anchoring agent decreases by 74.6% at 110 °C. The C6 anchoring agent is composed of FX-470 resin instead of PET resin. The strength of this anchoring agent at room temperature is 67.5 MPa, and its performance remains unchanged up to 50 °C and decreases by 25.9% at 110 °C. Samples C2-C4 are mixed modified-resin anchoring agents that comprise the FX-470 resin, and their compressive strengths at room temperature are higher than that of the conventional resin anchoring agent. Further, with an increase in the proportion of FX-470 resin, the temperature resistance of the anchoring agent gradually improves. The temperature resistance and compressive strength of the C4 anchoring agent are excellent. Its compressive strength at 10 °C is 77.5 MPa, which decreases by 7.3% at 50 °C, 22.6% at 80 °C, and 34.6% at 110 °C. The compressive strength of C4 increases by 20.4, 82.5, 118.2, and 237.5% over that of A1 at 10, 50, 80, and 110 °C, respectively. It can be seen that the temperature resistance of the C4 anchoring agent greatly improved after the FX-470 resin was mixed in and modified.
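As a small arithmetic cross-check (a sketch based only on the values quoted above, not on the original test records), the relative gains of C4 over A1 can be reproduced from the stated 10 °C strengths and percentage decreases:

```python
# Sketch: reproduce the quoted C4-over-A1 strength gains from the stated
# 10 °C compressive strengths (MPa) and the percentage decreases per temperature.
A1_10C, C4_10C = 64.4, 77.5                            # MPa at 10 °C
A1_DROP = {10: 0.0, 50: 38.8, 80: 57.3, 110: 76.7}     # % decrease relative to 10 °C
C4_DROP = {10: 0.0, 50: 7.3, 80: 22.6, 110: 34.6}

for temp_c in (10, 50, 80, 110):
    a1 = A1_10C * (1 - A1_DROP[temp_c] / 100)          # A1 strength at temp_c
    c4 = C4_10C * (1 - C4_DROP[temp_c] / 100)          # C4 strength at temp_c
    gain = (c4 / a1 - 1) * 100                         # relative gain of C4 over A1, %
    print(f"{temp_c} °C: C4 exceeds A1 by about {gain:.0f}%")
# The printed values are close to the quoted 20.4, 82.5, 118.2 and 237.5%.
```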
Anchorage Capacity Test.
The anchorage force is the most intrinsic parameter for measuring the bonding performance of the anchorage agent. In this study, a universal material testing machine and a temperature-controlled silicone rubber heating belt are used to pull specimens at different temperatures; the test specimens and setup are shown in Figures 6 and 7, respectively. A steel pipe with an outer diameter of 42 mm, an inner diameter of 30 mm, and a length of 400 mm and an MSGLW-355 bolt with a diameter of 22 mm and a length of 450 mm were selected for specimen preparation. The anchoring agent is used to seal one end of the steel pipe, and the amount of the anchoring agent is calculated so that the depth of the plug is 50 mm. After the plug solidified, the anchor bolt was anchored into the steel pipe with a hand-held pneumatic drill. The anchoring depth is 350 mm. After solidification, the specimen is cured at a standard temperature (22 °C) for more than 24 h. Then, the specimen is wrapped with a temperature-controlled silicone rubber heating belt and heated for 2 h to ensure that the temperatures inside and outside the specimen are the same.
The tests of viscosity, thermal stability, and compressive strength show that the group C anchorage agents have good consistency and thermal stability, especially in a high-temperature environment. Among the group C specimens, the comprehensive performance of the C4 anchorage agent is outstanding.
Therefore, the C4-type and A1-type anchorage agents are selected to test the anchorage force at different temperatures.
The experimental results are shown in Figure 8.
Therefore, C4 is a heat-resistant anchoring agent with superior performance.
In summary, according to the results of the tests for viscosity, thermal stability, compressive strength, and anchoring force, it is concluded that C4 is a heat-resistant, high-strength, full-length anchoring agent. It has a mixed resin/coarse stone powder/fine stone powder/accelerator/curing agent/KH-570 ratio of 100 : 275 : 275 : 1 : 32.5 : 1, and the ratio of the mixed resin is PET/FX-470 = 30 : 70.
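As a simple illustration of using this ratio in practice (an assumption-laden sketch, not a procedure from the original study; the batch size is arbitrary), the mass ratio can be converted into component masses for a chosen batch:

```python
# Illustrative sketch: scale the reported C4 mass ratio
# (mixed resin : coarse stone powder : fine stone powder : accelerator :
#  curing agent : KH-570 = 100 : 275 : 275 : 1 : 32.5 : 1) to a chosen
# batch mass. The batch size below is arbitrary, not from the study.
RATIO = {
    "mixed resin (PET/FX-470 = 30:70)": 100.0,
    "coarse stone powder": 275.0,
    "fine stone powder": 275.0,
    "accelerator": 1.0,
    "curing agent": 32.5,
    "KH-570": 1.0,
}

def batch_masses(total_mass_g: float) -> dict:
    """Return component masses (g) so that all parts sum to total_mass_g."""
    ratio_sum = sum(RATIO.values())
    return {name: total_mass_g * part / ratio_sum for name, part in RATIO.items()}

for component, grams in batch_masses(1000.0).items():
    print(f"{component}: {grams:.1f} g")
```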
Microscopic Test and Mechanism Analysis of New Resin Anchorage Agent
Compared with the conventional full-length anchorage agent, the C4 anchorage agent shows excellent performance in the physical and mechanical tests. The reasons for this are investigated by performing a contact angle test and scanning electron microscopy (SEM) imaging.
Test of Contact Angle between Resin and Stone Powder.
In this study, the contact angle between the PET resin and stone powder was measured using a contact angle tester (SL2000C, Corno Company). Figure 9(a) shows the contact angle between the PET resin and stone powder, and Figure 9(b) shows the contact angle between the stone powder and the PET resin with 1% of the silane coupling agent KH-570 added to it. As can be seen, the contact angle decreases by about 20°, from 120.81° to 101.03°, after the addition of KH-570, and the wettability between the resin and stone powder improves with the addition of the coupling agent. This explains why, when KH-570 is added to the group B and group C anchorage agents, the proportion of stone powder can be increased while the viscosity of the anchorage agent decreases.
Figure 7 (test setup): bolt pullout system, silicone rubber heating belt, hydraulic control system, software control system.
Figure 10(a) shows the conventional A1 full-length anchorage agent. The filling of fine particles and resin between the coarse particles in the anchoring agent is not compact.
The smooth surface of the coarse particles on the shear fracture surface indicates that the resin and particles are not closely bonded. Figure 10(b) shows the C4 anchoring agent, in which the resin is closely bound to the fine stone powder and is densely packed around the coarse grains.
The shear fracture surface is rough, and the resin is adsorbed tightly on the surface of the stone powder particles.
Mechanism Analysis.
The wetting effect and chemical bonding of KH-570 between the resin and stone powder are verified by performing contact angle measurement and SEM imaging [22]. The contact angle decreased, and the bonding between the resin and stone powder increased with the addition of the coupling agent.
The proportion of stone powder in the anchorage agent increased while meeting the viscosity requirement of full-length anchorage support technology, which makes the new anchorage agent more compact and improves its compressive strength.
The FX-470 phenolic epoxy vinyl ester resin itself contains unsaturated double bonds and a high-temperature-resistant epoxy skeleton. Under the catalysis of curing agent free radicals, the unsaturated double bonds break to form a solid network with the PET resin. At the same time, the epoxy skeleton itself is embedded in this network, which enhances the heat resistance of the anchorage agent after curing. Therefore, on the basis of the original full-length anchorage agent ratio, a heat-resistant full-length anchorage agent with appropriate strength and consistency can be prepared by adding KH-570 and the FX-470 resin.
Figure 2: Variation in viscosity for different types of anchorage agents.
Figure 3: Cube blocks.
Figure 4: Compressive strength corresponding to different coupling agent contents.
Figure 8: Variation in anchorage capacity with temperature for resin capsule.
(1) A new type of anchoring agent with heat resistance, high strength, and full length has been successfully developed. The optimum ratio is mixed resin/coarse stone powder/fine stone powder/accelerator/curing agent/KH-570 = 100 : 275 : 275 : 1 : 32.5 : 1, and the ratio of the mixed resin is PET/FX-470 = 30 : 70. The viscosity, thermal stability, compressive strength, and anchorage force results prove that the new anchorage agent has superior physical and mechanical properties compared with the conventional anchorage agent, especially in high-temperature environments. (2) Experiments show that adding the KH-570 coupling agent to the new anchorage agent has an infiltration effect, which can reduce the amount of resin in the anchorage agent and the viscosity of the anchorage agent. This makes it more suitable for the full-length anchorage support technology and improves the compressive strength of the anchorage agent itself. The reason why the KH-570 coupling agent optimizes the anchorage agent performance is investigated by performing contact angle measurement and SEM imaging. (3) The results of the compressive strength test and anchorage force test at different temperatures show that the introduction of the FX-470 resin into the new anchorage agent to modify the original PET resin can greatly improve the thermomechanical properties of the cured anchorage agent. The compressive strength of the new resin anchorage agent increased by 20.4%, 82.5%, 118.2%, and 237.5% at 10 °C, 50 °C, 80 °C, and 110 °C, respectively, while the anchorage strength increased by 4.7%, 8.7%, 40.2%, and 62.9% at 30 °C, 50 °C, 80 °C, and 110 °C, respectively. The results show that the heat-resistant epoxy group in the FX-470 resin molecule is embedded in the polymer cured by the anchoring agent through the interpolymer reaction between the mixed resins, which improves the overall heat resistance of the anchorage agent.
Table 1: Liquid index of resin.
Table 3: Screening of stone powder particles.
Table 2: Main composition of stone powder.
Table 4: Test proportion of new resin anchorage agent.
|
v3-fos-license
|
2020-12-18T14:07:59.615Z
|
2020-12-17T00:00:00.000
|
229305678
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2020.608886/pdf",
"pdf_hash": "6916b7df6662d22c2a9ec6c6554f9dc66fb013c1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44155",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6916b7df6662d22c2a9ec6c6554f9dc66fb013c1",
"year": 2020
}
|
pes2o/s2orc
|
Fluconazole for Hypercortisolism in Cushing’s Disease: A Case Report and Literature Review
Background Cushing's disease is associated with an increased risk of pulmonary fungal infection, which can be a relative contraindication for pituitary adenoma excision surgery. Case We report the case of a patient with Cushing's disease and pulmonary Cryptococcus neoformans infection. A 48-year-old woman was admitted to our hospital because of moon face and edema. Laboratory and radiological findings suggested a diagnosis of Cushing's disease and pulmonary cryptococcus infection. Fluconazole 400 mg per day was administered intravenously and continued orally for 3 months. Both the cryptococcal infection and the hypercortisolism were relieved, and transsphenoidal resection was performed. Conclusion Cushing's disease can be effectively treated with fluconazole to normalize the cortisol concentration prior to pituitary surgery. Fluconazole is an alternative treatment, especially in Cushing's disease patients with cryptococcal pneumonia.
INTRODUCTION
Cushing's syndrome (CS) is a rare disorder caused by chronic hypercortisolism, with multisystem morbidity, increased mortality and decreased quality of life (1). Surgical excision of the pituitary, adrenal or ectopic lesion is recommended as the first-line therapy for CS, but not all patients are eligible for surgery (2). Guidelines recommend medical treatment in patients who are not candidates for surgery or have recurrent disease, and in patients awaiting the effects of radiotherapy (2). Medications used in patients with CS can be classified into steroidogenesis inhibitors, pituitary-targeting agents, and glucocorticoid receptor antagonists (3). Steroidogenesis inhibitors can block various steps of the steroid synthesis pathway. Ketoconazole, the most widely used medication, is unavailable in most countries and regions because of the risk of severe hepatotoxicity (4). New steroidogenesis inhibitors, including osilodrostat and levoketoconazole, have shown efficacy and acceptable safety profiles in clinical trials (5,6). Although osilodrostat is approved by the US Food and Drug Administration (FDA), it has not been widely used due to unavailability (7). The present work demonstrates the effect of fluconazole in controlling hypercortisolism in a patient with Cushing's disease (CD) and pulmonary cryptococcus infection. All cases previously reported in the literature are also reviewed. In addition, a comprehensive bioinformatics analysis was performed to identify the potential targets of fluconazole in CD.
CASE PRESENTATION
A 48-year-old woman visited the hospital for evaluation of facial swelling and fatigue lasting more than one year. She reported 5 kg of weight gain, hypertension, insomnia, weakness, and easy bruising. She denied any fever, oliguria, chest distress or shortness of breath. She had gone to a local hospital, where the examinations were unremarkable except for "hypokalemia." She had been treated with irbesartan at a daily dose of 150 mg for 1 year. The social and family history was unremarkable.
On examination, the blood pressure was 164/84 mm Hg, the pulse 56 beats per minute, the weight 53.7 kg, the height 146 cm and the body mass index (BMI) 25.19 kg/m2. She had moon face, a dorsal fat pad, abdominal obesity and ecchymosis, but no striae. The remainder of the examination was normal.
Initial investigations are summarized in Table 1. The midnight serum cortisol was 16.2 ng/dl, the adrenocorticotropic hormone (ACTH) was 94.9 pg/ml, and the 24-h urinary free cortisol (UFC) was 813.5 µg/day. Cortisol failed to suppress during a 48-h low-dose dexamethasone suppression test (16.1 ng/dl). These findings were consistent with ACTH-dependent CS. No suppression was seen with the high-dose dexamethasone test. Pituitary magnetic resonance imaging (MRI) demonstrated a 1.2 cm adenoma (Figure 1A). Inferior petrosal sinus (IPS) sampling confirmed a pituitary source of ACTH secretion. At the same time, chest computed tomography revealed multiple lung nodules (Figure 1B). Bronchoalveolar lavage fluid (BALF) and serum cryptococcal antigen were positive. Lung biopsy histology confirmed pulmonary Cryptococcus neoformans. Cerebrospinal fluid (CSF) cryptococcal antigen was negative. Immediate transsphenoidal selective adenomectomy (TSS) was not an option because of the cryptococcus infection and the risk of postoperative cryptococcal meningitis. Intravenous fluconazole was started at 400 mg daily. One week later, the patient was discharged and continued on oral fluconazole 400 mg per day. At the 3-month follow-up, the pulmonary infection had improved and the UFC was 42.2 µg/day. Fluconazole was withdrawn and TSS was performed. Histopathology confirmed an ACTH-secreting pituitary adenoma.
Three months after TSS, the patient was in good general health and her serum cortisol and UFC normalized.
DISCUSSION
TSS performed by an experienced neurosurgeon is the first-line treatment for CD in most patients (2). Patients with CD have an increased risk of cryptococcal infection (8). Undiagnosed cryptococcal pneumonia can lead to postoperative cryptococcal meningitis and a poor prognosis (9). Medical therapy is needed in patients with hypercortisolism when surgery is not possible. Ketoconazole, which can be used to control both hypercortisolism and fungal infections, is unavailable because of its severe adverse effects. Fluconazole, another azole antifungal, is an alternative to ketoconazole with less hepatotoxicity.
The present work revealed only 5 other CS patients treated successfully with fluconazole when surgery was not possible or was noncurative (as shown in Table 2) (10-14). All of the patients were female, which could be attributed to the female-predominant prevalence of CS (female-to-male ratio 3:1) (1). Three of them were diagnosed with CD, two with ectopic ACTH-secreting syndrome (EAS), and one had adrenal carcinoma. The median UFC level was 310 µg/day (range, 112-813.45 µg/day). Hypocortisolism was more prone to occur in CD patients than in patients with other kinds of CS. The reported dose of fluconazole ranged from 200 to 1200 mg/day. Two of the patients developed liver dysfunction at a daily fluconazole dose of more than 400 mg. Liver enzymes normalized after the dose was decreased to 400 mg/day.
In previous reports, the inhibitory effect of fluconazole on glucocorticoid production was controversial and less potent than that of ketoconazole (10,15,16). Some case reports showed adrenal dysfunction with fluconazole in patients with severe comorbidities (17-20). However, another study suggested that fluconazole was not associated with adrenal insufficiency compared with placebo (21).
The different in vitro effects of fluconazole could be attributed to the different experimental cell lines and the different expression of steroid-synthesizing enzymes in the cells. Previous studies demonstrated a potent inhibitory effect of fluconazole on cortisol production in several human cell culture lines, but only a weak effect in rat cell culture lines (10,15,16). A previous study revealed that a polymorphism in the CYP17A1 gene is associated with the responsiveness to steroidogenesis inhibitors in CS patients (22). However, the relationship of genetic variants with the efficacy of fluconazole is unknown. We predicted potential targets of fluconazole and obtained a network from STITCH (Figure 2). STITCH (http://stitch.embl.de) is a unique system pharmacology database describing the relationships between drugs, targets, and diseases (23). The results showed that brain-derived neurotrophic factor (BDNF) was a target of fluconazole. BDNF is the neurotrophin mediating neuronal survival and plasticity. Fiocco's study showed that BDNF regulates the activity of the hypothalamic-pituitary-adrenal axis (24). Similar studies showed that BDNF polymorphisms influence individual cortisol responses to stress (25,26). In Cushing's disease patients, a decreased BDNF level was observed after remission, indicating a potential role of BDNF in Cushing's disease (27). Over the past two decades, studies have shown that BDNF and its receptor tropomyosin receptor kinase B (TrkB) are up-regulated in many types of cancers, such as breast cancer and cervical cancer (28-30). The activated BDNF/TrkB signal stimulates a series of downstream pathways, including phosphoinositide 3-kinase/protein kinase (PI3K), Ras-Raf-mitogen-activated protein kinase kinase-extracellular signal-regulated kinase, the phospholipase C-γ pathway and the transactivation of the epidermal growth factor receptor (28). Dworakowska's study showed that the PI3K pathway is upregulated in ACTH-secreting pituitary adenomas (31). Song's study showed that the PI3K/AKT signaling pathway can affect the migration and invasion of pituitary adenomas (32). In this series, two patients on fluconazole therapy showed decreased ACTH levels, which could not be attributed to effects on adrenocortical steroidogenesis (14). We therefore propose the following hypothesis: the BDNF/TrkB and PI3K pathways are potential therapeutic mechanisms of fluconazole in Cushing's disease. In the future, more in vitro and in vivo studies are needed to verify our findings.
CONCLUDING REMARKS
In summary, this case report and the bioinformatics analysis suggest that fluconazole might be effective in controlling hypercortisolism in CD patients. Further studies on the mechanism by which fluconazole inhibits cortisol production are needed to develop more potent and less toxic agents.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The patient provided written informed consent for research participation as well as for the publication of indirectly identifiable data (age, gender, and medical history).
AUTHOR CONTRIBUTIONS
YZ and WL wrote the first draft of the manuscript. YZ, QW, FC, and YW made contributions to the acquisition of the clinical data. YZ and YW made critical revisions and approved final version. All authors contributed to the article and approved the submitted version.
FUNDING
This study was supported by the Natural Science Foundation of Zhejiang Province (LQ19H160024).
FIGURE 2 | Potential targets of fluconazole and Cushing's disease from STITCH. The node size reflects the degree of relationship: the smaller the degree value, the smaller the node size is.
|
v3-fos-license
|
2023-09-14T15:42:33.814Z
|
2023-09-01T00:00:00.000
|
261766478
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-2615/13/18/2868/pdf?version=1694407254",
"pdf_hash": "87c9587fa07dfb10e4313606901e92adb214e9d4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44156",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Economics"
],
"sha1": "909f5118f627a849b51e3d7db52741fcd210d080",
"year": 2023
}
|
pes2o/s2orc
|
Farmers’ Perspectives of the Benefits and Risks in Precision Livestock Farming in the EU Pig and Poultry Sectors
Simple Summary Smart farming is a concept of agricultural innovation that combines technological, social, economic and institutional changes. It employs novel practices of technologies and farm management at various levels (specifically with a focus on the system perspective) and scales of agricultural production, helping the industry meet the challenges stemming from immense food production demands, environmental impact mitigation and reductions in the workforce. Precision Livestock Farming (PLF) systems will help the industry meet consumer expectations for more environmentally and welfare-friendly production. However, the overwhelming majority of these new technologies originate from outside the farm sector. The adoption of new technologies is affected by the development, dissemination and application of new methodologies, technologies and regulations at the farm level, as well as by quantified business models. Consequently, the utilization of PLF in the pig and especially the poultry sectors should be advocated (the latter due to the foreseen increase in meat production). Therefore, greater research efforts than currently exist are required, mainly in the poultry industry. The investigation of farmers' attitudes and concerns about the acceptance of technological solutions in the livestock sector should be integrally incorporated into any technological development. Abstract More efficient livestock production systems are necessary, considering that only 41% of global meat demand will be met by 2050. Moreover, the COVID-19 pandemic crisis has clearly illustrated the necessity of building sustainable and stable agri-food systems. Precision Livestock Farming (PLF) offers the continuous capacity of agriculture to contribute to overall human and animal welfare by providing sufficient goods and services through the application of technical innovations like digitalization. However, adopting new technologies is a challenging issue for farmers, extension services, agri-business and policymakers. We present a review of operational concepts and technological solutions in the pig and poultry sectors, as reflected in 41 and 16 European projects from the last decade, respectively. The European trend of increasing broiler-meat production, which is soon to outpace pork, stresses the need for greater research efforts in the poultry industry. We further present a review of farmers' attitudes and obstacles to the acceptance of technological solutions in the pig and poultry sectors, using examples and lessons learned from recent European projects. Despite the low resonance at the research level, the investigation of farmers' attitudes and concerns regarding the acceptance of technological solutions in the livestock sector should be incorporated into any technological development.
Introduction
The challenge of food production in the 21st century may materialize, as the world's population is expected to increase to 9.4-10.1 billion people by 2050 [1], implying an increase of at least 35%. To meet the demands for increased food production, a significant increase in the number of livestock is expected, especially in the BRIC countries (Brazil, Russia, India, and China) [2]. Moreover, the livestock sector plays a significant economic and social role in the European Union (EU), which accounts for 4.1 million livestock farms and 36% of the total agricultural activity [3]. However, according to a Deloitte© discussion paper [4], if global warming is to be kept within 2 °C above pre-industrial levels, which requires the emissions associated with the production of meat to be decreased to 3.2 Gt by 2050, only 41% of global meat demand can be met by this date. If global warming is not restricted, heat stress will have increasingly adverse effects on meat and milk production, particularly in developing countries exposed to high temperatures [5,6].
Anthropogenically induced environmental changes place constant pressure on animal production due to new and re-emerging pathogens resulting from the natural evolution of microorganisms. This could also potentially reduce the ability of farming communities to develop new crops in already deteriorated ecosystems. Likewise, growing urbanization reduces labor force availability in areas typically involved in food production, increases costs and reduces the sector's productive capacity [7]. Moreover, the recent COVID-19 pandemic crisis has clearly illustrated the emerging necessity of building a sustainable and stable agriculture that can sustain its resilience and secure reliable food supplies both regionally and globally amidst a global critical situation.
Agriculture is increasingly becoming knowledge-intensive, digitalized and influenced by technological developments at the supplier and consumer levels [8]. The overwhelming majority of these new technologies originate from outside the farm sector. The adoption of new technologies is affected by the development, dissemination and application of new methodologies, technologies and regulations at the farm level, as well as by quantified business models, all of which have implications for farm capital and other inputs. Additionally, farmers' collective knowledge derives from the knowledge of the individual farmers or stock people, which in turn reflects their training, acquired advice and information. All of these aspects make the adoption of technologies for sustainable farming systems a challenging and dynamic issue for farmers, extension services, agri-business and policymakers. Considering the wide range of objectives related to new technology adoption in the context of livestock farming, it is necessary for farmers, scientists and companies to work together collaboratively.
Smart farming is a concept in agricultural innovation that combines technological, social, economic and institutional changes [9]. It employs novel practices of farm management at various levels (specifically focused on the system level) and scales of agricultural production, helping the industry to meet the challenges stemming from the growing food production demands and the reduction in the workforce [10]. The approach of Precision Livestock Farming (PLF) for a sustainable farming system refers to the continuous capacity of agriculture to contribute to overall human and animal welfare by using the available information more effectively on farms. In turn, the better utilization of information enables farmers to provide sufficient goods and services in ways that are economically efficient and socially and environmentally responsible [11,12]. PLF uses smart farming technology, which includes the utilization of various types of sensors to collect data, which are usually transferred collectively over communication networks to servers using Information and Communications Technology (ICT). In this Internet of Things (IoT), by the generally accepted definition, large amounts of data from interconnected devices are recorded and analyzed by management information systems, data analysis solutions [13,14] and data analytics [15]. The use of the data provided by smart farming potentially helps boost productivity and minimize waste by allowing the necessary actions to be carried out at the right time and in the right place [16]. An FAO report [17] highlighted the importance of ICT as a tool to help meet future food and feed requirements.
Digital technologies have been developed to continuously track real-time production performance and environmental conditions in various livestock facilities [18]. In this sense, they facilitate an improved response to humans' and animals' needs by (a) maximizing production efficiency, (b) increasing product quality, (c) improving animal health and welfare, (d) reducing human occupational health and safety risk and (e) mitigating emissions from livestock. Policymakers can also benefit from increased information sharing, which allows them to gather a more complete overview of the situation at the national and regional levels. An additional major benefit connected with ICT use lies in the potential to reach all layers of society. Moreover, recent technological developments in areas relevant to IoT facilitate an easier adoption of smart farming and its use by farmers [19]. As farmers and their attending veterinarians, nutritionists and advisors become increasingly aware of the benefits of ICT, it will hopefully motivate them to upload data to central repositories on, for example, disease incidence, the number of live-born piglets in individual sows, feed intake and weather variables. The collection of animal-based data is advancing rapidly, with behavior data alerting farmers to health and productivity problems, as well as to the physiological status of animals, such as when they are in estrus. Although the fundamental value of such data has been known for several decades (e.g., [20]), the miniaturization of recording systems has only recently made widespread use possible. For example, early versions of pedometers for dairy cows uploaded data to a computer attached to the cow's back [21]. Still, it took several decades before pedometers were small enough to be feasible for mainstream use in dairy herds.
Processed data may benefit animal production collectively, as patterns emerge, or individually, as perturbations in an individual animal or group of animals are detected in response to environmental variables. Furthermore, some of the world's largest agricultural producers are promoting the use of IoT in smart farming by creating incentive programs and public policies to fund research and training [22]. Several recent reviews have been published on IoT solutions for smart agriculture, suggesting that this research field is constantly receiving new contributions and improvements [23]. Technologies used for communication and data collection solutions are presented in [24], as well as several cloud-based platforms used for IoT solutions for smart farming. An IoT architecture with three layers (perception, network and application) was employed to analyze the application of sensor and actuator devices and communication technologies within several farming domains, such as agriculture, food consumption and livestock farming [25].
On the other hand, it has been suggested that European farmers lack the knowledge to understand the benefits of ICT-based PLF [26]. The acceptance of new IT technologies, such as big data, computer vision, artificial intelligence, blockchain and fuzzy logic, in the smart agriculture field was evaluated in [27]. A study of consumer perceptions of PLF technologies showed that consumers expect PLF technologies to enhance the health and welfare of farm animals while generating environmental improvements and increasing transparency in livestock farming [28]. Respondents, however, also expressed fears that PLF technologies will lead to further industrialization of livestock farming, that PLF technologies and data are vulnerable to misuse and cyber-crime, and that PLF information may be inadequately communicated to consumers. Public opposition to the industrialization of livestock production is encouraging de-intensification by farmers, either to meet government standards or to capture higher product prices. However, less intensive livestock farming uses more land, a commodity in short supply given a growing world population and competition from carbon farming to offset increased emissions. Recently, a book published by Wageningen Academic Press detailed the on-farm experiences (both positive and negative) of 90 authors from 16 different countries: all users, developers and academics working in the PLF field [29].
To ensure that agriculture supplies secure and nutritious food while minimizing environmental threats, farmers need specific economic incentives, help with incorporating innovation into their enterprise and knowledge exchange to encourage the use of advanced and smart technologies.Coherent agricultural, environmental, trade and R&D policies must be presented by the government.It is also vital to base policy decisions on robust, wellestablished scientific criteria so that the decisions are justified and can be explained to all stakeholders.The EU has been fostering PLF through funding and investment since the FP7 program.The EU CORDIS service (cordis.europa.eu,accessed on 31 May 2023) provides details on 77 forefront projects dealing with animal production systems and animal health, which have received an EU contribution of € 508 million under Horizon 2020 and Horizon Europe programs [3,30].In light of the ongoing significant European investments in animal research and trends in the EU's food chain [31], we recognize the necessity of a state-of-art review describing the latest technological developments (last decade) in the poultry and pig sectors, all in context and based on the actual funded and operative European projects as published in the CORDIS service.Previous studies [12] indicated that farmers initially have concerns about the usefulness of PLF tools and typically do not fully exploit them.However, this can positively change when implementing extension/education processes [32].This literature review aims to identify how digital technologies are implemented in European livestock farms by (i) presenting a review of the state-of-the-art adoption in (1) pig and (2) poultry farms and (ii) reviewing farmers' attitude toward and concerns surrounding the acceptance of PLF technologies, based mainly on past EU projects.
Technologies in Livestock Farming
Generally, sensors such as thermal imagery, microphones, GPS and others are used in PLF to collect real-time data [19].Due to the significant amount of raw data collected, algorithms are often applied to aid analysis.The data can either be directly processed or immediately relayed to the farmer, or it can be transferred to the server of a service provider company where it is analyzed, and the feedback is sent to the farmer.ICT can promote learning, which in turn can facilitate technology adoption among farmers, and it has the potential to revolutionize early warning systems through better quality data and data analysis.However, the information relayed by ICT should be properly targeted and relevant if it is to affect farmers' production decisions.The evidence [33,34] suggests that content quality and relevance are crucial.Building up human capacity, as well as the infrastructure needed to facilitate better connectivity, is also critical.In this way, the use of contact time between humans and livestock can be more productive, but it should aid good stockpersonship rather than being a replacement.
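As an illustration of the data-to-feedback loop described above, the following Python sketch flags sensor readings that deviate strongly from a rolling per-pen baseline and raises an alert for the farmer. It is a minimal, hypothetical example (the class, metric name and thresholds are our own assumptions rather than any cited system), and a production PLF platform would add data persistence, communication protocols and far richer models.

```python
from collections import deque
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class SensorReading:
    pen_id: str
    metric: str          # e.g. "water_flow_l_per_h" (hypothetical metric name)
    value: float


class EarlyWarningMonitor:
    """Keeps a rolling baseline per (pen, metric) and flags large deviations."""

    def __init__(self, window=48, z_threshold=3.0):
        self.window = window
        self.z_threshold = z_threshold
        self.history = {}    # (pen_id, metric) -> deque of recent values

    def ingest(self, reading):
        key = (reading.pen_id, reading.metric)
        buf = self.history.setdefault(key, deque(maxlen=self.window))
        alert = None
        if len(buf) >= 10:
            mu, sigma = mean(buf), stdev(buf)
            if sigma > 0 and abs(reading.value - mu) > self.z_threshold * sigma:
                alert = (f"ALERT pen {reading.pen_id}: {reading.metric}="
                         f"{reading.value:.1f} deviates from baseline {mu:.1f}")
        buf.append(reading.value)
        return alert         # in practice, pushed to the farmer's phone or dashboard


monitor = EarlyWarningMonitor()
for v in [12.0, 11.5, 12.3, 12.1, 11.8, 12.0, 11.9, 12.2, 12.4, 11.7, 3.1]:
    msg = monitor.ingest(SensorReading("pen_7", "water_flow_l_per_h", v))
    if msg:
        print(msg)           # a sudden drop in water flow triggers the alert
```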
The manner by which information is delivered is also a crucial determinant of effectiveness.ICT encompasses many different technologies, from computers and the Internet to radio and television to mobile phones.Their impact varies widely depending on which specific technology is used but also on farmers' level of technological literacy.A growing body of evidence suggests that in many circumstances, mobile phones can increase access to both information and capacity-building opportunities for rural populations in developing countries [35].Farmers can get access to timely and high-quality information on products and inputs, as well as on environmental and market conditions.Short message services (SMS), voice messages, short video trainings, audio messages, social media interventions and virtual extension platforms that can improve peer networks (through online platforms/websites) can effectively enable farmer-to-farmer and farmer-to-experts information sharing.Audio-or voice-based question-and-answer services may overcome the limitations of text-based platforms.SMS messages can be effective for sharing simple price or weather information, but to facilitate and revolutionize learning and make knowledge widely accessible, especially in the context of adapting agriculture to climate change, other methods and modes will be necessary.
Within the framework of the AutoPlayPig project [36], funded by the EU's Horizon 2020 program under the Marie Skłodowska-Curie grant, a comprehensive review was published on information technologies (ITs) developed for welfare monitoring within the pig production chain, evaluating the developmental stage of these ITs and how they can be related to the Welfare Quality® (WQ) assessment protocol [37]. Of the 101 publications included in the systematic literature analysis, 49% used camera technology, 18% used microphones and 15% used animal-attached sensors, including accelerometers and radio frequency identification (RFID) tags. The sensor technology used to measure environmental biomarkers included thermometers, an Environmental Monitoring Kit, an anemometer, an air-speed transmitter and a weather station. Most publications investigated feature variables of behavioral animal biomarkers at the individual or pen level. Most publications investigated ITs for welfare monitoring in growing pigs and lactating sows, whereas almost no publications investigated pigs during transport or sows in the insemination unit. Nearly all (97%) publications investigated welfare issues in real time; however, only 23% properly validated their results. An analogous systematic review was published on validated PLF technologies for pig production in the context of animal welfare [38], within the framework of the ongoing EU project ClearFarm [39], funded by the EU's Horizon 2020. Eighty-three technologies with a potential link to animal-based pig welfare assessment were found, based on 10 different types of sensors (in descending order of frequency of use): camera, load cells (with and without RFID), accelerometer, microphone, thermal camera, photoelectric sensors, flow meter, RFID and non-contact body-temperature sensors. Of these technologies, 39% were used for fattening pigs, 33% for sows and 28% for piglets and weaned piglets. Monitored indicators included activity- and posture-related behavior, feeding and drinking behavior, physical condition, health-related traits and other behaviors.
In a review of PLF in the poultry sector [40], as part of the completed EU-funded ERA-NET project ANIHWA [41], a similar segmentation was demonstrated.Fifty-two percent of the 264 reviewed publications described sensor technology, 42% described the use of cameras and only 14% described the use of microphones.Animal health and welfare constituted the most popular field of study (64%), followed by production (51%) and, by a large margin from third place, sustainability (only 8%).Most measurements used to evaluate animal health and welfare were behavior-based, with 44% of publications using locomotory behavior, followed by bird sounds (21%).Out of the 264 reviewed publications, a mere 4% described commercially available systems.
All of the data generated by the aforementioned sources need to be exploited to validate and further develop useful algorithms; however, this requires the availability of advanced infrastructure [42].As such, big data generated from technological sources require advanced analytics for effective exploitation.Advanced infrastructure is also needed for the timely and efficient execution of these big-data-enabled algorithms prior to delivery to the farmer.The recently completed EU project CYBELE [43], funded under the Horizon 2020 Programme, aimed to introduce to all stakeholders along the agri-food value chain an ambitious and holistic large-scale High-Performance Computing (HPC) platform, offering services in data discovery, processing, combination and visualization and solving computationally-intensive challenges requiring very high computing power and capable of actually generating value and extracting insights from the data [42].
In the future, ICT may bring even greater improvements in animal welfare and productivity. A machine learning framework has been developed to predict the next month's daily milk yield, milk composition and milking frequency of cows in a robotic dairy farm [44]. The self-selection of rewards can contribute to animals' freedom of choice using digital technology such as touchscreen monitors, which have already been pioneered for animals in zoos [45]. The selection of foods from a variety of possible plants on offer has evolved, and allowing animals to choose resources via smart devices may improve their welfare. In rodents, enrichment leads to greater exploratory behavior and better coping with stressful conditions [46]. In pigeons, free choice is preferred to forced selection [47,48], and comparable benefits may be demonstrable in poultry and other farm animals. Primates have most often been shown to benefit from mastery over their environment [49], but reliable testing for livestock is yet to be undertaken.
Scientific and Commercial Review of Operational Concepts and Technological Solutions in the Pig Sector
Various areas of research are reflected in the European studies. Among them, several areas are prominent.
Weighing optimization-The completed European project ALL-SMART-PIGS [50], funded by Horizon FP7, was one of the first EU projects to showcase commercialization as a main focus.The Weight-Detect TM application (PLF Agritech, Toowoomba, Australia) is an innovative video image analysis system that determines the group average weight of a pen of animals by a video observation system.It enables farmers to determine growth and any weight-based indexes without physically weighing the animals [51].Pig weighing optimization was selected for the evaluation and technical validation of a platform in the aforementioned CYBELE project [43].The tool is a convolutional neural network that takes images and captures videos above the pens of fattening pigs throughout their weight gain and encodes these images into a latent vector representation.Together with additional relevant information, it estimates the mean ± SD live weight of the pigs in the pen.Body weight recording was the subject of the ClearFarm project [39].The automated estimation of body weight was conducted by a depth camera (iDOL65, dol-sensors a/s, Aarhus, Denmark) [52] placed above the individual feeding station or three-partitioned feeder, which worked in combination with an RFID system installed in the feeding stations.
The performance of the depth camera and its underlying algorithm was satisfactory at both installations; however, a lack of frequent maintenance, changes in pen uniformity and dietary shifts may compromise image sampling and body weight estimation. Similar results were reported by [53].
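As a rough illustration of the camera-based weight estimation described above, the sketch below defines a toy convolutional network that encodes a depth image into a latent vector, concatenates auxiliary pen information and regresses a mean live weight. It is written in PyTorch purely for illustration; the architecture, layer sizes and input format are assumptions and do not reproduce the CYBELE or ClearFarm implementations.

```python
import torch
import torch.nn as nn


class DepthWeightRegressor(nn.Module):
    """Toy model: depth image of a pen -> latent vector -> predicted mean weight.

    aux carries extra context (e.g. days on feed, number of pigs); both the
    auxiliary variables and the layer sizes are placeholders."""

    def __init__(self, aux_dim=2, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        self.head = nn.Linear(latent_dim + aux_dim, 1)

    def forward(self, depth_img, aux):
        z = self.encoder(depth_img)                # (batch, latent_dim)
        return self.head(torch.cat([z, aux], 1))   # predicted mean weight, kg


model = DepthWeightRegressor()
pred = model(torch.randn(4, 1, 128, 128), torch.randn(4, 2))
print(pred.shape)  # torch.Size([4, 1])
```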
Play behavior-In general, the scientific literature supports the use of play behavior as an indicator of good animal welfare and positively valenced affective states [38,[54][55][56]. The AutoPlayPig project [36] aimed at taking the first steps in developing a system for automatic detection of play behavior in young pigs as an indicator for welfare assessment. This was accomplished by developing an algorithm to extract heart rate (in beats per minute) from raw video data of an anesthetized and resting pig wearing an electrocardiography (ECG) monitoring system, thus combining ethology and computer science into one field of Computational Ethology (CE) [57]. Play behavior frequency over the process of weaning piglets was investigated in the ClearFarm project [39] by analyzing the effects of two weaning methods (conventional weaning, with two litters mixed in a weaner pen of different size and design, vs. the litter staying in the farrowing pen after removing the sow) and two genetic hybrids (DanBred Yorkshire × Landrace vs. Topigs Norsvin TN70 Yorkshire × Landrace) [58]. The results showed that weaning stress in pigs may be reduced both by using a genetic hybrid pig breed with higher birth and weaning weights and by keeping litters intact in a familiar environment after weaning. A first attempt at the automatic detection of locomotor play behavior in young pigs from video, by classifying locomotor play against other solitary behaviors including standing, walking and running, is presented in [59]. Two methods were tested: one utilizing a Gaussian Mixture Model (GMM) for quantification of movement combined with standard machine learning classifiers, and one utilizing a deep learning classifier (CNN-LSTM) on the raw segmented video. The deep learning classifier obtained higher Recall, Precision and Specificity values.
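The CNN-LSTM approach mentioned above can be sketched as per-frame CNN features passed to an LSTM whose final hidden state is classified into behavior classes. The PyTorch fragment below is a simplified, assumption-based illustration rather than the published model; the class labels, layer sizes and clip dimensions are placeholders.

```python
import torch
import torch.nn as nn


class CNNLSTMBehaviorClassifier(nn.Module):
    """Toy CNN-LSTM for clip-level behavior classification
    (e.g. locomotor play / standing / walking / running)."""

    def __init__(self, n_classes=4, feat_dim=128, hidden=64):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.frame_cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.classifier(h_n[-1])            # per-clip class logits


logits = CNNLSTMBehaviorClassifier()(torch.randn(2, 16, 3, 96, 96))
print(logits.shape)  # torch.Size([2, 4])
```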
Tail biting-In intensive piggeries, tail biting is common and is considered an indicator of negative welfare [59]. This issue is addressed in the ongoing European project Code Re-farm [60,61] using Duroc × (Landrace × Yorkshire) piglets in free-farrowing pens. The proposed method detected tail-biting behavior from video sequences of entire pig pens, with the CNN-LSTM model reported to be superior to a CNN-CNN model. Moreover, the study found that applying principal component analysis to the extracted spatial feature vectors can increase performance compared to using all extracted features.
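The reported benefit of applying principal component analysis to the extracted spatial feature vectors can be illustrated with a short scikit-learn pipeline; the feature dimension, classifier choice and synthetic data below are hypothetical and only show the general pattern of PCA followed by classification with cross-validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 512))    # one 512-d spatial feature vector per video clip
y = rng.integers(0, 2, size=400)   # 1 = tail-biting event, 0 = other behavior

# Reduce the feature vectors to 32 principal components before classifying.
pipeline = make_pipeline(PCA(n_components=32),
                         RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```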
Virus detection-A novel and affordable field diagnostic device, based on advanced, proven bio-sensing technologies to tackle six important swine diseases, has recently been developed within the Horizon 2020 SWINOSTICS project [62]. The diagnostic device allows threat assessment at the farm level, with the analytical quality of commercial laboratories. The SWINOSTICS mobile device can simultaneously analyze four samples to detect six of the most important swine viral pathogens: Porcine Parvovirus (PPV), Porcine Circovirus 2 (PCV-2), Classical Swine Fever Virus (CSFV), Porcine Reproductive and Respiratory Syndrome Virus (PRRSV), Swine Influenza Virus (SIV) and African Swine Fever Virus (ASFV) [63][64][65].
According to [63], the SWINOSTICS device can be used at the farm level to assess the health status of newly purchased animals and to identify PPV- and PCV-2-infected animals before the onset of clinical disease, thus supporting evidence-based disease control strategies.
Additional research areas-Two smart farming applications ready for commercialization on European pig farms were evaluated within the ALL-SMART-PIGS project [50]: a feed intake measurement device (Feed-Detect™, PLF Agritech, Toowoomba, Australia) and an environmental monitoring device (Enviro-Detect™, PLF Agritech, Toowoomba, Australia) [18,66,67]. In addition, a sound monitoring device (originally developed at the Catholic University of Leuven) was also evaluated to facilitate early detection of respiratory diseases on pig farms [51]. The sensor outputs of these technologies have been combined in the FarmManager management system (schauer-agrotronic.com, accessed on 30 June 2023). Feed chain optimization was realized by using traceability in an online digital logbook, including an SMS-based warning system for the farmer [50]. Technical and technological issues and the solutions implemented for them, the technological impact of installed PLF and the business impact of its usage were all considered. Sustainable pig production was another demonstrator selected in the CYBELE platform [43]. It utilized data from different barn sensors to monitor individual pigs' feeding and drinking behavior on a continuous basis. Based on multivariate algorithms, problems at the individual and group levels could be detected. The tool exhibited an improvement in average health prediction precision and sensitivity in warning systems for pigs compared to a previous model using the same dataset. In Table 1, we present some of the technological advancements and scientific research involved with PLF in the pig sector. As evident from Table 1, the volume of research in the pig industry comprises 42 projects and 79 peer-reviewed articles. Representative Table 1 entries include the following:
FP7 (Seventh Framework Programme), ALL-SMART-PIGS: Weight-Detect, video image analysis, group average weight [45]; feed sensor, weight of feed [45]; cough monitor, microphones, SoundTalks online application [45]; camera system, activity and distribution of pigs, occupation-density index, activity index [45]; air quality sensor, airborne pollutants [45]; farm traceability system, production data, health monitoring, transportation [45].
FP7 (Seventh Framework Programme), EU-PLF: pig cough identification, health monitoring [73]; camera recording, animal shapes, animals' position and movement [74].
Flemish funding: camera-based data on feeding behavior [136], postures and drinking behavior [137] and tail-biting behavior [138]; deep learning models, water and feed consumption, weight, weaned piglets; connected feeders, connected drinkers, automatic weighing stations, RFID ear tags, early detection of diarrhea [139]; behaviors of a group, video data [140].
Scientific and Commercial Review of Operational Concepts and Technological Solutions Used in the Poultry Sector
PLF development in the EU has most commonly focused on broiler farming, followed by laying hens. Modern broiler strains in intensive production systems reach their target weight in just 5-6 weeks or less [141]. This short life span means that it is difficult to maintain a balance between production objectives and bird welfare. A review of the trends in PLF in the broiler production industry, supported by the Irish Innovation Partnership Pathway [142], elaborated that while the use of electro-chemical sensors in precision farming is quite common, the use of state sensors measuring physical properties such as temperature, acceleration or location is still at a preliminary stage of deployment. As the cost of wearable sensors decreases, the option of fitting a large number of birds with these physical state sensors seems increasingly feasible. However, a recent study in Flanders, Belgium [143], showed that the broiler chickens' behavior was substantially disrupted after the wearable sensors were fitted. Among remote sensing technologies, Near Infrared (NIR) sensors may provide advanced data such as the thermal profiles and physical properties of the chickens, as well as measurements of CO2. Non-point sensor datasets, mainly video and still-image datasets for continuous monitoring, have been implemented for monitoring bird performance and estimating average bird weight [142].
The opportunity exists in the poultry sector to monitor ammonia concentrations using multiple sensors feeding data into a central console.However, developing an adequate ammonia sensor is still a challenge that has not been resolved in a satisfactory way.One of the main outputs of the European project EU-PLF [144], funded by Horizon FP7, was an embryonic blueprint for commercializing PLF type technologies.Within EU-PLF, broiler activity was defined as a key indicator for welfare and health.The remote camera detection of broiler behavior enabled the development of an early warning system to alert managers to unexpected broiler behavior with 95% true positive events [145].Relationships between leg problems, such as Foot Pad Dermatitis, and environmental variables (i.e., temperature humidity index, THI) were detected, which aided in developing an automated prediction system [146,147].An analysis of behavioral responses, avoidance distances and gait scores, to human (farmer) presence yielded an indicator for broilers' fear of humans.Finally, indoor particulate matter concentrations (dust) detected by sensors showed a strong correlation between emissions and bird activity [148].
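For reference, the temperature humidity index mentioned above is computed from air temperature and relative humidity; several THI formulations exist in the literature, and the short sketch below uses one widely cited variant purely as an assumed example.

```python
def temperature_humidity_index(temp_c, rel_humidity_pct):
    """One common THI formulation (variants exist); temperature in deg C,
    relative humidity in percent."""
    return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4


# A warm, humid afternoon vs. a cool night in the broiler house
print(round(temperature_humidity_index(30.0, 70.0), 1))  # 81.3
print(round(temperature_humidity_index(18.0, 60.0), 1))  # 63.0
```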
Alerting farmers to welfare problems in real-time, especially during winter nights when ventilation is low, allows for fast and targeted interventions, which will immediately benefit the flock compared to traditional welfare assessments that have usually occurred on the next morning [149].Ammonia concentrations are often higher during the day-time in livestock buildings due to increased evaporation via higher temperatures, greater movements of birds and increased airflow [150].
Research and development in poultry disease identification and control should be forward-looking, incorporate new technologies and pay special attention to zoonotic diseases. Such a perspective was demonstrated by [151], in the framework of two completed Horizon 2020 projects-SMARTDIAGNOS [152] and VIVALDI [153]-with the study of two Campylobacter species, C. jejuni and C. coli, in poultry flocks. These two species account for most human campylobacteriosis cases [154], and poultry and poultry products are considered to be the main sources of disease transmission [155]. To tackle this, a simple and rapid Loop-Mediated Isothermal Amplification (LAMP) assay was used to detect C. jejuni and C. coli in chicken feces.
Broiler production systems must be optimized to enhance their energy/resource efficiency, minimize carbon footprint and create sustainable supply chains by developing the necessary infrastructure across all stages of production, including breeding, hatching, rearing, processing and distribution to consumers.Collaborative research and advanced technologies can help tie together the different components of the system and their relationships.The consequences of not supporting farmers in implementing new technologies may result in the loss of social licence and even threaten the poultry industry's premier position in the global marketplace and the ability of the industry to provide safe and nutritious poultry products to consumers worldwide [156].The lack of collaboration between the private and public sectors and the lack of innovative ways to articulate concerns from producers and consumers to policymakers remain barriers to technological adoption [13].
In Table 2, we present some of the technological advancements and scientific research involved with PLF in the poultry sector. As evident from Table 2, the volume of research in the poultry industry is less than half that of the pig industry, featuring 16 projects and 27 peer-reviewed articles.
Table 2. Examples of funding programs and agencies using PLF technologies in the poultry sector developed within the last decade in the framework of European funded projects.Studies without clear acknowledgment of the funding source were not included.
Scientific Review of Farmer's Attitudes and Obstacles in Acceptance of Technological Solutions in the Livestock Sector
Qualitative and quantitative assessments of the attitudes and barriers to PLF technology adoption have shown the manifold factors that influence farmers' technology decisions and highlighted the economic, socio-demographic, ethical, legal, technological and institutional aspects that need to be considered for widespread technology acceptance [178][179][180].They also showed that "innovation uncertainty" has led to a rather slow uptake of precision technologies by farmers thus far [181].Kling-Eveillard et al. [182] mentioned that farmers using PLF depict a stockpersonship that has not fundamentally changed but which involves new components such as tasks, skills and schedules.Moreover, we know that the manner of PLF usage varies between farmers, i.e., the degree to which the farmer delegates tasks to the equipment [183].Noting this, specific attention should be given to the study area, as the attitudes and barriers to technology adoption may vary depending on the socio-economic and cultural situation of the region.Studies in countries with strong educational institutions and high standards of living may experience different barriers than farmers in low-income countries that lack more basic needs for technology adoption (such as internet access, education, monetary funds, social capital, etc.).This review section focuses primarily on European and American study sites and gives an overview of prominent farmers' perspectives relevant for technology adoption.
The most reported aspect in almost any region is the fear of high investment costs that are needed to enable PLF [180,[184][185][186][187][188][189].This barrier is particularly prominent for smaller production sites, as the expected investment returns are more limited compared to big farms [185,188,190].
A lack of trust in the technological capabilities and robustness of the technology was another frequently reported factor affecting technology adoption [179,180,189,191]. As trust has many different notions, there are several associated aspects that directly or indirectly influence confidence in specific technologies. Farmers who are in close proximity to other farms and are part of a wider network tend to adopt novel technologies more quickly and more optimistically [180]. Trust in technology is higher if colleagues have used the technology effectively before [192]. This networking effect was also shown by [184], who found that 68% of farmers make adoption decisions based on information obtained from colleagues. Trust may also be a relevant factor in the sense of security and privacy. As modern PLF technologies are embedded in an IoT environment and are often accompanied by decentralized data storage (e.g., cloud or edge devices), concerns about data safety may arise. Privacy and security concerns were among the most prominent barriers inhibiting digital technology adoption by farmers in Wisconsin [179]. However, a study by [180] found that participants did not express any concerns about data privacy issues but were optimistic about the positive influence of collective data processing.
Another important factor influencing the purchasing decision in PLF technologies is the perceived usability of PLF products. This includes closely related factors such as the complexity of technologies, the education necessary to install, interpret and use the technology, as well as the external dependencies on service providers and vendors associated with it [180,185,188]. Scholars [14] and farmers [189] have both highlighted that the necessary technology integration and interoperability among PLF-relevant systems further hinder usability and therefore reinforce existing barriers. Some other factors, such as technological relevance or lack of awareness, have been identified by individual studies [179,180,189]. These are believed to be of minor importance, and in practice, most attitudes are closely linked to the already mentioned farmer characteristics and associated barriers. This also highlights the potential for positive side effects if the individual fears and needs of farmers are addressed in terms of technology adoption.
Interviews and surveys constitute the main methodology through which the voices of farmers are heard, but it is always important to consider their accuracy, especially in relation to sensitive animal welfare concerns. In the EU-PLF project [144], farmers voiced their hesitation to purchase PLF technologies unless the derived benefits are clear and unequivocal, and they also had concerns about maintenance. The importance of on-site training, professional on-demand and continuous support (especially concerning animal welfare assessment [193]) and the establishment of demonstration farms was stressed. Almost all farmers were afraid of losing direct contact with the animal-their "care relationship" (particularly pig farmers rather than poultry farmers). Anticipated environmental and welfare regulations deterred farmers from investing due to uncertainty about whether future market conditions will allow their investment to be repaid. Further negative associations with PLF were its perceived complex operation and a partial ability (at best) of the farmer to understand the information in a simple and coherent way and act upon it. As one farmer stated: "The data doesn't have to be 100% accurate, but 100% reliable" [193].
Farmers from the pig and poultry sectors in the UK and Spain interviewed in the EU project Feed-a-Gene [194] stressed the importance of providing accurate and complete information to farmers and the need for a detailed evaluation of novel technologies in a commercial setting before more widespread adoption.Interviewees from the pig sector favored the concept of precision feeding and the resultant improvement in feed conversion efficiency and improved animal welfare; however, farmers from the poultry sector (in Spain) were largely unenthusiastic.In the pig sector, the benefits seemed clearer for gestating sows than for breeding pigs.
The expected high costs for investment have led to scepticism about whether gains in feed efficiency necessary to justify the investment would be realized.This would particularly apply if existing buildings, infrastructure and feeding systems could not be simply adapted, as most farmers believed to be the case.Concerns were raised about the necessary skill level of operating such precision feeding systems, as it would require skilled labor, which is expensive and could increase labor costs.Additionally, equipment suppliers must be able to provide a fast and reliable on-farm repair service, which requires enough skilled staff.
One of the targets of the SusPigSys European initiative [195] (part of the ERA-Net Cofund activity SUSAN) is to promote farmers' wellbeing.Farmers of pig production systems in seven EU member states (Austria, Germany, Finland, Italy, Poland, the Netherlands and the United Kingdom) participated in national workshops with stakeholders, where the important social implications for farmers themselves were discussed.The participants from Germany, Finland and the UK stressed the importance of consumers' power along the supply chain and societal acceptance of the public image of pig production and the farming profession, highlighting the disconnect between the industry and the consumers.Most EU pig farmers are concerned with animal welfare and environmental impacts, as well as the economic survival of their businesses.Farmers also would like people outside the industry to better understand the demanding work of pig farmers in producing food sustainably.This latter point is echoed in the completed FP7 project PROHEALTH [196], where [197] examined the attitudes of the public in five EU countries (Finland, Germany, Poland, Spain and the UK) toward intensive animal production systems and production diseases in the broilers, layers and pig sectors.Most alarming is that a significant portion of the public is not familiar with modern animal production; nonetheless, they perceive intensive production systems negatively, which subsequently influences their consumer behavior.
To counter this and other negative associations with PLF technologies, the "Livestock-Sense" project [198] was implemented in seven different EU countries to encourage PLF technology adoption and increase the general understanding of these technologies.An online quantitative survey was undertaken, and follow-up interviews, as well as focus group discussions (FGDs), were organized to obtain qualitative results.The quantitative questionnaire results demonstrated that the existing level of automatization on the farms, the average age of the livestock buildings (and associated production technologies) and the availability of internet connectivity were clear indicators of livestock producers' "readiness levels" to adopt PLF technologies.In the second half of the project, complex software development was undertaken to create an integrated cloud-based ICT tool that captured the key outcomes of this project, including the (1) user classifier, (2) the benefit calculator and (3) advice generator software applications.
The possible advantages mentioned by interviewees included the possibility of working with large animal groups (however, this is only possible for big farms), gains in space by removing passageways and walls, retaining young people in rural areas, as they might find careers in pig husbandry more attractive, and a reduction in the environmental footprint of pig farming.A Swedish farmer in the egg production sector who was interviewed as part of the completed EU project SURE-Farm [199] advocated that new machines and robots have helped to eliminate heavy physical work, which previously limited the opportunities for older farmers and farm workers to continue working [200].The Horizon Europe Thematic network BroilerNet [201], with 25 partners in 13 countries, is aiming to create 12 innovation networks at national level and three EU level networks of broiler farmers, advisors, supply chain integrator companies, farmers' organisations, researchers, and veterinarians.The project will focus on environmental sustainability, animal welfare and animal health management.By identifying the most urgent needs of broiler farmers, the network will collect and evaluate good practices that are able to meet these needs.
Nevertheless, it is important for future research in both sectors to focus not only on the technological improvements of tools and sensors but also on the aspects of environmental, economic and social sustainability of livestock production that impact both farmers and the community and consumers [202].
6. Conclusions
PLF systems can help to increase production efficiency and meet consumer expectations of more environmentally and welfare-friendly production at a time when extreme pressure on land availability for food production deters farmers from reducing the intensity of their operations. Several EU-funded projects have helped to identify and develop PLF technologies that could benefit the livestock industries, particularly in the pig and poultry sectors. The large volume of research in the pig industry is welcome, but attention must be paid to the global and European trend of increasing broiler-meat production, which is growing faster than any other meat type, including pork. Greater research efforts are therefore required in the poultry industry, with a particular recommendation to develop enhanced research infrastructures for laying hens, turkeys, quails and other sectors that are underrepresented in the plethora of active research projects in Europe.
The EU continues to lead in this field, although it is expected that other regions, such as PR China and the USA, could be important in scaling up the production of PLF technology once its value is proven.
Considerable obstacles to widespread PLF adoption have been identified and must be addressed. High investment costs, a lack of trust in the technology, uncertainty in the future market for their products and the usability of the technologies have all been identified as impediments to adoption. Fifty-eight percent of European farm managers are 55 years of age or older, and 33% of them are over 65. Most of them work on small farms, mainly family farms, which constituted a staggering 94.8% of EU farms in 2020. These data illustrate the magnitude of the challenge in embedding and implementing PLF in contemporary livestock agriculture in Europe. It can be concluded that integrating and involving stakeholders from the social sciences is enormously important in order to mediate the farmer-technology interface. Given the growing frequency of crises, such as the recent COVID-19 pandemic, it is imperative to supply farmers with adequate platforms for installation, service-oriented support and a significant financial support network that will allow them to deal more realistically and competently with prevailing barriers in terms of investment and innovation uncertainty. In particular, the investigation of farmers' attitudes and obstacles to the acceptance of technological solutions in the livestock sector should be integrally coupled with any technological development.
We perceive the increased use and uptake of PLF in the future as imperative. The future management of issues concerning the unknown costs and benefits of PLF systems should be the responsibility of stakeholders. This need is not only urgent for farmers but also important for financial institutions and governments offering support and subsidies. To eliminate distrust in technology and provide solutions that meet farmers' needs and are well suited to farming conditions, developers must act. Novel devices, sensors and technologies that have clear and quantifiable advantages for farmers in the pig and broiler sectors should be demonstrated, targeting environmental monitoring and the AI-driven analysis of livestock behavior in order to maximize economic production and the environmental and welfare performance of the animals. To effectively implement PLF adoption, it is crucial that farmers are clearly informed of the minimum farm infrastructural requirements. Additionally, to address any concerns related to using advanced technologies, enhanced collaboration between farmers, scientists and engineers is required, coupled with targeted education, training and information sharing.
Table 1.
Examples of funding programmes and agencies using PLF technologies in the pig sector developed within the last decade in the framework of European-funded projects.Studies without clear acknowledgment of the funding source were not included.
Trace elements can influence the physical properties of tooth enamel
In previous studies, we showed that the size of apatite nanocrystals in tooth enamel can influence its physical properties. This important discovery raised a new question; which factors are regulating the size of these nanocrystals? Trace elements can affect crystallographic properties of synthetic apatite, therefore this study was designed to investigate how trace elements influence enamel’s crystallographic properties and ultimately its physical properties. The concentration of trace elements in tooth enamel was determined for 38 extracted human teeth using inductively coupled plasma-optical emission spectroscopy (ICP-OES). The following trace elements were detected: Al, K, Mg, S, Na, Zn, Si, B, Co, Cr, Cu, Fe, Mn, Mo, Ni, Pb, Sb, Se and Ti. Simple and stepwise multiple regression was used to identify the correlations between trace elements concentration in enamel and its crystallographic structure, hardness, resistance to crack propagation, shade lightness and carbonate content. The presence of some trace elements in enamel was correlated with the size (Pb, Ti, Mn) and lattice parameters (Se, Cr, Ni) of apatite nanocrystals. Some trace elements such as Ti was significantly correlated with tooth crystallographic structure and consequently with hardness and shade lightness. We conclude that the presence of trace elements in enamel could influence its physical properties. Electronic supplementary material The online version of this article (doi:10.1186/2193-1801-2-499) contains supplementary material, which is available to authorized users.
Background
Tooth enamel is composed of both an organic and an inorganic phase. The organic phase is composed of proteins such as amelogenin, ameloblastin and tuftelin, as well as minor concentrations of proteoglycans and lipoids (Belcourt and Gillmeth 1979; Eggert et al. 1973; Glimcher et al. 1964). The inorganic phase of enamel is composed of well-packed nanocrystals made of calcium phosphate apatite (HA) with small amounts of incorporated trace elements (Sprawson and Bury 1928). The organization and size of apatite crystals in tooth enamel affect its hardness (Jiang et al. 2005) and optical properties (Eimar et al. 2011, 2012). These findings raise the following question: what determines the size of apatite crystals in tooth enamel? One possibility is that the tooth protein content could affect its crystal domain size; however, we had found that the concentration of protein in enamel is not associated with the crystallographic structure of mature teeth (Eimar et al. 2012). Therefore, this study was designed to investigate other factors, namely the presence of trace elements that can influence the size of apatite crystals in enamel.
Unlike their effect on synthetic HA, the role of trace elements in the crystallographic properties of enamel has not been described in the literature. Despite the very low concentration of trace elements in our body, they play a significant role in human health (Carvalho et al. 1998). Trace elements can enter our body through the digestion of food or by exposure to the environment (Lane and Peach 1997), and they can be incorporated into the structure of enamel HA. Trace elements in tooth enamel have been investigated for their role in caries (Curzon and Crocker 1978), and it was found that the presence of F, Al, Fe, Se and Sr is associated with a low risk of tooth caries, while Mn, Cu and Cd have been associated with a high risk (Curzon and Crocker 1978). However, despite their apparent importance to tooth enamel homeostasis, the effect of trace elements on the crystallography and physical properties of enamel remains unknown.
The aim of this study is to find the correlation between the concentration of trace elements detected in tooth enamel and its crystallographic and physical-chemical properties. We hypothesized that the incorporation of trace elements in the structure of enamel can affect its crystallography and consequently alter the physical properties of enamel.
Physical chemical properties of tooth enamel
Among all tooth enamel samples, cell lattice parameters varied along a-axis between 9.40 and 9.47 Å (mean = 9.43 ± 0.004 Å) and along c-axis between 6.84 and 6.92 Å (mean = 6.86 ± 0.004 Å). Unlike the cell lattice parameter along c-axis, the cell lattice parameter along a-axis followed a normal distribution among all samples (Figure 1a, 1b). Crystal domain size along a-axis ranged between 10.31 and 18.08 nm (mean = 13.49 ± 0.349 nm) and along c-axis varied between 18.09 and 25.85 nm (mean = 21.7 ± 0.33 nm) following a normal distribution (Figure 1c, 1d).
The tooth enamel hardness, crack length and shade lightness followed a normal distribution among the samples analyzed (Figure 1e, 1f and 1g). The hardness values varied between 2.91 and 4.36 GPa (mean = 3.64 ± 0.08 GPa), the crack length varied between 14.60 and 30.02 μm (mean = 23.31 ± 1.49 μm) and the tooth shade lightness values ranged between 59.0 and 96.1% (mean = 79.0 ± 1.8). The relative content of apatite inorganic carbonate type A and type B among the enamel samples followed a normal distribution, while the relative organic content did not (Figure 1h,1i,1j).
Trace elements in tooth enamel
A total of 19 trace elements were detected in the tooth enamel samples by ICP (see Additional file 1: Table S1). The concentration of the different trace elements varied considerably among tooth enamel samples (Figure 2a). Cr, Mo, Co and Sb had the lowest concentrations in tooth enamel compared with other trace elements, while Zn, Na and S had the highest. In order to assess how concentrated the trace elements were in enamel compared with the rest of the body, they were normalized to the average elemental composition of the human body (Frieden 1972; Glover 2003; Zumdahl and Zumdahl 2000). Among the 19 trace elements detected, some had a similar concentration in tooth enamel to that in the rest of the body (K and Fe), while others were concentrated by 1 (S, Sb, Pb, Si, Na and Mg), 2 (Mo, Co, Zn, Mn, Cu, Ti and Cr), or 3 orders of magnitude (Se, B, Al and Ni). The largest difference in trace element concentration between tooth enamel and the rest of the body belongs to Ni, which is close to 3500 times more abundant in tooth enamel than in the rest of the body.
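The normalization described here amounts to dividing each enamel concentration by the corresponding whole-body average and expressing the ratio in orders of magnitude, as in the short sketch below; the numbers are purely illustrative placeholders chosen to mirror the reported categories, not the measured values of this study.

```python
import math

# Illustrative concentrations (arbitrary units); not the measured study values.
enamel = {"K": 500.0, "Zn": 250.0, "Ni": 7.0}
body_average = {"K": 500.0, "Zn": 2.5, "Ni": 0.002}

for element in enamel:
    factor = enamel[element] / body_average[element]
    print(f"{element}: {factor:.0f}x the body average "
          f"({math.log10(factor):.1f} orders of magnitude)")
```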
The correlation analyses of trace elements
Simple linear regression was used to find the correlation between each possible pair combination of trace elements in tooth enamel (see Additional file 1: Table S2). It was found that the following correlations between trace elements were significant: Al-to-B, Al-to-Sb, Al-to-Si, B-to-Sb, B-to-Si, B-to-Cu, Co-to-Cr, Cr-to-Cu, Cr-to-Ni, Cr-to-Si, Cu-to-Sb, Cu-to-Se, Cu-to-Si, Fe-to-K, Fe-to-Na, Fe-to-Ni, Fe-to-S, Fe-to-Ti, Fe-to-Zn, K-to-Mg, K-to-Mo, K-to-Na, K-to-Ni, K-to-Pb, K-to-S, K-to-Zn, Mo-to-Ni, Na-to-Ni, Na-to-Pb, Na-to-S, Na-to-Zn, Ni-to-S, Pb-to-S, S-to-Zn, Sb-to-Si and Ti-to-Zn (see Additional file 1: Table S2). Stepwise multiple regression was performed to find the correlation between pair combinations of trace elements while adjusting for the presence of other trace elements in tooth enamel. The significant correlations among trace element concentrations using stepwise multiple regression were: Al-to-B, Al-to-Si, B-to-Si, Cu-to-Si, K-to-Mg, K-to-Na, B-to-Cu, Mg-to-Na, Na-to-S, S-to-K and S-to-Mg. The findings showed three independent groups of elements that were directly or indirectly correlated with each other (Figure 3). In the first group, there was a positive correlation among Al-to-B, B-to-Si, B-to-Sb, Si-to-Cu and Cu-to-Se, and a negative correlation between Si and Se. In the second group, Co-to-Cr, Cr-to-Ni, Ni-to-Fe, Fe-to-Ti and Ni-to-Mo were correlated positively, while Ni-to-Ti were correlated negatively. In the third group, all of the correlations (Pb-to-Mg, Mg-to-Na, Na-to-K, K-to-Zn and Na-to-S) were positive. Mn was the only element that was not correlated with any other element.
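The pairwise screening described above can be reproduced in outline with scipy: every element pair is tested with a simple linear regression and flagged when the p-value falls below 0.05. The data below are synthetic and the element names arbitrary; only the analysis pattern reflects the study design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 38  # number of enamel samples, as in the study design

# Hypothetical concentrations (arbitrary units); only the analysis pattern is real.
na = rng.normal(100, 10, n)
elements = {"Na": na, "K": 0.5 * na + rng.normal(0, 3, n), "Mg": rng.normal(50, 5, n)}

names = list(elements)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = names[i], names[j]
        res = stats.linregress(elements[a], elements[b])
        flag = "significant" if res.pvalue < 0.05 else "n.s."
        print(f"{a}-to-{b}: r = {res.rvalue:+.2f}, p = {res.pvalue:.3g} ({flag})")
```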
Simple linear regression between the concentration of elements and tooth properties
The correlations between trace element concentrations and the physical-chemical parameters (crystallographic parameters, hardness, crack length, shade lightness and carbonate content) of the enamel samples are presented in Additional file 1: Table S3, and the significant ones are summarized in Table 2. There was a strong positive correlation between the concentration of Ti in enamel and tooth hardness, lightness and apatite crystal domain size along the c-axis. The correlations between Fe concentration in enamel and both tooth lightness and carbonate type A content in tooth enamel were significant. Also, the correlations between Cu concentration in enamel and both crystal domain size and cell lattice parameter along the a-axis were significant. There was a negative association between Al concentration and tooth enamel crack length. A strong inverse relationship was observed between the concentration of Pb and crystal domain size along the c-axis. The correlations between the concentration of Se and the cell lattice parameter along both the a-axis and c-axis were significant. A strong negative correlation was seen between the concentration of Cr and the cell lattice parameter along the c-axis. Also, there were significant correlations between the concentration of Ni and both the cell lattice parameter along the c-axis and carbonate type B. The association between the concentration of S and carbonate type B was significant. The remaining chemical and crystallographic parameters were not correlated with each other.
Stepwise multiple linear regression between elements and tooth properties
The results of the stepwise multiple regression between the concentration of trace elements in enamel and the mechanical properties, optical properties, crystallographic structure and carbonate content of the teeth are presented in Table 2. Most of the results from the stepwise multiple regression analysis confirmed the results of the simple linear regression. It was found that the concentration of Ti was associated with tooth enamel hardness, lightness and crystal domain size along the c-axis. The concentrations of Pb and Ti in enamel had a negative correlation with tooth enamel crystal domain size, while Mn had a positive one. The correlation between the concentration of Al and crack length in tooth enamel was negative. The concentration of Se was associated positively with the cell lattice parameters along both the a-axis and c-axis. The concentrations of Cr and Ni had an inverse correlation with the cell lattice parameter along the c-axis. The concentrations of Fe and Co were negatively correlated with carbonate type A and type B, respectively. The concentration of Ni and carbonate type B were associated positively. The protein content in tooth enamel was correlated with none of the trace elements. Some of the trace elements, such as Fe, S and Cu, were correlated with the enamel properties by stepwise multiple regression but not by simple linear regression, and vice versa (such as Mn).
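A stepwise multiple regression of the kind applied here can be sketched as a greedy forward-selection loop around ordinary least squares. The statsmodels-based fragment below is a simplified assumption (the entry criterion, predictor names and synthetic data are placeholders) rather than the exact procedure used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm


def forward_stepwise(X, y, p_enter=0.05):
    """Greedy forward selection: repeatedly add the predictor whose coefficient
    has the smallest p-value, as long as it stays below p_enter."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected


# Hypothetical data: 38 samples, a few trace elements, one enamel property.
rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(38, 4)), columns=["Ti", "Pb", "Mn", "Cu"])
hardness = 3.6 + 0.3 * X["Ti"] - 0.2 * X["Pb"] + rng.normal(0, 0.1, 38)
print(forward_stepwise(X, hardness))  # e.g. ['Ti', 'Pb']
```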
Discussion
In this study, 12 trace elements were found to be associated with tooth composition, structure and physical properties. Below, we compare our findings with previous studies and discuss the possible sources of these contaminants.
Trace elements in tooth enamel
Trace elements enter the human body from different sources such as food, water and air (Kampa and Castanas 2008; Malczewska-Toth 2012). They can be incorporated into the tooth enamel structure, and in this study we show that they could affect the physical-chemical properties of enamel. Below, the associations between each trace element detected in tooth enamel and the physical and chemical properties of the tooth are detailed.
Selenium
Selenium is a non-metallic element widely distributed in nature that can be absorbed by the body through oral intake or breathing (Malczewska-Toth 2012). Selenium is incorporated in synthetic HA through anionic exchange of phosphate with selenite in a one-to-one (1:1) substitution ratio (Monteil-Rivera et al. 2000).
In our study, we found that the presence of Se in enamel was associated with increased lattice parameters along the a-axis and c-axis. This was to be expected, because the ionic radius of Se4+ (0.50 Å) is larger than the ionic radius of P5+ (0.35 Å), so the substitution of Se in synthetic HA increases the lattice parameters (Ma et al. 2013). However, ours is the first study to report this phenomenon in tooth enamel.
Chromium
Chromium is a heavy metal that is essential for the body in small amounts (Kampa and Castanas 2008). Its main role is in controlling the fat and sugar metabolism (Kimura 1996), and it helps to increase the muscle and tissue growth (Schroeder et al. 1963). It can enter human body through water, air and food (Kampa and Castanas 2008).
Although the ionic radius of Ca2+ (0.99 Å) is much bigger than that of Cr3+ (0.69 Å), Cr can exchange with Ca in synthetic HA (Chantawong et al. 2003). Accordingly, the substitution of Cr3+ in synthetic HA decreases the cell lattice parameters along both the a-axis and c-axis, which might explain why we observed a significant association between the concentration of Cr and the cell lattice parameter along the c-axis in tooth enamel (Mabilleau et al. 2010).
Nickel
Nickel is a toxic metal that can be absorbed by the body through water, air or food (Kampa and Castanas 2008).
Ni is incorporated in HA through substitution of Ca2+ at site I and bonds with O to form Ni3PO4 (Zhang et al. 2010). The ionic radius of Ca2+ (0.99 Å) is much bigger than that of Ni2+ (0.72 Å) (Chantawong et al. 2003). Consequently, with the addition of Ni2+, the cell lattice parameter along the c-axis decreases in synthetic HA (Mabilleau et al. 2010). These observations are in agreement with our study, which, to the best of our knowledge, is the first to report the inverse association between the concentration of Ni and crystal domain size in tooth enamel. Also, we found that the substitution of Ni in tooth enamel had a strong positive association with the presence of carbonate type B. Future studies will have to be performed in order to understand this phenomenon.
Cobalt
Cobalt is a toxic metal that exists in the environment and can enter the body through water, air and food (Barceloux and Barceloux 1999; Duruibe et al. 2007). In HA, Ca2+ is substituted by Co2+ following the equation Ca10-xCox(PO4)6(OH)2 (Elkabouss et al. 2004), and the maximum exchange of Co with Ca is 1.35 wt% Co (Elkabouss et al. 2004). In this study, we report for the first time that the incorporation of Co in the tooth enamel structure had a strong negative association with the substitution of carbonate type B. More studies are required to find the reason behind this phenomenon.
Lead
Lead is a poisonous heavy metal that can harm the human body (Shukla and Singhal 1984). It can enter the body through water, air or food (Kampa and Castanas 2008). In HA, at low concentrations, Pb2+ ions (1.2 Å) replace Ca2+ ions (0.99 Å) (Mavropoulos et al. 2002; Miyake et al. 1986; Prasad et al. 2001) at the calcium site II following the equation Pb(10-x)Cax(PO4)6(OH)2 (Mavropoulos et al. 2002). Accordingly, the sorption of low concentrations of lead by synthetic HA decreases its crystal domain size (Mavropoulos et al. 2002), and this could be the reason why we found in our study that the presence of lead had a negative correlation with the size of enamel apatite crystals.
Titanium
Titanium is a metal commonly used in the field of biomaterials and bio-applications (Niinomi 2002). Titanium characteristics such as high strength, low modulus of elasticity, low density, biocompatibility, complete inertness to the body environment and a high capacity to integrate with bone and other tissues make it widely used in implant applications (Niinomi 2002). Ti ions are absorbed by the body through foods such as candies, sweets and chewing gum (Weir et al. 2012).
The ionic radius of Ti4+ (0.68 Å) is much smaller than the ionic radius of Ca2+ (0.99 Å) (Ribeiro et al. 2006), so the substitution of Ca by low concentrations of Ti in synthetic HA results in decreased cell lattice parameters and crystal domain size (Ergun 2008). We found that the concentration of Ti in tooth enamel HA was associated with decreased enamel crystal domain size, which is in agreement with previous studies on synthetic HA (Ergun 2008; Hu et al. 2007; Huang et al. 2010).
In this study, the presence of Ti in tooth enamel was associated with increasing tooth hardness. Since Ti had an inverse association with the crystal domain size and the size of apatite nanocrystals in tooth enamel is inversely correlated to tooth hardness (Eimar et al. 2012), this could be the reason behind the positive association between Ti concentration in enamel apatite and hardness.
We previously showed that the tooth enamel crystal domain size was associated with its optical properties; when the enamel crystal domain size is larger, its lightness is lower (Eimar et al. 2011). This phenomenon is due to the fact that more light can be scattered from tooth enamel composed of small crystals (Eimar et al. 2011). In this study we found that the presence of Ti in tooth enamel structure was associated with both smaller crystal domain size and higher lightness, which confirmed our previous observation (Eimar et al. 2011).
Manganese
Manganese is another trace element that can be taken up from food, air and water (Frieden 1984;Kampa and Castanas 2008). Mn2+ replaces Ca2+ in HA (Medvecky et al. 2006). Previous studies have shown that the incorporation of Mn2+ in synthetic HA does not change the crystal domain size significantly (Medvecky et al. 2006;Ramesh et al. 2007). In our study, however, we showed that the presence of Mn in tooth enamel was positively associated with its apatite crystal domain size. Further studies are needed to understand this phenomenon.
Iron
Iron is an essential element for human life; it can enter the body through food such as vegetables, and it is one of the trace elements found in teeth (Cook et al. 1972). Fe affects the carbonate content in synthetic HA (Low et al. 2010). It was found that, at low concentrations of Fe, carbonate type A can be substituted by Fe in synthetic HA (Low et al. 2010). We found that tooth enamel with incorporated Fe had a lower relative content of carbonate type A. Further studies are needed to understand this phenomenon.
Aluminum
Aluminum is one of the elements found in the body that can be absorbed through air, food and water (Campbell et al. 1957;Maienthal and Taylor 1968;Oke 1964). Its concentration in the body increases with age, and at higher amounts it can cause brain and skeletal disorders (Alfrey et al. 1976;Little and Steadman 1966). Discoloration of tooth enamel can also be seen in the presence of Al (Little and Steadman 1966). In this study, we found that the concentration of Al was negatively correlated with crack length in tooth enamel: teeth with lower levels of Al were more prone to have longer cracks, and vice versa. More studies are needed to understand this phenomenon.
Sources of trace elements
Trace elements are distributed differently between tooth enamel and dentin. For example, Cu, Pb, Co, Al, I, Sr, Se, Ni and Mn are more abundant in tooth enamel than in dentin, while Fe and F are more concentrated in dentin than in enamel (Derise and Ritchey 1974;Lappalainen and Knuuttila 1981). Trace elements also vary among the different layers of enamel: Fe, Pb and Mn are more abundant in the outer layers of enamel compared to the inner ones (Brudevold and Steadman 1956;Reitznerová et al. 2000). These findings seem to indicate that certain trace elements could come from the environment (i.e. Mn and Fe) and are incorporated after eruption or deposited in tooth enamel during calcification (Brudevold et al. 1975;Nixon et al. 1966;Okazaki et al. 1986). Trace elements can enter the human body from different sources such as food, water and air (Cleymaet et al. 1991;Cook et al. 1972;Duruibe et al. 2007). Below we discuss dental fluids and products (i.e. saliva, dental prostheses and dental porcelain) as possible sources of trace elements in tooth enamel.
Saliva
The most abundant trace elements in saliva (Na, Mg, K and Zn) are also the most abundant trace elements in tooth enamel (Borella et al. 1994;Grad 1954;Sighinolfi et al. 1989). Interestingly, in our study the concentrations of these trace elements were directly or indirectly correlated with each other. These observations indicate that saliva could influence the composition of enamel.
Dental prosthesis
Partial dentures could be a possible source of trace elements found in teeth. The alloys commonly used to fabricate dental prostheses include Cr, Co, Ni, Fe, Ti and Mo (Andersson et al. 1989;Asgar et al. 1970;Morris et al. 1979;Yamauchi et al. 1988). The concentration of Cr, Co, Fe and Ni in the saliva of patients with partial dentures is higher than in patients without partial dentures (de-Melo et al. 1983;Gjerdet et al. 1991).
The concentrations of Cr, Co, Mo, Ni, Ti and Fe in tooth enamel were strongly correlated with each other (Figure 3). This finding, along with the fact that these metals are found in the composition of dental prostheses, seems to indicate that dental prostheses could be the source of these metals in enamel. Therefore, the presence of a denture in the mouth may affect the concentration of trace elements in tooth enamel. Future studies will be performed to confirm the effect of trace elements found in dental prostheses on tooth enamel.
Dental porcelain
Dental porcelain is composed of a leucite crystallite phase and a glass matrix phase (Panzera and Kaiser 1999), and its elemental composition includes Si (57-66%), B (15-25%), Al (7-15%), Na (7-12%), K (7-15%) and Li (0.5-3%) (Panzera and Kaiser 1999;Sekino et al. 2001). Interestingly, the concentrations of these elements in tooth enamel were strongly correlated with each other (Figure 3). This result seems to indicate that dental porcelain might be another possible source of these elements in tooth enamel, although future studies will have to be performed to confirm this possibility.
Clinical implications
Trace elements can enter the structure of tooth enamel and affect its physical-chemical properties. In this study, we found that there are several sources for trace elements to enter enamel structure such as saliva, dental prosthesis or dental porcelain. Future studies will have to be performed to determine the effect of saliva and dental prosthesis on tooth enamel structure.
Metallic components of dental prostheses are usually based on Cr, Co and Ni. These metals can cause sensitivities and allergies (Blanco-Dalmau et al. 1984;Brendlinger and Tarsitano 1970), and for these reasons Ti-based dentures have been developed (Andersson et al. 1989;Yamauchi et al. 1988). In our study, we found that the presence of Ti in tooth enamel could be beneficial by rendering teeth whiter and harder. Therefore, dental materials containing Ti could have additional benefits, besides the ones that are already known and used in biomaterial sciences.
Limitations
One limitation of the present study is the relatively small sample size; another is the limited number of trace elements detected in each tooth sample. By increasing the sample size or using another technique with a better detection limit, we might find more trace elements in the samples and more significant correlations between the concentrations of trace elements and the physical-chemical properties of tooth enamel.
Conclusion
The presence of trace elements in tooth enamel could influence its physical-chemical properties. In this study we found that the concentration of Ti in tooth enamel had a strong relationship with enamel hardness, lightness and crystal domain size along the c-axis. The incorporation of Fe had a negative association with the presence of carbonate type A, while the incorporation of Co and Ni was correlated with the formation of carbonate type B. The concentration of Al in tooth enamel was inversely correlated with the length of cracks forming in enamel. We also found that the presence of Se in tooth enamel had a positive correlation with the cell lattice parameters along both the a-axis and the c-axis, while Cr and Ni had a negative correlation with the cell lattice parameter along the c-axis. Finally, the concentrations of Pb and Mn in tooth enamel had a positive association with the tooth enamel crystal domain size along the c-axis.
Materials and methods
After obtaining ethical approval from the McGill University Health Center (MUHC) ethics committee, a set of 38 extracted human teeth was collected from patients attending the McGill Undergraduate Dental Clinic who were scheduled for extractions. The included teeth were sound human upper anterior teeth. Teeth with caries, demineralized areas, cracks, cavitations, restorations, severe or atypical intrinsic stains, and/or a tooth bleaching history were excluded.
Upon extraction, teeth were immersed in 10% formalin solution (BF-FORM, Fisher Scientific, Montreal, Canada) for 1 week, before cleaning them in an ultrasound bath (FS20D Ultrasonic, Fisher Scientific, Montreal, Canada) with de-ionized distilled water at 25°C for 60 minutes. Then, they were polished with a low-speed dental handpiece (M5Pa, KAB-Dental, Sterling Heights, MI) using SiC cups (Pro-Cup, sdsKerr, Orange, CA) and dental prophylaxis pumice of low abrasive capability (CPR™, ICCARE, Irvine, CA) for 1 minute. Teeth were rinsed again in an ultrasonic bath with de-ionized distilled water before storing them in labeled Eppendorf tubes with a 10% formalin solution.
Tooth spectrophotometry
Tooth shade was registered by tooth spectrophotometry (Easyshade®, Vita Zahnfabrik, Germany), which is the most accurate and reproducible technique used for tooth shade measurements (Chu et al. 2010;Paul et al. 2002). Shade measurements were collected using the parameters of Munsell's colour system (L*C*H*) and were repeated three times for each tooth. The mean and the standard deviation for each shade parameter were calculated. Because dehydration might induce changes in tooth shade, each tooth was kept wet at all times during shade measurement.
Vickers microhardness
A sagittal section was obtained from each tooth using a carbide bur (FG56, sds Kerr, Orange, CA) adapted to a high-speed dental handpiece (TA-98LW, Synea, Bürmoos, Austria) and cooled with de-ionized distilled water in order to prevent overheating. Each tooth section was fixed in clear methylmethacrylate resin (DP-Ortho-F, DenPlus, Montreal, QC). The resulting blocks were mirror polished using ascending grits of silicon carbide papers with de-ionized distilled water (Paper-c wt, AAAbrasives, Philadelphia, PA) (240, 400, 600, 800 and 1200) and were smoothed with a polishing cloth.
A Vickers microhardness device (Clark CM100 AT, HT-CM-95605, Shawnee Mission, KS) was used to make indentations on the polished surfaces of tooth enamel. The indentation load was 300 N with a loading time of 10 seconds. Due to the variation in microhardness values within each tooth enamel sample, six indentations were performed between the DEJ and external surface of each enamel sample (Bembey et al. 2005;Gutiérrez-Salazar and Reyes-Gasga 2003;Newbrun and Pigman 1960;White et al. 2000). A minimum distance of 50 μm was maintained, between the successive indentations. A computer software (Clemex Vision PE 3.5, Clemex Technologies Inc, Shawnee Mission, KS) was used to measure the microhardness value at the site of indentation from images captured with a built-in camera.
Enamel crack propagation
Indentations were made on the polished surfaces of tooth enamel, prepared as described above, with a Vickers microhardness device (Clark CM100 AT, HT-CM-95605, Shawnee Mission, KS). Seven indentations were applied to each tooth enamel sample, halfway between the DEJ and the enamel surface, maintaining a minimum distance of 250 μm between successive indentations. The indentation load was 500 N with a loading time of 10 seconds. Upon indentation, cracks emanated from the corners of each indentation. The samples were then sputter-coated with gold, and images of the cracks were captured with a VP-SEM (Hitachi S-3000N VP, Japan) at 500× magnification (Figure 4c). The length of the cracks was measured using the ImageJ software (US National Institutes of Health, Bethesda, MD). The average crack length for each indentation was calculated by summing up the lengths of the cracks and dividing by the number of cracks (Chicot et al. 2009;Roman et al. 2002).
XRD
The average HA crystal domain size along the c-axis and a-axis for each enamel sample was calculated from the (002) and (310) Bragg peaks of the XRD spectrum using Scherrer's formula (Eq. 1):

D = Kλ / (β cos θ)   (1)

where D is the average of the domain lengths, K is the shape factor, λ is the X-ray wavelength, β is the line broadening at half the maximum intensity (FWHM) and θ is the Bragg angle. The enamel crystal cell lattice parameters along the a-axis and c-axis were calculated from the XRD (002) and (310) Bragg peaks relying on the relation for hexagonal apatite (Hanlie et al. 2006)

1/d² = (4/3)(h² + hk + k²)/a² + l²/c²

where d is the spacing between adjacent planes (interplanar spacing) in the crystal, hkl are the Miller indices (the reciprocal intercepts of the plane on the unit cell axes), a is the a-axis and c is the c-axis. We used the (002) and (310) Bragg peaks for our crystallography calculations because they have been widely used in the literature and they do not overlap with other peaks (Hanlie et al. 2006;Leventouri et al. 2009;Simmons et al. 2011).
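For illustration only, the sketch below shows how Scherrer's formula and the hexagonal lattice relation can be applied to the (002) and (310) reflections in Python; the Cu Kα wavelength, shape factor and peak values are assumptions for the example and are not the study's instrument settings or measurements.

```python
import numpy as np

CU_KALPHA = 1.5406   # X-ray wavelength in angstroms (assumed Cu K-alpha source)
K_SHAPE = 0.9        # shape factor commonly used with Scherrer's formula (assumed)

def bragg_d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Interplanar spacing from Bragg's law: lambda = 2 d sin(theta)."""
    return wavelength / (2.0 * np.sin(np.radians(two_theta_deg / 2.0)))

def scherrer_domain_size(two_theta_deg, fwhm_deg, wavelength=CU_KALPHA, k=K_SHAPE):
    """Eq. 1: D = K lambda / (beta cos(theta)), with beta (FWHM) in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength / (beta * np.cos(theta))

def hexagonal_lattice_parameters(d_002, d_310):
    """Solve 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2 for a and c
    using the (002) and (310) reflections of hydroxyapatite."""
    c = 2.0 * d_002                    # (002): 1/d^2 = 4/c^2
    a = np.sqrt(52.0 / 3.0) * d_310    # (310): 1/d^2 = (4/3)*13/a^2
    return a, c

# Illustrative (not measured) peak positions (degrees 2-theta) and FWHM
d002 = bragg_d_spacing(25.9)
d310 = bragg_d_spacing(39.8)
D_c = scherrer_domain_size(25.9, 0.25)          # domain size along the c-axis
a, c = hexagonal_lattice_parameters(d002, d310)
print(f"D(002) = {D_c:.0f} A, a = {a:.3f} A, c = {c:.3f} A")
```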
FTIR
The chemical composition of enamel was investigated by FTIR spectroscopy (Spectrum 400, Perkin-Elmer, Waltham, MA). The enamel powder samples were submitted to an FTIR spectrophotometer using a single bounce ZnSe diamond-coated ATR crystal. For each sample, a total of 64 scans per run at 2 cm -1 resolution were used (Figure 4b). FTIR studies were carried out in the range 700-1800 cm -1 . The collected spectra were normalized according to the absorbance of ν 3 PO 4 at 1013 cm -1 using the FTIR spectrophotometry software (Spectrum, Perkin-Elmer, USA). According to previous studies, the organic content of enamel was estimated from the Amide I-to-ν 3 PO 4 ratio (Aparicio et al. 2002;Bartlett et al. 2004;Bohic et al. 2000). The carbonate content within enamel mineral matrix was estimated from the ratios of ν 2 CO 3 type A (~878 cm -1 ) and B (~872 cm -1 ) to the ν 3 PO 4 and ν 1 PO 4 (~960 cm -1 ) absorption bands (Antonakos et al. 2007;Lasch et al. 2002;Rey et al. 1989).
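Purely as an illustration of how such normalized band ratios can be obtained from a baseline-corrected spectrum, a minimal Python sketch follows; the integration windows and function names are assumptions of ours, not the settings used with the Spectrum software.

```python
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Trapezoidal integral of a (baseline-corrected) absorbance band between two wavenumbers."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    w, a = wavenumber[mask], absorbance[mask]
    return float(abs(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(w))))

def enamel_composition_indices(wavenumber, absorbance):
    """Ratios described in the text: organic content from Amide I / nu3 PO4, and
    carbonate type A (~878 cm-1) and type B (~872 cm-1) relative to the nu3 PO4
    and nu1 PO4 (~960 cm-1) bands. Window limits are illustrative assumptions."""
    nu3_po4 = band_area(wavenumber, absorbance, 900, 1200)    # nu3 PO4 envelope (~1013 cm-1)
    nu1_po4 = band_area(wavenumber, absorbance, 950, 970)     # nu1 PO4 (~960 cm-1)
    amide_i = band_area(wavenumber, absorbance, 1585, 1720)   # Amide I
    co3_type_a = band_area(wavenumber, absorbance, 875, 882)  # nu2 CO3 type A (~878 cm-1)
    co3_type_b = band_area(wavenumber, absorbance, 868, 875)  # nu2 CO3 type B (~872 cm-1)
    phosphate = nu3_po4 + nu1_po4
    return {
        "organic_index": amide_i / nu3_po4,
        "carbonate_A_index": co3_type_a / phosphate,
        "carbonate_B_index": co3_type_b / phosphate,
    }
```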
Inductively coupled plasma-optical emission spectroscopy
The concentration of trace elements in tooth enamel samples was determined with inductively coupled plasma-optical emission spectroscopy (ICP-OES) (Thermo Scientific iCAP 6500, Cambridge, UK). Weighed tooth enamel powder samples were dissolved in concentrated nitric acid (5 ml; 68% wt/wt) at a temperature of 95 °C. Yttrium (5 ppm) was added to the solution as an internal standard to correct for possible sample preparation errors and sample matrix effects. After 2 hours of acid digestion, aliquots of the resulting solutions (0.25 ml) were diluted into deionized-distilled water (10 ml) and 4% nitric acid (25 ml), separately. Both diluted solutions were submitted to ICP-OES using the following setup: power of 1150 W, auxiliary gas-flow rate of 0.5 L/min, nebulizer gas-flow rate of 0.5 L/min, sample flow rate of 0.7 ml/min, cooling gas of 12 L/min and integration time of 10 s. The operating software iTEVA (version 8) was used to control the instrument function and data handling. Quality control checks were performed prior to the initial analysis and after every 12 consecutive samples.
Data analysis
The correlations between the concentration of each trace element in tooth enamel and its physical-chemical properties were determined using simple linear regression. Because of the high correlations among trace elements in tooth enamel and their possible effect on the results, stepwise multiple regression was performed in addition to simple linear regression. Stepwise multiple regression provides information about the association of each trace element with the outcome while adjusting for the inter-correlations among the elements. The associations of trace elements with the physical-chemical properties of tooth enamel were therefore also obtained using stepwise multiple regression, adjusting for the inter-correlation between trace elements. Statistical significance was set at P < 0.05, and all statistical analyses were done using SPSS 19 software (IBM, New York, NY).
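The analyses were run in SPSS; the Python sketch below is only meant to illustrate the two regression steps in a reproducible form. The column names, the forward-selection rule and the entry threshold are our assumptions and will not exactly reproduce SPSS's stepwise procedure.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# df: hypothetical DataFrame with one row per tooth, columns holding element
# concentrations (e.g. "Ti", "Ni", ...) and an outcome (e.g. "hardness").

def simple_regressions(df, elements, outcome):
    """Simple linear regression of the outcome on each element separately."""
    rows = []
    for el in elements:
        res = stats.linregress(df[el], df[outcome])
        rows.append({"element": el, "slope": res.slope, "r": res.rvalue, "p": res.pvalue})
    return pd.DataFrame(rows)

def forward_stepwise(df, elements, outcome, p_enter=0.05):
    """Crude forward selection by p-value, standing in for the SPSS stepwise
    procedure (whose entry/removal rules may differ)."""
    selected, remaining = [], list(elements)
    while remaining:
        pvals = {}
        for el in remaining:
            X = sm.add_constant(df[selected + [el]])
            pvals[el] = sm.OLS(df[outcome], X).fit().pvalues[el]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()
```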
|
v3-fos-license
|
2022-12-21T16:13:50.410Z
|
2022-12-01T00:00:00.000
|
254902176
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/19/24/16985/pdf?version=1671273746",
"pdf_hash": "e86375a795618a3f501f453f3199c2557907e17d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44159",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f55a5faadd5e035398f88d6d0a55f945a1f41bea",
"year": 2022
}
|
pes2o/s2orc
|
Epidemiology and Management of Proximal Femoral Fractures in Italy between 2001 and 2016 in Older Adults: Analysis of the National Discharge Registry
This study aims to determine the annual incidence of proximal femoral fractures in Italy in the period between 2001 and 2016 among older adults, and to describe the trends in the clinical management of these cases. Data were retrieved from the National Hospital Discharge records issued by the Italian Ministry of Health and from the Italian Institute for Statistics. The number of hospitalizations increased between 2001 and 2016, while the age-adjusted yearly incidence decreased from 832.2 per 100,000 individuals to 706.2. The median age was 83 years (IQR 78–88) with a large majority of females (76.6%). The type of fracture varied with age in female subjects, with older women more frequently reporting pertrochanteric fractures. Therapeutic strategies for the different types of fracture depended on patients’ age. During the study years, improvements in fracture classification and management strategies were observed, with a clear decreasing trend for non-operative solutions. In conclusion, the number of proximal femur fractures in older adults is growing, even if at a lower rate compared to population aging. The Italian surgical practice changed during the study period towards the implementation of the most recent guidelines.
Introduction
Fractures of the proximal femur are common events in older adults that lead to significant morbidity and disability [1]. In the absence of specific contra-indication, rapid surgical treatment is advised, but the type of intervention may significantly vary based on patient's characteristics, and therefore indications are still disputed [1-3]. Recently, total hip arthroplasty (THA) gained popularity, with guidelines suggesting this treatment as the optimal treatment in active patients [4]. Nevertheless, internal fixation (IF) and hemiarthroplasties (HA) are historically the most common surgical solutions, and they still represent the most frequently applied treatments [5]. In specific cases, presenting significant comorbidities, a non-operative management is considered [6]. Given the high incidence of these fractures in subjects older than 65 years, and given the population-aging phenomenon, the management of proximal femoral fractures represents an increasingly important topic for national healthcare systems worldwide, as well as for the orthopedic community. Several reports from different countries highlighted an increase in the total number of proximal femoral fractures [7][8][9][10][11][12][13][14][15], even if the age-adjusted incidence is reported to be stable or decreasing [8,[16][17][18][19][20][21][22]. The choice of treatment and other aspects such as the time interval between injury and surgery may significantly influence the clinical outcome [23][24][25], and thus the investigation of these topics would be of paramount importance for the future planning of proximal femoral fractures management.
The present study aims to determine the annual incidence of proximal femoral fractures in the older Italian population between 2001 and 2016, as well as to describe the trends in the clinical management of these patients, in order to provide a review of the past activity and hints about possible future scenarios. Data were provided by the Italian Ministry of Health. The dataset reported only the index hospitalization for proximal femoral fracture, and thus no additional information about the patients' status, comorbidities and outcomes was available. In addition, the ICD-9-CM codification (2015 version) was used. Since it does not include a specific classification for displaced/non-displaced and stable/unstable fractures, this information was not included in the analysis.
Data Collection
The manuscript was prepared following STROBE guidelines [26]. Data were retrieved from the National Hospital Discharge records (Scheda di Dimissione Ospedaliera, SDO) issued between 2001 and 2016. These anonymous data were initially collected by the Italian Ministry of Health and were then made available to the authors. The database includes the following information: age, sex, length of the hospitalization, public or private reimbursement, primary and secondary diagnoses, and primary and secondary interventions. A second database was used to retrieve data about the general population at the national level; these data were obtained from the Italian National Institute for Statistics (ISTAT) website [27]. These data report the total number of subjects living in Italy each year, with a breakdown by age and gender. A reliability analysis was performed by the statistician in charge of the analysis in regard to the hospitalization dataset: validity was evaluated by checking the format and range of values for each variable, and no issues were identified. Completeness was checked and all records contained the minimum entries to be considered in the analysis. A duplicate search was performed to test the uniqueness of each record, and no duplicates were found. The dataset provided by ISTAT respected the National Institute Standard of Quality [28].
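As an illustration of the kind of reliability checks described above (validity, completeness and uniqueness), a minimal pandas sketch is given below; the column names are placeholders of ours and not the actual SDO field names.

```python
import pandas as pd

def check_dataset(df: pd.DataFrame) -> dict:
    """Basic validity, completeness and uniqueness checks on a discharge-record extract."""
    report = {}
    # Validity: format and range of key variables
    report["age_out_of_range"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
    report["year_out_of_range"] = int((~df["year"].between(2001, 2016)).sum())
    report["invalid_sex"] = int((~df["sex"].isin(["M", "F"])).sum())
    # Completeness: minimum entries needed for the analysis
    report["missing_values"] = df[["age", "sex", "year", "icd9_dx"]].isna().sum().to_dict()
    # Uniqueness: duplicated records
    report["duplicate_records"] = int(df.duplicated().sum())
    return report
```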
The inclusion criterion was hospitalization in Italy with a diagnosis of fracture of the proximal femur, identified based on codes 820.0-820.9 of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), between 2001 and 2016. The exclusion criteria were the following: patients hospitalized in other countries, age <65 years old, diagnosis of polytrauma and diagnosis of a late effect of fracture of the proximal femur or lower extremity (ICD codes 905.3 and 905.4). A flow chart of the patient selection process is reported in Figure 1.
Statistical Analysis
The analyses were performed using SAS software v9.4 (SAS Institute, Cary, NC, USA). Data concerning categorical variables are reported as counts and percentages, while continuous data are reported as median and interquartile range (IQR), unless otherwise indicated. The raw incidence per year was calculated as the number of events divided by the number of people living in Italy in the year of interest, and it was reported as the relative frequency per 100,000 individuals. Age-adjusted incidence was calculated after normalizing the year-specific population by age category. In particular, the sum of persons at risk per year in each age category was divided by the total number of persons at risk during the study period to obtain the mean weight of each age category in the analyzed population. Then, the yearly age-specific incidence was calculated by dividing the number of events in each age category by the correspondent number of people at risk per year. This ratio was multiplied by the weight of the age category in the overall population, and then the sum of the numbers obtained for each age category was calculated to obtain the year-specific age-adjusted incidence. Supplementary Table S1 reports the numbers and calculations. Differences among proportions were assessed using either Fisher's exact test or the proportion trend test. Linear regression was used to evaluate the trend in total number of events during the study years.
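A minimal Python sketch of the direct age standardization described above is shown below; the data layout and column names are assumptions, not the structure of the SDO or ISTAT files, and Supplementary Table S1 remains the authoritative calculation.

```python
import pandas as pd

# events:     hypothetical DataFrame with columns year, age_group, n_events
# population: hypothetical DataFrame with columns year, age_group, n_at_risk

def age_adjusted_incidence(events: pd.DataFrame, population: pd.DataFrame) -> pd.Series:
    """Year-specific age-adjusted incidence per 100,000, weighting each age group
    by its share of total person-years over the whole study period."""
    person_years = population.groupby("age_group")["n_at_risk"].sum()
    weights = person_years / person_years.sum()

    merged = events.merge(population, on=["year", "age_group"])
    merged["age_specific"] = merged["n_events"] / merged["n_at_risk"] * 100_000
    merged["weighted"] = merged["age_specific"] * merged["age_group"].map(weights)
    return merged.groupby("year")["weighted"].sum()
```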
Total Number and Incidence of Hospitalizations in the Analyzed Period
In the analyzed period, 1,490,142 hospitalizations for the diagnosis of proximal femoral fracture were recorded among the +65 y/o Italian population, with an overall incidence of 769.7 events per 100,000 person-years. The total number of hospitalizations showed an increasing trend between 2001 and 2016, from 81,648 to 100,998 (Figure 2). This increase was linear (R2 = 0.899), with a mean increase (slope) of 1261 events per year (CI95%: 1019-1503). On the other hand, the raw incidence slightly decreased between 2001 and 2016, from 766 to 755 events per 100,000 person-years, with a peak of 800 in 2008. Interestingly, in the same time-period, the age-adjusted yearly incidence showed a significantly decreasing trend (p < 0.001), from 832 hospitalizations per 100,000 individuals in 2001 to 707 in 2016 (Figure 2). The median age of patients hospitalized with a diagnosis of proximal femoral fracture was 83 years (IQR 78-88). Indeed, the incidence of this diagnosis exponentially increased with age, from 145.3 events per 100,000 person-years among patients aged 65-69 years old up to 3563 in subjects aged 95-99 y/o. The vast majority of subjects, 1,141,716 (76.6%), were females (Table 1).
Figure 2. Left y-axis: raw incidence and age-adjusted incidence (events/100,000 adults >65 y/o) of hospitalization for proximal femoral fracture per year. Right y-axis: total number of fractures of the proximal femur per year.
Type of Fracture Differs Based on Gender and Age
Closed pertrochanteric fractures (ICD-9 code 820.2x) were the most frequently observed, representing 51.1% of the total events, especially closed fractures of the trochanteric section of the neck of the femur (code 820.20, 34.2% of total events). Closed transcervical fractures were observed in 33.1% of the analyzed hospitalizations, with code 820.01 (closed upper transcervical fracture) the most frequent among the intracapsular fractures (14.3% of total events). Open fractures accounted for only 1.7% of events (0.8% transcervical, 0.9% pertrochanteric). The total numbers and frequencies of the different types of fractures in the dataset are reported in Table 2.
The types of fractures are evenly distributed among the different age classes for male patients, while age is significantly associated with the type of fractures in female subjects. In particular, transcervical fractures are more frequent in younger females (41.5% among the 65-69 y/o and 27.6%), while older women experience pertrochanteric fractures at a higher rate (59.3% in the 99+ y/o category vs. 39.7% in 65-69 y/o) ( Figure 3). Complete data are reported in Supplementary Table S2.
During the 15 years of interest, a constant decreasing trend for "unspecified" fractures was observed, from 25.2% of cases in 2001 to 11.8% in 2016. At the same time, increases in specific diagnoses were observed, in particular concerning upper transcervical fractures (820.01), from 6.2% to 15.6%. Notably, closed basicervical fractures steadily decreased during the study years (11.3% in 2001, 5.8% in 2016) (Supplementary Table S2).
Fracture Management in Different Patients and Types of Fracture
A total of 20.3 % of hospitalizations for proximal femoral fracture were not associated with surgeries or reductions.
Transcervical fractures (open or closed) were more frequently treated by HA (49.7% of cases) or THA (17.5%) compared to open or closed reduction with internal fixation (14.5%). Specifically, basicervical fractures were treated by HA and THA less frequently compared to upper cervical and midcervical fractures, while reduction with internal fixation or nonsurgical management were applied more frequently. HA was performed more frequently in older individuals (>55.0% in subjects >80 years old vs. 20.0% in 65-69 y/o), while the contrary was observed for THA (41.7% in 65-69 y/o, <10.0% above 85 y/o).
Pertrochanteric fractures were frequently treated with an open or closed reduction with internal fixation (75.7% of cases), with HA and THA representing rare treatment choices (2.0% and 0.9%, respectively). Differences were observed in the management of intratrochanteric fractures compared to trochanteric and subtrochanteric fractures, with a higher frequency of closed reduction with internal fixation (22.4% vs. 11.5% and 11.6%, respectively) and a lower frequency of open reduction with internal fixation (55.4% vs. 63.3% and 67.4%, respectively). Table 3 reports the absolute and relative frequency of each management strategy for the different types of fractures. Younger patients underwent closed reduction with internal fixation less frequently than older subjects (12.4% in 65-69 y/o, up to 16.4% in +99 y/o). Open reduction with internal fixation showed a similar frequency among different age categories. Figure 4 shows the treatment choices in different age classes and type of fractures.
Gender did not influence the treatment choice, even if a higher percentage of non-surgical treatment was recorded for males compared to females, independently of the type of fracture. Supplementary Table S3 reports the specific frequencies of surgical treatment in different age categories and types of fractures. Reductions without fixation were rare treatment choices, reported with a frequency <1%.
Trends in Treatment Strategies during Study Years
During the 15 study years, changes in the management choices were observed for both pertrochanteric and transcervical fractures. Age-adjusted estimations were obtained in order to account for the variability in treatment choice among patients of different age, and are expressed as treatments per 100 events. While HA was the main choice for the treatment of transcervical fractures in the whole period, its application slightly increased from 46.3 to 51.7 between 2001 and 2016. The same was observed for THA, increasing from 13.2 to 19.1. Conversely, other solutions, including closed and open reduction with internal fixation or non-surgical management, decreased during the study years (Figure 5A). For pertrochanteric fractures, a corresponding decrease down to 14.5 choices per 100 events in 2016 was observed, while closed reduction with internal fixation remained similar between the first and last study years (2001: 16.6; 2016: 17.3), with a reduction observed between 2002 and 2009, when its incidence was less than 12.0 surgeries per 100 pertrochanteric fractures. HA and THA were already rare in 2001, and their incidence in this type of fracture further decreased during the study years (Figure 5B). Supplementary Table S4 reports the incidence of each treatment over the study period.
Length of Hospitalization
The length of hospitalization showed a median of 12 days, with an interquartile range of 9-18 days. A progressive reduction was observed in the median length of hospitalization, from 14 (IQR: 9-20) days in 2001 to 11 (IQR: 8-16) days in 2016. Similar hospitalization lengths were observed in males and females (median 12 days in both) and in the different age classes, with a minimum median of 11 days in patients older than 99 y/o and a maximum of 13 (IQR 9-18) days in patients 74-84 y/o. Again, no differences were observed between pertrochanteric and transcervical fractures, with medians equal to 12 days in both cases and slightly different IQRs (8-18 and 9-17, respectively).
Associated Diagnosis
The most frequent associated diagnoses reported for these hospitalizations were acute posthemorrhagic anemia (19.7%), hypertension (17.5%), heart disease (8.9%) and diabetes (8.3%). Table 4 reports the demographic data in detail. In general, acute posthemorrhagic anemia, hypertension and osteoporosis were more frequent among female subjects, while heart, respiratory and Parkinson's diseases were more frequent among males.
Discussion
The present study shows that proximal femoral fractures in Italy are increasing in number and that the choice for operative solutions became more frequent over non-surgical management between 2001 and 2016.
The increase in total numbers of proximal femoral fractures appears strictly related to the population ageing, since the age-adjusted incidence has decreased in the same period. This appears to be a common phenomenon worldwide [8,16], possibly due to the reduction in post-fracture mortality rate, as well as the implementation of policies aimed at preventing osteoporosis [22,[29][30][31][32]. Nevertheless, the increase in the overall number of fractures suggests that the magnitude of this decrease is insufficient to compensate for the effect of population ageing [11,33]. Indeed, the increase in the absolute numbers of proximal femoral fractures, as well as the higher incidence of these events in females and older subjects, are consistent with the evidence provided by several authors from different countries [7,9,10,12,13,15,34]. The present study confirms these findings in a larger cohort (entire Italian population) and considers a longer time-period (16 years) compared to most of the previous studies.
During the study period, an increasing trend towards the use of surgical solutions was observed in both pertrochanteric and transcervical fractures. In addition, the incidence of specific surgeries (hemi- and total arthroplasties for transcervical fractures and closed/open reduction with internal fixation for pertrochanteric fractures) grew over time, suggesting an optimization of management strategies towards adherence to the most recent guidelines [35]. Indeed, the growing incidence of THA observed in Italy for the treatment of transcervical fractures is consistent with reports from Canada, Australia, South Korea, Finland and the United States [36][37][38][39][40][41]. The use of THA is recommended by the AAOS practical guidelines [35], even if it does not provide advantages compared to HA in older patients [42]; nevertheless, this recommendation mainly applies to relatively young and healthy patients, thus limiting its use [43]. In the Italian cohort, THA was the main treatment choice for transcervical fractures in subjects younger than 74 years old. Aside from the type of fracture and the age of the patient, it should be considered that the management of proximal femoral fractures may depend on external factors, such as the surgeon's specific expertise, the volumes of specific procedures usually performed at the hospital, and insurance, with patients with private insurance undergoing THA more frequently than those without [44,45].
The length of hospitalization progressively decreased during the study period. This could be due to the effects of two distinct policies, one focused on cost reduction and shortening post-operative hospitalization [46], and the other aimed at reducing the time between hospitalization and surgery. A reduced time to surgery provides better results in terms of survival and clinical outcomes [47,48], while the reduced hospitalization time after surgery did not produce negative effects on post-operative mortality rate [49]. Unfortunately, no data were available regarding the time interval between hospitalization and surgery in the analyzed cohort, and thus we cannot confirm the effect of this parameter on total hospitalization length in the Italian cohort. The same decreasing trend in the length of hospitalization has been reported by other authors, suggesting the existence of a global trend [49,50].
Given the high median age of patients, severe co-morbidities are frequent in patients suffering from proximal femoral fractures. In our cohort, hypertension, heart disease and diabetes were frequent associated diagnoses, while the high incidence of post-hemorrhagic anemia may represent a direct consequence of the injury and/or the treatment. The frequency of these pathologies is comparable to that observed in similar patients from Denmark, where these associated diagnoses were also identified as risk factors for 1-year mortality after hospitalization for proximal femoral fracture [49]. A relevant percentage of patients (5.8%) was affected by dementia or Alzheimer's disease, conditions that are known to require special attention [51]. Osteoporosis appears to be under-diagnosed in this cohort, possibly due to a lack of interest in reporting this associated diagnosis from the perspective of the healthcare providers.
Interestingly, the "unspecified" type of fracture decreased significantly during the time period, which is suggestive of improvements in diagnostic techniques over time [52,53]. In addition, the rate of basicervical fractures, whose diagnosis is often difficult [54], dropped from 11.3% in 2001 to 5.8% in 2016, possibly an indicator of a decreased rate of misdiagnosis of upper pertrochanteric fractures as basicervical fractures. This possible bias should be considered when comparing series of cases from different decades.
The present study has limitations. The main limitation is the reliance on administrative data, not allowing for the evaluation of outcomes such as mortality and relapses, and accounting for a certain amount of missing information, as confirmed by the large percentage of the unspecified type of fracture observed in the sample, especially in the early 2000s. Unfortunately, the ICD-9 classification used in the dataset (2015 version) does not distinguish between displaced and non-displaced fractures, thus limiting the description of these fracture types' incidence and management. In addition, the dataset contains only index hospitalizations and, thus, the associated diagnosis is not representative of comorbidities, since their inclusion in the records is at the discretion of the operator.
Conclusions
In conclusion, the number of fractures of the proximal femur in older adults grew in the analyzed period, even if at a lower rate compared to what would be expected based on the increase in population age. The surgical approach changed during the study period following the implementation of up-to-date guidelines.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph192416985/s1, Table S1: Calculation of total events, raw incidence and age-adjusted incidence; Table S2: Type of fracture and therapeutic strategy; Table S3: Therapeutic management in different types of fracture and age categories; Table S4: Incidence of different treatments during the study years in pertrochanteric and transcervical fractures.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2023-03-04T16:17:18.585Z
|
2023-01-01T00:00:00.000
|
257329264
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2023/04/epjconf_isrd2023_01004.pdf",
"pdf_hash": "231651e3c0bd8ba2a59dc44b7401c730c773ae7a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44162",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "c7ac41ebff1653f68f49ba854eb59761667090d2",
"year": 2023
}
|
pes2o/s2orc
|
Fast neutron fluence estimation by measurement of 93m Nb and 93 Mo
The possibility for estimation of fast neutron fluence from 93m Nb produced from Nb contained in structural component materials of nuclear power plants was investigated as an evaluation tool for extending plant operating lifetime. First, to establish a measurement method for 93m Nb in structural component materials, chemical separations and measurements of 93m Nb were carried out using specimens of fast neutron-irradiated XM-19 and X-750 alloy; these are used as structural component materials and contain Nb as a component. The 93m Nb activities obtained from the measurements of both XM-19 and X-750 alloy agreed well with the calculated values of 93m Nb, which were simply calculated from the evaluated fast neutron fluence. On the other hand, in the case of type 316L stainless steel, which contains Mo, Mo-derived 93m Nb presents a problem, so an approach was used that estimates the amount of Mo-derived 93m Nb based on 93 Mo measurements with the third specimen, neutron-irradiated type 316L stainless steel. The difference between the measured and calculated 93m Nb values was about two-fold, indicating that the estimation method for 93m Nb produced from Mo needs to be improved.
Introduction
One of the approaches for improving the estimation accuracy of the fast neutron fluence experienced by nuclear power plant structural component materials is to estimate the fast neutron fluence based on the measurement of the activity of activated nuclides in the materials. This means that the results of measurements made after chemical separations of nuclides produced by the reaction between elements contained in the structural materials and fast neutrons are reflected in the estimation of the fast neutron fluence. 93m Nb produced by the 93 Nb(n, n') 93m Nb reaction between Nb and fast neutrons has a relatively long half-life of 16.13 years and is used to estimate long-term fast neutron fluence. Types 304 and 316L stainless steels (SSs) are mainly used for structural components such as core shrouds and upper grid plates in BWRs, and it has been reported that type 304 SS contains several ppm to several tens of ppm of Nb [1], which may allow estimation of fast neutron fluence by the 93 Nb(n, n') 93m Nb reaction. Estimations of fast neutron fluence based on this reaction have been made using Nb present in structural components on the order of several tens of mass ppm [2][3][4]. On the other hand, in the case of a material containing Mo such as type 316L SS, 93m Nb is also produced from Mo in that structural material [3,4], so the amount of 93m Nb derived from Mo must be estimated. 93m Nb from Mo is produced by the electron capture decay of 93 Mo, which is produced from the thermal neutron capture reaction of 92 Mo, one of the stable isotopes of Mo. Therefore, in the case of a material containing Mo, 93m Nb produced from both Mo and Nb is included, and if the activity of 93m Nb produced from Mo is not negligible, a correction is necessary to subtract the activity of 93m Nb produced from Mo from the measured 93m Nb activity. (Of course, if the activity of 93m Nb produced from Mo is overwhelmingly greater than the activity of 93m Nb produced from Nb, the fast neutron flux from 93 Nb(n, n') 93m Nb cannot be estimated.) As a method for accurately estimating the amount of 93m Nb produced from Mo, an estimation from the measured value of 93 Mo is effective, as 93 Mo is an intermediate product in the production of 93m Nb from Mo.
In this study, the 93m Nb measurement method was established using irradiated specimens of XM-19 and X-750 alloy containing Nb as a constituent element whose amount is within the material specifications such as set by the ASTM. The validity of the measurement method was verified by comparing the measured value with the calculated value of 93m Nb estimated from the evaluated fast neutron fluence. The validated 93m Nb measurement method was used to measure 93m Nb for the third specimen, an irradiated type 316L SS specimen. To estimate the amount of 93m Nb produced from Mo, 93 Mo was also measured for the above three specimens and the amount of 93m Nb produced from Mo was estimated. The activity of 93m Nb and 93 Mo of each specimen were measured by X-ray measurements with a Ge semiconductor detector after dissolving each specimen and separating 93m Nb and 93 Mo from each other by an ion exchange separation method. The chemical separation between 93m Nb and 93 Mo is necessary, because both are measured by X-ray measurements and both are measured at the same X-ray energy of 16.6 keV.
Specimens
Irradiated XM-19, X-750 alloy and type 316L SS, which are used as structural component materials, were used as specimens and there was one test piece for each. XM-19 and X-750 alloy were used as specimens containing Nb as a component to establish the 93m Nb measurement method. Table 1 summarizes their compositions listed on the respective mill test reports. In the absence of Nb or Mo concentration data on the mill test reports, the concentration was determined by measuring the solution of each specimen by inductively coupled plasma mass spectrometry (ICP-MS). The concentration measurements were performed after confirming sufficient accuracy in advance.
All specimens were irradiated in the Japan Materials Testing Reactor (JMTR) under the conditions shown in Table 2. The fast neutron fluence integrated by the specimens is equivalent to the cumulative irradiation in the core shrouds of Japanese BWRs during a period of 60 years.
Separation of Mo and Nb
The separation of Nb and Mo was performed by applying an ion exchange separation method using a hydrofluoric acid system, with reference to the 93m Nb separation method of Serén and Kekki [4] and the report of Fujimoto and Shimura [5] about element adsorption behaviors on an anion exchange resin in hydrofluoric acid and hydrochloric acid systems. Approximately 20 mg of each specimen was dissolved in a mixture of nitric acid (HNO3), hydrochloric acid (HCl), and hydrofluoric acid (HF) while heating on a hot plate. Then, the acidic solutions of dissolved specimens were evaporated to dryness by further hot plate heating. The residues were dissolved in 2 mol/L HF, and passed through an anion exchange column filled with DOWEX 1X-8 200-400 mesh. In the next step, 2 mol/L HF was passed through the column. This step flushed out the main structural material elements such as Fe, and the major radionuclides such as 60 Co, from the column. Next, Mo was eluted from the column using a mixture of 8 mol/L HF and 4 mol/L HCl. Finally, Nb was eluted from the column using 1 mol/L HCl.
Measurement of 93 Mo
The eluant of 8 mol/L HF and 4 mol/L HCl containing Mo was evaporated to dryness by heating on a hot plate and the residue was dissolved in 1 mol/L HNO3. Then, 1 mg of lanthanum was added to the solution, and lanthanum hydroxide precipitate was formed by adding ammonia water. In this case, Mo remained in the liquid phase, while 60 Co, which could not be removed by ion exchange separation, was included on the precipitate side. After a vacuum filtration, 100 μg of Mo was added to the collected filtrate, which was then adjusted to a slightly acidic condition as confirmed by pH paper testing by adding HCl. Then bromine water and an ethanolic solution of α-benzoin oxime were added to get Mo precipitate. The precipitate was collected on a membrane filter (47 mm diameter; 0.45 μm pore size). The activity of 93 Mo in the precipitate was determined with a high-purity germanium low energy photon spectrometer (LEPS) by measuring the 16.6 keV radiation emitted by 93 Mo. To estimate the recovery, the amount of Mo in the precipitate was determined by X-ray fluorescence (XRF). The LEPS and XRF apparatus were calibrated using standard samples.
Measurement of 93m Nb
To remove HF, the 1 mol/L HCl solution containing Nb was evaporated to dryness by heating on a hot plate and the residue was dissolved with 1 mol/L HCl. After 500 μg of lanthanum was added to the solution, ammonia water was added to precipitate lanthanum hydroxide, and Nb was coprecipitated. The precipitate was collected using a membrane filter (47 mm diameter; 0.45 μm pore size). The activity of 93m Nb in the precipitate was determined with the LEPS by measuring the 16.6 keV radiation emitted by 93m Nb. To estimate the recovery of Nb, the activity of 94 Nb in the precipitate was determined with a germanium semiconductor detector. The LEPS and germanium semiconductor detectors were calibrated using standard samples.
Activity of 93m Nb from 93 Nb
The activity of 93m Nb produced from 93 Nb (A_Nb93m(Nb)) is calculated as follows:

A_Nb93m(Nb) = N_Nb93 σ_Nb93 φ_f (1 − exp(−λ_Nb93m t)) exp(−λ_Nb93m t_m)   (1)

where N_Nb93 is the number of atoms of 93 Nb in the specimen, σ_Nb93 is the cross section of the reaction 93 Nb(n, n') 93m Nb, φ_f is the fast neutron flux, λ_Nb93m is the decay constant of 93m Nb, t is the irradiation time, and t_m is the time from the end of irradiation to the measurement.
In the calculations of this study, the cross section of 93 Nb(n, n') 93m Nb and the fast neutron flux were assumed to be constant. For the fast neutron flux values, the values shown in Table 2 were used. For the cross section values, the values corresponding to 2 MeV, the average energy of the fission spectrum, were used. The activity of 93m Nb calculated from equation (1) was compared with the measured activity of 93m Nb for each specimen to confirm the validity of the measured values.
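As an illustration of Eq. (1) and of how a fast neutron flux could be backed out from a measured 93m Nb activity, a minimal Python sketch follows; the specimen mass, Nb content, cross section and flux in the example are illustrative assumptions and are not the values of Table 2 or of evaluated nuclear data.

```python
import numpy as np

SECONDS_PER_YEAR = 3.156e7
LAMBDA_NB93M = np.log(2) / (16.13 * SECONDS_PER_YEAR)   # 93mNb decay constant, 1/s

def activity_nb93m_from_nb(n_nb93, sigma_cm2, phi_fast, t_irr, t_cool, lam=LAMBDA_NB93M):
    """Eq. (1): A = N sigma phi_f (1 - exp(-lambda t)) exp(-lambda t_m), in Bq (times in s)."""
    return n_nb93 * sigma_cm2 * phi_fast * (1.0 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool)

def fast_flux_from_activity(a_meas, n_nb93, sigma_cm2, t_irr, t_cool, lam=LAMBDA_NB93M):
    """Invert Eq. (1) to estimate the fast neutron flux from a measured activity."""
    return a_meas / (n_nb93 * sigma_cm2 * (1.0 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool))

# Illustrative numbers only
AVOGADRO = 6.022e23
mass_g, nb_mass_ppm = 0.020, 50.0                           # 20 mg specimen, 50 ppm Nb (assumed)
n_nb93 = mass_g * nb_mass_ppm * 1e-6 / 92.906 * AVOGADRO    # atoms of 93Nb
sigma = 0.2e-24        # assumed illustrative (n, n') cross section, cm^2
phi_f = 1.0e13         # assumed fast neutron flux, n/cm^2/s
a_pred = activity_nb93m_from_nb(n_nb93, sigma, phi_f, t_irr=2.6e6, t_cool=3.2e7)
print(f"predicted 93mNb activity: {a_pred:.3e} Bq")
```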
Activity of 93m Nb from 92 Mo
92 Mo produces 93 Mo by thermal neutron capture, and 93 Mo disintegrates to 93m Nb. In this study, it was assumed that all the 92 Mo reacted with neutrons to become 93 Mo, and all the 93 Mo formed decayed to 93m Nb. The activity of 93m Nb produced from 93 Mo (A_Nb93m(Mo)) is calculated as follows:

A_Nb93m(Mo) = λ_Nb93m N'_Nb93m   (2)

N'_Nb93m = [λ_Mo93 / (λ_Nb93m − λ_Mo93)] N_Mo93 (exp(−λ_Mo93 t_m) − exp(−λ_Nb93m t_m)) + N_Nb93m exp(−λ_Nb93m t_m)   (3)

where N'_Nb93m is the number of atoms of 93m Nb produced from 92 Mo during the period between irradiation and measurement, λ_Mo93 is the decay constant of 93 Mo, N_Mo93 is the number of atoms of 93 Mo at the end of irradiation, and N_Nb93m is the number of atoms of 93m Nb produced from 92 Mo at the end of irradiation. Under the above assumptions, N_Mo93 and N_Nb93m are given by

N_Mo93 = (N_Mo92 σ_Mo92 φ_th / λ_Mo93) (1 − exp(−λ_Mo93 t))   (4)

N_Nb93m = N_Mo92 σ_Mo92 φ_th [(1 − exp(−λ_Nb93m t)) / λ_Nb93m − (exp(−λ_Mo93 t) − exp(−λ_Nb93m t)) / (λ_Nb93m − λ_Mo93)]   (5)

where N_Mo92 is the initial number of atoms of 92 Mo in the specimen, σ_Mo92 is the cross section of 92 Mo(n, γ) 93 Mo, and φ_th is the thermal neutron flux. Here, φ_th is the thermal neutron flux that contributes to the reaction of 92 Mo(n, γ) 93 Mo, and it was estimated from the measured activity of 93 Mo along with σ_Mo92. N_Mo93 in equation (4) can be estimated from equation (6) based on the measured activity of 93 Mo (A_Mo93):

N_Mo93 = (A_Mo93 / λ_Mo93) exp(λ_Mo93 t_m)   (6)
When NMo93 is obtained, σMo92 φth can also be obtained from equation (4). Furthermore, once σMo92 φth is obtained, NNb93m can also be obtained from equation (5). Therefore, the procedure to obtain the activity of 93m Nb produced from Mo based on the measured activity of 93 Mo is the following.
First, the number of atoms of 93 Mo at the end of irradiation, NMo93, is obtained from the measured activity of 93 Mo by equation (6). Next, after substituting the obtained NMo93 into equation (4), the product of the cross section and thermal neutron flux, σMo92 φth, is obtained. Then, by substituting σMo92 φth into equation (5), the number of atoms of 93m Nb produced from 92 Mo at the end of irradiation, NNb93m, is obtained. By substituting NMo93 and NNb93m into equation (3), the number of atoms of 93m Nb produced from 92 Mo during the period between irradiation and measurement, N'Nb93m, is obtained. Finally, N'Nb93m is converted to the activity of 93m Nb, ANb93m(Mo), by equation (2).
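The Python sketch below walks through this procedure using the equations as reconstructed above (Eqs. (2)-(6)); the 93 Mo half-life is an approximate literature value and the function and variable names are ours, so this is a sketch of the method rather than the authors' actual calculation (which, as noted later, would be refined with an analysis code such as ORIGEN2).

```python
import numpy as np

SECONDS_PER_YEAR = 3.156e7
LAM_NB93M = np.log(2) / (16.13 * SECONDS_PER_YEAR)   # 93mNb decay constant, 1/s
LAM_MO93 = np.log(2) / (4.0e3 * SECONDS_PER_YEAR)    # 93Mo, assumed half-life ~4.0e3 y

def nb93m_activity_from_mo(a_mo93_meas, n_mo92, t_irr, t_cool,
                           lam_mo=LAM_MO93, lam_nb=LAM_NB93M):
    """From the measured 93Mo activity (Bq), estimate the activity of 93mNb produced
    via 92Mo(n,g)93Mo -> 93mNb; times in seconds. Steps follow Eqs. (6), (4), (5), (3), (2)."""
    # Eq. (6): atoms of 93Mo at the end of irradiation
    n_mo93 = a_mo93_meas * np.exp(lam_mo * t_cool) / lam_mo
    # Eq. (4): product sigma_Mo92 * phi_th
    sigma_phi = n_mo93 * lam_mo / (n_mo92 * (1.0 - np.exp(-lam_mo * t_irr)))
    # Eq. (5): atoms of 93mNb (from Mo) at the end of irradiation
    n_nb93m = n_mo92 * sigma_phi * (
        (1.0 - np.exp(-lam_nb * t_irr)) / lam_nb
        - (np.exp(-lam_mo * t_irr) - np.exp(-lam_nb * t_irr)) / (lam_nb - lam_mo))
    # Eq. (3): atoms of 93mNb (from Mo) at the time of measurement
    n_nb93m_meas = (lam_mo / (lam_nb - lam_mo)) * n_mo93 * (
        np.exp(-lam_mo * t_cool) - np.exp(-lam_nb * t_cool)) \
        + n_nb93m * np.exp(-lam_nb * t_cool)
    # Eq. (2): convert atom number to activity (Bq)
    return lam_nb * n_nb93m_meas
```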
In this study, the activity of 93m Nb produced from Mo in each specimen was evaluated using the measured activity of 93 Mo in each specimen and the above procedure.
Results and discussion
Table 3 lists the concentrations of Nb and Mo and the activities of 93m Nb and 93 Mo for each specimen, together with the Nb/Mo ratio. Table 4 allows a comparison of the measured activity of 93m Nb (M) and the calculated activity of 93m Nb produced from Nb (C). The ratios of the measured to calculated values for XM-19 and X-750 alloy are 1.1 and 1.2, which are in relatively good agreement, indicating that the measured values of 93m Nb obtained by the measurement method in this study are reasonable.
On the other hand, in type 316L SS, the ratio of measured to calculated values is 2.8, and the agreement is poor. Focusing on the Nb/Mo ratio in each specimen, the ratio of type 316L SS is lower than the ratios of the other two. Baers and Hasanen [3] reported that the Nb/Mo ratio must be greater than a certain value (Nb/Mo ratio > 0.01 in their study) for the 93m Nb produced from Nb to be dominant. This means that when the Nb/Mo ratio is low, the proportion of 93m Nb produced from Mo in the activity of 93m Nb in the specimen becomes large, and 93m Nb produced from Mo cannot be neglected. In type 316L SS, the Nb/Mo ratio is low and the measured activity is higher than the calculated activity. This difference between measured and calculated activities suggests that the actual activity (i.e., the measured activity) was affected by 93m Nb produced from Mo, resulting in a discrepancy between the measured and calculated values. Table 5 shows the results of estimating the activity of 93m Nb produced from 92 Mo based on the measured activity of 93 Mo shown in Table 3. The calculated activity of 93m Nb produced from Mo in XM-19 and X-750 alloy is sufficiently small compared to the measured activity of 93m Nb as shown in Table 4. On the other hand, in type 316L SS, the calculated activity of 93m Nb produced from Mo is only about one order of magnitude smaller than the measured value, which is a non-negligible amount. Table 6 is a comparison of the measured activity of 93m Nb after subtracting the activity of 93m Nb produced from 92 Mo (M') and the calculated activity of 93m Nb produced from Nb (C). Even after subtracting the activity of 93m Nb produced from Mo, the ratio of measured to calculated values in type 316L SS is 2.4, which is a slight improvement, but there is still a discrepancy compared to the XM-19 and X-750 alloy values. The reason for the discrepancy between the calculated and measured values may be due to uncertainty in the measured value of 93 Mo, uncertainty in the calculation model for the amount of 93m Nb produced from 92 Mo, or other factors. The calculation of 93m Nb produced from 92 Mo should be done in detail using an analysis code such as ORIGEN2 [6].
Conclusions
In this study, the measurement method for 93m Nb was verified using three irradiated specimens: XM-19, X-750 alloy and type 316L SS. In addition, for cases in which the influence of 93m Nb produced from Mo cannot be ignored, a method for estimating Mo-derived 93m Nb from the measured 93 Mo activity was examined. The measured activity of 93m Nb for XM-19 and X-750 alloy agreed well with the calculated activity assuming that 93 Nb is the source, confirming that the method used in this study yields reasonable 93m Nb measurements.
On the other hand, in type 316L SS, which has a low Nb concentration relative to its Mo concentration compared with XM-19 and X-750 alloy, the measured activity of 93m Nb did not agree well with the calculated activity assuming that 93 Nb is the source, and the effect of 93m Nb produced from Mo was confirmed. Therefore, 93m Nb from Mo was evaluated based on the 93 Mo measurements and subtracted from the 93m Nb measurements. However, the difference between the measured and calculated values of 93m Nb was still about two-fold, indicating that the estimation method for 93m Nb produced from Mo needs to be improved. The amount of 93m Nb produced from Mo will be investigated in detail in future work.
In addition, type 304 SS, which was not treated in this study, does not contain Mo as a constituent element, so 93m Nb produced from Mo may be negligible in this material. The feasibility of estimating fast neutron fluence from the Nb contained in type 304 SS will therefore also be studied.
In vivo mitochondrial base editing via adeno-associated viral delivery to mouse post-mitotic tissue
Mitochondria host key metabolic processes vital for cellular energy provision and are central to cell fate decisions. They are subjected to unique genetic control by both nuclear DNA and their own multi-copy genome - mitochondrial DNA (mtDNA). Mutations in mtDNA often lead to clinically heterogeneous, maternally inherited diseases that display different organ-specific presentation at any stage of life. For a long time, genetic manipulation of mammalian mtDNA has posed a major challenge, impeding our ability to understand the basic mitochondrial biology and mechanisms underpinning mitochondrial disease. However, an important new tool for mtDNA mutagenesis has emerged recently, namely double-stranded DNA deaminase (DddA)-derived cytosine base editor (DdCBE). Here, we test this emerging tool for in vivo use, by delivering DdCBEs into mouse heart using adeno-associated virus (AAV) vectors and show that it can install desired mtDNA edits in adult and neonatal mice. This work provides proof-of-concept for use of DdCBEs to mutagenize mtDNA in vivo in post-mitotic tissues and provides crucial insights into potential translation to human somatic gene correction therapies to treat primary mitochondrial disease phenotypes. Mutations in mitochondrial DNA can lead to clinically heterogeneous disease. Here the authors demonstrate in vivo base editing of mouse mitochondrial DNA in a post-mitotic tissue by AAV delivery of DddA-derived cytosine base editor (DdCBE).
Mitochondria play a central role in energy provision to the cell and in several key metabolic pathways, such as thermogenesis, calcium handling, iron-sulfur cluster biogenesis, and apoptosis 1,2. Mitochondria produce energy in the form of ATP, which is synthesized in the process of oxidative phosphorylation (OXPHOS), involving sequential redox reactions coupled with proton pumping performed by mitochondrial membrane-embedded respiratory chain complexes (I-IV) and ATP synthase (complex V). In mammals, the mitochondrial proteome comprises ~1200 proteins 3, with most of them being nuclear DNA (nDNA)-encoded. However, 13 essential OXPHOS polypeptides, together with the 22 tRNAs and 2 rRNAs required for their translation, reside inside the mitochondrial matrix, encoded by the mitochondrial DNA (mtDNA): a 16.5 kb, maternally inherited, multicopy circular genome.
Mitochondrial diseases are genetic disorders, caused by mutations in either nDNA or mtDNA, that lead to impaired mitochondrial energy production and perturbations in other aspects of cellular homeostasis. With a prevalence of ~23 in 100,000, mitochondrial disorders are among the most common inherited diseases, and are often associated with severe disability and shortened lifespan 4. There are currently no effective treatments for these disorders and clinical management focuses on treating complications 5. Mutations in mtDNA and mitochondrial dysfunction have also been implicated in many common diseases with high societal impact and in ageing 6,7. Mammalian cells can contain hundreds to thousands of copies of mtDNA 8. Pathogenic variants in mtDNA can either be present in all copies (homoplasmy) or only in a portion of genomes (heteroplasmy), with mutant load varying across cells, tissues, and organs 9. In heteroplasmic cells, the mutant load required for clinical expression must exceed a threshold, which is highly variable and dependent on the mtDNA variant, affected tissue(s)/organ(s), and genetic/environmental context, but is usually more than 60% 10.
Despite the ongoing genome-engineering revolution enabled by the CRISPR/Cas-based systems, mammalian mtDNA has been resistant to genetic modifications, in a large part owing to ineffective nucleic acid import into the mitochondria 11 . The inability to edit mtDNA sequences in mammalian mitochondria within cells has hampered the research of normal mtDNA processes, the development of in vivo models and therapies for mtDNA diseases. For many years the approaches towards manipulation of mtDNA in mammals have been mainly limited to mitochondrially targeted restriction enzymes [12][13][14] and programmable nucleases [15][16][17][18][19][20][21] . These nucleases have been used to eliminate undesired mtDNA molecules from heteroplasmic populations, to move the mutant mtDNA heteroplasmy below the pathogenicity threshold 22 . Following extensive trials in vitro, the mitochondrial nuclease-based approaches have reached in vivo proof-ofconcept. Delivered by adeno-associated virus (AAV) in heteroplasmic mouse models, mtRE, mtZFN, mitoTALEN, or mitoARCUS demonstrated specific elimination of the mutant mtDNA in the target tissues 23,24 , which in some models and approaches was accompanied by the molecular and physiological rescue of disease phenotypes [25][26][27] .
While programmable nucleases have proven useful in changing the existing heteroplasmy, they are unable to introduce novel mtDNA variants. However, recently a novel tool has emerged: DddA-derived cytosine base editor (DdCBE), which catalyzes site-specific C:G to T:A conversions in mtDNA with good target specificity in human cultured cells 28 . DdCBE is based upon a modified bacterial toxin DddA tox (separated, non-toxic halves fused to TALE proteins) which is targeted to the mitochondrial matrix to catalyze the deamination of cytidines within dsDNA at sequence determined by TALE design 28 . Current DdCBEs deaminate cytidines (in the TC:GA sequence context) to uracil leading to a TC:GA > TT:AA mutations upon subsequent replication 28 . An initial proof-of-concept of successful installment of mtDNA edits by delivering DdCBEs mRNA into embryos in the mouse 29 and zebrafish 30 has also recently been provided.
In this study, we provide proof-of-concept for the use of DdCBEs in vivo in somatic tissue. We used the mouse heart as a surrogate post-mitotic tissue and showed that DdCBE delivered using AAV can install the desired mtDNA mutations in adult and neonatal mice. To the best of our knowledge, such a result has not been reported in the literature thus far. This work demonstrates that the DdCBE platform could be used for tissue-specific mtDNA mutagenesis in vivo and potentially for future therapies based upon somatic mitochondrial gene correction to treat mtDNA-linked mitochondrial diseases.
Results
Design of DdCBE and mtDNA editing in mouse cultured cells.
With the intention of testing the emerging mtDNA editing DdCBE technology, we set off to induce de novo mutations in mouse mitochondrial complex I in cultured cells and in somatic tissues upon AAV delivery. We aimed at editing the GGA glycine 40 codon in mouse MT-Nd3 (mtDNA positions: m.9576 G and m.9577 G) by targeting the complementary cytosine residues with DdCBE ( Fig. 1a, C 12 C 13 ). To enable this, we designed four DdCBE pairs, containing TALE domains binding the mtDNA light (L) and heavy (H) strands (mtDNA positions m.9549-m.9564 and m.9584-m.9599, respectively) and different combinations of DddA tox splits (G1333 or G1397), targeting a 19 bp-long sequence in the mouse MT-Nd3 gene (mtDNA positions: m.9565-m.9583) ( Fig. 1a-b) and named them DdCBE-Nd3-9577-1 to 4. The previous study has shown that for the sites containing two consecutive cytosines (preceded by a thymine), both of these cytosines can be edited by DdCBEs, with the following potential consequences TCC:GGA > TTC:GAA, TCC:GGA > TCT:AGA, TCC:GGA > TTT:AAA (edited C underlined) 28 . In line with this, we predicted three possible outcomes of m.9576 G and m.9577 G editing ( Fig. 1b): [i] deamination of both complementary cytosines (Fig. 1b, C 12 and C 13 ), would result in glycine to lysine mutation (G40K), [ii] deamination of C 13 would lead to a glycine to glutamic acid mutation (G40E), whereas [iii] exclusive editing of C 12 would lead to a premature AGA stop codon (G40*), according to the mitochondrial genetic code (Fig. 1b). The three predicted mutations are located in the conserved ND3 loop, involved in active/deactive state transition of complex I ( Fig. 1c-d) 31,32 .
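The reasoning behind the three predicted outcomes can be made concrete with a short sketch that applies the relevant entries of the vertebrate mitochondrial genetic code (in which AGA is read as a stop codon) to the GGA glycine 40 codon. The codon table below is a deliberately partial, illustrative subset of that code, and the mapping of C12/C13 edits to codon positions simply follows the outcomes described above, not the editor's actual targeting logic.

```python
# Relevant subset of the vertebrate mitochondrial genetic code
# (AGA is a stop codon in vertebrate mtDNA, unlike in the standard code).
MITO_CODE_SUBSET = {
    "GGA": "Gly (G)",
    "AAA": "Lys (K)",
    "GAA": "Glu (E)",
    "AGA": "Stop (*)",
}


def edit_codon(codon, positions):
    """Apply C:G-to-T:A editing at the given 0-based positions of a
    heavy-strand codon, i.e. replace G with A at those positions."""
    bases = list(codon)
    for i in positions:
        if bases[i] == "G":
            bases[i] = "A"
    return "".join(bases)


wild_type = "GGA"  # glycine 40 of MT-ND3
# Position mapping assumed from the described outcomes: C12 -> first G, C13 -> second G.
outcomes = {
    "C12 + C13 edited": edit_codon(wild_type, [0, 1]),  # -> AAA, G40K
    "C13 only edited":  edit_codon(wild_type, [1]),     # -> GAA, G40E
    "C12 only edited":  edit_codon(wild_type, [0]),     # -> AGA, G40* (stop)
}

for label, codon in outcomes.items():
    print(f"{label}: {wild_type} -> {codon} ({MITO_CODE_SUBSET[codon]})")
```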
Next, we transiently delivered DdCBE-Nd3-9577 pairs into mouse cultured NIH/3T3 cells and selected the transfectants using fluorescence-activated cell sorting (FACS) at 24 h posttransfection and allowed for a 6 day-long recovery (Fig. 2a). After 7 days, we detected efficient editing of target cytosines by Sanger sequencing for the DdCBE-Nd3-9577 pairs 1, 2 and 3 (Fig. 2b). Although the targeted 19 bp sequence region between the TALE binding sites contained five cytosine residues with the correct thymine-cytosine (TC) consensus (Fig. 1a, C 6 , C 12 , C 13 , C 17 , and C 18 ), positioning of the DddA tox deaminase domain allowed for editing of C 12 , C 13 leading to the expected changes in the MT-Nd3 GGA glycine 40 codon (Fig. 2b). Next, we performed nextgeneration sequencing (NGS) analysis of the DdCBE-Nd3-9577mediated mutagenesis to quantify the editing in each target cytosine and measure the proportion of editing resulting in the G40K, G40E, G40* mutations. This analysis revealed up to~43% editing of C 12 and C 13 for DdCBE-Nd3-9577 pair 1,~20-35% editing for pairs 2 and 3, and confirmed negligible editing activity of pair 4 (Fig. 2c). Furthermore, most of the NGS reads for pair 1 contained editing of both C 12 and C 13 translating to G40K mutation (~92.5%), whereas DNA reads corresponding to the G40E and G40* changes constituted only~4.5% and 3%, respectively (Fig. 2d). A similar editing pattern, but with a higher proportion of reads corresponding to G40E, was observed for pair 3 (G40K:~83%, G40E: 14% G40*: 3%) (Fig. 2d). However, the mutagenesis pattern detected for pair 2 was skewed towards G40E, as compared to pairs 1 and 3, with reads corresponding to G40E accounting for 45.5% of C 12 and C 13 -edited reads (G40K: 53% and G40*: 1.5%) (Fig. 2d). The NGS analysis revealed a low level (below 3%) of C 6 (mtDNA position m.9570 G) for pairs 1-3, which is predicted to install the E38K change (Fig. 2c). To confirm that the observed mtDNA editing is indeed a result of the catalytic activity of DddA tox , we used the DdCBE-Nd3-9577 pairs harboring a catalytically inactive DddA tox (E1347A) in a control transient transfection experiment. The NGS analysis showed that none of the catalytically inactive DdCBEs exerted detectable deamination activity that would lead to mtDNA editing, with the mutagenesis frequency being at the level of wild-type cells (Fig. 2c). Based on these in vitro results, we concluded that the DdCBE-Nd3-9577-1 pair is the most suitable for efficient installation of G40K and decided to proceed with in vivo experiments using this set.
AAV-based in vivo DdCBE editing of mtDNA in adult mice.
To provide a proof-of-concept for in vivo mtDNA gene editing of somatic cells, we encapsidated the catalytically active and inactive versions of the DdCBE-Nd3-9577-1 monomers into the cardiotropic AAV9.45 serotype and administered them systemically via tail-vein injection at 1 × 10 12 viral genomes (vg) per monomer per 8-week-old adult mouse (Fig. 3a). At 3-and 24-weeks post-injection, we confirmed successful DdCBE DNA delivery to the cardiac tissue by quantitative PCR (Supplementary Fig. 1a-b) and detected its expression in total mouse heart tissue by western blotting (Supplementary Fig. 1c) and immunohistochemistry (Supplementary Fig. 1d-e). At 3 weeks after DdCBE-Nd3-9577-1 AAV injections, we detected low-level editing (1-2%) of the target C 12 and C 13 bases (corresponding to m.9576G > A and m.9577G > A) by NGS, but not by Sanger sequencing (Fig. 3b-c). The NGS analysis revealed C 12 and C 13 mutagenesis in the hearts of animals injected with catalytically active DdCBE-Nd3-9577-1, but not in those injected with a vehicle or catalytically inactive DdCBE, with the C 12 and C 13 editing pattern resembling the one observed in in vitro experiments (Fig. 3d). Notwithstanding, at 24-weeks post-injection robust editing (10-20%) of C 12 and C 13 was observed in the cardiac tissue of mice injected with catalytically active DdCBE-Nd3-9577-1 by Sanger sequencing and NGS, with no detectable changes being detected for vehicle and catalytically inactive controls ( Fig. 3e-f). Further analysis confirmed that in the majority (94%) of the NGS reads both C 12 and C 13 were edited, translating into the G40K MT-Nd3 mutation (Fig. 3g). We did not detect any adverse effect on mtDNA copy number in DdCBE AAV-transduced cardiac tissue as compared to vehicle-injected controls (Supplementary Fig. 1f-g). Taken together, these results established that DdCBE editing can be applied to install mtDNA mutations in vivo in post-mitotic tissues.
AAV-based in vivo DdCBE editing of mtDNA in neonates.
Having established that de novo mutations can be installed in vivo in the post-mitotic tissue of adult mice following AAV-assisted DdCBE transduction, we set off to investigate whether transgene delivery to younger subjects could enhance the editing. To this end, we injected neonatal subjects (first 24 h of life) with DdCBE-Nd3-9577-1 AAV9.45 and its catalytically inactive version via temporal vein injection at 1 × 10 12 vg per monomer per animal (Fig. 4a; Supplementary Fig. 2b-c). At 3 weeks post-injection, we observed high efficiency of editing of mtDNA in the mouse heart, with Sanger sequencing and NGS revealing 20-30% of C > T (G > A) changes of C 12 and C 13 in the DdCBE-Nd3-9577-1-targeted spacing region (Fig. 4b-c). No editing of these bases was observed in the control pups injected with a vehicle or the catalytically inactive DdCBE-Nd3-9577-1 AAVs (Fig. 4c). The vast majority of NGS reads (95%) contained simultaneous edits of C 12 and C 13 (corresponding to m.9576 G > A and m.9577 G > A), translating into the G40K MT-ND3 mutation (Fig. 4d). We did not observe any significant changes in mtDNA copy number in the AAV-treated mice as compared to the vehicle-injected controls (Supplementary Fig. 2d). Taken together, these data not only further confirm that DdCBE-mediated mtDNA editing is possible in post-mitotic tissues upon AAV delivery, but also show that treatment of younger subjects is beneficial for the efficacy of mtDNA modification.
Off-target editing by DdCBE in adult and neonatal mice following AAV delivery. To score mtDNA-wide off-targeting in the mice, we analyzed mtDNA from hearts of vehicle-injected controls and mice injected with active or inactive versions of the DdCBE-Nd3-9577-1 pair. Vehicle-injected and catalytically inactive editor samples were used as controls in order to distinguish DdCBE-induced C:G-to-T:A single-nucleotide variants (SNVs) from natural background heteroplasmy. The average frequencies of mtDNA-wide off-target C:G-to-T:A editing by DdCBE-Nd3-9577-1 in the adult animals treated for 3 weeks were comparable to those of the vehicle-injected and catalytically inactive-DdCBE controls (0.026-0.046%) (Supplementary Fig. 3a). However, the adult mice treated with DdCBE-Nd3-9577-1 for 24 weeks showed a ~7-fold higher average off-target editing frequency (0.22-0.30%) as compared with the controls (Supplementary Fig. 3a). We observed the highest off-target editing in the neonates treated with DdCBE-Nd3-9577-1, which was ~3-fold higher than that observed for 24-week AAV-treated adult mice (Supplementary Fig. 3a). There was a positive correlation between on-target and off-target editing, with increased m.9576 G and m.9577 G modification being accompanied by higher levels of C:G-to-T:A SNVs (Supplementary Fig. 3b). Next, we tested off-target editing at nuclear pseudogenes, which are identical to or share a high degree of sequence homology with mtDNA (nuclear mitochondrial DNAs, NUMTs). We did not observe editing beyond the background at the tested NUMTs, even though one of them was identical with the mtDNA on-target sites (Supplementary Fig. 4). This result is consistent with previous reports of exclusive mitochondrial localization of DdCBEs 28. Taken together, our data show that, while no detectable off-targeting is observed in nDNA upon AAV delivery of DdCBE, substantial off-targets are observed in mtDNA, especially when the on-target modification is also high. The latter result suggests that future development of DdCBE must focus on further improvement of the precision of this emerging tool.
Discussion
The discovery of a bacterial cytidine deaminase acting on double-stranded DNA (DddA) led to the development of mitochondrial DddA-derived cytosine base editors (DdCBEs), which are likely to revolutionize the field of mammalian mtDNA genetic modification 28. The DdCBE technology provides the potential to reverse engineer the mitochondrial genome in animal cells and eventually correct homo- and heteroplasmic pathogenic point mutations in mtDNA. Generation of novel animal models is now expected to proceed in an expedited fashion, either by DdCBE-mediated manipulation of Embryonic Stem (ES) cells, direct modification of mouse embryos or, as shown here, by somatic delivery. As with any emerging technology, the DdCBE approach needs to be tested in multiple systems to validate its usefulness. Thus far, the versatility of DdCBEs was shown by successfully base editing five mtDNA genes with efficiencies ranging between 5 and 50% in human cells in vitro 28. Further proof-of-concept exemplified the use of base editing in mouse embryos and reported successful germline transmission of DdCBE-induced mtDNA edits 29. In the latter report, mutations in MT-Nd5, generated by delivering DdCBE mRNAs into mouse zygotes, were maintained throughout development and differentiation. These mutations were successfully transmitted to offspring (F1) with heteroplasmy levels of up to 26%, providing evidence that DdCBEs can be used to generate mouse models with bespoke mtDNA mutations 29. In this study, we used somatic AAV delivery of DdCBEs, providing a proof-of-concept for an alternative means of in vivo mtDNA mutagenesis. The successful AAV-based mtDNA editing presented here is also critical for in vivo proof-of-concept and insights into potential clinical translation to human somatic mitochondrial gene correction therapies to treat primary mitochondrial disease (PMD) phenotypes in worst-affected tissues. These future therapies could be crucial for mtDNA-associated PMDs, whose de novo genetics and hard-to-predict penetrance make preimplantation genetic diagnosis (PGD) screening difficult.
We predicted two possible issues that could prevent efficient mtDNA editing by DdCBEs in vivo.
[i] The editing of mitochondrial genomes with the current DdCBEs requires limited activity of mitochondrial base excision repair (BER), to allow the retention of uracil in DNA (the result of cytosine deamination), and a sufficient level of active mtDNA replication to allow the conversion of uracil into thymine 28. The efficacy of mitochondrial BER had not been fully investigated in vivo, and it was possible that it operates at a level that prevents efficient DdCBE-mediated editing.
[ii] Previous reports highlighted that mammalian mtDNA is replicated continuously even in post-mitotic cells 33, but whether this activity is sufficient to achieve successful editing by DdCBEs was unknown. The successful mtDNA editing in mouse hearts reported here shows that [i] BER operating in the mouse heart does not excise uracil from DNA rapidly enough to preclude efficient editing, and [ii] the level of mtDNA replication is high enough to fix C-to-U deamination events into C-to-T mutations within 3 or 24 weeks after DdCBE treatment of neonatal or adult mice, respectively.
In neonates, our data also show that DdCBE administration earlier during development results in higher editing efficiency. A partial explanation for this finding could be that [i] at earlier vector administration, the greater AAV vector-to-cell ratio promoted higher transduction of the DdCBEs in the neonate heart and [ii] the postnatal development of mouse cardiomyocytes involves substantial mtDNA replication, leading to an almost 13-fold increase in mtDNA copy number during the first four weeks of postnatal life 34, increasing the probability of fixing C-to-U deamination events into C-to-T mutations. Considering future therapeutic interventions based on DdCBE-mediated mtDNA correction, this observation may be encouraging in the context of those mitochondrial diseases with early onset and rapid progression, for which administration of potential therapies in adults would be inadequate. Nonetheless, further studies on the gene correction potential of AAV-delivered DdCBEs would be required in mtDNA-disease mouse models. However, none of the four currently available mouse models harboring pathogenic mtDNA point mutations is suitable for a DdCBE-mediated gene correction study, either due to the nature of the mutation (MT-ND6: m.13997 G > A, MT-MK: m.7731 G > A, MT-TA: m.5024 C > T) or an incompatible sequence context upstream of a T-to-C mutation (MT-COI: m.6589 T > C) 35. The latter means that, in addition to DdCBE-based approaches, the field should continue generating novel mtDNA-mutant mouse lines using the established, phenotype-first pipeline, as it can generate any mtDNA mutations (not only the C:G to T:A changes offered by DdCBEs) 36.
Our results confirm the previous observation that each DddA tox split edits the TC sites with a preference for specific windows in the spacing region. They also highlight that, for any given target sequence, testing G1397 and G1333 splits in both orientations is required to achieve on-target editing. Here we demonstrate that the pair DdCBE-Nd3-9577-1, combining the G1333-C split in the TALE targeting the L-strand with G1333-N on the TALE targeting the H-strand, produces more robust base editing of C 12 and C 13 located on the H-strand. Such observations fall in line with the previously reported preference of this split combination, which favors base editing of H-strand Cs present in the center of the editing window between the TALEs 28.
Our off-target editing analysis revealed that adult mice treated with DdCBE-Nd3-9577-1 for 24 weeks show~0.25% of C:G-to-T:A SNV frequencies mtDNA-wide, which is higher than these reported previously for DdCBEs transiently expressed for 3 days in human cells, which ranged between~0.05 to~0.15% 28 . Also, the C:G-to-T:A SNV off-target frequencies detected in the neonates injected with the active DdCBE-Nd3-9577-1 pair for 3 weeks were on average at~0.8%, which was~5 times higher than the least "precise" mitochondrial base editor reported by Mok et al. 28 . We attribute these differences to longer mtDNA-DdCBE exposure time (weeks vs days) in our experiments and conclude that further optimization of mitochondrial DdCBE concentration and specificity will be required, especially in long-term in vivo experiments.
In the present study, MT-Nd3 G40K was installed to provide a proof-of-concept of somatic mtDNA editing. However, this mutation is located in the conserved ND3 loop involved in the active/deactive state transition 31,32. It is expected that high G40K heteroplasmy will result in mitochondrial dysfunction by permanently locking complex I in the active conformation, which warrants further study. In addition, it has been previously shown that targeting the reversible S-nitrosation of the neighboring ND3 residue C39 37 protects against ischaemia-reperfusion (IR) injury. In this line, the G40K mutant could be explored in the context of changes in exposure of Cys39 in models of IR injury.
In conclusion, DdCBE is a promising tool for de novo mtDNA editing in post-mitotic tissue, which upon further research and optimization could be used to revert pathogenic mtDNA variants in patients affected with mitochondrial disease.
Methods
Ethics statement. All animal experiments were approved by the local Animal Welfare Ethical Review Body (AWERB) at the University of Cambridge and carried out in accordance with the UK Animals (Scientific Procedures) Act 1986 (Procedure Project Licence: P6C20975A) and EU Directive 2010/63/EU.
Plasmid construction and viral vectors.
The DdCBE architectures used were as reported in 28 . A catalytically dead DddA tox (E1347A) was used in the "inactive" control experiments. TALE arrays were designed using the Repeat Variable Diresidues (RVDs) containing NI, NG, NN, and HD, recognizing A, T, G, and C, respectively. To construct the plasmids used in the cell screen, all DdCBEs ORFs were synthesized as gene blocks (GeneArt, Thermo Fisher) and cloned into pVax vectors downstream of a mitochondrial localization signal (MLS) derived from SOD2, using the 5´KpnI and 3´BglII restriction sites (Supplementary Sequences 1). Vector construction of DdCBEs intended for AAV production was achieved by PCR amplification of the transgenes to include 5´NotI and 3´BamHI sites, allowing cloning into a rAAV2-CMV backbone (Supplementary Sequences 2), previously reported in 26 . The resulting plasmids were used to generate recombinant AAV2/9.45 viral particles at the UNC Gene Therapy Center, Vector Core Facility (Chapel Hill, NC).
Cell culture and transfections. NIH/3T3 cells (CRL-1658 TM , American Type Culture Collection (ATCC)) were cultured at 37°C under 5 % (vol/vol) CO 2 and in complete Dulbecco's Modified Eagle Medium (DMEM) (4.5 g/L glucose 2 mM glutamine, 110 mg/ml sodium pyruvate), supplemented with 10% calf bovine serum with iron and 5% penicillin/streptomycin (all from Gibco). Mycoplasma tests in the culture medium were negative. The cell line was not authenticated in this study. For DdCBE pair screen, NIH/3T3 mouse cells plated in six-well tissue culture plates at a confluency of 70% were transfected with 3200 ng of each monomer (L and H), to a total of 6400 ng of plasmid DNA using 16 µl of FuGENE-HD (Promega), following manufacturer´s guidelines. After 24 h, cells were collected for Fluorescence-activated cell sorting (FACS) and sorted for GFP and RFP double-positive cells using a BD FACSMelody TM Cell sorter. The collected doublepositive cells were allowed to recover for another 6 days and then used for DNA extraction, as described below.
Animals. Mice in a C57BL/6J background were obtained from Charles River Laboratories. The animals were maintained in a temperature-and humiditycontrolled animal care facility with a 12 h light/12 h dark cycle and free access to water and food, and they were sacrificed by cervical dislocation. In adult experiments, 8-week-old male mice were administered systemically by tail-vein injection with 1 × 10 12 AAV particles of each monomer [AAV-DdCBE (L) -Nd3-9577 -G1333-C and AAV-DdCBE (H) -Nd3-9577-G1333-N]. An equal dose was applied in newborn pups (Postnatal day 1-males and females) via the temporal vein, using a 30 G, 30°bevelled needle syringe. Control mice were injected with similar volumes of vehicle buffer (1× PBS, 230 mM NaCl and 5% w/v D-sorbitol).
Genomic DNA isolation and Sanger sequencing of MT-Nd3 locus. NIH/3T3 mouse cells were collected by trypsinization, washed once in PBS, and resuspended in lysis buffer (1 mM EDTA, 1% Tween 20, 50 mM Tris (pH = 8)) with 200 µg/ml of proteinase K. Lysates were incubated at 56°C with agitation (300 RPM) for 1 h, and then incubated 95°C for 10 min before use in downstream applications. Genomic DNA from mouse heart samples (~50 mg) was extracted with a Maxwell® 16 Tissue DNA Purification Kit in a Maxwell® 16 Instrument (Promega), according to the manufacturer´s instructions.
For Sanger sequencing, the MT-Nd3 edited region was PCR-amplified with GoTaq G2 DNA polymerase (Promega) using the following primers: Mmu_Nd3_Fw: 5´-GCA TTC TGA CTC CCC CAA AT -3´; and Mmu_Nd3_Rv: 5´-GGC CTA GAG ATA GAA TTG TGA CTA GAA -3´. The PCR was performed with an initial heating step of 1 min at 95°C followed by 35 cycles of amplification (30 s at 95°C, 30 s at 63°C, 30 s at 72°C), and a final step of 5 min at 72°C. PCR purification and Sanger sequencing were carried out by Source Bioscience (UK) with the Mmu_Nd3_Rv primer.
High-throughput targeted amplicon sequencing. Genomic DNA was extracted as described above. For high-throughput targeted amplicon resequencing of the MT-Nd3 region, a 15,781 bp fragment was first amplified by long-range PCR to avoid amplification of nuclear mtDNA pseudogenes (NUMTs), with PrimeSTAR GXL DNA polymerase (TAKARA) using the following primers: Long-R_mtDNA_Fw: 5´-GAG GTG ATG TTT TTG GTA AAC AGG CGG GGT -3´; and LongR_mtDNA_Fw: 5´-GGT TCG TTT GTT CAA CGA TTA AAG TCC TAC GTG -3´. The PCR was performed with an initial heating step of 1 min at 94°C followed by 10 cycles of amplification (10 s at 98°C, 13 min at 68°C), and a final step of 10 min at 72°C. All PCR reactions from this and the following steps were cleaned up with AMPure XP beads (Beckman Coulter, A63881). An aliquot of the purified long-range PCR reactions was amplified with primers containing an overhang adapter sequence, compatible with Illumina index and sequencing primers.

For mtDNA-wide off-target analysis, two overlapping long amplicons (8331 bp and 8605 bp) covering the full mtDNA molecule were amplified by long-range PCR with PrimeSTAR GXL DNA polymerase (TAKARA) using the following primers: mmu_ND2_Fw: 5´-TCT CCG TGC TAC CTA AAC ACC -3´; with mmu_ND5_Rv: 5´-GGC TGA GGT GAG GAT AAG CA -3´; and mmu_ND2_Rv: 5´-GTA CGA TGG CCA GGA GGA TA -3´; with mmu_ND5_Fw: 5´-CTT CCC ACT GTA CAC CAC CA -3´. The PCR was performed with an initial heating step of 1 min at 94°C followed by 16 cycles of amplification (30 s at 98°C, 30 s at 60°C, 9 min at 72°C), and a final step of 5 min at 72°C.
For nuclear DNA off-targets assessment, two regions with high homology to the MT-Nd3 targeted region were analyzed. A region with 100% identity in chromosome 1 was amplified by long-range with PrimeSTAR GXL DNA polymerase (TAKARA) using the following primers: G40K_NUMT100_Fw2: 5´-T GC ACT GCT GAC CCA TTA AT -3´; with G40K_NUMT100_Rv2: 5´-ACA CAC ACT AGA CAA CAC CCA -3´. The second region with 86% identity in chromosome 14 was amplified using the following primers: G40K_NUMT86_Fw1: 5´-CTG GTG GTC ACT TGG TGT GT -3´; with G40K_NUMT86_Rv1: 5´-TGT TAC ATG TTT CTC TGT TTT TGC T -3´.
The PCR was performed with an initial heating step of 1 min at 94°C followed by 20 cycles of amplification (30 s at 98°C, 30 s at 60°C, 9 min at 72°C), and a final step of 5 min at 72°C. Tagmentation and the indexing PCR were performed using the Nextera XT Index Kit (Illumina, FC-131-1096) according to the manufacturer's instructions and as described above. Libraries were subjected to high-throughput sequencing using the Illumina MiSeq platform (PE250) and demultiplexed using the Illumina MiSeq manufacturer's software.
Processing and mapping of high-throughput data. Quality trimming and 3' end adaptor clipping of sequenced reads were performed simultaneously, using Trim Galore! (--paired) 38. For targeted amplicon resequencing of the MT-Nd3 region and for mtDNA-wide off-target analysis, reads were aligned to ChrM of the mouse reference genome (GRCm38) with Bowtie2 (--very-sensitive; --no-mixed; --no-discordant) 39. For nuclear DNA off-target assessment, reads were aligned to the two regions with high homology (GRCm38). Count tables for targeted amplicon resequencing of the MT-Nd3 region were generated with samtools mpileup (-q 30) 40 and varscan 41. To study the editing pattern per read and to determine the percentage of reads that had edits at C 12 and C 13, we used cutadapt (-e 0; --action=none; --discard-untrimmed) 42 for the 10 nt surrounding the editing site in combination with the samtools flagstat command. For the mtDNA-wide and nuclear DNA off-target analysis, REDItools2.0 43 was used (-bq 30) to generate count tables. The average mtDNA-wide C•G-to-T•A off-target editing frequency was assessed by summing all off-target C-to-T and G-to-A editing frequencies and dividing by the total number of C•G sites in mouse mtDNA (5990).
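As a rough illustration of the last step, the sketch below computes the average mtDNA-wide C:G-to-T:A off-target editing frequency from a per-site count table. The column names, input format and on-target exclusion window are hypothetical assumptions; only the arithmetic (sum of per-site off-target C-to-T and G-to-A frequencies divided by the 5990 C:G sites of mouse mtDNA) follows the description above.

```python
import csv

N_CG_SITES_MOUSE_MTDNA = 5990          # total number of C:G sites in mouse mtDNA
ON_TARGET_POSITIONS = {9576, 9577}     # assumed on-target bases excluded from the off-target average


def mean_offtarget_cg_to_ta(count_table_path):
    """Average mtDNA-wide off-target C:G-to-T:A editing frequency.

    Expects a tab-separated table with hypothetical columns:
    position, ref_base, a_count, c_count, g_count, t_count
    (one row per mtDNA position, e.g. exported from a count-table run).
    """
    total_freq = 0.0
    with open(count_table_path) as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            pos = int(row["position"])
            ref = row["ref_base"].upper()
            if pos in ON_TARGET_POSITIONS or ref not in ("C", "G"):
                continue
            a, c, g, t = (int(row[k]) for k in ("a_count", "c_count", "g_count", "t_count"))
            depth = a + c + g + t
            if depth == 0:
                continue
            # C-to-T on the reference strand, or G-to-A (the complementary change).
            edited = t if ref == "C" else a
            total_freq += edited / depth
    return total_freq / N_CG_SITES_MOUSE_MTDNA


# Example usage (hypothetical file name):
# print(mean_offtarget_cg_to_ta("heart_mtDNA_counts.tsv"))
```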
Quantification of viral genome copy number and relative mtDNA content by quantitative real-time PCR. All quantitative real-time PCR reactions were performed on a QuantStudio™ real-time PCR system.

Immunoblotting of DdCBEs in mouse hearts. Heart samples (~50 mg) were homogenized in 200 µL of ice-cold RIPA buffer (150 mM NaCl, 1.0% NP-40, 0.5% sodium deoxycholate, 0.1% SDS, 50 mM Tris, pH 8.0) containing 1X cOmplete™ mini EDTA-free Protease Inhibitor Cocktail (Roche, UK), using a gentleMACS™ dissociator. Homogenates were incubated on ice for 20 min and then cleared by centrifugation (20,000 × g for 20 min at 4°C). Protein lysates (~20 μg) were mixed with 10X NuPAGE™ sample reducing agent and 4X NuPAGE™ LDS sample buffer (Invitrogen), and incubated for 5 min at 95°C. The boiled protein samples were then separated in a Bolt 4-12% Bis-Tris (Thermo Fisher) pre-cast gel and later transferred to a PVDF membrane using an iBlot 2 gel transfer system (Thermo Fisher), according to the manufacturer's recommendations. The residual proteins that remained in the gel were detected using SimpleBlue SafeStain (Thermo Fisher) and used as a loading control. The membrane was blocked in 5% milk in PBS with 0.1% Tween 20 (PBS-T) for 1 h at room temperature (RT) and then incubated with either rat anti-HA-tag antibody (Roche, 11867423001), diluted 1:1000 in 5% milk in PBS-T, or mouse anti-FLAG-M2-tag (Sigma-Aldrich, F3165), diluted 1:2000 in 5% milk in PBS-T. Membranes were washed three times with PBS-T for 10 min at RT and then incubated with HRP-linked secondary antibodies, either anti-rat IgG (Cell Signaling, 7077S) or anti-mouse IgG (Promega, W4021), diluted 1:5000 in 5% milk in PBS-T. The membranes were washed another three times as before and imaged digitally with an Amersham Imager 680 blot and gel imager (GE Healthcare), upon incubation with Amersham ECL™ Western Blotting Detection Reagents (GE Healthcare).
Immunohistochemistry and microscopy. At sacrifice, mouse heart samples were frozen by immersion in isopentane cooled in liquid nitrogen. For immunohistochemistry analysis, heart samples were mounted in optimal cutting temperature compound (OCT) and sectioned on a cryostat at −20°C to a thickness of 8 μm. Sections were fixed in 10% neutral buffered formalin (SIGMA, HT501128) at RT. After three washes in PBS, samples were permeabilized using 0.2% Triton X-100 in PBS for 15 min at RT, followed by three washes in PBS, and then blocked in 5% normal goat serum + 2% BSA in PBS for 1 h at RT. After three washes in PBS, samples were incubated with the rabbit anti-HA-Tag (Cell Signaling, 3724) primary antibody 1:2000 in DAKO solution (Agilent, S3022) for 1 h at RT, followed by three additional washes in PBS. Sections were then incubated with goat anti-rabbit IgG (H + L) Alexa Fluor 568 secondary antibody diluted 1:300 in DAKO diluent solution for 1 h at RT, washed three times in PBS, and finally coverslipped with ProLong Diamond Antifade Mountant with DAPI (Thermo Fisher, P36962). All confocal images were acquired using a Zeiss LSM880 microscope using identical acquisition parameters.
Statistics. Graphical visualization of data and all statistical analyses were performed with GraphPad Prism software (version 8.0). All numerical data are expressed as mean ± standard error of the mean (SEM). Ordinary one-way ANOVA with Dunnett's test was used for multiple comparisons. Animals were randomized and no blinding to the operator was used.
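For readers who prefer a scripted analysis, a minimal sketch of an equivalent comparison in Python is given below; the study itself used GraphPad Prism. The sketch assumes SciPy ≥ 1.11 (for scipy.stats.dunnett), and the group values are invented placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical editing frequencies (%) per treatment group -- placeholder values only.
vehicle = np.array([0.1, 0.2, 0.1, 0.2])
inactive = np.array([0.2, 0.1, 0.3, 0.2])
active = np.array([14.0, 18.5, 11.2, 16.8])

# Ordinary one-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(vehicle, inactive, active)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Dunnett's test: compare each treatment against the vehicle-injected control.
dunnett_res = stats.dunnett(inactive, active, control=vehicle)
for label, p in zip(["inactive vs vehicle", "active vs vehicle"], dunnett_res.pvalue):
    print(f"{label}: p = {p:.4g}")

# Mean +/- SEM per group, as reported in the figures.
for name, grp in [("vehicle", vehicle), ("inactive", inactive), ("active", active)]:
    print(f"{name}: {grp.mean():.2f} +/- {stats.sem(grp):.2f}")
```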
Students ’ Productive Struggles in Mathematics Learning
Using a predetermined framework on students' productive struggles, the purpose of this study is to explore high school students' productive struggles during the simplification of rational algebraic expressions in a high school mathematics classroom. This study is foregrounded in the anthropological theory of the didactic and its central notion of a "praxeology": a praxeology refers to the study of human action, based on the notion that humans engage in purposeful behavior, of which the simplification of rational algebraic expressions is an example. The research methodology comprised a lesson study involving a sample of 28 students, and the productive struggle framework was used for data analysis. Findings show that the productive struggle framework is a useful tool that can be used to analyze students' thinking processes during the simplification of rational algebraic expressions. Further research is required on the roles that noticing and questioning can play for mathematics teachers to respond to and effectively support the students' struggles during teaching and learning.
Introduction
The purpose of this study is to explore high school mathematics students' productive struggles during the simplification of rational algebraic expressions. In recent research in mathematics learning and teaching [1][2][3], struggle is often associated with negative views of how mathematics is practiced in classrooms. Teachers of mathematics often view students' struggles in mathematics as something that should be avoided and/or as a learning problem that needs to be diagnosed and remediated or simply eradicated [4,5]. Yet struggle in mathematics learning and teaching is an essential component of students' intellectual growth, and of deep learning of mathematical concepts with understanding [6]. Research suggests that the apparent confusion and/or doubt displayed by students during problem-solving provides students with opportunities for deepening their conceptual understanding of mathematical concepts during teaching [7,8]. However, exposing students to complex problem-solving tasks which are beyond their cognitive levels, skills and abilities can result in productive failures on the part of students [7]. When students engage in complex problem-solving tasks, they are likely to experience productive failure unless support structures are put in place. Broadly, support structure refers to "[re-]structuring the problem itself, scaffolding, instructional facilitation, provision of tools, expert help, and so on" ([7], p. 524). Research has shown that exposing students to complex problem-solving without putting efficient support structures in place can result in an unproductive cognitive process [9,10]. The notion of productive failure is centered on the view that students are not in a position to find the solution to a mathematical problem on their own in the short term. With assistance from teachers and capable peers, and by tapping into their prior knowledge, students can overcome their productive failures. Students can also experience unproductive success when they achieve immediate learning gains through drill-and-practice and memorization approaches. Unproductive failure situations arise when the conditions in a learning environment favor neither short-term nor long-term learning. While there is no "recipe" for avoiding and/or addressing unproductive failure situations when students are engaging in complex tasks, for example simplifying rational algebraic expressions, teachers can adopt approaches that ameliorate unproductive situations. According to [11] and others [12,13], learners who engage in unguided problem solving are likely to experience productive failure. ([13], p. 128) posits that "What can conceivably be gained by leaving the learner [student] to search for a solution when the search is usually very time consuming, may result in … no solution at all." Hence, to avoid unproductive failure situations, students must be provided with guidance during problem-solving. By guidance, we are referring to the scaffolding of problems and feedback through questioning, among others.
In other words, the struggle becomes a process in which students restructure their existing knowledge while moving towards a new understanding of what is being taught [14][15][16]. Students' struggles become productive in classrooms where they are afforded opportunities to solve complex problems, while being encouraged to try various approaches; even though in these classrooms, students can still fail and struggle, they will feel motivated and good about solving complex problems [17]. Equally, productive struggles ensue when students are given the support structure during problem-solving [7]. In classrooms, at the center of teaching and learning, teachers are expected to create a learning environment that values and promotes productive struggles among students by using challenging learning tasks that are nonetheless accessible to all students [18][19][20][21]. Productive struggle, which is stimulated by using challenging tasks during learning and teaching, supports students' cognitive growth and is essential for their learning of mathematics with understanding. While facilitating students' productive struggles teachers should avoid "reducing the cognitive load of the task such as [by] providing routine instructions tasks and over-modelling how to approach the task" ( [17], p. 20). ( [18], p. 178), similarly, encourages teachers to avoid "effortless achievement" by students; instead, teachers should value persistence and hard thinking.
While substantial research has been carried out on the types of errors that are committed by students when simplifying rational algebraic expressions in high school mathematics [22][23][24], this study explores the students' productive struggles during the simplification of rational algebraic expressions in real time, unlike the previous studies that focused only on students' errors. It is apropos to mention that students' productive struggles also include an understanding of how students deal with conceptual errors and misconceptions. As such, this study uses a predetermined framework [6] to analyze students' productive struggles as well as for analyzing the teachers' responses to the students' productive struggles. Existing research has focused on the difficulties encountered by students in understanding the equivalence of rational algebraic expressions through simplification and by valuing the importance of working and/or manipulating these expressions accurately with great flexibility [25][26][27][28]. The challenge here lies in the ability of students to work with more than one rational algebraic expressions and to find their equivalences. Thus, to explore students' productive struggles during the simplification of rational algebraic expressions in high school, this study is guided by the following research questions: What are the types of productive struggles experienced by the students while simplifying rational algebraic expressions in a high school lesson? How do teachers notice, and respond to the students' productive struggles during classroom activities? What questioning techniques are used by the teachers to support the students' productive struggles?
Theoretical framework
In this section, we define the anthropological theory of the didactic, in which the study is based, and the students' productive struggle framework that is used for analyzing students' learning activities.
Anthropological theory of didactics
This study is founded in the anthropological theory of didactics and its central notion of a "praxeology": a praxeology refers to the study of human action, based on the notion that humans engage in purposeful behavior, of which learning mathematics is an example [29,30]. Nicaud et al. [28] argue that the anthropological theory of the didactic, as a general epistemological model for mathematical knowledge, can be used to understand human mathematical activities, such as, in the context of this chapter, the simplification of rational algebraic expressions. Like any praxeology, the mathematical knowledge emerging from human activities is constituted by an amalgamation of four critical components, namely: type of task; technique; technology; and theory [28]. In human activities related to the learning of mathematics, Nicaud et al. [28] further re-classified the four critical components into two main praxeological models: the practical block and the knowledge block. The practical block is made up of the type of task and the technique. In the context of this study, the specific task is the simplification of rational algebraic expressions, whereas the technique refers to the tools that students need to carry out these simplifications. Examples of tools include factorization, finding common denominators, expanding expressions, and cancelation procedures, among others. The knowledge block consists of a technology, which is used to explain the technique, and a theory, which is used to justify the technology. A point to be stressed here is that the word "technology" is used here to refer to a discourse on a given technique. In other words, "this discourse is supposed, at least in the best-case scenario, both to justify the technique as a valid way of performing tasks and throw light on the logic and workings of that technique" ([31], p. 2616). For instance, in this study, the technique is the "know-how" to simplify the rational algebraic expressions, while the technology consists of the mathematical knowledge or logic that justifies the way these techniques are operationalized.
At the core of ATD [29] is the notion of an epistemological model aimed at understanding the "ecology of mathematical knowledge that emerges from human practices" ([30], p. 1). Research shows that there are many traditions of didactics at the core of teaching and learning in schools; the German Didaktik, whose origins hail from the seventeenth century, is one of them [32,33]. In general, the word "didaktik" refers both to the art of teaching and to a theory of teaching. It is worthwhile noting that the German Didaktik does not cover subject-area issues but covers general issues of the theory and practice of teaching [34]. However, the German Didaktik is guided by three core tenets: bildung; a theory of educational content; and the notion of teaching as a meaningful encounter between students and content [35][36][37]. The bildung encapsulates the aims and values of the education system, centered on "formation of the mind, the unfolding of capability, and the development of the sensitivity of the learner [student]" ([35], p. 544). In the German Didaktik, the theory of educational content is construed as: the nature of content; the educational value of the content; and the general organization of the content for educational purposes [38]. Also at the core of the German Didaktik is the notion of a "productive encounter" between content and students, which is analyzed and facilitated by teachers during teaching and learning [39,40]. To provide context to this discussion of ATD and the German Didaktik, our position is that the German Didaktik is a general theory on the art of teaching and learning, while ATD seeks to address teaching and learning issues within a subject area, for example mathematics. In this study, the focus is on exploring students' productive struggles when simplifying rational algebraic expressions. As such, the ATD with its praxeologies is used as a theory for understanding how students conceptualize the simplification of rational algebraic expressions in mathematics.
Productive struggles
In the previous section, this study has alluded to the importance of students' struggle during learning activities on simplification of rational algebraic expressions and explained how this leads to overcoming conceptual difficulties and achieving deeper and more long-lasting learning [41]. Kapur [11] posits that, during productive struggles, a failed initial attempt on a certain task can lead to improved learning. This learning process envisioned by [11] occurs in two stages. Firstly, students are given a learning activity or problem they cannot solve immediately, and thus the teacher encourages them to conjecture on the possible solutions to the problem. Secondly, once the initial attempts have failed, students receive instruction on possible ways to solve the problem and are given another opportunity to try to solve the problem themselves. In other words, productive struggle "can prime students for subsequent instruction by making them more aware of their own knowledge gaps and more interested in filling those gaps" ( [41], p. 85). Depending on individual students' levels of conceptual understanding, it is apropos to say that they experience different types of struggles. After observing these different types of productive struggles in a classroom situation when working on challenging problems, Warshauer [42] developed a productive struggle framework that consists of four types.
The four main types of productive struggles identified by [6,16,42] relate to the following aspects: getting started; carrying out a process; experiencing uncertainty in explaining and sense making; and expressing misconceptions and errors. Table 1 shows the types of students' productive struggles and their respective general descriptions [6,16,42].
The study uses the above four pre-determined types of students' productive struggles as a framework for analyzing students' ways of simplifying rational algebraic expressions.
Responses to productive struggles
The construct of "noticing" in mathematics teaching is a widely researched phenomenon in mathematics education, particularly in the high school context [43][44][45]. Mathematical noticing or simply noticing during teaching consists of three interrelated skills: "attending to children's [students'] strategies, interpreting children [students'] understandings, and deciding how to respond based on children's [students'] understandings" ( [45], p. 117). Huang and Li [46] further elaborates on this, positing that attending is also about identifying what is noteworthy, that interpreting is about making general connections between specific classroom interactions and broader theories of teaching and learning, and that deciding is also about how teachers use what they know and understand about their learning contexts to decide how to respond or reason about classroom activities. Teachers use the construct of noticing to identify students' productive struggles. Once the students' struggles have been identified, the teachers will make intentional efforts to support these strugglesin this context, the simplification of rational algebraic expressions. In other words, supporting students' productive struggles requires the teacher to find ways of addressing or responding to the struggles by converting them into positive learning endeavors that create further opportunities for deep learning, rather than episodes in which learners experience difficulties and frustration [4,6,16]. Recent studies have illustrated many possible ways teachers can use to respond to the students' productive struggles in mathematics [4][5][6]47] -these ways are not mutually exclusive.
Table 1. Types of students' productive struggles and their descriptions.

Getting started: Students feel cognitively overloaded and confused about the task; this is evidenced by the fact that there are no written answers or attempts on paper. Students also claim that they do not remember the work and/or the type of problems, and there could be gestures of uncertainty and resignation. As a sign of frustration, students' utterances could be: "I do not know what to do," "Oh dear! I am very confused," "I wish I knew where to start," etc. In terms of simplifying rational algebraic expressions, students might not fully understand the illustration from the question.

Carrying out a process: This relates to students encountering an impasse while attempting to solve a given task. For example, students may find it difficult to demonstrate or follow a known procedure or algorithm. Also, students may fail to recall the facts or formulae required to successfully implement a process, such as factorizations, multiplication of factors, or division of factors required to obtain an equivalent fraction.

Experiencing uncertainty in explaining and sense-making: Students may find it difficult to explain their work to other members of the group, when working in small groups, or to the whole class, when asked to do so by the teacher. In many cases, students fail to verbalize their thinking processes or to justify their correct answers. For example, the student may say "I know this is the correct simplification, but I cannot explain how I got it."

Expressing misconceptions and errors: Errors can be classified as careless and conceptual. On one hand, conceptual errors occur when students fail to observe the correct relational ideas when solving problems. On the other hand, careless errors relate to unintentional and yet avoidable procedures that students commit during problem solving. A misconception is usually not wrong thinking; however, it can be interpreted as an indication of deep-seated misplaced ideas that are used to justify the process of finding a solution to a problem; these can manifest as local generalizations made by students.

Teachers mainly respond to the students' productive struggles in the following four ways. Firstly, they can use telling: in other words, after evaluating the nature of the students' productive struggles, a teacher can help a student by suggesting new approaches to solve the problem, directly correcting the student's errors and/or misconceptions, or giving the student a simpler problem to work on first. [48,49] stress the notion of "judicious telling," which requires teachers to support students' productive struggles by repeating the students' own contributions with the aim of highlighting the mathematical ideas that students have already grasped and understood, to enable students to better understand the contexts and terminology in the specific tasks. Secondly, teachers can utilize directed guidance, which involves the teacher breaking down the problem given to the student into manageable parts, which can assist him/her to anticipate the next step in solving the problem. Directed guidance can also be used, as in this study, for instance, to allow a student to do operations on numerical fractions before he/she attempts simplifications of rational algebraic expressions. Teachers can also use "advancing questions," which can "extend students' current mathematical thinking towards a mathematical goal (simplifications of rational algebraic expressions) of a lesson" ([47], p. 178). Thirdly, teachers could use probing guidance, in which the teacher assesses the student's thinking by asking him/her to justify and explain his/her proposed solution. This is done by asking assessing questions and advancing questions (as explained above). Asking assessing questions allows the teacher to discover students' thinking processes, evaluate their cognitive capabilities, and encourage them to share their thinking on the simplification of rational algebraic expressions [47]. Lastly, teachers can use affordance, which involves the teacher's ability to engage students by emphasizing justifications and sense-making with the entire group or with individuals. The term also refers to affording the students time and space to think and solve the problem with encouragement from the teacher but with minimum help. By using these four ways to respond to students' productive struggles, teachers are afforded the opportunity to deepen their own understanding and more appropriately access students' thinking processes, while positioning themselves to effectively support students' learning; in this case, their learning on the simplifications of rational algebraic expressions. The teachers' questioning techniques allow the teacher to deepen his/her understanding of the nature of the struggles students harbor.
As already alluded to in this chapter, support structures need to be put in place to ameliorate situations where students' productive struggles become obstacles to student learning or barriers to students' conceptual development in mathematics [7]. In addition, where students struggle as expected during learning, it is apropos for teachers not to rush to provide a support structure, but to wait until students reach an impasse, as evidenced by utterances such as "I am stuck" and "I have no idea how to proceed," among others. By extension, support structures can also refer to teachers' questioning techniques, teacher explanations, or real-time feedback on students' work. It is worth noting that a delayed support structure, for instance a teacher's explanation, can lead to performance failure in the short term, but in the longer term benefits the student, as it gives the student time to discern the concepts of the problems being solved.
Methodology
In this section, we describe the research sample within the context of the study, the lesson study as a research methodology, and the data sources and analysis techniques.
Participants
This study sought to explore Grade 11 mathematics students' productive struggles during simplification of rational algebraic expression, and the ways in which the teacher noticed, and responded in a high school located in South Carolina in the United States of America. Twenty-eight students participated, constituting all Grade 11 students at the high school. Since the study involved minors, ethical clearance was sought from the South Carolina County School District, the school principal, and the legal guardians of the students. In addition, consent was also sought from the participating teacher who was responsible for teaching the concept of simplification of rational algebraic expressions.
Data sources
In this study, data was collected using a pre-determined research instrument, in other words, a lesson on the simplification of rational algebraic expressions, which was co-planned and co-implemented by the teacher and the researchers. The lesson under investigation is part of a series of lessons taught on the simplification of rational algebraic expressions; to be more precise, it is the third lesson in the series. Lesson 1 dealt with the simplification of rational algebraic expressions of the form 2/(ab) + 5/(bd). Lesson 2 dealt with the simplification of single rational algebraic expressions where factorization was envisaged, for example (2x − 6)/(x^2 − 9). Lesson 3, which is the focus of this study, deals with the simplification of two rational algebraic expressions being added or subtracted, for example x/(x^2 + x − 2) − 2/(x^2 − 5x + 4), where factorization and finding common denominators are envisaged. All the problems solved by students during the three lessons outlined are foregrounded in the South African Grade 11 mathematics syllabus and come from a prescribed textbook that students used.
This study uses the lesson study as its research methodology, with a specific focus on exploring the types of productive struggles students experienced during the simplification of rational algebraic expressions. While the goal-setting and planning stages of the lesson study are given less prominence in the data analyses, they are nonetheless important because they foreground the activities of the implementation and debriefing stages. Since students' productive struggles manifest during the implementation stage of the lesson study, the study has prioritized the implementation stage to explore them. The debriefing stage affords the teacher the opportunity to discuss with the researcher, in real time, the students' productive struggles as he observed and responded to them in class.
Research studies position the lesson study, which originated in East Asia, as a form of practice-based continuous professional development for mathematics teachers that has since been adopted by many other countries [50]. In each of these countries the emphasis of the lesson study varies; however, its major role of school-based continuous professional development for mathematics teachers remains. For example, in China the focus is on "developing best teaching strategies for specific subject content for student learning," and in Japan the focus is on "general and long-term educational goals, such as developing students' mathematical thinking through observing student learning in order to collect evidence to improve it" ([51], p. 271). Regardless of the country, the lesson study has three salient features. It is: a deliberate practice, meaning that the task of a lesson study is goal-oriented, aimed at improving teacher performance, and affords opportunities for repetition and refinement; a research methodology, aimed at improving both professional and academic knowledge; and an improvement science, through the use of "plan-do-study-act" ([52], p. 54) innovations. In this study, we chose the lesson study, rather than a design-based research methodology, to explore students' productive struggles when simplifying rational algebraic expressions. The lesson study, as a deliberate practice and research methodology, can be used in a similar way to design-based research to narrow the gap between research and practice during teaching [51][52][53]. Proponents of the lesson study argue that "not only is this (lesson study) real research, but the methodology of lesson study has huge benefits as means of developing knowledge that is useful for improving teaching (and learning)" ([54], p. 584). As a research methodology, the lesson study seeks to address specific research questions using a research lesson, that is, a lesson that is the subject of an investigation by researchers; a case in point here is the simplification of rational algebraic expressions. When using the lesson study as a research methodology, the research lesson is often followed by an evaluation of students' conceptual understanding of the concepts taught; this can be done through documentary analysis of students' written work in tests and/or focus group interviews with students who participated in the research lesson.
Data collection processes were informed by the stages of a lesson study approach [55]. Moreover, all the stages were video-recorded. The lesson study was thus used in this study as the research methodology [51], as this allowed both teacher and researchers to study students' thinking [47,51]. In the context of this study, the lesson study consisted of four stages. The first involved setting goals by identifying specific students' learning and development goals and achievements, as agreed upon beforehand by the teacher and the researchers, pertaining to the simplification of rational algebraic expressions. The second stage was planning, which meant using the goals identified to plan a "research lesson" that would be used for data collection on the topic. During this stage, discussions took place on how to anticipate students' questions and the teacher's responses. During the third stage, implementing, the teacher taught the class, while the researchers observed and collected the data. Focus group interviews took place with the students, who were given opportunities to explain their understanding of the lesson topic. In the final stage, debriefing, the teacher and the researchers met to discuss and/or reflect on the data collected; samples of students' work that had been collected were also analyzed to validate some of their productive struggles during the lesson [51,[55][56][57].
Data analysis
Video-recordings of the classroom interactions during the research lesson and focus group interviews for students were transcribed verbatim. Thereafter, the transcriptions were analyzed using a pre-determined productive struggles framework (see Table 1) thus exploring the types of students' productive struggles encountered and the teachers' responses to these. In addition, documentary analysis was used to analyze students' written work to see how they were simplifying rational algebraic expressions.
Findings and discussion
In this section, a pre-determined framework (see Table 1) is used to explore the types of struggles experienced by the high school students, and the ways in which the teacher noticed and responded to these are discussed using examples from the lesson, specifically from the implementation stage of the lesson study. For anonymity, the letters T and S represent the teacher and a student, respectively. The word episode is used to refer to a lesson excerpt.
Getting started
Below is an excerpt that describes the classroom interactions between the teacher and the students when the students were asked to simplify two rational algebraic expressions: 2/(x^2 − x − 2) − 6/(x^2 + 6x + 5). Prior to this lesson, students had been simplifying single rational algebraic expressions that require factorization: students were expected to factorize the numerator and the denominator, and then perform a cancelation. However, in this episode from the lesson, the simplification of rational algebraic expressions was extended to sums and differences of two rational algebraic expressions, which required factorization.
Learning episode 1. T: What factors are common in the denominators of both fractions? [Sensing that there might be an overall conceptual problem in the class by listening to the students' chit-chat.] Hey guys, let us back up. Are you sure you know what is going on [referring to the last student]? Alright, we are going to revisit the problem we did yesterday: 2/(3x) + 2/(7x^2) − 3/(2xy^2), which we expanded to 2/(3x) + 2/(7·x·x) − 3/(2·x·y·y).

At the beginning of the episode, a student remarked that he/she was stuck, meaning that they could not initiate the simplification of the rational algebraic expression. The teacher used probing guidance, by asking questions such as "Where are you stuck?" and "What are you going to do?" This prompted the student to explain his/her thinking processes to the teacher. While listening to the students chatting, the teacher noticed that they were experiencing challenges in simplifying rational algebraic expressions, particularly the factorization. From an ATD perspective, some students lacked the technique or the tools, such as factorizations, to simplify the rational algebraic expressions [29,31]. In responding to this productive struggle, the teacher decided, together with the students, to revisit a much simpler example they had done the previous day. The teacher's action constituted directed guidance: redirecting the students' attention to a much simpler example with the aim of deepening their understanding of the related concepts. In the debriefing interview, the teacher referred to the getting-started stage as a "freak out" moment, positing, "I definitely think there was a get out there and a freak out moment and they don't understand anything." He continued to say that, whenever his students were stuck, he reminded them to calm down, think about the concepts they had already covered, and try to apply them to the novel problem. Intuitively, the teacher alludes to the notion of delay of structure [7]; this notion is about a teacher delaying giving a student a support structure, for example in the form of questions, explanations, or feedback, immediately when the student experiences an impasse [58,59].
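For reference, a worked simplification of the episode-1 task is sketched below. It is not part of the original lesson transcript; it is included here only to make the intended factorization route explicit.

```latex
\[
\frac{2}{x^{2}-x-2}-\frac{6}{x^{2}+6x+5}
  = \frac{2}{(x-2)(x+1)}-\frac{6}{(x+5)(x+1)}
  = \frac{2(x+5)-6(x-2)}{(x-2)(x+1)(x+5)}
  = \frac{-2(2x-11)}{(x-2)(x+1)(x+5)}.
\]
```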
Carrying out a process
In another episode during a lesson, the students were tasked with simplifying the following two rational algebraic expressions: (x^2 − 7x + 23)/(x^2 − 36) + (6x − 19)/(36 − x^2). The student in question did nearly all the work correctly but failed to factorize at the last step; this work was done on the board during the lesson. As the student was busy simplifying the rational algebraic expressions, he/she came to an impasse and failed to reduce the final rational algebraic expression. In this excerpt, S represents the student working the problem on the board, while C1, C2, and C3 are other students in the class. While S was simplifying the rational algebraic fraction on the board, the other students (C1, C2, and C3) were comparing their own solutions to that of S.
Learning episode 2. S: (x^2 − 7x + 23)/(x^2 − 36) + (6x − 19)/(36 − x^2) = (x^2 − 7x + 23)/(x^2 − 36) − (6x − 19)/(x^2 − 36) [the student uses the fact that (x − y) = −(y − x), having noticed that (x^2 − 36) and (36 − x^2) exhibit a similar trait; the student's work was written on the board]. S: I am stuck [the student fails to recognize that the numerator of the resulting fraction can be factored as (x − 6)(x − 7), and that the fraction would consequently reduce to (x − 7)/(x + 6)]. C1: That is not what I got, teacher [C1 seeks to help S]. C2: We did the other side, teacher; we got +13 and −42 [the classmate alludes to the fact that he multiplied (x^2 − 7x + 23)/(x^2 − 36) by negative one, writing the numerator as −x^2 + 13x − 42].

The classroom interactions in this episode were student-student interactions, where the teacher did not participate in the simplification of the rational algebraic expression. When the student working at the board encountered an impasse, he/she said, "I am stuck," thus calling for help. In this episode, the teacher did not comment or respond; instead, one of the students did so, stating, "That is not what I got, teacher." By not responding immediately to the students' classroom interactions, the teacher was using the affordance technique, whereby students were afforded the space and time to think through and solve the problem with the teacher's encouragement but with minimum help [49]. In this kind of approach, students are encouraged to use other students' thinking processes as resources to simplify rational algebraic expressions; for instance, student C2 suggested an alternative step of writing the expression −x^2 + 13x − 42 instead of x^2 − 13x + 42. Having noticed that the student at the board (S) needed help in simplifying the rational algebraic fraction, a fellow student (C3) told him/her how to proceed: "factor out the top [the numerator] and then you can cross [cancel with the denominator] out." Finally, with this assistance, student S was able to simplify the rational algebraic expression successfully.
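A compact version of the intended board work, reconstructed here only for reference (it simply restates the steps already described in the episode), is:

```latex
\[
\frac{x^{2}-7x+23}{x^{2}-36}+\frac{6x-19}{36-x^{2}}
  = \frac{x^{2}-7x+23-(6x-19)}{x^{2}-36}
  = \frac{x^{2}-13x+42}{x^{2}-36}
  = \frac{(x-6)(x-7)}{(x-6)(x+6)}
  = \frac{x-7}{x+6}.
\]
```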
During the debriefing interview, the teacher alluded to the fact that some students failed to carry out a procedure: "the main thing with today's lesson was about finding the common denominator … but I think other than that they got it pretty good." Interestingly, when asked about why he/she did not respond or comment on the students' interactions, the teacher said, "it is one way that I use to create an interactive and engaging learning environment among students during the lesson." In addition, the teacher was concerned that, as the problems would become more complex in subsequent lessons, his students were likely to struggle with identifying common denominators.
Experiencing uncertainty in explaining and sense-making
In the next episode, the focus is on how a student simplified two rational algebraic expressions, (x + 4)/(x^2 + 15x + 56) + 6/(x^2 + 16x + 63), on the board. Using this example, we illustrate how a student found it difficult to verbalize his/her thinking processes and failed to justify his/her answers even though they were correct. In this episode, while the student was using the correct method, there came a point where he/she could not explain and/or verbalize his/her strategy for simplifying the problem. For example, when asked by the teacher why he/she had multiplied both fractions by the factor (x + 9), the student could not answer, but instead shrugged his/her shoulders as a way of saying "I do not know." When the teacher sensed this uncertainty, he responded by asking probing questions to guide the student towards achieving the goal of the question: "why did you put (x + 9) there?" The teacher wanted to get to a point where the student said that he/she wanted to find a common denominator for the two rational algebraic expressions.
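For reference, the full simplification the student was working towards can be written as follows. This derivation is supplied by the editors and is not part of the classroom transcript; it also shows why the factors (x + 9) and (x + 8) are needed to form the common denominator.

```latex
\[
\frac{x+4}{x^{2}+15x+56}+\frac{6}{x^{2}+16x+63}
  = \frac{(x+4)(x+9)+6(x+8)}{(x+7)(x+8)(x+9)}
  = \frac{x^{2}+19x+84}{(x+7)(x+8)(x+9)}
  = \frac{x+12}{(x+8)(x+9)}.
\]
```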
During the interview with the teacher, he remarked that uncertainty was also expressed through students' unwillingness to go to the board to work out the problems given to the class. The teacher said, "I really like to see the people that are struggling more at the board," and that he would like to hear more students saying, "I don't know what I am doing, but I am going up there"; the teacher acknowledges that the latter is a challenge which he hopes can be resolved by exposing students to more practice questions.
Expressing misconceptions and errors
In this section, we discuss the types of errors that manifested in the students' written work in learning episodes 1, 2, and 3 above. Figure 1 below shows a student's conceptual error committed when simplifying the rational algebraic expression in learning episode 1. Figure 1 reveals that although the student completed the question, he/she committed a conceptual error by making both denominators the same, observing that the first denominator had a "−2" and the second denominator had a "5"; the numbers "2" and "5" are the constant terms of the two denominators of the fractions to be simplified, and the student ignored the "minus" sign for "2," opting instead to use a positive "2." In other words, the student seems to have ignored the letters and reduced the rational algebraic expressions to simple numerical fractions [23,24]. The student, however, succeeded in simplifying his/her own numerical fractions from −4/20 to −1/5 [58]. In Figure 2, the student did not realize that x^2 − 36 = −(36 − x^2), or more generally that (a − b) = −(b − a), and thus had a misconception that the denominators of the two rational algebraic expressions were the same. This misconception resulted in the student not being able to simplify the resulting rational algebraic expression, because he/she could not factorize its numerator; in fact, the numerator cannot be factorized, hence the cancelation between the numerator and denominator cannot be done.
In Figure 3, the student committed an error by forgetting to follow through the multiplication of the numerator and denominator of the first and second fractions by (x + 9) and (x + 8), respectively; as a result, the student had incorrect numerators and could not simplify the two rational algebraic expressions.
Limitations
This study is based on a very small sample of 28 Grade 11 mathematics students in one school from a single county. It is not the intention of the authors to draw any generalizations about students' productive struggles in the simplification of rational algebraic expressions from the small sample used in the study. It is our contention that some of the observations made on the students' productive struggles are attributable to the sample of 28 Grade 11 students who participated, and to their mathematical skills and abilities on the topic under discussion. As such, this study merely highlights some of the potential productive struggles that students are likely to encounter when solving problems on the simplification of rational algebraic expressions. In this way, the study can give direction to future research on students' productive struggles and on mathematics teachers' noticing and questioning techniques during lessons.
Instructional implications
Given the limitations of the study (a single, small sample), we are cautious about the generalized instructional implications that can be drawn from it. Having said that, we believe that the study highlights issues related to struggle, support structures, and delay of structure. Struggle: struggle in mathematics is often viewed as something negative; however, this study construes struggle as something essential for students' intellectual growth and as a necessity to be used during mathematics lessons. Support structures: during problem solving, the role of support structures in the form of feedback, questions, and scaffolding questions, among others, is critical for student learning. Delay of structure: an important instructional implication here is that delaying the support structure until students reach an impasse opens up opportunities for learning; where there is no impasse, despite rigorous provision of support structures, learning is not guaranteed [7].
Conclusion
In this chapter, our aim was to explore students' productive struggles in the simplification of rational algebraic expressions, and how teachers notice and respond to these productive struggles. Using the pre-determined productive struggle framework developed by Warshauer [42], we were able to identify and categorize the types of productive struggles that students experienced in the classroom and to look at the different ways in which the teacher addressed these struggles. Throughout the paper, it was not our intention to deal with the constructs of noticing and questioning separately, but rather to discuss them within the types of productive struggles. In addition, the types of errors discussed in this paper are not exhaustive [23,24], since they only pertain to the problems discussed in learning episodes 1, 2, and 3.
In conclusion, while this study contributes to mathematics classroom discourses on students' productive struggles in the simplification of rational algebraic expressions, further research using a bigger sample is required on the roles that mathematics teachers' noticing and questioning can play, and on how teachers respond to and effectively provide support structures for students' productive struggles during the teaching and learning of specific mathematics concepts.
|
v3-fos-license
|
2022-12-22T16:10:12.776Z
|
2022-12-20T00:00:00.000
|
254959218
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/23/1/17/pdf?version=1672707715",
"pdf_hash": "5aae0343076ab43f45e6903de67f690109063f4b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44167",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"sha1": "d09c00b711f77e6e572bc391d1ae2598440914d3",
"year": 2022
}
|
pes2o/s2orc
|
Digital Twin for a Collaborative Painting Robot
A collaborative painting robot that can be used as an alternative to workers has been developed using a digital twin framework and its performance was demonstrated experimentally. The digital twin of the automatic painting robot simulates the entire process and estimates the paint result before the real execution. An operator can view the simulated process and result with an option to either confirm or cancel the task. If the task is accepted, the digital twin generates all the parameters, including the end effector trajectory of the robot, the material flow to the collaborative robot, and a spray mechanism. This ability means that the painting process can be practiced in a virtual environment to decrease set costs, waste, and time, all of which are highly demanded in single-item production. In this study, the screen was fixtureless and, thus, a camera was used to capture it in a physical environment, which was further analyzed to determine its pose. The digital twin then builds the screen in real-time in a virtual environment. The communication between the physical and digital twins is bidirectional in this scenario. An operator can design a painting pattern, such as a basic shape and/or letter, along with its size and paint location, in the resulting procedure. The digital twin then generates the simulation and expected painting result using the physical twin’s screen pose. The painting results show that the root mean square error (RMSE) of the painting is less than 1.5 mm and the standard deviation of RMSE is less than 0.85 mm. Additionally, the initial benefits of the technique include lower setup costs, waste, and time, as well as an easy-to-use operating procedure. More benefits are expected from the digital twin framework, such as the ability of the digital twin to (1) find a solution when a fault arises, (2) refine the control or optimize the operation, and (3) plan using historic data.
Introduction
The development of digital technologies has reshaped the world. New technologies and methods are being introduced which have revolutionized the way we live today. Not only humans but also industries, factories, and manufacturing processes have evolved through this technological revolution. The industrial revolution has evolved from mechanical production facilities powered by water and steam to the current cyber-physical production systems, called the Industrial Revolution 4.0 (IR 4.0) [1][2][3]. Industry 4.0 uses automation technologies, artificial intelligence (AI), machine learning (ML), cloud computing, edge computing, fifth-generation (5G) cellular networks, the Internet of Things (IoT), big data, etc. to promote potential in every aspect of the industry [4]. Moreover, these automation technologies allow digital worlds to be used to create virtual products and processes, resulting in optimized factories and manufacturing processes [5].
The concept of a digital twin is essentially a computer program that uses real-world data to produce simulations to predict the performance and behavior of a system. This concept was first introduced by NASA's Apollo space program [6], which was then used in a three-dimensional (3D) model and digital concept to design urban road networks [7] and more [8][9][10]. Emulating a real-world system in a virtual world requires both worlds to be synchronized and that can be achieved with the help of sensing devices, connected smart devices, IoT, AI, ML, and real-time data elaboration. The advantage of the digital twin for Industry 4.0 manufacturing systems is to exploit its features to forecast and optimize the behavior of real systems at each life cycle phase in real time [11][12][13][14].
Collaborative painting robots are gaining much attention lately within the framework of digital twins, in line with the standards and practices of IR 4.0. This is primarily because of improved safety, reduced waste material, superior efficiency, and improved system uptime, to name a few, when compared with traditional painting robots [15][16][17]. With rapidly growing economies, industrialization, and population growth, new construction is taking place everywhere. Metropolitan cities are developing vertically, and parts of the buildings are sometimes hard to reach. It is therefore a pressing need to devise new tools which can perform laborious painting tasks efficiently. The traditional painting robot system comprises position sensors, an automatic spray gun, and a painting robot, as shown in Figure 1 [18,19]. The issues encountered by traditional painting robots include unstable trajectories and movement speeds, which result in bubbles and scars on the painted surface and cause poor, uneven-thickness coatings. Furthermore, in a practical environment, it is very hard to optimize the massive number of parameters in a complex system operating in an unknown environment using traditional methodology. To control the quality and standard of painting, tool planning algorithms and painting models have been developed and researched for many years [20][21][22][23][24]. However, in practice, it is very difficult to generate an online tool trajectory and to obtain the optimal tool trajectory and film quantity deviation for a free-form surface. Automated tool planning is the bottleneck of the painting process. Control strategies for eliminating dynamic and friction influences and improving the accuracy and repeatability of robots have been developed [15,[25][26][27][28][29][30]. However, owing to the complexity and time-intensiveness of the painting process, the developed algorithms seem unable to solve the problem online and cannot solve the trajectory optimization problem.
To overcome the shortcoming and challenges of traditional painting robots, a collaborative painting robot is introduced in this paper. Collaborative robots bring the strength of humans and robots in one place that helps overcome their weaknesses. It offers To control the quality and standard of painting, tool planning algorithms and painting models have been developed and researched for many years [20][21][22][23][24]. However, in practice, it is very difficult to generate an online tool trajectory, and obtain the optimal tool trajectory and film quantity deviation for a free-form surface. Automated tool planning is the bottleneck for the painting process. Control strategies for eliminating the dynamic and friction influences and improving the accuracy and repeatability of robots have been developed [15,[25][26][27][28][29][30]. However, owing to the complexity and time-intensiveness of the painting process, their developed algorithms seem unable to solve the problem online and cannot solve the trajectory optimization problem.
To overcome the shortcomings and challenges of traditional painting robots, a collaborative painting robot is introduced in this paper. Collaborative robots bring the strengths of humans and robots together, helping to overcome their respective weaknesses, and offer assistance to human counterparts in performing tedious, dull, and dirty jobs. The main contributions of this paper are:
• Development and experimental performance evaluation of a collaborative painting robot using a digital twin framework.
• The digital twin of the automatic painting robot simulates the entire process and estimates the paint result before the real execution. This results in decreased setup costs, waste, and time, with improved results.
The rest of this article is organized as follows. In Section 2, we present the setup and communication architecture between the physical collaborative painting robot and the virtual system that represents the robot. In addition, a case study task of painting is presented, and the digital twin-based methodology demonstrates how to solve and perform the task. Next, the results and discussion of the proposed approach and its benefits are presented in Sections 3 and 4, respectively. Finally, the conclusions are presented in Section 5.
Digital Twin Architecture for Collaborative Painting Robot
Painting robots have been used in various applications of the painting industry, such as car and boat painting. Examples of well-known painting robots are the Dürr spraypainting robot EcoRP E043i, ABB spray-painting robot IRB 52, and Fanuc spray-painting robot P-250iB, which are serial mechanisms or 6 degrees-of-freedom (6-DOF) robots [31][32][33][34][35]. Such robots have the advantages of a large workspace, high dexterity and maneuverability, complex movement and pose, speed, and accuracy. Although there are some disadvantages, such as low payload-to-weight ratio, low stiffness, and safety due to their large workspace, they are not issues for painting applications because of the weight of their sprayers. Thus, such robots are well-known for their painting applications.
Most well-known commercial robots may be programmed using three techniques: first, by operators teaching their position and movement; second, by a teaching pendant; and third, by a script run on an external computer. For the traditional programming methods, most operators use the first two techniques to program a robot because they are simple, and operators can confirm the programming with robot simulation on the teaching pendant or the physical movement of the robot. However, with these techniques, the program cannot be quickly changed or modified online. Operators must go to the robot and apply the first or second programming technique and confirm the program with offline simulation or physical movement. The third technique seems more complex for operators because it requires an external computer, complex programming skills, and the program can still not be confirmed with online simulation.
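As an illustration of the third technique, a robot of this class can typically be driven by streaming a script command from an external computer over a TCP socket. The MATLAB sketch below is only indicative; the controller IP address and the joint targets are placeholders and are not taken from the paper.

```matlab
% Minimal sketch of script-based programming of a UR-type robot from MATLAB.
% The controller IP address and joint targets below are illustrative placeholders.
robotIp   = "192.168.1.10";          % hypothetical controller address
robotPort = 30002;                   % UR secondary interface (accepts URScript strings)
ur = tcpclient(robotIp, robotPort);

% Move the arm to a joint configuration (rad) with modest speed and acceleration.
cmd = "movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.2, v=0.25)";
writeline(ur, cmd);                  % send one URScript line to the controller

clear ur                             % clearing the client closes the connection
```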
The proposed digital twin framework of the collaborative painting robot is shown in Figure 2. The proposed digital twin architecture and communication model is an open architecture that supports available tools, hardware, and software to build a digital twin for a collaborative robotic cell. The Wifi-AX6000 connects all of the hardware in the automation cell to realize data bandwidth, latency, and massive communication. This supports Wi-Fi IEEE 802.11ax, IEEE 802.3/3u/3ab, and GigE standards, making it suitable for machine communication. CoppeliaSim, a physics simulation and visualization mechanism, runs on a dedicated computer, whereas MATLAB runs on a shared edge computer. The robot's controller, data storage, and internet are also connected to the network. In this architecture, the physics simulation and edge computer update at different rates. It should be noted that the internet and cloud will be used in the future to connect the cell to higher levels of automation and commercially available cloud services. The data flow of this digital twin framework is shown in Figure 2b. The edge computer is used for all high-level computing, such as the design pattern and design trajectory. The low level is the position and velocity control of the robot joints, which is handled by the robot controller. For the virtual representation, the physics simulation is used to simulate the characteristics and motion of a real collaborative painting robot [36][37][38][39]. The characteristic and motion outputs of the robot can be obtained and sent to the edge computer program [40] (the edge computer) to process. Moreover, the physics simulation obtains the inputs from the edge computer program and sensor camera to perform the simulation with the current information. The edge computer program is also included in the virtual representation. It works as the controller/edge computer of the system. It obtains the data from the physics simulation, the real robot, and the sensor camera to compute the control action and sends the command to the physics simulation, the real robot, and the spray gun. Then, the real collaborative painting robot, spray gun, and sensor camera are presented as the real system. The sensor camera is used to measure environmental data and send the data to the edge computer program and the physics simulation. The spray gun is used to paint when the command is given. Next, the real collaborative painting robot obtains the command from the edge computer program (the controller), executes the task, and feeds back the outputs to the edge computer program. With this architecture, real-time simulation and control of the collaborative painting robot can be achieved, and the online automated tool planning can be developed in real time with real-time simulation technology and is accurate. The operators can program the robot easily by designing a desired pattern on the PC/screen. Then, the physics simulation program simulates the process and estimates the results. Once the results are confirmed, the edge computing program creates the projection model and plans the robot trajectory.
Then, the edge computing program sends the designed trajectory to the real robot and commands control of the process flow of the spray mechanism. Next, the robot executes the task. During the painting, the camera is used to monitor, compare the digital twin image and real image, and analyze the error, and sends the feedback to the edge computer. The error can be used to stop the robot or correct the robot's trajectory in real-time.
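The overall design, simulate, confirm, execute, and monitor cycle described above can be summarized by the following MATLAB-style pseudocode. The helper functions (designPattern, simulateInDigitalTwin, and so on) are hypothetical names used only to mirror the data flow of Figure 2b; they are not functions from the authors' code base.

```matlab
% Hedged pseudocode sketch of the digital-twin painting cycle (helper names are hypothetical).
pattern   = designPattern("square", 100, 100);          % operator input: shape and size (mm)
planePose = estimatePlanePoseFromCamera(camera);        % physical twin -> virtual twin

% 1) Rehearse the task in the virtual environment and show the expected result.
[trajectory, predictedPaint] = simulateInDigitalTwin(pattern, planePose);

% 2) The operator confirms or cancels before any real execution.
if operatorConfirms(predictedPaint)
    sendTrajectoryToRobot(robot, trajectory);           % low-level joint control stays in the robot controller
    setSprayFlow(sprayGun, "on");

    % 3) Monitor: compare camera images of the real paint with the digital-twin prediction.
    while ~taskFinished(robot)
        err = compareImages(captureFrame(camera), predictedPaint);
        if err > allowedError
            stopRobot(robot);                           % or correct the trajectory online
            break
        end
    end
    setSprayFlow(sprayGun, "off");
end
```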
This concept can be extended to the model reference adaptive control concept (Figure 3). The reference input, r, is the desired painting pattern. The outputs, y_m and y, are the estimated desired painting pattern output and the actual painting output, respectively. Next, the error difference between the estimated and actual painting outputs is defined by e. In addition, the controller command is defined by u, and the parameter setting of the system is defined by p.
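Read this way, the digital twin acts as the reference model that maps r to y_m, while the physical cell produces y. The paper does not state an explicit adaptation law; in a standard model reference adaptive control formulation, the quantities above would be related as sketched below, where the cost J and adaptation gain γ are introduced here only for illustration.

```latex
\[
e(t) = y_{m}(t) - y(t), \qquad
u(t) = C\!\left(r(t), e(t);\, p\right), \qquad
p \;\leftarrow\; p - \gamma\,\frac{\partial J(e)}{\partial p},
\qquad J(e) = \tfrac{1}{2}\,e^{2}.
\]
```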
Collaborative Painting Robot
For the collaborative painting robot, a Universal Robots UR3 collaborative robot was selected due to its benefits [2,3,41]. For example, various robot sizes can be selected; thus, they can be applied to various painting applications. In addition, Universal Robots can solve the robot safety issue because they are designed to work with humans in the same space. It is possible to develop an advanced painting application in which humans and Universal Robots help each other to work in the same space. Moreover, the Universal Robots ecosystem is thriving thanks to the popularity of these robots among researchers. There are many publications, mathematical models, simulations, and advanced control techniques based on these robots [42][43][44]. The advanced implementation can be developed based on the existing technology. Therefore, this benefit facilitates the development of a digital twin architecture and is essential for developing a highly complex but comprehensive digital twin system. Figure 4 shows the components of the collaborative painting robot.

The collaborative robot, a 6-DOF manipulator robot, belongs to the UR family of Universal Robots [45]. The controller software of the collaborative robot is Universal Robots 3.15. The workspace of the collaborative robot for the painting application is shown in Figure 5. The red area is the applicable workspace.
Spray Gun
An automatic spray gun was used for painting and attached to the robot end-effector. The developed automatic spray gun for this project is shown in Figure 6. It comprises a spray nozzle, solenoid valves, an air compressor, pressure gauges, a color supply tank, and a microcontroller unit.
The spray nozzle was used as an airbrush and its nozzle diameter was 1 mm. The spray nozzle was connected to a color feed tube for supplying color from a color supply tank and to a pneumatic pressure tube with an operating pressure ranging from −0.95 to 10 bar. In addition, it had a 2-cfm consumption, 100-cc paint capacity, 40-mL/min fluid flow, and 100-mm spout distance. The volume of the color supply tank was 500 cc. The solenoid valves (5/2 control valves) were used to control the pneumatic pressure connected to the spray nozzle and color supply tank and were controlled by a microcontroller (an MKS GEN L V1.0 controller board). The compressor for actuating the spray gun was a 0.34-kW air compressor with an 8-bar pressure, 32-L/min output volume, and a 25-L tank. Pressure gauges were used to monitor the pressure. The paint in the color supply tank can be any liquid, liquefiable, or mastic composition. After being applied in a thin layer, the paint converts to an opaque solid film.
Sensor Camera
An RGB camera-a Logitech C922 pro webcam [46]-was used as a sensor for the painting application. There were two purposes for using this camera. First, the camera was used to measure the target painting plane. The image processing technique was required to detect the target plane with and without a specific marker. With a specific marker, a simple image processing technique can be used to detect a plane. However, without the specific marker, it is difficult to detect the target plane and may be unsuitable for this work. Advanced image processing techniques may be required. For this project, the specific marker was attached to the target plane, and the image processing was run on the edge computer program to detect the target plane. The second purpose was to detect the painting's quality. The painting quality was used as the feedback signal for the collaborative painting robot. Thus, the robot can paint the target as the painting design. Moreover, the camera also had its twin. The image from physical and digital twins can be used to analyze the accuracy and correct the digital twin model.
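As a rough illustration of the first purpose, the sketch below locates a rectangular marker in a camera frame and fits a projective mapping between the known marker geometry and its detected corners. It is a minimal stand-in for the detection step, not the authors' implementation: the image file, marker size, and blob-based corner detection are assumptions, and a saved frame is used instead of a live camera stream.

```matlab
% Hedged sketch: find four dark marker blobs in one frame and fit a projective
% transform (homography) from the marker plane to the image.
frame = imread("frame.png");                 % placeholder for a captured camera frame
gray  = rgb2gray(frame);
bw    = ~imbinarize(gray);                   % assume the marker corners are dark blobs
bw    = bwareaopen(bw, 50);                  % remove small speckles

stats  = regionprops(bw, "Centroid");        % centroids of the detected blobs
imgPts = vertcat(stats.Centroid);            % N-by-2 pixel coordinates
assert(size(imgPts, 1) == 4, "Expected exactly four marker blobs.");

% Known marker geometry on the target plane (mm); point ordering must match imgPts.
planePts = [0 0; 200 0; 200 150; 0 150];

% Projective mapping plane -> image; it can be inverted to map image points onto the plane.
tform = fitgeotrans(planePts, double(imgPts), "projective");
```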
Virtual System
CoppeliaSim is a robot simulation software that provides a set of developing robot tools. It is used as a 3D virtual representation and to simulate the real-time characteristics and motion of the real collaborative painting robot. Using this physics simulation, many advantages are obtained, such as forward and inverse kinematics, path planning, collision detection, and minimum distance calculation modules. Moreover, the collaborative robot model already exists. Therefore, these basic tools do not need to be developed. The CoppeliaSim version 3.0 rev3 edu was used for this development.
To implement the physics simulation as the digital twin, a script to run the simulation was required. The developed script received the robot input motion commands and environment data to run the simulation. Then, the simulation generated the robot outputs and sent them to the edge computer program to process the action. Moreover, models of the spray gun and sensor camera were developed in the physics simulation. Thus, the physics simulation can simulate how the spray gun works and how the sensor camera views the environment and scene. These are useful for operators to visualize the outputs and for the feedback signals of the painting system. The microcontroller shown in Figure 6 is an Arduino-compatible board and is used to control the solenoid valves through its input/output ports. It is also responsible for communication with a PC through a serial port. Figure 7 shows the physics simulation program working with a real robot.
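For the serial link between the PC and the spray-gun microcontroller, a minimal MATLAB sketch is shown below. The port name, baud rate, and the command strings understood by the firmware are assumptions used only for illustration.

```matlab
% Hedged sketch: toggle the spray solenoid valves through the microcontroller's
% serial interface. Port name, baud rate, and command strings are illustrative.
mcu = serialport("COM3", 115200);       % placeholder port name and baud rate
configureTerminator(mcu, "LF");

writeline(mcu, "SPRAY ON");             % hypothetical firmware command: open valve
pause(2.0);                             % paint for a fixed dwell time (s)
writeline(mcu, "SPRAY OFF");            % hypothetical firmware command: close valve

clear mcu                               % clearing the object releases the port
```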
Control Modules
The real robot-virtual robot-user interface was developed as an edge computer script in MATLAB R2021a. Operators can enter the painting design and requirements in the script. Then, the script sends the requirements to the physics simulation to simulate the situation. After the simulation is confirmed, the physics simulation sends commands to the real robot. Moreover, the script receives the data from the sensor camera as feedback signals to adaptively control the real robot and the simulation. With this technique, real-time simulation and control of the collaborative painting robot can be achieved. In addition, online automated tool planning can be developed, and the painting quality can be controlled.
In this study, the painting process was controlled with interactive scripts. The primary script utilizes low-level commands from (1) the standard environment, (2) the Image Processing Toolbox, and (3) the Instrument Control Toolbox. Block scripts were developed for the 2D spray path generator (for a desired pattern and target plane pose), the estimation of the target plane pose from the camera, the robot-language translator, and communication with the collaborative robot controller and the physics simulation program. Then, a primary script that controls the block scripts was developed.
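To make the role of the 2D spray path generator concrete, the sketch below builds a square tool path of a given side length on the target plane and transforms it into robot base coordinates using a plane pose such as the one obtained from the camera step. The pose values and point spacing are placeholders, not parameters from the paper.

```matlab
% Hedged sketch of a 2D spray-path generator for a square pattern.
side = 100;                  % side length of the square (mm)
step = 5;                    % waypoint spacing along each edge (mm)

% Waypoints of the square in the 2D coordinate frame of the target plane (mm).
s  = (0:step:side)';
xy = [s, zeros(size(s));                 % bottom edge
      side*ones(size(s)), s;             % right edge
      flipud(s), side*ones(size(s));     % top edge
      zeros(size(s)), flipud(s)];        % left edge

% Pose of the target plane in the robot base frame (placeholder values).
R = eye(3);                  % plane orientation (e.g., from the camera-based pose estimate)
t = [400; 0; 200];           % plane origin in base coordinates (mm)

% Lift the 2D waypoints onto the plane (z = 0) and transform to base coordinates.
P = (R * [xy, zeros(size(xy, 1), 1)]' + t)';   % N-by-3 end-effector positions (mm)
```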
Painting Case Study and Experiments
With the growing construction requirements to fulfill the need for industrialization and housing, painting jobs are also growing. With multistory buildings, mega-structures, and restricted time for completing the project, the automated method of painting is the need of the hour. To evaluate and present the collaborative painting robot with the digital twin concept, a painting scenario was set up. In the scenario, an operator designs painting patterns, such as basic shapes and letters and their sizes. Next, the operator selected the painting location. Then, the collaborative painting robot generated automated tool planning and showed how the robot creates painting results. The operator can view simulated results and has an option to confirm or cancel a result. If the operator confirms a result, the real robot will run and complete the task. Notably, the confirm or cancel process of operators can be skipped if the operator does not require to confirm the result.
With the scenario, three painting patterns were used to evaluate the painting system. The first pattern is a square shape, the second one is an infinity symbol, and the third one is the word "CU." The first pattern was used to evaluate the size of the painting pattern along the vertical and horizontal coordinates independently. The second pattern was used to evaluate the size of the painting pattern along the vertical and horizontal coordinates simultaneously. The third pattern was used to present the application of the collaborative painting robot at various painting locations. The text was designed and painted. Next, each pattern was performed five times. The results are shown in Section 3.2.
Results
According to the designed scenario, the operator can design simple patterns and their sizes and implement them in the script. Then, the collaborative painting robot runs the simulation and the real robot. Figure 8 shows a designed scenario. Next, the sensor camera is used to determine the specific painting location, and the robot executes the task. The painting results are shown in the following section. need of the hour. To evaluate and present the collaborative painting robot with the digital twin concept, a painting scenario was set up. In the scenario, an operator designs painting patterns, such as basic shapes and letters and their sizes. Next, the operator selected the painting location. Then, the collaborative painting robot generated automated tool planning and showed how the robot creates painting results. The operator can view simulated results and has an option to confirm or cancel a result. If the operator confirms a result, the real robot will run and complete the task. Notably, the confirm or cancel process of operators can be skipped if the operator does not require to confirm the result. With the scenario, three painting patterns were used to evaluate the painting system. The first pattern is a square shape, the second one is an infinity symbol, and the third one is the word "CU." The first pattern was used to evaluate the size of the painting pattern along the vertical and horizontal coordinates independently. The second pattern was used to evaluate the size of the painting pattern along the vertical and horizontal coordinates simultaneously. The third pattern was used to present the application of the collaborative painting robot at various painting locations. The text was designed and painted. Next, each pattern was performed five times. The results are shown in Section 3.2.
First and Second Pattern Results
The square shape and infinity symbol patterns were each painted five times and captured by a camera. The images were scanned using a scanner, and the shapes were analyzed with an image processing program, MVTec Halcon 20.11 [47]. Examples of the square shape and infinity symbol patterns are shown in Figure 9.
The sizes of the designed square shape and infinity symbol patterns are 100 × 100 mm² and 300 × 100 mm², respectively. The images of the square shape and infinity symbol paintings were used to create a variation model for image comparison in MVTec Halcon 20.11. The mean and variation images were generated and are shown in Figure 10. Because the mean images in Figure 10a,b have the same shape and size as the designed square shape and infinity symbol patterns, the individual painted lines of the square shape and infinity symbol are almost coincident. Moreover, black areas in the variation image represent regions with small variation of the object position across the images, while white areas represent regions with large variation. The variation images in Figure 10 show that the black area forms the same shape as the designed square shape and infinity symbol patterns and the white area lies around them, meaning that the painting system has good repeatability. Further, the root mean square errors (RMSEs) of the square shape and infinity symbol patterns are listed in Table 1.
The RMSEs of the vertical and horizontal lines were computed from the images and are listed in Table 1. The standard deviations of the RMSEs along the vertical and horizontal coordinates are very small, confirming that the system has good repeatability. In addition, the mean RMSEs are small. The color intensity measurements of the square shape and infinity symbol patterns are shown in Table 2.
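As a rough illustration of how such line RMSEs can be obtained, the following Python sketch computes the RMSE of extracted centre-line points against a designed straight edge. The function name, the synthetic data, and the millimetre units are assumptions for illustration; this is not the authors' Halcon-based measurement procedure.

```python
# Hypothetical illustration (not the authors' Halcon workflow): RMSE of a painted
# line against the designed line, given extracted point coordinates in millimetres.
import numpy as np

def line_rmse(points_mm: np.ndarray, target_value_mm: float, axis: int) -> float:
    """RMSE of painted points from a designed straight line.

    points_mm: (N, 2) array of (x, y) centre-line points extracted from the scan.
    target_value_mm: designed coordinate of the line (e.g. x = 0 for the left edge).
    axis: 0 to compare x-coordinates (vertical line), 1 for y-coordinates (horizontal line).
    """
    errors = points_mm[:, axis] - target_value_mm
    return float(np.sqrt(np.mean(errors ** 2)))

# Example: noisy points along the designed left edge (x = 0 mm) of a 100 x 100 mm square.
rng = np.random.default_rng(0)
painted_left_edge = np.column_stack([
    rng.normal(0.0, 1.0, 50),          # x scatter around the designed edge
    np.linspace(0.0, 100.0, 50),       # y positions along the edge
])
print(f"vertical-line RMSE: {line_rmse(painted_left_edge, 0.0, axis=0):.2f} mm")
```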
Third Pattern Results
The third pattern is an application of painting the word "CU" (Figure 11). The robot is allowed to paint the word at a variety of locations. These results show that the system can quickly create simple painted text or shapes.
Discussion and Future Recommendations
In this paper, we have demonstrated the concept of a digital twin for the case of a collaborative painting robot. The proposed digital twin architecture and communication model is an open architecture that supports available tools, hardware, and software to build a digital twin for a collaborative robotic cell. WiFi was used to communicate among all entities. The physics simulation was used to simulate the characteristics and motion of a real collaborative painting robot. The edge computer, the real robot and the camera were used to provide input to the physics simulation.
To evaluate the performance of the developed framework, a painting environment was set up to assess how well the collaborative painting robot draws basic shapes and letters at particular sizes. As a test case, a square shape, an infinity symbol, and the word "CU" were given to the painting robot to draw/paint. According to the experiments and results, the collaborative painting robot with the digital twin concept can achieve these three basic painting tasks. The operator can design simple patterns, let the robot generate an automated tool plan, and see how the robot creates the painting result. Table 1 shows that the root mean square error (RMSE) of the painting is less than 1.5 mm, the standard deviation of the RMSE is less than 0.85 mm, and the maximum error is less than 2.6 mm. In addition, the color quality of the painted patterns is acceptable based on visual inspection. Furthermore, based on the color intensity measurements in Table 2, the color of the patterns is uniform and the color intensity can be controlled, because the standard deviation of the average color intensity is very low. Thus, the results demonstrate the success of our experiment.
As our proposed collaborative painting robot is still in the development stage, there are certain limitations and clear opportunities to improve its efficiency. The robot is currently fixed on a table, so its workspace is limited and it cannot paint outside that workspace. However, mounting the system on a mobile robot would resolve this issue, allowing the collaborative painting robot to move to different locations and paint. Currently, a sensor camera is used to measure the target painting plane. Because it is an RGB camera, it cannot reliably detect the target plane without markers, as there is no feature from which to determine the plane; a marker is therefore required. It is nevertheless possible to detect the target plane without a marker if the RGB camera is replaced with a depth camera, such as an Intel RealSense D435i. The depth camera provides a depth image from which the target plane can be determined accurately.
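As a hedged illustration of the suggested depth-camera approach, the sketch below fits a plane to a synthetic point cloud by singular value decomposition. The function names and the synthetic data are assumptions for illustration, not the robot's actual perception code.

```python
# Illustrative sketch (an assumption, not the authors' implementation): estimating a
# target painting plane from a depth-camera point cloud by least-squares fitting.
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane to an (N, 3) point cloud; return (normal vector, offset)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    offset = -normal @ centroid                     # plane: normal . p + offset = 0
    return normal, offset

# Example: synthetic wall tilted slightly about the y-axis, with ~2 mm depth noise.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1.0, size=(500, 2))
z = 0.05 * xy[:, 0] + 1.5 + rng.normal(0, 0.002, 500)
cloud = np.column_stack([xy, z])
n, d = fit_plane(cloud)
print("estimated plane normal:", np.round(n / np.linalg.norm(n), 3))
```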
Besides detecting the target plane, the sensor camera is used to assess painting quality, which serves as a feedback signal for the collaborative painting robot so that it can paint the target according to the design. Moreover, in the future it would be interesting to bring the sensor camera up to the digital twin level. Currently, the camera exists only at the digital model level in the physics simulation and is used merely to view what the robot sees. If the digital twin level were implemented for the sensor camera, many features, such as image processing and data/feature extraction, could be applied to the digital twin images, yielding various benefits. An example of a vision-based data reader is shown in [14].
The developed spray gun system is simple and works well. However, the designed spray nozzle can still be improved for uniform liquid flow. A stereolithography 3D printer can be used as it has better accuracy. A porous nozzle can be easily created. Another solution is to use a standard commercial painting nozzle. Then, the flow of liquid can be easier to control. Currently, the spray gun is only a digital model level in the physics simulation; in addition, the spray gun model is a simple painting/line model and does not include the rheology of the paint. If the digital twin level can be implemented with the spray gun, many effects of paint can be simulated, such as diffusion and dispersion of paint. Then, the painting quality will be increased. Moreover, if the digital twin level of the robot, sensor camera, and spray gun are perfectly combined, a complicated painting task can be achieved because the painting system can accurately predict the painting quality.
The major benefits of implementing the digital twin framework are reductions in cost, waste and time. The framework is especially efficient for low-batch production. For example, a low-batch advertising poster job may produce fewer than 10 posters. In this case, the manual setup time may exceed 3 h, and even an experienced operator needs at least two sets of material to set up the robot. Once the setup is finished and the process is running, if a painting error occurs without monitoring, all products made after the error are failures. Thus, the setup and operating cost, waste, and setup time of a traditional painting robot are high. With the digital twin framework, the setup time can be reduced to less than 1 h, because operators can quickly simulate the results without waiting for the robot to paint and can evaluate many scenarios for the painting setup. One set of setup materials may then be enough, because the simulation is used to confirm the process. Additionally, if a painting error occurs during production, the process can be stopped or corrected to prevent product failure. Therefore, the digital twin framework can reduce setup and operating cost, waste, and setup time in this case.
Conclusions
A collaborative painting robot was developed using the digital twin framework. The prime advantages of the development of this product are improving the safety and health of workers, reducing waste material and effectively getting the job done. The prototype spray mechanism, with its digital model, was developed for the robot's end effector as part of the project. Digital versions of the robot and an eye-in-hand camera are commercially available. The physics simulation is loaded with all the digital models in the proposed painting robotic cell and is thus used as a digital shadow. The proposed digital twin architecture and communication are effective and flexible. In this study, the edge computer is used to control the process flow, communicate with the robot's controller, perform virtual simulation and data storage, process images from the physical camera, and generate a 2D path for the robot. This concept makes extensive use of enabling technologies and tools to build a painting robot and its digital twin. As a result, painting can be practiced in a virtual environment where the simulated process and outcome are observed before the real execution. The results of the letter and basic shape experiments demonstrate the effectiveness of the proposed techniques. The proposed architecture is an open architecture that allows for future exploration to provide additional benefits, such as: (1) finding a solution when the fault begins; (2) refining the control or optimizing operation; and (3) using historical data for planning. For future work and improvement in collaborative robot performance, the adaptive control concept can be introduced into the system. Further enhancement can be made if the live image is compared to the digital twin image and the estimated error can be fed to the system for correction. Machine learning can also be used to train the collaborative painting robot about the expected error so that the painting can be further enhanced.
|
v3-fos-license
|
2022-06-29T15:17:31.827Z
|
2022-06-27T00:00:00.000
|
250103155
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2022.930023/pdf",
"pdf_hash": "59163e09a3d786a31e747b56997a33b9b83b3d52",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44168",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b3b474040c0d58958bfea852d3cf5029e8cc8399",
"year": 2022
}
|
pes2o/s2orc
|
Effects of Dental Implants and Nutrition on Elderly Edentulous Subjects: Protocol for a Factorial Randomized Clinical Trial
Background Loss of masticatory function consequent to tooth loss has been associated with changes in food choices and insufficient nutritional intake. To date, interventions based on dental prostheses alone did not significantly improve nutrient intake. Pilot studies have shown positive impacts of interventions combining implant-supported fixed dental prosthesis with brief dietary advice. The relative contribution and the potential synergy of the components of such interventions need to be determined as it has major public health implications for the community-dwelling aging population that continues to disproportionately suffer from tooth loss and its consequences. Objective To assess the effect of rehabilitation of masticatory function with fixed implant supported dentures and nutrition education in older subjects with terminal dentition (stage IV periodontitis) or full edentulism. Methods A 2 × 2 factorial randomized controlled trial with 16-month follow-up of eligible adults (≥60 years) with loss of masticatory function consequent to full arch edentulism or terminal dentition (n = 120) will be conducted to test whether the rehabilitation of masticatory function with fixed implant supported dentures, nutrition education and/or their combination improves intake of fresh fruits and vegetables for aging subjects. The study has been designed to detect changes in fresh fruits and fresh vegetables intake at 4 months using the 24-h dietary recall method. Changes in protein as percentage of total energy, nutritional biomarkers, plasma metabolomics, oral and gut microbiome, quality of life and masticatory function will also be assessed. Discussion We hypothesize that receiving rehabilitation of masticatory function with fixed implant dentures together with nutrition education is the most effective intervention for improving nutrient intake in aging community-dwelling subjects with extensive tooth loss. The results of this study will assist in designing better treatment regimens, guide medical care for individual subjects, and inform public health and policy. Clinical Trials Registration NCT05334407.
INTRODUCTION
Over the course of life, unmanaged caries and periodontitis, the most common diseases of mankind, lead to tooth loss, with an associated loss of quality of life and eventually compromised masticatory function. Older adults and aging subjects may be disproportionately affected (1). At the end of the disease spectrum, subjects with complete tooth loss (edentulism) or only a few remaining teeth that do not enable adequate chewing show changes in their food choices and seem to prefer softer diets with more carbohydrates and fat and fewer fresh fruits and vegetables (2,3).
Accumulating evidence points to the presence of an association between changes in dietary behavior consequent to tooth loss and insufficient nutrition intake (4)(5)(6). A recent systematic review indicated that subjects lacking a functional dentition had a 21% increased likelihood of being at risk of malnutrition or being malnourished (7). Such impaired nutrition may have long term effects on muscle strength and physical decline and be detrimental to general health (8,9). Indeed, the recent Global Burden of Disease study of dietary risk factors identifies 15 important disease associated exposures. Their analysis shows that 5 of the health associated exposures: consumption of fruit, vegetables, whole grains, nuts, and fiber require a good level of mastication (10).
A recent systematic review has addressed the efficacy of tooth replacement with dental prostheses and identified clear benefits in terms of restoration of masticatory function (11). Among fully edentulous subjects, greater benefits have been observed with dental implant retained prostheses with respect to conventional dentures.
While the physiology of mastication is an essential component of alimentation and contributes to the broader process of nutrition, recent research has focused on the nutritional benefits of tooth replacement to better focus the relevance of oral health on general health. Several studies have tried to improve the nutrient intake among edentulous individuals with various types of dentures. However, this goal has been elusive for interventions based on either complete dentures or implant-retained overdentures, given the functional limitation on these prostheses and perhaps the lack of concomitant dietary intervention (12)(13)(14)(15). A small-scale case series has shown that implant-supported fixed prosthesis resulted in more efficient mastication and improved nutrient intake compared with conventional and implant-based removable dentures in partial edentulism (16).
Within dentistry, the long-held assumption that restoration of masticatory function alone (i.e., without a dietary re-education intervention) brings nutritional benefits is being questioned. Sparse evidence points to the positive impact of nutrition counseling on the dietary intake of edentulous subjects receiving dental prostheses: brief dietary advice has been advocated to help patients take full advantage of the enhanced masticatory function to improve their diet (17,18). Ellis et al. further showed that the impact of dietary advice on patients' satisfaction with dentures and oral health-related quality of life depends on the nature of the prosthesis (19). A recent systematic review on the impact of oral rehabilitation coupled with dietary advice on nutritional status indicated that in most studies the dietary interventions were not theory based and were poorly described (20). Not unexpectedly, the meta-analysis found only a trend toward significant changes in fruit and vegetable consumption and marked heterogeneity among the included pilot case series. No trial has been performed to assess the benefit of dietary advice alone or the combined effect of re-establishment of masticatory function with an implant-supported fixed prosthesis and dietary advice in edentulous elderly subjects. Understanding the relative contribution of restoration of masticatory function and nutrition education is critical to design effective interventions and improve public policy related to nutrition and prevention of physical decline in aging populations.
Abbreviations: DE, dental intervention; DI, dietary intervention; HbA1c, glycated hemoglobin; CRF, case report form; CBCT, cone beam computed tomography; hs-CRP, high-sensitivity C-reactive protein; TNF-α, tumor necrosis factor-α; IL-1β, interleukin-1β; IL-6, interleukin-6; Co Q10, coenzyme Q10; LC-MS, liquid chromatography-mass spectrometry; HDL, high density lipoprotein; LDL, low density lipoprotein; HBM, health belief model.
Based on the current equipoise about the relative contribution of dental and dietary interventions and the clinical and public health relevance of defining appropriate interventions to improve nutrition of older adults with extensive tooth loss, this protocol describes a 2 × 2 factorial clinical trial to assess the effect of rehabilitation of masticatory function with fixed implant supported dentures and/or brief nutrition education on the dietary intake and nutrition in older subjects with terminal dentition (stage IV periodontitis) or full edentulism. The clinical trial is being implemented. The effectiveness of the intervention will be validated during the trial. Results will identify the relative importance and optimal sequence of dental and dietary interventions, providing critical information with major implications for caring of individual subjects and for public health and policy. The results of the clinical trial will be available in 2 years.
Study Design
This protocol has been prepared according to the SPIRIT guideline for clinical trial protocols (21).
Study Overview
The study is designed as a factorial randomized controlled clinical trial testing the benefits of dental and/or dietary interventions on changes in fresh fruit and fresh vegetable intake at 4 months (Table 1). Group A (DE+/DI+) will receive the full treatment regimen. Group B (DE+/DI-) will receive the implant-supported fixed prosthesis first and nutrition education after a 4-month waiting period. Group C (DE-/DI+) will receive the nutrition education first and implant treatment after a 4-month waiting period. Group D (DE-/DI-) will receive the same treatment as Group A after a 4-month waiting period (Figure 1). The waiting period is equal to the current waiting list in the department. Subjects will be followed for an additional 12-month period. All procedures will follow the principles of
Recruitment
Older subjects (≥60 years of age) with full arch edentulism or terminal dentition seeking care at the Dept. of Oral and Maxillofacial Implantology of Shanghai Ninth People's Hospital will be screened and invited to participate while attending new patient clinics.
Eligibility Criteria
All potential participants will be assessed for eligibility based on the inclusion and exclusion criteria.
Inclusion Criteria
• Being edentulous or having a terminal dentition (22) and an accepted treatment plan for a fixed implant-supported prosthesis restoring at least 10 pairs of occluding teeth.
• Self-reported inadequate fresh vegetable, fresh fruit or protein food intake (daily intake thresholds based on the Chinese Dietary Guidelines for the Elderly recommendations).
• Understanding written and spoken Chinese and ability to respond to Chinese questionnaires.
• Able and willing to give informed consent for participation in the study.
• Able and willing to comply with 12-month follow-up.
Exclusion Criteria
• General and local contraindications to implant-supported immediate-loading fixed prosthesis.
• Looking for replacement of an existing implant-retained overdenture with implant-supported fixed denture treatment.
• Presence of infectious disease, or acute or chronic symptoms of TMJ disorder.
• Any dietary restriction, currently taking nutrient supplements, or inability to choose his/her diet.
Screening
Screening evaluations for this study will be performed in the context of routine patient evaluation in the clinic. The investigators will approach consecutive patients with the condition for possible inclusion in the study. During screening the investigator will also verify eligibility criteria. Additionally, social media (WeChat & Weibo) advertisement will be utilized to help recruit study participants. For some of these subjects, initial screening will be performed by phone. Following the telephone screening, the potential subjects will be invited for clinical screening evaluation.
Enrollment
Subjects fulfilling the inclusion and exclusion criteria will be invited to participate in the study and receive an explanation of the study, its objectives, benefits and risks by the investigator in the context of informed consent. The following information will be collected and recorded after the inclusion of the participants.
(1) Demographics: date of birth, gender, education, ethnicity, family income and anthropometric measures.
(2) Lifestyle factors: smoking (including tobacco consumption and smoking history), drinking habits and oral hygiene habits will be recorded using the items from the Fourth National Oral Health Survey Questionnaire in Mainland China (23).
(3) Health literacy: oral health literacy will be recorded using the Chinese version of the Short-Form Health Literacy in Dentistry (HeLD) scale (24) and nutrition literacy using the Nutrition Literacy Questionnaire for the Chinese Elderly (25).
(4) Medical history: details of medical history, including diabetes mellitus, cardiovascular diseases and other systemic diseases, will be recorded. For subjects with diabetes, levels of glycated hemoglobin (HbA1c) will be obtained from the patients' medical records.
(5) Concomitant medications: all over-the-counter or prescription medications, vitamins, and/or herbal supplements will be recorded on CRFs.
Randomization
Subjects will be randomly assigned to one of four groups with a 1:1:1:1 ratio by stratified block randomization. The block size will be 8 and stratifying factors will be diabetes status and smoking. Subjects will be registered into the study by a study registrar who will assign the treatment number and organize the sequence of the bookings of the patient according to the random allocation. The registrar will not be involved in any other study procedures.
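For illustration, a minimal sketch of stratified permuted-block randomization with block size 8 and 1:1:1:1 allocation, stratified by diabetes status and smoking, is shown below. The class and variable names are hypothetical; this is not the registrar's actual allocation software.

```python
# Illustrative sketch (not the trial's actual registrar software): stratified
# permuted-block randomization of the four arms with block size 8 (1:1:1:1),
# stratified by diabetes status and smoking.
import random
from collections import defaultdict

ARMS = ["A (DE+/DI+)", "B (DE+/DI-)", "C (DE-/DI+)", "D (DE-/DI-)"]
BLOCK = ARMS * 2                      # block size 8 -> each arm appears twice per block

class StratifiedBlockRandomizer:
    def __init__(self, seed=2022):
        self.rng = random.Random(seed)
        self.queues = defaultdict(list)            # one allocation queue per stratum

    def assign(self, diabetes: bool, smoker: bool) -> str:
        stratum = (diabetes, smoker)
        if not self.queues[stratum]:               # refill with a freshly shuffled block
            block = BLOCK[:]
            self.rng.shuffle(block)
            self.queues[stratum] = block
        return self.queues[stratum].pop(0)

randomizer = StratifiedBlockRandomizer()
print(randomizer.assign(diabetes=False, smoker=True))   # allocation for one new subject
```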
Blinding and Allocation Concealment
Timing of treatment will be concealed to the therapists and to the examiners. Two separate masked therapists will perform the dental or the nutritional interventions. All laboratory assessments will be performed blindly.
Removal and Withdrawal Criteria
Participants selected for this trial who fall into one of the following circumstances will be regarded as removed cases:
(1) Violation of important entry criteria;
(2) Receiving no study interventions.
Each participant has the right to withdraw from the study at any time. In addition, the investigators may discontinue a participant from the study at any time if the investigators consider it necessary for any reason, including:
(1) Best interest of the patient.
(2) Ineligibility (either arising during the study or retrospectively, having been overlooked at screening).
(3) Significant protocol deviation.
(4) Significant non-compliance with treatment regimen or study requirements.
(5) An adverse event which requires discontinuation of the study or results in inability to continue to comply with study procedures.
(6) Inability to continue to comply with study procedures (e.g., moving to another city).
If a subject withdraws from the study, no particular observation or treatment needs to continue. Subjects will be replaced if anyone withdraws after the study has started.
Regardless of the reason, complete clinical data should be retained for subjects who withdraw from the trial. The reason for withdrawal or early termination will be recorded in the CRF (case report form). If the participant is withdrawn due to an adverse event, if any, the investigator will arrange for follow-up visits until the adverse event has resolved or stabilized.
Treatment of Trial Participants
The treatment process will include provision of implantsupported full-arch fixed prostheses (dental intervention, DE) and dietary intervention tailored to the dental status (dietary intervention consisting of nutrition education, DI). Subjects will be randomized to either a waiting period or treatment regarding dental intervention and dietary intervention. Ethical justification for the waiting period comes from the current waiting list in the regular care at the hospital. In order to improve the compliance with the protocol, all subjects participating in the study will have an alert in their patient record identifying them as participants to this protocol to alert administrative staff on the need to follow a stringent timing of followup appointments.
Dental Intervention
All participants will receive implant-supported full-arch fixed prostheses in at least one jaw (26,27) and appropriate treatment in the opposing jaw regarding periodontal disease, caries, replacement of missing teeth and soft tissue disorders to get at least 10 pairs of occluding teeth (22). Before surgery, a treatment plan will be made according to the clinical examination, study model and the CBCT data. Presurgery mock-up will be produced to guide implant placement and to facilitate the fabrication of the immediate prosthesis. A surgical guide with tooth set-up will be used for implant placement and bite registration, as needed.
After administering local anesthesia, any remaining teeth will be extracted atraumatically and the sockets will be carefully curetted. A crestal incision will be made and a full-thickness mucoperiosteal flap will be reflected. For the preparation of the osteotomy site for implant placement, a modification of the drilling protocol according to the manufacturer's recommendation will be followed, as needed, for immediate placement in the presence of residual teeth/roots. Tapping may be omitted, depending on bone density, to ensure primary stability of the implant. After site preparation, 4-8 Nobel Active® implants (Nobel Biocare, Goteborg, Sweden) will be placed. Multi-Unit abutments (Nobel Biocare, Goteborg, Sweden) will be placed onto the implants. The abutments will be tightened with a torque of 35 Ncm for straight multi-unit abutments and 15 Ncm for angulated multi-unit abutments. Healing caps will be placed on the abutments to support the peri-implant mucosa. The flap will be closed with a 5-0 resorbable suture (Vicryl, Johnson & Johnson Medical, Pomezia, Italy). Then, a splinted impression will be taken at abutment level using an individual open tray. The pre-surgery mock-up or surgical guide with tooth set-up will be used to register the occlusal relationship. Patients will receive amoxicillin (Xinya Co, 500 mg, 3 times/day for 7 days). Decongesting nasal drops (phenylephrine, 0.1 ml, 3 times/day for 3 days) will be prescribed if sinus elevation is performed. Mouth rinsing with chlorhexidine 0.12% three times per day and modified oral hygiene procedures will be prescribed for the first 2 weeks of healing (sutures still in place).
A screw-retained, metal-reinforced, acrylic resin interim restoration will be delivered within 24 h of surgery. All centric and lateral contacts will be assessed and modified until occlusal contacts are uniformly distributed on the entire prosthetic arch. Sutures will be removed at 2 weeks. After a healing period of 4 months, a definitive screw-retained, full-arch prosthesis will be delivered.
Brief Nutrition Education
The nutrition education will be conducted based on the Health Belief Model (HBM) (28), addressing perceived susceptibility to and severity of lacking the targeted behavior, perceived benefits of and barriers to carrying out the targeted behavior, cues to action, and self-efficacy. With the behavioral goal of increasing an individual's intake of fresh vegetables, fresh fruits, and high-quality protein foods (i.e., poultry, meat and aquatic products), the nutrition education session has been designed to be culturally tailored.
Participants will receive a 20-min coordinated nutrition education in the form of a slideshow presentation by a nutritionist in the clinical setting. On completion they will receive a copy of a pamphlet prepared in three parts (overall dietary goal, recipe examples mainly composed of softer and easy-to-chew food, and recipe examples composed of various food without restriction on the texture). The advice has been compiled with reference to the 4th edition of Dietary Guidelines for Chinese Elderly Residents (2016) by the Chinese Nutrition Society (29) that will be given to the participant separately. If a participant does not prepare his or her own meals, the person who does the cooking receives the dietary advice as well. A dietary checklist aiming to evaluate the compliance will be delivered with the pamphlet and patients will send it back after 1-week's recording.
Measurements and Outcomes
Timing of Assessment
Study assessments will be performed at baseline, 4, 8, and 12 months unless otherwise stated below. For Groups B, C and D, an additional assessment will be performed at 16-month follow-up.
Food and Nutrient Intake
The primary outcome measure will be changes in the intake of fresh fruits and fresh vegetables measured at 4 months using the 24-h dietary recall method. Protein as a percentage of total energy will also be calculated. Three 24-h dietary recalls will be conducted through face-to-face interviews, twice on weekdays and once on a weekend. The data on food consumption will be converted into the corresponding nutrient contents based on the 6th version of the China Food Composition Tables, Standard Edition. Moreover, a modified simplified food-frequency questionnaire (FFQ) of 33 food group items (30) will be administered at baseline, 4, 8, 12, and 16 months.
Masticatory Function
Masticatory function will be assessed at baseline (before treatment) and 4, 8, 12 and 16 months after insertion of a fixed implant retained prosthesis using the quantitative method described by Schimmel et al. (31) as previously described (32). In brief, subjects will be asked to mix a two-color chewing gum with 20 masticatory cycles. The obtained bolus will be pressed to a standardized height and a color image will be acquired. Quantitative data will be obtained by digital analysis of the image using variance of hue as the outcome.
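As an illustrative sketch of the digital analysis step, the following Python snippet computes the variance of the hue channel of a bolus image, with lower variance generally indicating better mixing. The file name and the use of the Pillow library are assumptions; this is not the exact published software.

```python
# Illustrative sketch (an assumption, not the exact published software): computing
# the variance of hue (VOH) of a pressed two-colour chewing-gum bolus image.
import numpy as np
from PIL import Image

def variance_of_hue(image_path: str) -> float:
    """Return the variance of the hue channel of an RGB image (hue circularity ignored)."""
    hsv = np.asarray(Image.open(image_path).convert("HSV"), dtype=float)
    hue = hsv[..., 0] / 255.0            # Pillow stores hue as 0-255; rescale to 0-1
    return float(hue.var())

# Usage (hypothetical file name): lower variance generally indicates better mixing.
# print(variance_of_hue("bolus_subject01_visit4.png"))
```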
Peri-Implant Soft and Hard Tissue Health
Peri-implant soft tissue condition will be measured by periodontal probing (UNC/CP-11.5B Screening Color-Coded Probe, Hu-Friedy, Chicago, IL, USA). Modified plaque index (mPI), probing depth (PD), and modified bleeding index (mBI) will be evaluated (33). Standardized panoramic radiographic imaging will be conducted to assess the peri-implant bone level. The assessment will be performed at 4, 8, and 12 months post-surgery.
Oral Health Impact Profile (OHIP)-14
The oral health impact profile (OHIP)-14 (34) will be administrated to assess the impact of oral health on the quality of life of participants using a validated Chinese translation of the instrument (35).
Biological Samples
Biological samples will be collected and processed in a standard way by dedicated study personnel blind with respect to treatment status. The blood sample collection will be scheduled at 8 a.m.-9 a.m. Patients will fast overnight (12-14 h) prior to blood collection. Subjects will be advised to avoid strenuous exercise 1 h prior to collection. Samples will be processed at the clinical research center laboratory to meet the preservation standards for the assay of each marker and will be either assayed immediately or stored at −80 • C in the Shanghai Ninth People's Hospital Biobank facility for later analysis.
Metabolic and Inflammatory Biomarkers
The following biomarkers will be assessed by a specialized GCP-approved clinical pathology laboratory.
a) Blood serum concentration of homocysteine.
b) Plasma hs-CRP, TNF-α, IL-1β and IL-6.
c) Co Q10, uric acid and superoxide dismutase.
d) Blood lipids (total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides and Lpa).
Plasma Metabolomics
a) Metabolites will be profiled with untargeted metabolomics using liquid chromatography coupled with mass spectrometry (36). b) Oxylipin changes will be assessed by oxidative lipidomics (37).
Oral and Fecal Microbiome
Oral rinse, subgingival plaque and fresh stool samples will be collected for 16S rRNA gene sequencing at baseline, 4, 8, 12, and 16 months.
Oral microbiome samples will be obtained by oral rinsing for 1 min with 5 mL of buffer solution (38,39). Additionally, in dentate patients a subgingival plaque sample will be taken from the deepest periodontal pocket/lesion with a sterile paper point inserted to the depth of the pocket. The subgingival plaque sample will be collected after isolating the sampling area with cotton rolls gentle air drying, and supragingival plaque removal. Samples will be immediately stored at −80 • C.
Sterile stool tube with a spatula inside will be given to the participants with detailed instructions on how to collect the specimen. Fresh stool samples will be collected by the participants at home the night before the visit day or the morning of the visit day (40). Samples will be stored in the patient's refrigerator at 4 • C until submission. During transportation, samples will be kept on ice in a cooling bag.
Nutritional Status
The Mini Nutritional Assessment Short-Form (MNA-SF) (41) will be used to screen patients for risk of malnutrition at baseline, 4, 8, 12, and 16 months.
Muscle Strength
A hand grip dynamometer will be used to assess muscle strength at baseline, 4, 8, 12 and 16 months essentially as described (32).
Depression Symptoms
Depressive symptoms will be assessed with the shortened Center for Epidemiologic Studies Depression Scale (CES-D10) (44).
Adverse Events
The collective evidence from numerous clinical trials reveals consistent findings that the implant supported fixed full-arch prosthesis is a safe and effective treatment approach for terminal dentition or full edentulism. Dietary advice is also a safe intervention for edentulous elderly. No significant adverse events have been reported. Occasionally the patient may experience early implant failure and/or mechanical complications of the prosthesis. These will be recorded in the case report forms and will be managed according to standard of care with additional implant placement or refabrication/modification of the existing prosthesis.
Follow-Up for Adverse Events
Adverse events (AE), if any, will be managed according to the current standard of care for the specific condition and will be reported to the ethics committee. The principal investigators will assess and manage the condition to the best of their knowledge and refer to specialist care if appropriate and in the best interest of the participant. An AE will be considered resolved once the principal investigators concur that this is the case.
Sample Size
The sample size has been determined based on the primary outcome: changes in the intake of fresh fruits and fresh vegetables. Based on relevant studies in which the average fruit and vegetable intake was about 255 ± 200 g per day in the edentulous elderly (17), we assume that the average fruit and vegetable intake will increase by 50 g, 70 g and 245 g in patients receiving dental prostheses alone, brief dietary advice alone, or the combination of dental treatment with brief dietary advice, respectively. With an anticipated 20% loss to follow-up, 30 patients are needed in each group to test the significance of dental treatment and brief dietary advice, alone or in combination, with alpha set at 0.05 and 80% power. Because of limitations in the baseline knowledge for precise sample size calculations and the effort required for an adequate pilot study, adaptive adjustment of the sample size will be performed. The sample size calculation will be re-estimated after the primary outcome has been obtained for 10 patients in each group. Based on the interim analysis for the adaptive design, several scenarios have been identified: (i) the original sample size estimate is appropriate, and the study will be completed accordingly; (ii) the original sample size is insufficient but still within the recruitment capability of the study center, in which case the sample size will be expanded according to adaptive design principles; (iii) the original sample size is insufficient and too large for successful completion at the study center alone, in which case the study will be completed as a pilot study with the original sample size and a multicenter trial will be designed and implemented based on the results.
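For orientation only, the sketch below applies the standard normal-approximation formula for a two-group comparison of means with the protocol's assumed values (SD 200 g/day, alpha 0.05, 80% power, 20% loss to follow-up). It is a simplified illustration and does not reproduce the trial's exact factorial and adaptive calculation.

```python
# Simplified illustration (not the trial's exact factorial/adaptive calculation):
# per-group sample size for detecting a difference in mean fruit-and-vegetable
# intake between two groups, using the normal-approximation formula.
from math import ceil
from scipy.stats import norm

def n_per_group(delta_g, sd_g=200.0, alpha=0.05, power=0.80, dropout=0.20):
    """Two-sided two-sample comparison of means, inflated for loss to follow-up."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = 2 * ((z_a + z_b) ** 2) * sd_g ** 2 / delta_g ** 2
    return ceil(n / (1 - dropout))

# Example with the protocol's assumptions: combined intervention (+245 g/day)
# versus dietary advice alone (+70 g/day), SD 200 g/day.
print(n_per_group(delta_g=245 - 70))
```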
Statistical Software and General Requirements
Data analysis will be performed in SPSS software, version 26.0 (IBM Corp., Armonk, NY, USA). The level of statistical significance will be set at 0.05 for all tests.
Statistical Analysis Plan
(1) Descriptive statistics will report the demographic, clinical and biological characteristics of the study population. Means with standard deviations (SD) or medians with interquartile ranges (IQR) will be used to describe continuous variables. Frequencies will be used to describe categorical variables.
(2) Clinical and biological parameters at baseline and at each re-evaluation visit will be tested for normality using the Kolmogorov-Smirnov test and for homogeneity of variance using Levene's test.
(3) ANOVA will be used to test the effects of dental treatment, brief dietary advice, their combination, and the sequence of the two interventions for continuous variables with a normal distribution. The Kruskal-Wallis test will be used for non-continuous variables or for continuous variables that are not normally distributed.
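A minimal sketch of this decision path, using synthetic data and SciPy's implementations of the named tests, is shown below; the group means and sample sizes are placeholders, not trial data.

```python
# Illustrative sketch (hypothetical data): choosing between one-way ANOVA and
# Kruskal-Wallis for the four arms after Kolmogorov-Smirnov and Levene tests,
# mirroring the analysis plan above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(loc, 200, 30) for loc in (50, 70, 90, 245)]   # change in intake, g/day

normal = all(
    stats.kstest((g - g.mean()) / g.std(ddof=1), "norm").pvalue > 0.05 for g in groups
)
homogeneous = stats.levene(*groups).pvalue > 0.05

if normal and homogeneous:
    stat, p = stats.f_oneway(*groups)          # parametric comparison
    test = "one-way ANOVA"
else:
    stat, p = stats.kruskal(*groups)           # non-parametric fallback
    test = "Kruskal-Wallis"
print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")
```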
Data Quality and Assurance
All investigators involved in this study will be trained appropriately for the standard operating procedures including the questionnaire conduction, blood sampling and preservation, presurgical examinations and treatment (regarding periodontal diseases, caries, missing teeth and soft tissue disorders) and delivery of interventions.
(1) The 24-h dietary recall will be conducted by clinical nutrition specialists trained appropriately for the standardized interviewing procedures and assessment of dietary intake.
The inter- and intra-examiner reliability with respect to the measurement of fresh fruit and fresh vegetable intake will be assessed by the intraclass correlation coefficient (ICC). In order to ensure optimal inter- and intra-examiner reliability, ICCs need to be greater than 0.75.
(2) The therapists, who will deliver the dental intervention, will be experienced specialists in implant dentistry fulfilling the Shanghai requirements. Treatment will be provided to the satisfaction of the clinician and patient. For logistic reasons, five therapists will be included in this study. (3) The investigator who performs dietary interventions will be trained in delivering a standardized dietary instruction session including the verbal instruction and demonstration of the pamphlet. In addition, the investigator will be trained in the use of questionnaires.
Regular monitoring for assuring protocol compliance, and data quality at the clinical site, including review of source documents and records, consent forms, etc will be performed by an investigator trained in both GCP and the specific procedures. Furthermore, the clinical research coordinator will audit the case report forms for the first few patients to ensure correct filling of forms. The study will also be monitored by the compliance office of the National Clinical Research Center of Oral Diseases and Clinical Research Center of the 9th People's Hospital.
Confidentiality
All trial-related data will be stored securely at Shanghai PerioImplant Innovation Center. The participant information will be stored in locked cabinets with limited access. All data will be anonymized by assigning a Research ID used for data collection and processing to maintain participant confidentiality. All records containing personal identifiers will be stored separately from study records identified by the research ID number.
DISCUSSION
Many older adults with severe tooth loss and masticatory dysfunction change their food choices and adopt softer foods that are richer in carbohydrates and fats and depleted of essential micronutrients and fibers. They also progressively lose weight and become frail and dependent on others for their daily necessities. Replacement of missing teeth alone restores masticatory function but does not positively influence diet. Great attention is currently being paid to the combination of dental and dietary interventions. Their relative importance and optimal sequence, however, remain unknown. This lack of knowledge has far-reaching consequences for the design of optimal treatment regimens and for testing their health benefits in definitive studies. The present study will provide critical information with major implications for the care of individual subjects and for public health and policy (45). The design of this trial has posed significant challenges in terms of experimental design, choice of the population/condition, definition of the dental and dietary interventions, as well as the choice of outcomes. These will be briefly discussed following the PICOT format.
The selected 2 × 2 factorial randomized clinical trial design provides greater efficiency in terms of sample size while allowing testing of multiple clinically relevant questions on the relative effect size of dental and/or dietary interventions. The incorporation of an adaptive design that will recalculate sample size after data will be available from a third of the planned subjects provides robustness to the approach even considering the possible imprecision of the preliminary data used for sample size calculation. While sample size assumptions have been piloted and confirmed in the specific patient population the approach offers added robustness against type II errors. This is particularly important given the high costs of rendering the treatment to this population and the consequent difficulty in properly funding a pilot trial. Specific a priori scenarios have been identified with regards to completion of the trial.
Tooth loss is frequently incremental over the course of life and subjects in the population present with a spectrum of severity of loss of masticatory function. This study will focus on the more severe end of the spectrum as these subjects are both likely to suffer from greater changes in diet and more likely to show improvements in masticatory function because of tooth replacement. It will also recruit aging subjects who represent most edentulous subjects. Additional studies expanding the observations to milder forms of edentulism will be needed. To ensure that subjects will suffer from both masticatory dysfunction and a degree of malnutrition, an inclusion criterion has been added in terms of verification of poor fruit, vegetable, or protein intake. Pilot nutritional analysis of edentulous subjects reporting for treatment in the specific setting has verified that most of them reported at least one aspect of impairment and fit the inclusion criteria. These aspects are important for the external applicability of the results of the trial and ongoing epidemiologic research will provide additional information.
The definition of both the dental intervention and the nutrition education are also notable. To address masticatory dysfunction this study will employ fixed dentures supported by dental implants-a well-defined intervention routinely performed in the specific setting-as these have been shown to provide better objective and subjective chewing benefits (11). The masticatory function will be restored to provide at least 10 occluding pairs of teeth, a number generally considered compatible with adequate function (22,46). HBM has been used in aiding behavior change intervention for decades, and it has been applied to Asian populations. With the HBM-based nutrition education, the objective is to motivate participants from the pre-contemplative stage to the contemplate stage, and even to preparatory stage with the materials provided. Combining with the dental intervention which will solve the physical barrier, the hypothesis is that participants will progress to the executive stage at home. During the follow-up period, the importance of dietary intake will be reinforced to help them to stay in the maintenance stage. The intervention, its instruments and their delivery have been tailored to local circumstances, evaluated, and revised in our pilot study before implemented for the trial. Details are presented in the online appendices as a potential resource for additional trials.
While the equipoise to justify randomization is strong, recruiting patients with edentulism/terminal dentition for a trial is challenging due to the severity of the condition and the impact on quality of life. The opportunity arises in the specific setting due to the waiting list for treatment that justifies the delay in the delivery of the care initially sought by the patient.
Lastly, the choice of the primary outcome has been complex due to the limited previous information on clinically relevant outcomes and the need to keep the size of this trial within the recruitment possibilities of a single center. The choice to focus on a proxy outcome, changes in fresh fruit and fresh vegetable consumption, as the primary outcome rather than a health gain measure is based on the need to establish the effectiveness of the treatment regimen and on logistic considerations. The limitation of the 24-h recall method in providing an accurate estimate of long-term energy intake has been recognized. Thus, the study plans to combine it with food frequency questionnaires, which rely on generic rather than specific memory, to offer a detailed assessment of the study period. Furthermore, the study plans to assess a wide palette of secondary outcomes that will provide insight into the mechanisms of a potential benefit by exploring biochemical markers, metabolomics, changes in the oral-gut microbiome axis, and functional quality of life instruments.
The relatively short follow-up time for the factorial design component of the study is adequate to assess the efficacy of the interventions. The 12-month extension is relevant as it will supply critical information about retention of subjects in the trial and medium-term compliance with the dietary intervention and effectiveness of the dental intervention. It will also provide the basis for future longer-term trials focusing on health outcomes.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board of the Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (Approval No. SH9H-2021-T321-3). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MT conceived and designed the study. CY, H-CL, S-JQ, and J-YS contributed to the study design and protocol development. H-CL, BL, JS, S-CQ, YT, and KD assisted with preliminary analyses on the patient population, piloting of material, and preparation of study launch. XZ provided the sample size calculations and the statistical plan. S-JQ, JS, and BL drafted the manuscript based on the original protocol. All authors revised and approved the final version of the manuscript.
FUNDING
This study has been supported by the Project of Biobank from Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (YBKB202102 and YBKA201906).
|
v3-fos-license
|
2019-12-05T09:05:57.108Z
|
2019-12-04T00:00:00.000
|
209508902
|
{
"extfieldsofstudy": [
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.dib.2019.104931",
"pdf_hash": "3fbec3d2abe4ebd8c9ed5c010380b3bf4069566f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44172",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "98c03c18b9506602f4f69aa95773d4cc0c38e35f",
"year": 2019
}
|
pes2o/s2orc
|
Data on fuzzy logic based-modelling and optimization of recovered lipid from microalgae
This article presents the data of recovered lipid from microalgae using fuzzy logic based-modelling and particle swarm optimization (PSO) algorithm. The details of fuzzy model and optimization process were discussed in our work entitled “Application of Fuzzy Modelling and Particle Swarm Optimization to Enhance Lipid Extraction from Microalgae” (Nassef et al., 2019) [1]. The presented data are divided into two main parts. The first part represents the percentage of recovered lipid using fuzzy logic model and ANOVA. However, the second part shows the variation of the cost function (recovered lipid) for the 100 runs of PSO algorithm during optimization process. These data sets can be used as references to analyze the data obtained by any other optimization technique. The data sets are provided in the supplementary materials in Tables 1–2.
Data
This article presents the numerical data generated during the maximization of recovered lipid from microalgae using fuzzy logic-based modelling and the PSO algorithm. The simulation was carried out using the MATLAB/Simulink software package on a Core i7 computer with the Win10 operating system. The data generation process proceeds in several stages. First, by using the experimental data from Refs. [1,6], a robust model that describes the lipid extraction is generated using the fuzzy logic technique. Table 1 (supplementary materials) shows a comparison of the fuzzy-based model with ANOVA. Second, the optimal decision variables for extracting the lipids are determined using the PSO algorithm. During the optimization process, three operating parameters, power (W), heating time (minutes), and extraction time (hours), were used as decision variables in order to maximize the percentage of recovered lipid, which is used as the cost function. Due to the stochastic behavior of swarm optimizers, the optimizer results cannot be trusted unless many trials have been done [7-9]. Therefore, the optimization process was executed 100 times. The data from the 100 runs are presented in Table 2 (supplementary materials).
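To make the optimization step concrete, the following minimal PSO sketch maximizes a placeholder surrogate objective over the three decision variables within the experimental bounds reported below (180-600 W, 2-8 minutes, 3-4 hours). The surrogate function and all algorithm parameter values are assumptions for illustration; the actual study optimized the output of the fuzzy model.

```python
# Minimal PSO sketch (an assumption for illustration): maximizing a placeholder
# surrogate of the recovered-lipid model over power (W), heating time (min) and
# extraction time (h). The surrogate below is NOT the authors' fuzzy model.
import numpy as np

BOUNDS = np.array([[180.0, 600.0], [2.0, 8.0], [3.0, 4.0]])   # from the experiments

def surrogate_lipid(x):                      # placeholder objective to maximize
    p, t_heat, t_ext = x
    return -((p - 450) / 200) ** 2 - ((t_heat - 6) / 3) ** 2 + 0.5 * t_ext

def pso(n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    x = rng.uniform(lo, hi, (n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([surrogate_lipid(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([surrogate_lipid(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

best_x, best_f = pso()
print("best decision variables [W, min, h]:", np.round(best_x, 2))
```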
Experimental design, materials and methods
A sample of 500 ml of the wet algae was subjected to microwave pre-treatment using a round-bottom open glass. The samples were pre-treated at a microwave power ranging between 180 W and 600 W for times ranging between 2 and 8 minutes. Furthermore, different extraction times were tested, between 3 and 4 hours. More information about the experimental design and data can be found in Refs. [1,2]. Then, based on these experimental data sets, an accurate fuzzy-logic-based model is created to simulate the process. Finally, the PSO algorithm has been used to identify the optimal operating parameters that maximize the recovered lipid. Tables 1 and 2, in Appendix A, show the outputs of the fuzzy model and the results of the optimization process, respectively.
Specifications Table
Subject area: Energy
More specific subject area: Renewable Energy; Artificial Intelligence; Swarm Optimization
Type of data: Excel files
How data was acquired: The input parameters of PSO were taken from Refs. [2,3] and the data of the fuzzy model from Refs. [4,5]; afterwards, the numerical simulation was conducted with the MATLAB/Simulink software package
Data format: Filtered and analyzed
Experimental factors: The fuzzy model has 13 fuzzy rules; the model's training process was done with 13 samples for 50 epochs
Experimental features: The fuzzy-logic-based model has the minimum RMSE and the maximum coefficient of determination compared with ANOVA
Data source location: Wadi Addawaser, Prince Sattam Bin Abdulaziz University, Saudi Arabia
Data accessibility: Data are provided in the supplementary materials with this article
Value of the data: The data presented in this paper can be utilized directly, without spending time to initiate any further simulations, to study the recovered lipid from microalgae. By using these data sets, researchers can make comparisons with other modelling techniques like artificial neural networks (ANNs). These data sets are also very useful for making comparisons with other optimization algorithms such as the genetic algorithm and cuckoo search.
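The "Experimental features" entry above compares the fuzzy model with ANOVA in terms of RMSE and the coefficient of determination; a minimal sketch of how those two metrics can be computed for any pair of measured and predicted lipid-recovery values is given below (the example arrays are placeholders, not the data of Tables 1-2).

import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Placeholder measurements and model outputs (percent recovered lipid).
measured   = [12.1, 15.4, 18.0, 20.2, 22.5]
fuzzy_pred = [12.3, 15.1, 18.2, 20.0, 22.8]
anova_pred = [11.0, 16.5, 17.1, 21.5, 21.0]

for name, pred in [("fuzzy", fuzzy_pred), ("ANOVA", anova_pred)]:
    print(f"{name}: RMSE={rmse(measured, pred):.3f}, R^2={r_squared(measured, pred):.3f}")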
|
v3-fos-license
|
2021-07-07T13:20:14.166Z
|
2021-07-07T00:00:00.000
|
235748335
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.599719/pdf",
"pdf_hash": "b884bb352f2fffdd1914e02327c35cb8e780df6f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44176",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b884bb352f2fffdd1914e02327c35cb8e780df6f",
"year": 2021
}
|
pes2o/s2orc
|
Genetic Variations in the Transforming Growth Factor-β1 Pathway May Improve Predictive Power for Overall Survival in Non-small Cell Lung Cancer
Purpose: Transforming growth factor-β1 (TGF-β1), a known immune suppressor, plays an important role in tumor progression and overall survival (OS) in many types of cancers. We hypothesized that genetic variations of single nucleotide polymorphisms (SNPs) in the TGF-β1 pathway can predict survival in patients with non-small cell lung cancer (NSCLC) after radiation therapy. Materials and Methods: Fourteen functional SNPs in the TGF-β1 pathway were measured in 166 patients with NSCLC enrolled in a multi-center clinical trial. Clinical factors, including age, gender, ethnicity, smoking status, stage group, histology, Karnofsky Performance Status, equivalent dose at 2 Gy fractions (EQD2), and the use of chemotherapy, were first tested under the univariate Cox's proportional hazards model. All significant clinical predictors were combined as a group of predictors named “Clinical.” The significant SNPs under the Cox proportional hazards model were combined as a group of predictors named “SNP.” The predictive powers of models using Clinical and Clinical + SNP were compared with the cross-validation concordance index (C-index) of random forest models. Results: Age, gender, stage group, smoking, histology, and EQD2 were identified as significant clinical predictors: Clinical. Among 14 SNPs, BMP2:rs235756 (HR = 0.63; 95% CI:0.42–0.93; p = 0.022), SMAD9:rs7333607 (HR = 2.79; 95% CI 1.22–6.41; p = 0.015), SMAD3:rs12102171 (HR = 0.68; 95% CI: 0.46–1.00; p = 0.050), and SMAD4: rs12456284 (HR = 0.63; 95% CI: 0.43–0.92; p = 0.016) were identified as powerful predictors of SNP. After adding SNP, the C-index of the model increased from 84.1 to 87.6% at 24 months and from 79.4 to 84.4% at 36 months. Conclusion: Genetic variations in the TGF-β1 pathway have the potential to improve the prediction accuracy for OS in patients with NSCLC.
INTRODUCTION
Lung cancer is the leading cause of cancer death and the second most commonly diagnosed type of cancer in the USA. It was estimated that 235,760 new cases would be diagnosed in 2020, accounting for about 12.5% of all cancers diagnosed, and only 23% of cases are diagnosed at an early stage (1,2). The 5-year survival rate is only about 22.6% in the USA, though this already represents a 13% improvement over the last 5 years for all lung cancers (2,3). Approximately 83% of patients with lung cancer are identified with non-small cell lung cancer (NSCLC) (4), and radiation therapy (RT) is a mainstay local treatment used for all stages of the disease (5). However, the survival benefit of RT to an individual patient varies with the baseline clinical and genetic factors of each patient. Some clinical factors, such as age, stage group, and histology, have a strong correlation with the overall survival (OS) of patients with NSCLC after RT (6). There is a need for an integrated clinical and genetic model for survival prediction.
Recent studies have shown a strong correlation between transforming growth factor-β1 (TGF-β1) and OS in various types of cancer (7). TGF-β1 is a prototype of a multifunctional cytokine and plays an important role in tumor angiogenesis, stroma formation, immune suppression, carcinogenesis, tumor metastasis progression, and prognosis for patients with cancer. Single nucleotide polymorphisms (SNPs) of TGF-β1 have been significant factors for prognosis in colon and pancreatic cancers (8,9). We hypothesized that functional SNPs of the TGF-β1 pathway genes can regulate the TGF-β1 expression level and function of the downstream pathway genes for tumor progression and the immune system of the host, thus contributing to OS in patients with NSCLC.
Study Population
This study included 166 patients with inoperable stages I-III NSCLC, enrolled through prospective studies approved by the institutional review board (IRB) of participating centers. All patients signed written informed consent. Patients received definitive thoracic radiotherapy (≥55 Gy EQD2) with or without chemotherapy. All patients were treated with three-dimensional conformal RT techniques as described in previous studies (10,11). Clinical factors, including total equivalent dose at 2 Gy fractions (EQD2), age, gender, ethnicity, smoking history, histology, stage group, Karnofsky performance score (KPS), and the use of chemotherapy, were collected prospectively.
Selection of SNPs
We selected 14 functional SNPs present in 11 genes of the TGF-β1 pathway based on the following criteria: (1) tag SNPs in the candidate genes; (2) a minor allele frequency greater than 10%; and (3) previously reported significant correlations with the outcome of RT or chemotherapy or with cancer risk.
Sample Collection and Genotyping
The buffy coat was collected from each patient before the commencement of treatment and stored at −80 °C. Genomic DNA was extracted from the buffy coat using the Gentra® Puregene® Blood Mini Kit (Qiagen, Valencia, CA) according to the protocol of the manufacturer. The concentrations of genomic DNA were measured with a NanoDrop 2000c spectrophotometer (NanoDrop Technologies, Inc., Wilmington, DE). Quantified DNA samples were placed on a matrix-assisted laser desorption/ionization time-of-flight mass spectrometer (Sequenom, Inc., San Diego, CA) according to the protocol of the manufacturer. For pre-genotyping quality control, randomly selected samples were blindly run in duplicate or triplicate. For post-genotyping quality control, SNPs with a call rate of <90% across all samples and samples with a call rate of <90% across all SNPs were excluded from further analysis.
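The post-genotyping quality-control rule (dropping SNPs or samples with a call rate below 90%) can be written as a short filtering routine; the sketch below assumes a pandas DataFrame of genotype calls with failed calls stored as NaN, an illustrative data layout rather than the authors' actual pipeline.

import numpy as np
import pandas as pd

def call_rate_filter(geno: pd.DataFrame, threshold: float = 0.90):
    """Drop samples (rows) and SNPs (columns) whose call rate is below threshold.
    geno: samples x SNPs genotype matrix with NaN marking failed calls."""
    snp_call_rate = geno.notna().mean(axis=0)       # fraction of non-missing calls per SNP
    geno = geno.loc[:, snp_call_rate >= threshold]  # keep well-genotyped SNPs
    sample_call_rate = geno.notna().mean(axis=1)    # fraction of non-missing calls per sample
    return geno.loc[sample_call_rate >= threshold]

# Toy example: 4 samples x 3 SNPs, genotypes coded 0/1/2, NaN = no call.
toy = pd.DataFrame(
    {"rs235756": [0, 1, np.nan, 2],
     "rs7333607": [np.nan, np.nan, np.nan, 1],   # low call rate -> dropped
     "rs12102171": [1, 1, 0, 2]},
    index=["P1", "P2", "P3", "P4"])
print(call_rate_filter(toy))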
Statistical Analysis
The analysis was performed with R (12), and the missing data were imputed with the most frequent values. A power analysis was performed based on the data. The Cox proportional hazards model (13) was used to carry out univariate analysis, and the random survival forest tree (14) was used to carry out multivariate analysis. For discrete clinical factors, the median survival time (MST) with 95% CIs and the 24-month survival time with 95% CIs were calculated. First, the Cox proportional hazards model was used to estimate the hazard ratio (HR) and 95% confidence interval (CI) of each predictor. The OS and event indicator, used as the output variables, were calculated from the beginning of treatment to the last visit or death. All significant predictors (p < 0.05) selected from the clinical factors with the univariate Cox proportional hazards model were combined as a group of predictors named "Clinical." The independence between SNPs was tested before running a multivariate model. To show the results of the independence test, the linkage disequilibrium (LD) (15) was calculated and plotted. Then, each SNP was tested with the Cox proportional hazards model. The significant SNPs were combined as a group of predictors named "SNP." Two models, RModel1 and RModel2, were built as random survival forest trees on Clinical and Clinical + SNP, respectively. The justification for using the random survival forest tree instead of the Cox proportional hazards model as the multivariate model was as follows: (1) the ensemble structure of the random survival forest tree can avoid the overfitting issue, given the limited number of patients and numerous predictors used in the study; (2) the random survival forest tree can handle both categorical and continuous predictors smoothly; and (3) the Cox model assumes that continuous predictor variables have linear relationships with the risk of the event occurring, which is usually not true (16).
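As a rough illustration of this univariate screening step, the sketch below fits one Cox proportional hazards model per candidate predictor using the third-party Python package lifelines; the data frame, the column names (os_months, death, age, eqd2) and the values are invented placeholders rather than the study data, and Python is used here only because the original R code is not available.

import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox_screen(df, predictors, duration_col="os_months",
                          event_col="death", alpha=0.05):
    """Fit one Cox PH model per predictor and keep those with p < alpha."""
    selected = {}
    for var in predictors:
        cph = CoxPHFitter()
        cph.fit(df[[duration_col, event_col, var]],
                duration_col=duration_col, event_col=event_col)
        row = cph.summary.loc[var]          # HR, CI and p-value for this predictor
        if row["p"] < alpha:
            selected[var] = (row["exp(coef)"], row["p"])
    return selected

# Tiny synthetic example (values are placeholders, not patient data).
df = pd.DataFrame({
    "os_months": [7, 12, 25, 30, 18, 40, 9, 33],
    "death":     [1, 1, 0, 1, 1, 0, 1, 0],
    "age":       [71, 65, 58, 60, 69, 55, 74, 57],
    "eqd2":      [58, 60, 66, 70, 62, 72, 56, 68],
})
print(univariate_cox_screen(df, ["age", "eqd2"]))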
The predictive powers of RModel1 and RModel2 were estimated and compared in terms of the concordance index (C-index) (17) with 3-fold cross-validation (18). The 3-fold cross-validation randomly and evenly divided the whole data set into three groups. The random survival forest classifier was then trained using two groups as training data, and the trained classifier was tested on the remaining group to obtain the evaluation metric. In this way, three evaluation metrics were obtained using three distinct groups as testing data, and their mean was used in the evaluation.
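A minimal sketch of the cross-validated comparison is given below; it uses scikit-survival's RandomSurvivalForest and censored concordance index as stand-ins for the random survival forest and C-index described above, Python instead of R, and purely synthetic feature matrices, so it illustrates the procedure rather than reproduces RModel1/RModel2.

import numpy as np
from sklearn.model_selection import KFold
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

def cv_cindex(X, event, time, n_splits=3, seed=0):
    """Mean concordance index of a random survival forest over k folds."""
    y = Surv.from_arrays(event=event.astype(bool), time=time)
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        rsf = RandomSurvivalForest(n_estimators=200, random_state=seed)  # the study used 1,000 trees
        rsf.fit(X[train], y[train])
        risk = rsf.predict(X[test])                     # higher value = higher risk
        c = concordance_index_censored(event[test].astype(bool),
                                       time[test], risk)[0]
        scores.append(c)
    return float(np.mean(scores))

# Placeholder data: 60 patients, 6 clinical columns, 4 SNP columns.
rng = np.random.default_rng(0)
X_clinical = rng.normal(size=(60, 6))
X_snp = rng.integers(0, 3, size=(60, 4)).astype(float)
X_clinical_snp = np.hstack([X_clinical, X_snp])
time = rng.exponential(24, size=60)
event = rng.integers(0, 2, size=60)

print("Clinical      :", cv_cindex(X_clinical, event, time))
print("Clinical + SNP:", cv_cindex(X_clinical_snp, event, time))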
Patient Clinical Factors
A total of 166 patients were included in this study. The death probability was 0.51 for the data. The postulated HR was set as 2. The postulated proportion of the sample size allotted to one group was 0.5. The type I error was 0.05, as stated above. The power of 166 patients was 0.7, which is less than the traditional 0.8 but still reasonable (19). The clinical factors listed in Table 1, including gender (p = 0.0084), stage group (p = 0.016 for stage group 2 and p = 0.19 for stage group 3), smoking (p = 0.061 for former smokers and p = 0.041 for smokers), histology (p = 0.024 for squamous, p = 0.022 for large cell, and p = 0.0018 for other), age (p = 0.011), and EQD2 (p = 0.00024), were significant. This group of significant clinical factors was defined as Clinical. The favorable factors were female gender, early stage group, no smoking, adenocarcinoma, younger age, and higher EQD2, consistent with published studies (20). Ethnicity, the use of chemotherapy, and KPS did not show a significant correlation with survival and were not included in the multivariate analysis.
The effect of clinical factors in patients with stage III NSCLC was also tested similarly, and the results were similar to those discussed above. Detailed findings are shown in the Supplementary File.
Individual SNPs and OS
The correlation of all SNPs with OS is summarized in Table 2. The genetic model for each SNP followed the previous publication (21). Among them, four SNPs, BMP2:rs235756 (p = 0.022), SMAD9:rs7333607 (p = 0.015), SMAD3:rs12102171 (p = 0.050), and SMAD4:rs12456284 (p = 0.016), were significant predictors of OS. The Kaplan-Meier (KM) plots of these four SNPs are shown in Figure 1 with the p-values of the log-rank test listed. All p-values for the log-rank test were significant at the cut-off value of 0.05. BMP2:rs235756 (HR = 0.63; 95% CI: 0.42-0.93) in a recessive model showed a lower risk for patients with the minor allele (T). The MST increased from 22 months for patients with the wild type (C) to 37.9 months for patients carrying the minor allele (T) (log-rank p = 0.020, Figure 1A).
SMAD9:rs7333607 (HR = 2.79; 95% CI 1.22-6.41) in a recessive model was correlated with an increased risk of death among patients carrying the minor allele (G). Patients with minor allele (G) of this SNP had a significantly shorter MST of 7.1 months compared with 25.1 months for patients with the wild type (A) (Log-rank p = 0.011, Figure 1B).
SMAD3:rs12102171 (HR = 0.68; 95% CI: 0.46-1.00) was analyzed in a dominant model. Patients carrying the minor allele (T) had a significantly decreased risk of death. This decrease in risk resulted in an increase in MST of nearly 11.8 months: from 18.8 months for those with the wild-type genotype (C) to 30.6 months for patients carrying the minor allele (T) (log-rank p = 0.050, Figure 1C).
SMAD4:rs12456284 (HR = 0.63; 95% CI: 0.43-0.92), in a dominant model, correlated with a decreased risk of death among patients carrying the minor allele (G). Patients with the minor allele (G) of this SNP had a significantly longer MST of 32 months compared with 22 months for patients with the wild type (A) (log-rank p = 0.011, Figure 1D).
The effect of SNPs in patients with stage III NSCLC was also tested similarly, and the results were similar to those discussed above. Detailed findings are shown in the Supplementary File.
A Combined Model of Integrating Clinical and SNP Factors for Survival
The LD plot of the 14 SNPs is shown in Figure 2. Most SNPs showed strong independence (R² < 0.2). The significant SNPs were independent of each other, and the multivariate analysis of each SNP was valid.
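Pairwise LD reported as R² can be approximated as the squared Pearson correlation of minor-allele dosages (0/1/2); the sketch below applies that approximation to a placeholder genotype matrix and is not the exact LD estimator used to produce Figure 2.

import numpy as np

def ld_r2_matrix(genotypes):
    """Pairwise r^2 between SNP columns of a samples-x-SNPs dosage matrix (0/1/2).
    Uses the squared Pearson correlation of dosages as an LD approximation."""
    g = np.asarray(genotypes, dtype=float)
    r = np.corrcoef(g, rowvar=False)   # SNP-by-SNP correlation of dosages
    return r ** 2

# Placeholder genotype dosages for 8 subjects x 4 SNPs.
geno = np.array([
    [0, 0, 1, 2],
    [1, 1, 0, 2],
    [2, 2, 1, 1],
    [0, 0, 2, 0],
    [1, 1, 1, 1],
    [2, 1, 0, 0],
    [0, 0, 2, 1],
    [1, 1, 1, 2],
])
r2 = ld_r2_matrix(geno)
print(np.round(r2, 2))
# Off-diagonal values below roughly 0.2 would be read as approximate independence.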
After a long-term follow-up of 18-100 months, the random forest classifier of RModel2, with 1,000 trees trained on Clinical + SNP, significantly increased the C-index compared with that of RModel1, as shown in Figure 3A. For example, the C-index of RModel1 at 24 months was 84.1%. After adding SNP as predictors, the C-index of RModel2 increased to 87.6%. At 36 months, the C-index increased from 79.4 to 84.4%. A t-test was applied to the C-indices of the two models, and the p-value was 0.003 for both comparisons, which indicated that RModel2 performed better than RModel1 in terms of the C-index.
DISCUSSION
This study analyzed the correlation of several adverse genotypes with clinical outcomes, and the results suggest that the cumulative influence of multiple genetic variants within the TGF-β signaling pathways could improve the prediction accuracy for survival among patients with NSCLC after RT. The survival significance of TGF-β1 pathway genomics has a biologic rationale. TGF-β is a prototype of a multifunctional cytokine and is the ligand for the TGF-β type I and II receptors. The TGF-β superfamily comprises TGF-β1, 2, and 3 and about 30 other family members, including the activin/inhibin subfamily, the BMP subfamily (bone morphogenetic proteins, BMPs), and the Müllerian inhibitory substance (22,23). BMPs are intracellular signaling members that can activate downstream signaling genes in TGF-β signaling pathways (24,25). Smad proteins (Smad 1 through 9) are transcriptional regulators that are important for intracellular TGF-β signaling (26). In TGF-β signaling pathways, these subfamily genes have similar effects on cell growth, cell proliferation and differentiation, and cell death, and play a key role in embryonic development, immune system regulation, and the dual roles of diseases such as skeletal diseases, fibrosis, and cancer (23, 27-30). TGF-β signaling is very important in lung health and disease, regulating lung organogenesis and homeostasis, including alveolar and epithelial cell differentiation, fibroblast activation, and extracellular matrix organization. Moreover, TGF-β is the most potent epithelial-mesenchymal transition (EMT) inducer in NSCLC formation (31). DNA variants such as SNPs can affect the expression and function of core disease-related genes (32).
The finding that SNPs in TGF-β1 pathway genes can predict survival is clinically meaningful and consistent with previous reports. A TGF-β signature predicts metastasis-free survival in NSCLC (33,34). SNPs of the TGF-β1 gene have been reported to be associated with OS in patients with NSCLC treated with definitive radio(chemo)therapy (35-37). The signature of a single SNP may provide only a modest or undetectable effect, whereas the amplified effects of combined SNPs in the same pathway may enhance predictive power (7,38). In the radiation setting, TGF-β1 may also help predict radiation-induced lung toxicity (RILT) (39-41).
The SNPs with prognostic value identified in this study are consistent with reports from other investigators on their significance in other cancers (42-44). BMP2:rs235756 is in the downstream region of the BMP2 gene and has already been shown to alter normal BMP function. Several studies suggested that BMP2:rs235756 increased the production of the BMP protein and serum ferritin concentrations, which promoted BMP signaling in cancer progression (42-44). BMP2 is highly expressed in lung cancer and is involved in regulating lung cancer angiogenesis and metastasis (45,46). Silencing the expression of BMP-2 inhibits lung cancer cell proliferation and migration (47). BMP2:rs235756 has previously been reported as a significant biomarker for OS in patients with lung cancer (21). For patients who underwent RT, BMP2:rs235756 was shown to predict radiation pneumonitis (48), which is an important clinical outcome.
Furthermore, this study also suggested that SMAD3:rs12102171 correlated with OS in NSCLC. SMAD3:rs12102171, located in the intronic region between exons 3 and 4 of the SMAD3 gene, is known for its function as a mediator of the pro-fibrotic activities of TGF-β. Inflammatory cells and fibroblasts without Smad3 do not auto-induce TGF-β, and Smad3-null mice are resistant to radiation-induced fibrosis (49). TGF-β/Smad3 signaling plays critical roles in biological processes such as epithelial-mesenchymal transition (EMT), lung cancer cell progression, and lung cancer patient survival (21,50). SMAD3:rs12102171 has also been reported to correlate significantly with osteoarthritis (51). SMAD9:rs7333607 is located in the intronic region of the SMAD9 gene and has so far been correlated only with lung cancer survival (21).
Smad4 belongs to the Smad gene family, acts as a mediator of TGF-β signaling pathways (26), and has been classified as a tumor suppressor gene that plays important roles in maintaining tissue homeostasis and suppressing tumorigenesis (1). The loss of SMAD4 expression significantly correlated with poor OS in patients with cancers such as pancreatic, colorectal, and prostate cancer (52,53). The SNP rs12456284 is located in the 3′ UTR region of the SMAD4 gene, was predicted to influence potential miRNA binding and to downregulate SMAD4 expression, and has been associated with gastric cancer (54). Genetic variants in the BMP/SMAD4/HAMP hepcidin-regulating pathway, such as HAMP rs1882694 and BMP2 rs1979855, rs3178250, and rs1980499, but not rs12456284, were associated with OS, local-regional progression-free survival, progression-free survival, and distant metastasis-free survival in patients receiving definitive RT for NSCLC (55).
In the tree analysis of this study, the variable importance (VIMP) measures the increase (or decrease) in prediction error for the forest classifier when a variable is randomly "noised up." A large positive VIMP shows that the prediction accuracy of the forest classifier is significantly degraded when that variable is noised up; thus, a large VIMP indicates a more predictive variable. The VIMP of each variable in RModel1 and RModel2 is listed in Figures 3B,C. EQD2 and stage group were consistently the two most important predictors in both models. SMAD3:rs12102171 was more important than the other predictors, except for EQD2 and stage group, a finding that has not been reported before. BMP2:rs235756 and SMAD4:rs12456284 had an importance similar to that of smoking, which has consistently been shown to be an important predictor of clinical OS in patients with NSCLC. SMAD9:rs7333607 was less important and may be dropped should these results be validated by independent studies.
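The "noising-up" idea behind VIMP can be sketched as a generic permutation-importance routine: permute one column, re-score the model, and record the drop in concordance. The function below assumes a fitted model with a predict method returning risk scores and a user-supplied C-index function; it is a simplified stand-in, not the VIMP implementation of the random survival forest package used in the study.

import numpy as np

def permutation_vimp(model, X, event, time, cindex_fn, n_repeats=20, seed=0):
    """Drop in C-index when each column of X is permuted; larger = more important.
    model must expose predict(X) returning risk scores; cindex_fn(event, time, risk)."""
    rng = np.random.default_rng(seed)
    base = cindex_fn(event, time, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            Xp[:, j] = Xp[perm, j]             # "noise up" variable j
            drops.append(base - cindex_fn(event, time, model.predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

With the stand-ins from the earlier cross-validation sketch, cindex_fn could simply wrap concordance_index_censored, e.g. lambda e, t, r: concordance_index_censored(e.astype(bool), t, r)[0].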
The present study has several limitations. First, it has limited statistical power because of the small sample size in each stage group and the limited number of SNPs analyzed. Second, the selection of SNPs was rather arbitrary, being limited by the data published at the start of this study. Additional SNP candidates may be identified in the future; future studies can use the methodology of this study to develop better models with the inclusion of more candidates and more external validation. Although it shows the promise of genetic variation in guiding personalized medicine, the study should be considered exploratory, and the findings should be validated in an independent study population.
CONCLUSIONS
In this study, we systematically evaluated genetic variations in the TGF-β1 pathway as predictors of the outcomes for patients with NSCLC treated with RT. Four SNPs (SMAD3:rs12102171, BMP2:rs235756, SMAD9:rs7333607, and SMAD4: rs12456284) showed strong correlations with OS in patients with NSCLC after RT. The current model improves prediction accuracy by adding genetic variations in the TGF-β1 pathway.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI (accession numbers are: SCV001478478-SCV001478481).
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by IRB. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
|
v3-fos-license
|
2020-03-19T10:17:31.646Z
|
2020-03-01T00:00:00.000
|
212751118
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-393X/8/1/127/pdf",
"pdf_hash": "23d72faf98344dd251f623595a3e41b4c25eab03",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44178",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "ccf034cf5f36fe12c4cc4fe6b3ff5eef807822de",
"year": 2020
}
|
pes2o/s2orc
|
Tauopathy Analysis in P301S Mouse Model of Alzheimer Disease Immunized With DNA and MVA Poxvirus-Based Vaccines Expressing Human Full-Length 4R2N or 3RC Tau Proteins.
Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by a progressive memory loss and cognitive decline that has been associated with an accumulation in the brain of intracellular neurofibrillary tangles (NFTs) formed by hyperphosphorylated tau protein, and extracellular senile plaques formed by β-amyloid peptides. Currently, there is no cure for AD and after the failure of anti β-amyloid therapies, active and passive tau immunotherapeutic approaches have been developed in order to prevent, reduce or ideally reverse the disease. Vaccination is one of the most effective approaches to prevent diseases and poxviruses, particularly modified vaccinia virus Ankara (MVA), are one of the most promising viral vectors used as vaccines against several human diseases. Thus, we present here the generation and characterization of the first MVA vectors expressing human tau genes; the full-length 4R2N tau protein or a 3RC tau fragment containing 3 tubulin-binding motifs and the C-terminal region (termed MVA-Tau4R2N and MVA-Tau3RC, respectively). Both MVA-Tau recombinant viruses efficiently expressed the human tau 4R2N or 3RC proteins in cultured cells, being detected in the cytoplasm of infected cells and co-localized with tubulin. These MVA-Tau vaccines impacted the innate immune responses with a differential recruitment of innate immune cells to the peritoneal cavity of infected mice. However, no tau-specific T cell or humoral immune responses were detected in vaccinated mice. Immunization of transgenic P301S mice, a mouse model for tauopathies, with a DNA-Tau prime/MVA-Tau boost approach showed no significant differences in the hyperphosphorylation of tau, motor capacity and survival rate, when compared to non-vaccinated mice. These findings showed that a well-established and potent protocol of T and B cell activation based on DNA/MVA prime/boost regimens using DNA and MVA vectors expressing tau full-length 4R2N or 3RC proteins is not sufficient to trigger tau-specific T and B cell immune responses and to induce a protective effect against tauopathy in this P301S murine model. In the pursuit of AD vaccines, our results highlight the need for novel optimized tau immunogens and additional modes of presentation of tau protein to the immune system.
Introduction
Alzheimer's disease (AD) is the most common neurodegenerative disorder and the biggest cause of dementia worldwide, representing one of the major causes of dependence, disability and mortality in elderly people [1]. It is characterized by an irreversible and progressive neural atrophy and memory loss [2] that is associated with two main pathological lesions in the brain: the presence of senile plaques and the accumulation of neurofibrillary tangles (NFTs), both leading to neural death [3,4].
Senile plaques are formed of abnormally folded β-amyloid peptides and play an important role in the development of the disease, mainly in familial Alzheimer's disease (FAD) [5]. Therefore, β-amyloid peptides, presented at the extracellular space in monomeric or aggregated forms, have been suggested to be a suitable target for AD treatment [6]. Several active [7,8] or passive [9,10] anti-β-amyloid immunotherapy procedures have been shown to lower cerebral β-amyloid levels and improve cognition in animal models of AD, but clinical trials have yielded disappointing results [11,12], including a phase II clinical trial that was halted because of adverse effects like meningoencephalitis [13].
NFTs consist of helical filaments of hyperphosphorylated tau protein inside neurons. Although senile plaques of β-amyloid appear earlier than NFT pathology, they don't correlate with the disease progression as they reach a plateau early in the symptomatic phase of the disease [14,15]. However, NFT pathology shows correlation with the stage of the disease, its clinical features and severity [16][17][18][19][20][21]. Therefore, it has been suggested that by the time tau pathology appears, β-amyloid therapy would be useless [22]. This possibility could explain the failure of β-amyloid therapy once the disease has developed, at later stages. Furthermore, the presence of extracellular toxic tau protein has been described [23]. Thus, based on these observations, several active and passive tau immunotherapeutic approaches have been tested in animal models and in phase I and II clinical trials with the aim of inducing anti-tau antibodies capable of clearing tau pathological species and eventually improve neuronal function [24]. Active immunotherapeutic approaches include the whole tau molecule, tau in aggregated or modified forms or synthetic tau peptides, such as ACI-35 [25] and AADvac1 [26]. Most of the passive immunotherapeutic approaches consist of monoclonal anti-tau antibodies, such as 8E12, RO7105705 and BIIB092 [27], and it has been recently described that anti-tau antibodies enter the brain [28,29] and can be internalized in neurons via Fcγ receptors [30]. Moreover, in addition to conventional vaccines, gene-based methods have been used to induce the expression of proteins to activate the cellular immune response upon delivery of DNA vectors [31].
Modified vaccinia virus Ankara (MVA) is a highly attenuated poxvirus vector extensively used in several preclinical and clinical trials as a vaccine candidate against numerous infectious diseases and cancer [32][33][34]. MVA is safe, well tolerated and expresses high levels of heterologous antigens, triggering potent immune responses against them [35]. Therefore, the use of MVA as a vector to express the human tau gene could be an encouraging approach to generate novel vaccines that might control AD progression.
Here, we describe the generation and characterization of two novel vaccine candidates against AD based on the MVA vector expressing either the human full-length 4R2N tau isoform protein or a 3RC tau protein containing 3 tubulin-binding motifs and the C-terminal region (termed MVA-Tau4R2N and MVA-Tau3RC, respectively). Both MVA-Tau vaccine candidates correctly expressed the corresponding tau proteins, which were detected in the cytoplasm of infected cells co-localized with tubulin, and the recombinant viruses were highly stable in cell culture upon multiple passages. Both MVA-Tau vaccines impacted the in vivo recruitment of innate immune cells in the peritoneal cavity of infected mice, although no tau-specific T and B cell immune responses were detected in mice immunized with the MVA-Tau vectors. Vaccination of transgenic P301S mice, a mouse model for tauopathies, with the combination of two vectors, a DNA-Tau as a priming component (DNA-Tau4R2N or DNA-Tau3RC) and MVA-Tau as a booster (MVA-Tau4R2N or MVA-Tau3RC), was not able to decrease significantly the levels of hyperphosphorylated tau, a clinical sign of AD, nor to have a significant impact on the motor capacity and survival rate. Despite the failure to control AD progression in this P301S mouse model, these results open the path to generate novel optimized tau immunogens able to induce a better tau antigen presentation to the immune system and to develop novel vaccination protocols that could control disease progression.
Ethics Statement
Female C57BL/6OlaHsd mice (6 to 8 weeks old) were purchased from Envigo Laboratories, stored in the animal facility of the Centro Nacional de Biotecnología (CNB) (Madrid, Spain), and the immunogenicity studies were approved by the Ethical Committee of Animal Experimentation (CEEA) of CNB-CSIC (Madrid, Spain) and by the Division of Animal Protection of the Comunidad de Madrid (PROEX 331/14). The efficacy animal studies in transgenic P301S mice were approved by the CEEA of Centro de Biología Molecular "Severo Ochoa" (CBMSO) (Madrid, Spain) and the Division of Animal Protection of the Comunidad de Madrid (PROEX 62/14). Animal procedures were performed according to international guidelines and to the Spanish law under the Royal Decree (RD 53/2013).
Viruses
The parental virus used for the generation of the recombinant MVA-Tau4R2N and MVA-Tau3RC vaccine candidates is a wild-type (WT) MVA (MVA-WT) modified by inserting the green fluorescent protein (GFP) gene into the vaccinia virus (VACV) thymidine kinase (TK) locus and by deleting the immunomodulatory VACV genes C6L, K7R, and A46R (termed MVA-∆-GFP) [36][37][38]. To generate the MVA-Tau4R2N or MVA-Tau3RC vaccine candidates, the GFP insert of MVA-∆-GFP was substituted by the full-length human tau gene (isoform Tau4R2N) or the human Tau3RC fragment containing 3 tubulin-binding motifs and the C-terminal region, respectively. MVA-WT was also used as a control. All MVAs were grown in primary chicken embryo fibroblast (CEF) cells to obtain a master seed stock (P2 stock), purified through two cycles of sucrose-cushion sedimentation, and titrated, as previously described [35]. All MVAs were free of contamination with mycoplasma, bacteria or fungi.
Human Tau Antigens
In this study we used the full-length human tau gene (isoform Tau4R2N; GenBank accession number X14474.1), and a tau 3RC fragment containing three tubulin-binding motifs and the C-terminal region [39]. Both tau sequences were previously cloned in the mammalian plasmid expression vector pSG5 to generate pSG5-Tau4R2N and the pSG5-Tau3RC plasmids, respectively (also termed in this study DNA-Tau4R2N and DNA-Tau3RC, respectively) that correctly expressed the Tau4R2N and Tau3RC proteins [40,41].
Construction of Plasmid Transfer Vectors pCyA-Tau4R2N and pCyA-Tau3RC
The plasmid transfer vectors pCyA-Tau4R2N and pCyA-Tau3RC were constructed and used for the generation of recombinant viruses MVA-Tau4R2N and MVA-Tau3RC, respectively, allowing the insertion of the human tau genes in the TK locus of parental MVA-∆-GFP by homologous recombination, following an infection/transfection procedure, as previously described [36][37][38][42]. The full-length human Tau4R2N or Tau3RC genes present in the mammalian plasmid expression vectors pSG5-Tau4R2N and pSG5-Tau3RC were amplified by PCR (primers will be provided upon request) and inserted in the plasmid transfer vector pCyA-20 [42] to generate the pCyA-Tau4R2N and pCyA-Tau3RC plasmid transfer vectors, respectively. The plasmid transfer vectors pCyA-Tau4R2N and pCyA-Tau3RC contain the VACV synthetic early/late (sE/L) promoter, a multiple-cloning site where the human Tau4R2N or Tau3RC genes are inserted between the VACV TK-L and TK-R flanking regions, the selectable marker gene for ampicillin, and a β-galactosidase (β-Gal) reporter gene sequence between two repetitions of the VACV TK-L flanking arms that will lead to the deletion of the β-galactosidase gene from the final recombinant virus by homologous recombination after successive passages. The correct generation of pCyA-Tau4R2N and pCyA-Tau3RC was confirmed by DNA sequence analysis.
Generation of Recombinant Viruses MVA-Tau4R2N and MVA-Tau3RC
MVA-Tau4R2N and MVA-Tau3RC were generated using MVA-∆-GFP as parental virus and pCyA-Tau4R2N or pCyA-Tau3RC as plasmid transfer vectors, respectively, using an infection/transfection protocol previously described [36][37][38]42]. The MVA-Tau4R2N and MVA-Tau3RC recombinant viruses obtained were then grown in CEF cells, purified and titrated by plaque immunostaining assay [35]. The correct generation and purity of recombinant viruses MVA-Tau4R2N and MVA-Tau3RC was confirmed by PCR with primers TK-L and TK-R, annealing in the VACV TK locus and allowing the amplification of the full-length human Tau4R2N or the Tau3RC inserts, as previously described [36][37][38]42]. Moreover, the correct presence of deletions in VACV C6L, K7R and A46R genes in MVA-Tau4R2N and MVA-Tau3RC was confirmed by PCR, as previously described [36][37][38]43,44]. Furthermore, the correct insertion of human Tau4R2N or Tau3RC genes was also confirmed by DNA sequence analysis.
Genetic Stability of MVA-Tau4R2N and MVA-Tau3RC
The genetic stability of recombinant viruses MVA-Tau4R2N and MVA-Tau3RC was analyzed as previously described [36][37][38]42] during 9 low multiplicity of infection (MOI) serial passages by checking by Western blotting the expression of human Tau4R2N and Tau3RC proteins, as described above in Section 2.7.2.
Analysis of the Expression of Human Tau4R2N and Tau3RC Proteins by Confocal Immunofluorescence Microscopy
Immunofluorescence studies were done in HeLa cells mock-infected or infected at a MOI of 0.5 PFUs/cell with MVA-Tau4R2N, MVA-Tau3RC or MVA-WT for 24 h, as previously described [36][37][38][45]. We used as a microtubule marker a rabbit polyclonal anti-tubulin antibody (BioNova, Madrid, Spain; diluted 1:200), and to detect the human tau proteins we used a mouse monoclonal antibody against the microtubule-binding region of the human tau protein (antibody 7.51; diluted 1:200). Anti-tau and anti-tubulin antibodies were then detected with mouse or rabbit secondary antibodies conjugated with the fluorochrome Alexa Fluor 488 (green) and Alexa Fluor 594 (red), respectively (Invitrogen, Carlsbad, CA, USA; diluted 1:500). The cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI) (Sigma-Aldrich, St. Louis, MO, USA). Images of sections of the cells were acquired using a Leica TCS SP5 microscope and were recorded and processed.
Recruitment of Immune Cells in the Peritoneal Cavity of C57BL/6 Mice Inoculated with MVA-Tau4R2N and MVA-Tau3RC
Groups of female C57BL/6OlaHsd mice (6 to 8 weeks old; n = 5 mice per group) were injected by the intraperitoneal (i.p.) route with 1 × 10⁷ PFUs per mouse of MVA-Tau4R2N, MVA-Tau3RC, MVA-∆-GFP or PBS. At 24 and 48 h post inoculation, peritoneal exudate cells were collected in 6 mL of PBS-2% FCS and the presence of different immune cells was analyzed by flow cytometry, as previously described [37,46,47]. Absolute numbers of immune cell populations for each mouse were determined by flow cytometry after extrapolation to the number of cells counted after the peritoneal washes.
P301S Transgenic Mice Immunization Schedule
The efficacy of the recombinant viruses MVA-Tau4R2N and MVA-Tau3RC was evaluated in transgenic P301S mice, a mouse model for tauopathies, obtained from the Jackson Laboratory (B6;C3-Tg(Prnp-MAPT*P301S)PS19Vle/J), which carries a mutant (P301S) human microtubule-associated protein tau (MAPT) gene encoding the T34 tau isoform (1N4R) driven by the mouse prion-protein promoter (Prnp) on a B6C3H/F1 genetic background [21,48]. This background of P301S mice was homogenized to C57BL/6 by backcrossing these mice with C57BL/6 wild-type females in our laboratory. For the study of the efficacy of MVA-Tau4R2N, groups of transgenic P301S mice (n = 4 mice/group; males and females, 13 weeks of age at the beginning of the study) were immunized with 100 µg of pSG5-Tau4R2N (termed DNA-Tau4R2N) or pSG5-Φ (termed DNA-Φ) by the intramuscular (i.m.) route. Four weeks after the first immunization (week 17), mice received a booster dose with 2 × 10⁷ PFU of MVA-Tau4R2N or MVA-WT by the i.p. route. Additionally, 4 C57BL/6 WT mice were immunized at the same time points (weeks 13 and 17) with PBS. At week 31 mice were sacrificed and their brains were collected to study tau phosphorylation by immunohistochemistry or Western blot (see Section 2.12). For the study of the efficacy of MVA-Tau3RC, groups of transgenic P301S mice (n = 7 mice/group; males and females, 22 weeks of age at the beginning of the study) were immunized with 100 µg of pSG5-Tau3RC (termed DNA-Tau3RC) or DNA-Φ by the i.m. route. Four weeks after the first immunization (week 26), mice received a booster dose with 2 × 10⁷ PFU of MVA-Tau3RC or MVA-WT by the i.p. route. At weeks 34, 41, and 47 (months 8, 9.5 and 11, respectively) the motor capacity of the mice was evaluated by a rotarod test (see Section 2.13). At week 48, mice were sacrificed and their brains were collected to study tau phosphorylation by Western blot or immunohistochemistry (see Section 2.12).
Study of Tau Phosphorylation in Brain Samples by Western Blot or Immunohistochemistry
Tau phosphorylation in hippocampal brain samples was analyzed by Western blotting, as previously described [49]. Extracts were prepared by homogenizing hippocampal samples in ice-cold extraction buffer consisting of 50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1% NP-40, 1 mM sodium orthovanadate, 1 mM EDTA, a protease inhibitor cocktail (cOmplete™, Roche), and 1 µM okadaic acid. Protein content was determined by the Bradford protein assay (Sigma-Aldrich, St. Louis, MO, USA), and 20 µg of total protein were electrophoresed on a 10% SDS-PAGE gel and then transferred to a nitrocellulose membrane. Prior to antibody binding, membranes were blocked with 5% nonfat dried milk. To evaluate the expression of total and phosphorylated human tau proteins, a mouse monoclonal antibody against total human tau protein (antibody TAU-5; Merck Millipore, Burlington, MA, USA; diluted 1/1000) and a mouse monoclonal antibody that recognizes tau protein phosphorylated at both serine 202 and threonine 205 (antibody AT8; Thermo Fisher Scientific, Waltham, MA, USA; diluted 1/100) were used. As loading controls, a mouse anti-glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody (Abcam; diluted 1/5000) or a mouse anti-β-actin antibody (Sigma-Aldrich, St. Louis, MO, USA; diluted 1/1000) was used. The membranes were incubated with the antibodies at 4 °C overnight. An anti-mouse HRP-conjugated antibody (Dako; diluted 1/5000) was used as the secondary antibody, and the immunocomplexes were detected using an ECL system (Amersham Biosciences, Little Chalfont, UK) and analyzed in a ChemiDoc Imaging System (Bio-Rad, Hercules, CA, USA). Band intensities were quantified using ImageJ software (NIH, Bethesda, MD, USA), and phospho-tau (AT8)/tau (TAU-5) intensity ratios were represented and analyzed using GraphPad Prism software (San Diego, CA, USA).
Tau phosphorylation in dorsal dentate gyrus samples was analyzed by immunohistochemistry, as previously described [49]. Sections were immersed in 0.3% H₂O₂ in PBS for 30 min to quench endogenous peroxidase activity. Subsequently, sections were blocked for 1 h in PBS containing 0.5% Fetal Bovine Serum, 0.3% Triton X-100 and 1% BSA (Sigma-Aldrich, St. Louis, MO) and incubated overnight at 4 °C in PBS containing 0.3% Triton X-100 and 1% BSA with the corresponding primary antibody: anti-tau TAU-5 (Calbiochem, San Diego, CA, USA) or anti-phosphorylated tau AT8 (Thermo Fisher Scientific) antibodies. Finally, brain sections were incubated with anti-mouse secondary antibody and avidin-biotin complex using the Elite Vectastain kit (Vector Laboratories). Chromogen reactions were performed with diaminobenzidine (SIGMAFAST™ DAB, Sigma-Aldrich, St. Louis, MO, USA) for 10 min. Mouse sections were mounted on glass slides and coverslipped with Mowiol (Calbiochem). Images were captured using an Olympus BX41 microscope with an Olympus camera DP-70 (Olympus Denmark A/S).
Rotarod Test
Motor coordination and balance of immunized P301S mice were tested with a rotarod test using an accelerating rotarod apparatus (Ugo Basile, Comerio, Italy). After a pre-training period of two days at a constant speed (4 rpm over 1 min four times on the first day and 8 rpm over 1 min four times on the second day), on the third day the rotarod accelerated from 4 to 40 rpm over 5 min and mice were tested three times. The latency to fall was measured during the accelerating trials.
Data Analysis and Statistical Procedures
The statistical significance of differences between groups in the experiment of cell recruitment was determined by Student's t test (unpaired, non-parametric, two-tailed). The statistical analyses of the tau phosphorylation measured by immunohistochemistry and Western blotting in immunized P301S mice were done using one-way ANOVA (unpaired, two-tailed) with Dunnett's correction for multiple comparisons. Statistical analysis of rotarod test was performed using Student's t test. For the survival analysis, we carried out a Kaplan-Meier comparison of the survival curves (Log-rank Mantel-Cox test). Significant differences are described as follows: * p ≤ 0.05; ** p ≤ 0.005; *** p ≤ 0.001.
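For the survival comparison, the Kaplan-Meier curves and log-rank (Mantel-Cox) test can be sketched with the third-party Python package lifelines as below; the survival times, event flags and group labels are invented placeholders, not the P301S data, and Python is used only as a stand-in for the statistics software employed by the authors.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder survival data (weeks) and event flags (1 = death, 0 = censored).
vaccinated_t = [40, 44, 48, 48, 46, 48, 42]
vaccinated_e = [1, 1, 0, 0, 1, 0, 1]
control_t    = [36, 38, 41, 45, 48, 43, 39]
control_e    = [1, 1, 1, 1, 0, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(vaccinated_t, event_observed=vaccinated_e, label="DNA-Tau/MVA-Tau")
print("median survival (vaccinated group):", kmf.median_survival_time_)

result = logrank_test(vaccinated_t, control_t,
                      event_observed_A=vaccinated_e,
                      event_observed_B=control_e)
print("log-rank p-value:", result.p_value)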
Generation and In Vitro Characterization of MVA-Tau4R2N and MVA-Tau3RC
To generate novel vaccines against AD that could impact on tau pathology, we have developed two MVA-based vaccine candidates expressing either the full-length human Tau4R2N isoform or the Tau3RC mutant protein (termed MVA-Tau4R2N and MVA-Tau3RC, respectively). Human Tau4R2N and Tau3RC genes were inserted in the vector backbone of an optimized parental MVA (termed MVA-∆-GFP) that contains deletions in the VACV immunomodulatory genes C6L, K7R, and A46R [36][37][38], and were placed into the VACV TK locus under the transcriptional control of the VACV sE/L promoter driving the constitutive expression of the human tau proteins ( Figure 1A). This optimized MVA vector expressing antigens from various pathogens was successfully used as a vaccine candidate against chikungunya virus (CHIKV) [36], Ebolavirus [37] and Zika virus [38] triggering broad T and B cell immune responses in animals and providing high protective efficacy after virus challenge. The correct insertion and purity of recombinant MVA-Tau4R2N and MVA-Tau3RC viruses were analyzed by PCR using primers annealing in the VACV TK-flanking regions that confirmed the presence of the full-length human Tau4R2N and the Tau3RC genes in MVA-Tau4R2N and MVA-Tau3RC, respectively ( Figure 1B). Moreover, the correct sequence of both full-length human Tau4R2N and Tau3RC genes inserted in the VACV TK locus was also confirmed by DNA sequencing.
To show that MVA-Tau4R2N and MVA-Tau3RC constitutively express the full-length human Tau4R2N and Tau3RC proteins, respectively, we carried out a Western blot analysis using the specific antibody 7.51, which binds to the microtubule-binding region, to analyze cell extracts from DF-1 cells mock infected or infected at a MOI of 5 PFU/cell with MVA-Tau4R2N, MVA-Tau3RC, or parental MVA-∆-GFP for 24 h. The results demonstrated that MVA-Tau4R2N and MVA-Tau3RC properly expressed the full-length human Tau4R2N and Tau3RC proteins, respectively (Figure 1C). Moreover, a kinetic time course showed that both MVA-Tau vaccine candidates expressed the human Tau4R2N and Tau3RC proteins in cell extracts as early as 3 hpi, reaching higher levels at 24 hpi (Figure 1D). Then, to ensure that MVA-Tau4R2N and MVA-Tau3RC are stable and can be maintained in cultured cells without the loss of the full-length human Tau4R2N or Tau3RC genes, both MVA-Tau vaccine candidates were additionally grown in DF-1 cells infected at a MOI of 0.01 PFU/cell for 9 consecutive passages, and the expression of the human Tau4R2N or Tau3RC proteins during the different passages was determined by Western blotting. The results showed that both MVA-Tau vaccine candidates efficiently expressed the human tau proteins after successive passages, demonstrating that recombinant MVA-Tau4R2N and MVA-Tau3RC are genetically stable (Figure 2B).
The Human Tau4R2N and Tau3RC Proteins Expressed by MVA-Tau Vaccine Candidates are Located in the Cytoplasm and Co-Localize with Tubulin
The expression and intracellular localization of the human Tau4R2N and Tau3RC proteins expressed by MVA-Tau4R2N and MVA-Tau3RC was studied by confocal immunofluorescence microscopy in HeLa cells infected at a MOI of 0.5 PFU/cell with MVA-Tau4R2N, MVA-Tau3RC or MVA-WT. At 24 hpi cells were permeabilized and stained with antibodies against human tau protein and tubulin (Figure 3). The results showed that the human Tau4R2N and Tau3RC proteins (in green) were abundantly expressed in the cytoplasm of MVA-Tau4R2N-or MVA-Tau3RC-infected cells and co-localized with tubulin (in red). Then, cells were fixed and permeabilized, followed by labeling with a mouse monoclonal anti-tau antibody (7.51), or a rabbit anti-tubulin antibody. Anti-tau was detected with a mouse secondary antibody conjugated with the fluorochrome Alexa Fluor 488 (green), and anti-tubulin was detected with a rabbit secondary antibody conjugated with Alexa Fluor 594 (red). Cell nuclei were stained using DAPI (blue). Scale bar: 10 µm.
MVA-Tau4R2N does not Reduce Significantly Hyperphosphorylated Tau in the Brains of Vaccinated Transgenic P301S Mice
Next, as a first proof-of-concept, we studied the efficacy of MVA-Tau4R2N as a vaccine candidate against AD in transgenic P301S mice, a mouse model normally used to study tauopathies [21]. To perform the immunization studies we used a widely employed and potent vaccination procedure based on priming with a DNA vector and boosting with a poxvirus MVA vector that is able to trigger high levels of antigen-specific T and B cell immune responses [50]. Thus, P301S mice (n = 4 mice/group; 13 weeks old) were immunized by the i.m. route with DNA-Tau4R2N (or empty DNA-Φ as a control) and, 4 weeks later, boosted by the i.p. route with MVA-Tau4R2N or MVA-WT, as described in Materials and Methods and in Figure 5A. Furthermore, C57BL/6 WT mice (n = 4 mice; 13 weeks old) inoculated with 2 doses of PBS at weeks 13 and 17 were used as control animals. At week 31, animals were sacrificed and the levels of hyperphosphorylated tau in the brain were determined by Western blotting analysis in hippocampal samples (Figure 5B) and by immunohistochemistry in dorsal dentate gyrus samples (Figure 5C). The Western blot results, using the phospho-tau-specific AT8 antibody, showed that although the DNA-Tau4R2N/MVA-Tau4R2N immunization decreased the levels of hyperphosphorylated tau in comparison to animals immunized with DNA-Φ/MVA-WT (Figure 5B), the differences observed were not statistically significant. Additionally, the analysis by immunohistochemistry of the number of cells containing hyperphosphorylated tau in dorsal dentate gyrus samples showed that DNA-Tau4R2N/MVA-Tau4R2N immunization decreased the number of cells containing hyperphosphorylated tau, in comparison to animals immunized with DNA-Φ/MVA-WT (Figure 5C), but again the differences observed were not statistically significant. In summary, immunization with DNA-Tau4R2N/MVA-Tau4R2N does not significantly reduce hyperphosphorylated tau protein in vaccinated P301S mice. Next, we further analyzed whether MVA-Tau3RC could control the tau pathology in vaccinated P301S mice by avoiding or diminishing motor failure, increasing the survival rate and reducing hyperphosphorylated tau. Thus, we vaccinated P301S mice (n = 7 mice/group; 22 weeks old) with DNA-Tau3RC (or empty DNA-Φ, as a control) and 4 weeks later (week 26) infected them with MVA-Tau3RC or MVA-WT, as described in Materials and Methods and Figure 6A. At weeks 34, 41, and 47 (months 8, 9.5 and 11, respectively), rotarod tests were performed to analyze the motor capacity of the mice (Figure 6B), and at week 48, mice were sacrificed. The survival rate was analyzed during the whole study (Figure 6C), and at week 48 the levels of hyperphosphorylated tau were determined by Western blot in hippocampal samples (Figure 6D) and by immunohistochemistry in dorsal dentate gyrus samples (Figure 6E). The analysis of the motor capacity using rotarod tests showed that in all animals the latency to fall decreased with time, with no significant differences between mice immunized with DNA-Tau3RC/MVA-Tau3RC compared to DNA-Φ/MVA-WT control mice (Figure 6B). Furthermore, the analysis of the survival rate showed that at the end of the experiment (week 48) vaccinated mice had a higher survival rate (60%) than control mice (30%) (Figure 6C), but the differences observed were not significant.
The analysis by Western blotting of hyperphosphorylated tau in hippocampal samples showed that both immunization groups induced similar levels of hyperphosphorylated tau (Figure 6D). Furthermore, the analysis by immunohistochemistry of the number of cells containing hyperphosphorylated tau in dorsal dentate gyrus samples also showed that the DNA-Tau3RC/MVA-Tau3RC immunization group induced a number of cells with hyperphosphorylated tau similar to that of DNA-Φ/MVA-WT (Figure 6E). In summary, combined DNA-Tau3RC/MVA-Tau3RC immunization does not provide significant benefits in motor behavior and survival rate, nor in lowering hyperphosphorylation of tau protein, in vaccinated P301S mice.
Discussion
A hundred years after Dr. Alzheimer documented for the first time the presence of tangles in a patient's brain, we still lack a preventive or therapeutic treatment against AD that can successfully reduce the risk or delay the clinical phase of the illness. There have been no newly approved drugs against AD for over 15 years, and immunotherapy represents a feasible approach against such a complex disease. Nowadays, there is a major effort in the pursuit of a vaccine against AD, and targeting different AD-related antigens has been the main focus of research. As such, the two main targets are β-amyloid and tau. The therapeutic strategy that drives the development of vaccines against tau and β-amyloid protein is that the antibody enters the brain, binds to the protein and causes its subsequent elimination. However, one of the biggest challenges is that antibodies must cross the blood-brain barrier and reach their target. In this sense, an increase in the efficacy of the antibody can be achieved by increasing the natural transport systems of the blood-brain barrier [51]. Another factor to consider is the cellular location of the antigen, in our case the tau protein. The tau protein is an intracellular protein, and anti-tau antibodies enter the brain [28,29] and can be internalized in neurons [30]. However, the most likely hypothesis is that their mechanism of action consists of eliminating extracellular tau and, therefore, blocking transcellular spreading [51]. Nevertheless, preclinical and clinical studies against these two targets, mainly against β-amyloid, have not produced encouraging results [12]. This could be, in principle, due to the nature of the antigens or vaccine vectors used, the failure to achieve proper immune responses, or the stimulation of the immune system in an undesirable manner, triggering proinflammatory responses with side effects. As yet, we still do not know what the main requirements for an effective immune response against AD are. Since many negative results were obtained for β-amyloid vaccination, we focused our study on the tau protein, as it has been described that abnormal intracellular accumulation of hyperphosphorylated tau proteins forming NFTs is a pathological hallmark of AD and other related neurodegenerative disorders collectively termed tauopathies [52]. Therapies looking to decrease the amount of hyperphosphorylated tau gave negative results; thus, we are looking for alternative therapies [4,53]. Although there is no vaccine that can control AD progression, several tau immunotherapeutic approaches using active or passive immunizations have been developed [6,54,55] and are being tested in clinical trials.
Thus, based on the positive findings that have been obtained with poxvirus-based vectors, such as the eradication of smallpox and the use of recombinant poxvirus vectors in several preclinical and clinical trials as candidate vaccines against a wide spectrum of pathologies [32][33][34], we reasoned that using a potent immunization protocol for T and B cell activation based on combined DNA as a priming component followed by poxvirus MVA vectors as booster [50], we could develop a useful vaccination strategy, which in turn could provide important insights on how best to direct a more effective vaccination against AD. Hence, in this study we have generated two novel MVA-based vaccine candidates against AD that express either the full-length Tau4R2N protein or the Tau3RC mutant protein, termed MVA-Tau4R2N and MVA-Tau3RC, respectively. These MVA-based vaccine candidates were combined in a DNA prime/MVA boost approach with DNA-based vectors (DNA-Tau4R2N or DNA-Tau3RC) to define if any tau-specific immune response could be obtained and to evaluate whether a control in the disease progression could be observed in a mouse model of tauopathies, as a step toward developing more effective vaccine candidates.
MVA-Tau4R2N and MVA-Tau3RC vectors expressed high levels of the full-length Tau4R2N protein or the Tau3RC mutant protein in the cytoplasm of infected cells and, as expected, co-localized with tubulin, due to the presence of 4 or 3 microtubule-binding domains in the Tau4R2N or Tau3RC proteins, respectively. When both MVA-Tau vectors were injected in immunocompetent C57BL/6 mice by the i.p. route, they triggered a more efficient recruitment of dendritic cells, neutrophils, NK and NKT cells than the parental MVA vector, indicating that MVA-Tau4R2N and MVA-Tau3RC are able to activate the innate immune responses in vivo. It has been described that the innate immune response has a role in AD, and stimulation of the innate immune system via Toll-like receptor 9 (TLR9) agonists, such as type B CpG oligodeoxynucleotides (ODNs) is an effective and safe method to reduce tau-related pathology in AD mouse model [56,57].
The analysis of the tau-specific immunogenicity in immunocompetent BALB/c and C57BL/6 mice vaccinated with a DNA prime/MVA boost immunization protocol (either DNA-Tau4R2N/MVA-Tau4R2N or DNA-Tau3RC/MVA-Tau3RC) showed no tau-specific CD4 + or CD8 + T cells in the spleen of immunized mice at the peak of the response (10 days after the boost and following stimulation with tau peptides), although we could detect VACV-specific CD8 + T cells, reinforcing that the lack of an immune response is tau-specific. Moreover, binding antibodies against human tau protein were not detected in sera obtained from those immunized animals. The absence of tau-specific T-cellular and humoral immunogenicity could be due to a stabilization of the microtubule network induced by the larger human brain tau isoform (Tau4R2N) expressed by MVA-Tau4R2N or by the tau 3RC mutant protein expressed by MVA-Tau3RC, which would lead to an impairment of clonal expansion upon activation, finally resulting in a poor activation of the immune system of the mouse model, and a lack of function of the immune cells. Moreover, the presence of a microtubule-bound human tau protein may also result in the absence of a tau protein bound to the external membrane or the lack of a secreted tau protein, hence failing to be presented to T and B cells.
Furthermore, the efficacy study in transgenic P301S mice showed that neither DNA-Tau4R2N/MVA-Tau4R2N nor DNA-Tau3RC/MVA-Tau3RC induced a significant reduction in the levels of hyperphosphorylated tau in the hippocampus and the dorsal dentate gyrus of vaccinated mice. Moreover, no control of the motor failure and mortality of vaccinated transgenic P301S mice was observed. To explain the failure of both the DNA-Tau and MVA-Tau vaccine candidates to control disease progression in immunized mice, we suggest that the expression of the human tau proteins (either Tau4R2N or Tau3RC) in immune cells may result in a loss of proliferation due to the microtubule stabilization promoted by these proteins. Thus, the presence of Tau4R2N or Tau3RC may prevent the microtubule depolymerization required for mitosis during cell proliferation. Therefore, future experiments will be performed to evaluate the expression by MVA of other tau isoforms or of optimized tau fragments with diminished microtubule stabilization, such as the C-terminal region of tau, which has also been described as the most immunogenic region [55,58], or other regions such as the N-terminal domain. Moreover, some tau fragments have been used successfully in active immunization against AD [4,55], as has a phosphorylated tau peptide bound to VLPs [29].
Conclusions
In conclusion, we described here for the first time the generation of novel vaccine candidates against AD based on MVA vectors expressing either the full-length human Tau4R2N isoform (MVA-Tau4R2N) or the human tau 3RC mutant protein (MVA-Tau3RC). Well-established and potent prime/boost immunization protocols using the DNA-Tau and MVA-Tau vaccine candidates did not induce tau-specific T-cell and humoral immune responses or significant protection against AD-like disease in transgenic P301S mice. These results open the path to generating novel MVA-based vectors expressing optimized tau antigens that could elicit better tau antigen presentation to immune cells, and to developing novel vaccination protocols that could control AD progression.
|
v3-fos-license
|
2021-11-21T16:23:36.016Z
|
2021-11-19T00:00:00.000
|
244454721
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-1072494/latest.pdf",
"pdf_hash": "7cdcd94656079359cfa5a3119e9f6d0fdd8a10cd",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44179",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"sha1": "8819bf2e788376b369b33564289f211f2576a8cd",
"year": 2021
}
|
pes2o/s2orc
|
Dosimetric Evaluation of a Set of Specifically Designed Grids for Treating Subcutaneous Superficial Tumor with 6 and 18 MeV Electron Beam External Radiotherapy
Background: Conventional electron beam radiotherapy used for treating superficial cancer tumors suffers from the disadvantage of a low skin-sparing effect. Furthermore, increasing the electron energy to treat deeper-seated tumors leads to a significant increase in skin dose. To overcome this, various grids have been recommended for electron beam radiotherapy of subcutaneous tumors. However, appropriate grids need to be designed that decrease the skin dose while delivering uniform high doses to deep-seated superficial tumors. Our goal was to design, examine and propose appropriate grid(s) for optimum electron beam radiotherapy of subcutaneous tumors with the best skin sparing at 6 and 18 MeV energies. Materials and Methods: Relevant dosimetric characteristics were determined and analyzed for five grids manufactured from dry lead, having various cavity diameters (1.5, 2.0, 2.5, 3.0, 3.5 cm) and inter-cavity shielded spacings (0.3, 0.4, 0.5, 0.6, 0.7 cm), but the same fraction of cavity/open (62%) and shielded/closed (38%) areas under the grid plates. Isodose distributions and dose profiles resulting from the grids were investigated using EDR2 films and MATLAB software. Results: The grids with 2 and 2.5 cm diameter cavities and 0.4 and 0.5 cm shielded spacings were the most appropriate grids for 6 and 18 MeV radiotherapy, respectively. With these grids, the 100% PDDs (percentage depth doses) located at 1.25 and 2.5 cm for an open field (without the grids) were moved down to 1.87 and 5.4 cm for 6 and 18 MeV energies, respectively. Furthermore, the proposed grids provided the lowest peak-to-valley dose variations and hence the most uniform doses delivered at their relevant treatment depths. Conclusions: To decrease the skin dose in 6 and 18 MeV electron beam radiotherapy of superficial subcutaneous tumors, various home-made grids were designed and investigated. The most appropriate grids (having 2 and 2.5 cm cavity diameters for 6 and 18 MeV, respectively) provided the optimum dose delivery for superficial subcutaneous tumors located at around 1.5 and 5 cm depth for 6 and 18 MeV energies. Our comprehensive study provides reliable results that could be considered and developed further for a wider range of MeV electron grid therapies in routine clinical practice.
Background
One of the major problems in radiotherapy treatments of malignant tumors is the tolerance limitation of normal tissues. On the other hand, when the tumor size is increased, the ability of external beam radiotherapy to eradicate the tumor is diminished. To prescribe a sufficient dose to eliminate a tumor, either an open field can be used or some parts of the radiation field can be shielded. By applying an appropriate shielding, it is possible to prescribe a dose up to 7 to 10 times higher than the dose applicable with a common open field [2,3,4,5]. Grid therapy refers to the delivery of a single high dose of radiation to a large treatment area that is divided into several smaller fields. Specific grids can be used for shielding some parts of the treatment fields. Such grids are leaky plates of Cerrobend alloy having several cavities/holes (open areas) with specific sizes and distances among them (closed/shielded areas). The cavities/holes are arranged in specific geometries, enabling one to increase the skin-sparing effect and move the maximum dose point down to cover deep-seated subcutaneous tumors. In other words, grid radiotherapy divides a uniform radiation field into high (peak) and low (valley) dose regions and actually delivers a spatially fractionated dose to superficial tumors. Hence, a radiotherapy procedure accompanied with grids is also called spatially fractionated radiotherapy, since instead of dividing the overall radiation of a session into several fractions, it is divided into small fields within a single field. Hence, the small areas of tumors or normal tissues located around the grid holes/cavities receive a higher radiation dose while the areas between them receive a lower radiation dose [6,7,8].
As mentioned before, the dose profile resulting from grid therapy resembles a pattern of peaks and valleys, enabling one to deliver large doses in a single fraction. Grid therapy has also been used successfully for the management of large tumors with low toxicity by using kilovoltage (synchrotron) x-rays, very high-energy electrons, and protons, recently proposed as new therapeutic avenues [9]. It is claimed that such grid therapy avenues share in common the use of the smallest possible grid sizes to exploit the dose-volume effects. It is also reported that the high peak-to-valley dose differences provided by grid therapy are advantageous for sparing healthy normal tissues. It is also stated [10] that at high, single-fraction doses, grid irradiation reveals a therapeutic advantage over uniform-dose irradiation whenever the tumor and surrounding normal tissue cells are equally radiosensitive or, particularly, if the tumor cells are more radioresistant than the normal cells.
Furthermore, Ashur et al. [11], in a review article, considered spatially fractionated radiotherapy approaches focusing on GRID and IMRT and presented complementary evidence from different studies supporting the role of radiation-induced signaling effects in the overall radiobiological rationale for these treatments. All of these advantages have improved the efficacy and safety of grid therapy, as reported in several studies in various clinical situations [12,13,14]. With the grid therapy technique using X-ray megavoltage energies (photon grid radiotherapy), remarkable success has been achieved in treating large malignant tumors.
On the other hand, electron beams used in common radiotherapy techniques have an obvious role in delivering an optimum dose to superficial tumor volumes while reducing the dose to deeper normal tissues. However, such beams lack the skin-sparing effect noted with photon beams. When high doses of higher-energy electron beams are prescribed for deep-seated tumors, the relevant surface doses are increased significantly and consequently the skin reaction becomes more acute. Moreover, side effects such as erythema, dry desquamation and wet desquamation may occur. Therefore, such a treatment procedure may have to be stopped, and consequently common electron beam therapy will not be effective enough for treating deep-seated subcutaneous tumors. To overcome such limitations, grid therapy with energetic electron beams, known as electron grid therapy, is used. In electron grid therapy, grids similar to those used with photon beams are employed for treating deep-seated subcutaneous tumors using high-energy electrons produced by commercial medical linear accelerators (linacs). Therefore, electron grid therapy could resolve the problem of the lack of a skin-sparing effect experienced in common electron beam radiotherapy [1,15].
In electron grid therapy, following the interaction of energetic electrons with the edges of the grid cavities and shielded areas, the electrons deviate from their normal path towards the cavities (open/unshielded areas). Therefore, there is a high radiation dose below the cavities and a lower dose below the shielded areas (among the cavities). Moreover, blocking part of the electron field generally leads to some changes in the resulting dose rate and distribution, and the amount of such variation depends on the extent of the blocked areas, the thickness of the grid plate, and also the energy of the electron beam [16]. It must be noted that increasing the space between two adjacent cavities leads to extra shielded area between them and consequently reduces the overall dose. In addition, the difference between the doses under the shielded/closed (valley) areas and those under the open/cavity (peak) areas will increase, and an unsuitably non-uniform dose may reach the depth of the treatment area (tumor) of interest.
To our knowledge, few studies have been carried out in the field of electron grid therapy. A relatively comprehensive study of electron grid therapy by Lin et al. [15] dates back to 2002 and focused mainly on the amount of skin absorbed (surface) dose. Hence, the purpose of our comprehensive study was to investigate the effect of various home-made grids designed with different cavity sizes and shielded (inter-cavity) areas for electron grid therapy of deep-seated subcutaneous tumors by using EDR2 films and a linac's 6 and 18 MeV electron beams. We tried to introduce suitable grids providing an appropriate dose distribution for treating deep-seated tumors with 6 and 18 MeV energies while reducing the surface dose, to achieve more skin sparing for the patients, based on the relevant dosimetric characteristics.
In our study, we used dry lead material to construct several grids with various cavity and inter-cavity shielding areas. To investigate the effect of cavity and inter-cavity sizes/areas on the skin dose and the overall depth dose distribution, five different cavity diameters with a constant relative distance between adjacent cavities (1.2 times the cavity diameter) were used and studied under a linac's 6 and 18 MeV electron beam energies. We believe that our detailed results provide a useful ground for the optimal design and use of grids for electron grid therapy at 6 and 18 MeV energies, enabling one to deliver appropriate high uniform doses to the deep-seated subcutaneous tumors of interest while reducing average surface doses and achieving a better skin-sparing effect for patients.
Materials And Methods
Materials used in this study included: six dry lead grids with different cavity diameters and shielded areas, EDR2 dosimetric films (Carestream Health, Inc., NY, USA), a plexiglass phantom, several PERSPEX sheets, a commercial medical linac (Varian Medical Systems Inc., USA) having 6 and 18 MeV electron beam energies, a film processor device, and an appropriate film scanner for reading the developed films.
Cerrobend is an alloy compound of 50% bismuth, 26.7% lead, 13.3% tin, and 10% cadmium. Its melting point is 70 °C; consequently, it melts when faced with hot water or liquid. Therefore, Cerrobend was not used in our study. Different turnery and molding operations have to be carried out to make a mold. A material made of pure lead is also not suitable, since it is a soft and shapeable metal and its shape changes even under low pressure. The melting point of pure lead is 327.4 °C. Lead alloys containing antimony, known as "dry lead", with a small amount of arsenic (about 1%), are used for different purposes. The antimony and arsenic contents of such lead alloys are usually around 15-20 and 0.5-1 percent, respectively. Dry lead has been used as an optimal alloy that can undergo different foundry and turnery operations. Moreover, it has a high heat resistance due to its high melting point, as well as high resistance to deformation under pressure. Finally, the most important point regarding this alloy is its low cost and availability. Overall, the castability of dry lead is very good considering several favorable characteristics, including high fluidity, low volumetric contraction with temperature changes, low solubility of gases, and low reactivity towards oxidation. Therefore, we used dry lead material with a thickness of 1.2 cm and a dimension of 15×15 cm² to manufacture the designed grids with various cavity and inter-cavity shielding areas.
Determining geometrical arrangement and design of grids
Dry lead blocks were obtained to make the required grids. After delivery of the lead blocks to a foundry and molding workshop, they were transferred to the turnery to be machined. Five grids were designed with circular cavities of 1.5, 2, 2.5, 3 and 3.5 cm diameter and appropriate distances between the centers of neighboring cavities equal to 1.2 times each cavity diameter. The relative distances of the various cavity sizes from each other were chosen in this way to provide uniform electron beam scattering inside the cavities, resulting from the incidence of the electron beams on the edges of the cavities. Hence, the spaces between the edges of neighboring cavities with 1.5, 2, 2.5, 3 and 3.5 cm diameters were 0.3, 0.4, 0.5, 0.6 and 0.7 cm, respectively. A hexagonal design was used for arranging the cavities in the grid plates, providing an equal shielding area for all the plates with various cavity sizes. The actual physical grid plates designed and manufactured from dry lead material with various cavity diameters, but with the same fraction of open (62%) and shielded (38%) areas for all the grids, are displayed in Figure 1 (a-e). The schematic hexagonal geometry and design of the arrangement of cavities in a grid plate, which can also be considered as equilateral triangles, is illustrated in Figure 1 (f).
It must be noted that in some previous studies, a square arrangement of the cavities has been used. Such a geometrical arrangement leads to a non-uniform shielded area between neighboring cavities. This is due to the fact that when the spacing between two adjacent cavities along the sides of a square is equal to R, the spacing between the same cavity and the cavity placed in the diagonal direction will be equal to √2 R. This leads to a non-uniform distribution of electron beams under the grid, which consequently affects its dosimetric characteristics. Hence, in our study, the grids were designed and constructed in a hexagonal arrangement with specific geometries such that the ratio of shielded area to the whole surface was the same for all the grids, although the cavity diameters and the shielded spacings between neighboring cavities of the various grids were different. The pattern of all the grid plates resembled a set of triangles wherein all the grids' unshielded (open cavities/holes) and shielded (closed inter-cavity) areas had the same fractions of 62% and 38% of the whole surface area of the plates, respectively. It is obvious that by increasing the shielded area the skin dose is reduced; however, an undesirably non-uniform dose is also created, which is not favorable for the optimum treatment of deep-seated tumors.
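As an illustration only, the following minimal sketch evaluates the lattice geometry described above, showing that circular cavities on a hexagonal (triangular) lattice with a center-to-center spacing of 1.2 times the cavity diameter give an open fraction of roughly 63%, independent of the cavity size; the function name and the small deviation from the quoted 62% (for example, edge effects at the plate border) are assumptions of this sketch, not part of the original work.

```python
import math

def open_fraction_hex(cavity_diameter_cm: float, spacing_factor: float = 1.2) -> float:
    """Open-area fraction for circular cavities on a hexagonal (triangular) lattice.

    Each cavity of diameter d sits on a lattice whose center-to-center spacing is
    spacing_factor * d; one cavity belongs to each triangular-lattice unit cell of
    area (sqrt(3)/2) * s**2, so the fraction is independent of the absolute size.
    """
    d = cavity_diameter_cm
    s = spacing_factor * d
    cavity_area = math.pi * d ** 2 / 4.0
    cell_area = math.sqrt(3.0) / 2.0 * s ** 2
    return cavity_area / cell_area

for d in (1.5, 2.0, 2.5, 3.0, 3.5):
    f_open = open_fraction_hex(d)
    print(f"d = {d:.1f} cm: open = {100 * f_open:.1f} %, shielded = {100 * (1 - f_open):.1f} %")
```

For all five diameters the open fraction evaluates to about 63%, consistent with the 62%/38% split reported above.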
The spatial shape of each cavity along the grid thickness was cylindrical. Because the electron applicators are placed at the entrance of the Varian linac, the output electrons are delivered in a parallel direction but diverge immediately as soon as they pass the applicators. The whole surface of each grid plate was placed inside the applicator at the time of irradiation. The overall size of all the grids was 25×25 cm². The thickness of the grids was about 12 mm, chosen to ensure complete shielding of the electron beams passing through the entrance of the linac. A radiation field of 25×25 cm² was used for all the irradiation conditions.
Calculating dose calibration of EDR2 films
The calibration curve of a radiographic film indicates the relation of the film's optical density to a range of ionizing radiation exposures/doses. By using a calibration curve, one can determine unknown radiation doses. To do so, separate pieces of film are exposed to specific dose levels in the range of interest in radiotherapy. Then the film characteristic curve is drawn by measuring the resulting optical density of the exposed films with an appropriate calibrated film scanner. The range of radiation doses used to obtain the EDR2 film calibration curve was 25-200 cGy, wherein dose levels of 25, 50, 100, 150 and 200 cGy were used. The irradiation was done with the 6 MeV electron beam produced by the Varian linac with a source-to-surface distance (SSD) of 100 cm and a field size of 5×5 cm². To prevent scatter radiation effects on the radiation fields, five separate pieces of film were used. Separate exposures were made on each film piece at the different dose levels, as explained. Then the films were developed using an automatic calibrated processor and read with an appropriate scanner.
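A minimal sketch of how such a calibration curve could be fitted and inverted, assuming the five exposure levels quoted above; the net optical density values and the quadratic fit are illustrative placeholders, not the measured EDR2 data.

```python
import numpy as np

# Calibration exposures quoted in the text (cGy); the net optical densities
# below are hypothetical placeholders standing in for the scanner readings.
dose_cgy = np.array([25.0, 50.0, 100.0, 150.0, 200.0])
net_od = np.array([0.35, 0.62, 1.10, 1.48, 1.80])       # assumed example values

# Fit a low-order polynomial OD(dose); a quadratic can absorb mild saturation
# towards the top of the dose range.
coeffs = np.polyfit(dose_cgy, net_od, deg=2)

def od_to_dose(od_measured: float) -> float:
    """Numerically invert the calibration polynomial for one OD reading."""
    grid = np.linspace(dose_cgy.min(), dose_cgy.max(), 2000)
    od_grid = np.polyval(coeffs, grid)
    return float(grid[np.argmin(np.abs(od_grid - od_measured))])

print(f"net OD 1.20 -> {od_to_dose(1.20):.1f} cGy")
```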
Measuring PDDs for open radiation fields
To determine the PDDs for an open field, two films were used for each exposure; they were pressed between two PERSPEX®/Plexiglass sheets and placed parallel to the linac's central axis. The radiotherapy condition was set using an isocentric set-up with SSD = 100 cm and an output of 150 cGy. The 6 and 18 MeV electron beam energies were used for the irradiation procedures. The irradiation condition was the same in all parts of our experimental measurements, using the same open field size for all the various grids. Therefore, in this way we were able to obtain the ratio of the PDDs in the shielded areas of the various grids relative to a unique open field (without any grid).
Measuring PDDs for radiation fields shielded with grids
For measuring the PDDs, two pieces of film were used. Irradiation was done with 6 and 18 MeV electron beams for the various grids, with each grid placed at the entrance of the linac electron applicator. Each piece of film was placed vertically along the central axis of the linac, located below the linac's electron applicator and within the radiation field (Figure 2). For all irradiation conditions, the monitor units (MU) were set so as to deliver a dose level of 150 cGy at SSD = 100 cm.
Reading the irradiated films to calculate dose distributions
The irradiated films were developed in a suitable dark room using an automatic calibrated mammography X-ray film processor (OPTIMAX Mammo, PROTEC GmbH & Co. Oberstenfeld, Germany). Then, the processed films were scanned with a MICROTECK 98XL scanner (Science Based Industrial Park, Hsinchu, Taiwan) set up in transmission mode, with grey-scale reading at 150 dpi (0.169 mm/pixel) and 16 bits. The scanned films were saved as images in TIFF format.
All the images of the irradiated EDR2 films were imported into MATLAB software. Then, based on the calculated calibration curve of the films, the intensities of the pixels of every TIFF image were converted to the relevant dose values and finally the isodose distributions were derived, as illustrated in Figure 3. It must be noted that the dose obtained from the darkest part of the films was regarded as the 100% dose level, and the other dark/grey levels of the films were normalized to it.
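The analysis above was carried out in MATLAB; purely as an illustration, the following Python sketch mimics the same film-to-isodose steps on a synthetic 16-bit scan. The pixel-value-to-dose mapping, the depth-dose-like test pattern and the file handling are all assumptions of this sketch (in practice the scanned TIFF would be loaded, for example with PIL, and the calibration curve from the previous step would be applied).

```python
import numpy as np
import matplotlib.pyplot as plt
# from PIL import Image                     # in practice: img = np.asarray(Image.open("scan.tif"))

# Synthetic 16-bit scan standing in for a film image (placeholder test pattern
# that is darkest near ~18 mm depth, roughly mimicking a depth-dose curve).
rows, cols = 300, 400                       # at 0.169 mm/pixel, about 51 x 68 mm of film
depth_mm = np.arange(rows)[:, None] * 0.169
img = 60000.0 - 35000.0 * np.exp(-((depth_mm - 18.0) / 12.0) ** 2)
img = np.broadcast_to(img, (rows, cols)).copy()

# Placeholder pixel-value -> dose mapping; on a transmission scan darker film
# means a higher dose, and the real mapping comes from the calibration curve.
dose = (img.max() - img) / (img.max() - img.min()) * 150.0   # cGy, assumed scaling

# Normalize to the maximum measured dose, taken as the 100 % level in the paper.
pdd = 100.0 * dose / dose.max()

# Draw isodose contours at the levels discussed in the text.
extent = [0.0, cols * 0.169, rows * 0.169, 0.0]              # mm, depth increasing downward
cs = plt.contour(pdd, levels=[60, 70, 80, 90], extent=extent)
plt.clabel(cs, fmt="%d%%")
plt.xlabel("off-axis distance (mm)")
plt.ylabel("depth (mm)")
plt.savefig("isodose_map.png", dpi=150)
```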
Results
To obtain the PDDs from the films for the 6 MeV and 18 MeV energies following the application of the various grids, separate exposures were made. Two pieces of film were used for every exposure. Then, their average PDD values were calculated for the various grids.
Table 1 shows a comparison of the data regarding three PDD levels (100, 90, and 80%) and the 1st isodose PDD values for the open (cavity/hole) and closed (shielded) areas of the grid plates with various cavity sizes under 6 MeV irradiation, compared to the open field (without any grid). As can be noticed from the data presented in Table 1, by applying the grid with 2 cm cavity diameter (and 0.4 cm inter-cavity shield), the depth of the 100% PDD (D max) is increased from 1.25 cm for an open field (without any grid) to 1.87 cm when irradiating with 6 MeV energy. Furthermore, its 1st isodose PDD value is 90%. Such dosimetric parameters suggest the 2 cm cavity diameter as the optimum grid for 6 MeV electron beam external radiotherapy. It must also be noted that further increasing the cavity diameter does not increase the depth of the 100% PDDs, and their 1st isodose PDD values are also less than 90%. Hence, the 2 cm cavity size could be taken as the optimum selection among the various grids investigated for 6 MeV electron beam irradiation.

Figure 5 illustrates the comparison of the isodose curves of the grids with various cavity diameters and the open field (without any grid) for 6 MeV electron beam energy. As can be seen in Figure 5, for 6 MeV energy, the overlapping of the isodose curves for the 2 cm cavity diameter provides a more compatible and uniform pattern below the 90% PDD level. This is probably because the primary electrons colliding with the edges of such cavities are deviated inside the cavities at a suitable angle, leading to more uniform isodoses via an optimal overlay of the isodose curves inside the cavities, in addition to the increased depth of D max. This compatible overlay also causes the overlapping of the 90% isodose curves recorded under the shield (among the cavities) to happen at a depth of about 2 cm for 6 MeV electron beam energy (Figure 5). Therefore, for 6 MeV electron beam external radiotherapy, if a superficial tumor is located at a depth of 1 to 2 cm, by using the optimum proposed grid (2 cm cavity diameter), in addition to a significant reduction of the patient's skin dose, a more uniform dose is delivered to the tumor. However, for the other grids (having either smaller or larger cavity diameters), as can also be noticed from Figure 5, not only is the treatment depth of the maximum PDD (D max) decreased, but the difference between their peak and peripheral valley doses is also increased. For the grid with 3.5 cm cavity diameter, although the depth of the maximum 100% isodose is about 1.85 cm, the difference between its peak and peripheral valley doses is more than that of the 2 cm diameter. This means that the isodose non-uniformity of the 3.5 cm grid diameter is higher than that of the 2.5 cm one, apart from its weaker shielding of the overall surface of the radiation field, which could be attributed to its larger diameter.

By increasing the energy of the electron beams from 6 to 18 MeV to obtain a deeper effective depth of treatment, a disturbance/overlapping of the isodose curves happens at the entrance of the grid with 2 cm cavity diameter (as can be seen in Figure 6). This means that by increasing the electron beam energy, not only does the 100% PDD (D max) of the 2 cm cavity diameter not reach deeper-seated tumors, but its effective depth also becomes shallower compared to the open field (without any grid).
Therefore, it becomes evident that there should be a balance and compatibility between the energy of the electron beams and the diameter of the grid cavities. Based on the data presented in Table 2, the grid with 2.5 cm cavity diameter not only shows more uniform and steady isodose curves, with the 1st isodose value located at the 80% PDD, but its depth of 100% PDD (D max) is also increased from 4.5 cm for an open field (without any grid) to 5.4 cm. Such dosimetric parameters suggest the grid with 2.5 cm cavity diameter as the optimum one for 18 MeV electron beam external radiotherapy.
Discussion
The purpose of this study was to design and examine various home-made grids in order to propose the optimum grid(s) for treating deep-seated subcutaneous tumors with the best skin sparing under 6 and 18 MeV electron beam external radiotherapy produced by a conventional linac. The results of a previous study [15] on 6 various grids indicated that, for a specific energy and distance between the centers of the grid cavities, a larger cavity diameter leads to a deeper treatment depth. Such findings mean that the D max and the isodose curves are located at deeper depths. Their presented depths of the isodose curves indicated that applying the grids with 0.45 and 1 cm diameters gives the lowest coverage depth for the 90% isodose curve. Meanwhile, their reported dose difference between the peak and peripheral valley areas of their grids was significant. They also reported that only their grid with 1.5 cm diameter could increase the treatment depth to 1 cm and 2.5 cm for 6 and 14 MeV energies, respectively. These quantities were smaller in comparison with their open field data. Their reported dose difference between the cavities and the shielded areas at their reference depth was also quite high. In contrast, in our study, by using 6 MeV electron beam energy and applying our specifically designed grid with 2 cm cavity diameter, we were able to increase the depth of the 90% isodose curve to 2.04 cm. Meanwhile, the 90% curves below the shielded area at the reference depth were completely overlapped, providing an appropriately uniform isodose.
On the other hand, the 60 and 70% isodose curves obtained in our study were consistent with the results reported by Lin et al. [15]. Furthermore, similar to our study, they claimed that the dosimetric characteristics of their grids do not change significantly when shifting the location of the cavities.
In another study, by Meigooni et al. [1] in 2001, in which a Cerrobend grid was designed with a 2.5 cm cavity diameter and 5×5 cm² dimensions and tested at 6, 9, 12, 16 and 20 MeV energies, it was revealed that the resulting depths due to their grid, measured by TLDs at the mentioned energies, were 14, 16, 14, 11 and 9 mm, respectively. In addition, the dose determined in the shielded areas between two neighboring cavities of their grid was reported to be 22.7% of the dose at the center of the cavities at the depth of D max for 20 MeV. However, it must be noted that the maximum depth (D max) measured in our study by EDR2 films for 18 MeV energy occurred at a depth of 5.4 cm. Moreover, by plotting the dose profiles obtained from the readings of the films irradiated at 6 MeV with grids of various cavity diameters, we observed that the grid with 2 cm cavity diameter delivers a more uniform dose to the tumor at the reference depth of about 1.5 cm. In addition to increasing the treatment depth at this energy, according to the isodose curves obtained at this depth, the maximum dose recorded in the cavities was attributed to the 110% isodose curves and the minimum dose recorded below the shielded areas (between the cavities) was attributed to the 90% isodose curves (Figure 7). It must also be noted that the difference between the peak (cavity) and peripheral valley (shielded area) doses was greater with any other grid having either a smaller or a larger cavity diameter. Table 3 shows the maximum and minimum (peak to valley) doses at the reference depth (4.9 cm) of the grids with various cavity diameters for 18 MeV energy, obtained from the dose profiles. The data presented in this table suggest the grid with 2.5 cm cavity size as the best grid for 18 MeV irradiation, showing the lowest difference between the peak and valley doses attributed to the cavity and shielded areas of this grid. As can also be noted from Figure 8, the dose profile of the grid with 2.5 cm cavity diameter indicates that the peak and valley doses along the cavity (open) and shielded (closed) areas are about 100% and 90% PDD, respectively, illustrating the best and most uniform dose distribution at the treatment depth of interest (about 4.9 cm) for this grid. However, it must be noted that the differences between the peak and peripheral/shielded valley doses are not appropriate for any of the other grids with either smaller or larger cavity diameters.
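As an aside, a minimal sketch of how a peak-to-valley comparison of the kind quoted above could be extracted from a lateral dose profile; the synthetic cosine profile, the 30 mm pitch and the variable names are placeholders, whereas a real profile would be read out of the film-derived dose map along a row at the reference depth.

```python
import numpy as np

# Synthetic lateral dose profile at the reference depth (percent of D_max);
# a real profile would come from the film-derived dose map at that depth.
x_mm = np.linspace(-60.0, 60.0, 241)
profile = 95.0 + 7.5 * np.cos(2.0 * np.pi * x_mm / 30.0)   # assumed 30 mm cavity pitch

peak_dose = profile.max()                 # under the cavities (open areas)
valley_dose = profile.min()               # under the shielding (closed areas)
pvdr = peak_dose / valley_dose            # peak-to-valley dose ratio
variation = 100.0 * (peak_dose - valley_dose) / peak_dose

print(f"peak {peak_dose:.1f} %, valley {valley_dose:.1f} %, "
      f"PVDR {pvdr:.2f}, peak-to-valley variation {variation:.1f} %")
```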
Conclusions
In conclusion, based on our results it can be confirmed that by using appropriate grids (with 2 and 2.5 cm cavity diameters) for 6 and 18 MeV electron grid radiotherapy, we were able to provide an optimum high dose level to superficial subcutaneous tumors located at specific depths (around 1.5 and 5 cm) while reducing surface doses and achieving the best skin-sparing effect. Therefore, by choosing an appropriate grid size, as designed and manufactured in our study for electron grid radiotherapy at 6 and 18 MeV energies, in addition to decreasing surface doses and achieving a better skin-sparing effect, a uniform dose can be delivered to deep-seated subcutaneous tumors at the depth of interest, compared to an open field as used in conventional MeV electron beam external radiotherapy. Our comprehensive study provides reliable experimental results and grounds that could be considered for routine MeV electron grid therapy in clinical practice. Nevertheless, further studies are recommended to investigate MeV electron beam grid therapy for more grid designs and geometries, and over a wider range of energies, using Monte Carlo simulation methods.

This project has been supported financially by a grant awarded by Tarbiat Modares University. In addition, access to the Varian linac required for carrying out the practical part of this project was provided generously by Pars Hospital in Tehran/Iran. Therefore, the authors would like to express their sincere appreciation for the financial as well as technical assistance provided by those institutions. The authors would also like to sincerely thank Ms. Fereshteh Koosha for her assistance in writing up and editing the initial draft of this article.
Authors' contributions
Bijan Hashemi and Kamran Entezari are responsible for the study conception, design, data acquisition and analysis, drafting, and finalizing the manuscript. Seied Mehdi Mahdavi contributed to the acquisition and analysis of the experimental data acquired at Pars Hospital (Tehran/Iran). All the authors read and approved the final manuscript.

Figure caption: Comparison of the depth of the isodose curves at the reference depth (1.5 cm) of the grids with various cavity diameters (a: 1.5, b: 2, c: 2.5, d: 3, e: 3.5 cm) for 6 MeV energy, suggesting the grid with 2 cm cavity diameter as the optimum grid for tumors located at around 1-2 cm depth, with a better skin-sparing effect and a more uniform dose, with the 1st isodose at the 90% PDD level.
|
v3-fos-license
|
2019-01-19T14:14:58.855Z
|
2018-12-24T00:00:00.000
|
58433684
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.4317/medoral.22526",
"pdf_hash": "5dcffeaf51f86cffe94e76d9926a5f303b43f6a3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44180",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "dae9350a300331922ae3404b7a11be8eb65c3e11",
"year": 2018
}
|
pes2o/s2orc
|
The effectiveness of decompression as initial treatment for jaw cysts: A 10-year retrospective study
Background Decompression is an approved alternative to cystectomy in the treatment of jaw cysts. This study aimed to evaluate its effectiveness as an initial procedure, as well as factors with the potential to influence the outcome. Material and Methods The frequency of decompression was analysed, whether completed in one session or followed by enucleation, at the Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, from 2005 to 2015. Further analysis focussed on factors potentially influencing outcome: cyst location, histopathology, means of preserving the cyst opening, cyst size, and patient age. Results In all, 53 patients with 55 jaw cysts (mean age 35.1 years) were treated by initial decompression in the ten-year period. In the majority of cases, histopathological analysis revealed a follicular cyst (43.6%), followed by odontogenic keratocysts (23.7%), radicular cysts (21.8%), residual cysts (7.3%) and nasopalatine cysts (3.6%). Treatment was completed with a single decompression in 45.5% of the cases. Among those, 72.0% were follicular cysts and 8.0% odontogenic keratocysts. Subsequent enucleation was needed in 54.5% of all cases, with a majority in the keratocystic group (36.7%). Histological findings, the means of keeping the cyst open, and patient age were found to influence the effectiveness of decompression. Conclusions Decompression could be performed as a procedure completed in one session or combined with subsequent enucleation, mainly dependent on the histopathological findings. Subsequent enucleation of odontogenic keratocysts is highly recommended. Key words: Jaw cysts, decompression, enucleation, histopathology, obturator.
Introduction
Cystic lesions occur more frequently in the upper and lower jaws than in other bones of the human body, mainly due to the presence of cells that are remnants of the embryonal neuroectoderm. A further explanation is that the embryonic teeth are located in the jaw bones. Triggers are either inflammatory stimuli or developmental disorders (1). Because they are usually slow growing and asymptomatic, cysts may grow very large, displacing and even damaging surrounding structures, with subsequent infection, root resorption, nerve injuries or bone fractures (2,3). Treatments range from single decompression, marsupialization, enucleation and bone resection to a combination of these approaches (4,5). While there is no consensus on optimal treatment, complications and further morbidity are to be avoided, particularly with large cysts. Decompression as an initial procedure is a common conservative approach requiring preparation and preservation of a cyst opening. The aim is to decrease the intracystic pressure by constant drainage, so allowing new centripetal bone growth from the bony cyst walls (6). The cyst opening can be preserved with simple iodoform gauze packing, a custom-made obturator, a bracket and chain on involved impacted teeth, or drains (7,8). The main advantages of decompression are that it spares tissue, minimizes the likelihood of damage to adjacent structures, and avoids the cost of hospitalization (9,10). Complications have been reported more frequently when enucleation was performed as a single procedure for extensive jaw cysts. According to the literature, the prevalence of permanent sensory disturbance ranges from 2.0-18.0%, of transient hypoesthesia from 8.0-35.0%, and of incomplete ossification from 12.0-40.0% (11-15). Disadvantages of decompression include the duration of treatment, discomfort, and reliance on patient compliance. Further, remnants of the epithelial lining can lead to cyst recurrence requiring further surgical treatment (16,17). Some authors have suggested subsequent enucleation for aggressive cysts with a high relapse rate, and when the outcome of decompression is unsatisfactory (18,19). This retrospective study aimed to evaluate the effectiveness of decompression for the treatment of jaw cysts with consideration of possible outcome-influencing factors, including patient age, cyst location and size, histopathology, and means of preserving the cyst opening.
Material and Methods
After approval of the study by the local ethics committee, data were collected and analysed from patients who had undergone decompression at the Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, from 2005 to 2015. The inclusion criteria for the study were a cyst in the upper or lower jaw treated with decompression and complete medical records. The exclusion criteria were a cyst in the upper or lower jaw treated initially with enucleation or resection, soft tissue cysts, and incomplete medical records. The following data were collected and analysed: frequency of decompression and of decompression followed by enucleation, patient's age and gender, location and size of the cyst, histopathological findings, and means of preserving the cyst opening. Histopathology reports were obtained from the Institute of Pathology of the Medical University of Graz. After surgical decompression, the cyst was kept open with iodoform gauze for the first few postoperative days. Thereafter, besides continued gauze packing, obturators, brackets with chains, and drains were used. Patients were advised to follow all postsurgical instructions scrupulously, rinsing the cyst opening twice a day with 0.9% NaCl solution using a syringe, and cleaning dental devices mechanically with a toothbrush or swabs. For the first 2 days, postoperative care included cryotherapy with cold packs and, for pain management, dexibuprofen 200 mg for children and 400 mg for adults (Seractil® 200 or 400 mg, Gebro Pharma, Austria) 3 times a day. Routine follow-up included clinical and radiological studies at least every three months. Additional appointments were arranged depending on individual needs and compliance. Digital panoramic x-rays were taken with the Orthophos XG plus DS (Sirona Dental Systems GmbH, Bensheim, Germany) at 60-70 kVp and 14-17 mA. Those patients with insufficient cyst shrinkage after decompression later underwent enucleation (Figs. 1,2).
-Statistical analysis
Data were presented with descriptive statistics. The statistical analyses were performed with SPSS software (IBM SPSS statistics 24.0, IBM Corporation, New York, United States) at a 5% significance level. The chi-square test and Student's t-test were applied to categorical and continuous variables, respectively.
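Purely as an illustration of the chi-square analysis described above, a minimal sketch on a hypothetical contingency table of surgical approach versus cyst type; the marginal totals are back-calculated from the percentages reported in this study, while the split of the less frequent cyst types between the two rows is assumed for the example.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 5 contingency table (rows: decompression alone vs.
# decompression followed by enucleation; columns: follicular, keratocyst,
# radicular, residual, nasopalatine). Row/column totals follow the reported
# percentages of 55 cysts; the split of the rarer types is assumed.
table = np.array([
    [18,  2, 3, 1, 1],   # single decompression completed treatment (n = 25)
    [ 6, 11, 9, 3, 1],   # subsequent enucleation required (n = 30)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would indicate that cyst type and the chosen surgical approach
# are not independent at the 5 % significance level.
```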
Table 3. Relation between different factors and types of the surgical procedures.
A single decompression completed treatment in 45.5% of cases, mostly in the frontal region of the jaws and in patients under 30 years (56.0%). Among these patients, follicular cysts were most frequent (72.2%) and the most commonly used devices were brackets with chains (44.0%) (Table 3). Subsequent enucleation was needed in 54.5% of the cases, mostly in the posterior region. These patients were usually 30-60 years old (60%) and odontogenic keratocysts (36.7%) were most common. An obturator was often used when decompression was followed by enucleation (63.3%) (Table 3). The effectiveness of decompression was found to correlate with histopathology, the means of keeping the cyst open, and patient age (p=0.003, p=0.020 and p=0.025). More detailed information is presented in Table 3. The cyst's diameter was not found to have an influence on the effectiveness of the procedure (p=0.399) (Table 3).
Discussion
This retrospective study focused on the evaluation of the effectiveness of decompression for jaw cyst treatment over a ten-year period and the influence of different factors thereon. The main limitations of this study were that it was retrospective and that medical data were not always complete.
The study included 53 patients with a mean age of 35.1 years and 55 cystic lesions treated initially with decompression. In accordance with the literature, the most frequent cystic lesions occurred in the anterior maxilla in male patients (20)(21)(22), though there were more cysts overall in the mandible than in the maxilla. The reason for the higher frequency of cysts in the lower jaw could be the use of enucleation as the initial treatment for cystic lesions in the maxilla, while this study focused on jaw cysts initially treated with decompression. Similarly, histopathologically, follicular cysts and odontogenic keratocysts were the most frequent (43.6% and 23.7%, respectively). The literature indicates that radicular cysts are the most frequent cysts in the jaws (23). Radicular cysts are smaller and are initially treated with enucleation. Only large radicular cysts are treated with decompression, when enucleation could damage surrounding structures, or in the case of geriatric and high-risk patients. The frequencies of residual cysts and nasopalatine cysts of 7.3% and 3.6%, respectively, are in line with the literature averages of 4.2-13.7% and 2.2-4.0% (20,22). The histopathological findings showed that the cyst type influences the surgical approach (p=0.003), with decompression followed by enucleation applied mostly for odontogenic keratocysts. As a single procedure, decompression was most frequently used for follicular cysts (72%), but for only 8% of odontogenic keratocysts. Some authors have advocated decompression for odontogenic keratocysts (24), although surgeons often prefer decompression followed by enucleation for these aggressive cysts that are highly prone to recurrence (18). Various means of preserving the cyst opening have been described in the literature (19,24), but there is little information on their comparative effectiveness. In this study, a statistically significant difference was found between the means of preserving the cyst opening and the frequency of decompression or decompression followed by enucleation (p=0.020). Decompression showed more success when brackets and chains were used rather than other devices. The reason could be that brackets with chains are mostly used for follicular cysts. Although obturators can be custom made, it may not be easy to create a precise obturator due to the position of the opening in the mouth, tissue remodelling and imprecise impressions of the inner lumen. Iodoform gauze packing and drain tubes might be used less often due to the surgeon's preference and the need for extensive after-care. Some patients have difficulty keeping appointments to have their iodoform gauze changed, while others may be over-challenged with keeping a drain clean (11,25). A shortcoming of this study could be measuring the largest cyst diameter on panoramic records and not taking into account the buccolingual dimensions of the cystic lesions. The relationship between patient age and the reduction rate of the cyst's size after treatment is still unclear. Some studies have found patient age to be an important factor in the cyst's healing process (26,27), while others have failed to find a correlation (28,29). In this study, patient age seemed to have an effect on the surgical treatment chosen (p=0.025). Decompression was more successful in patients under 30 years of age than in older patients, which could be explained by the higher occurrence of follicular cysts in younger patients.
Follicular cysts are not as aggressive as odontogenic keratocysts, which is why the process is likely to succeed when decompression is the chosen treatment.
|
v3-fos-license
|
2018-08-22T21:58:42.080Z
|
2012-05-21T00:00:00.000
|
52084749
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://acp.copernicus.org/articles/12/4429/2012/acp-12-4429-2012.pdf",
"pdf_hash": "e935d36c8dfcaeacec3a51340701441b106f6cdb",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44181",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "506c9db9349ad0c7441d861382a8392341e9d0e4",
"year": 2012
}
|
pes2o/s2orc
|
Interactive comment on "Growth in NO x emissions from power plants in China: bottom-up estimates and satellite observations" by S
The paper by Wang et al. reports on the growth in NOx emissions in China as caused by (new) power plants in the period from 2005 to 2007. The authors construct a new bottom-up inventory using recent knowledge on electricity production in China, and test the success of this inventory in predicting accurate NOx emissions by evaluation against tropospheric NO2 column observations from OMI. The consistency between the increases in the bottom-up inventory and the OMI observations (with the GEOS-Chem model as an appropriate intermediate) suggests support for the new inventory. The paper is generally well-written. Yet I couldn't help feeling that I've seen these, or in any case very similar, results already before. Zhang et al. [2009] and Lin and McElroy [2011] are two examples that come to mind, and in any case the results obtained in this paper ought to have been compared in the perspective of the work by Lin and McElroy.
Introduction
Nitrogen oxides (NO x ≡ NO + NO 2 ) play an important role in the photochemical production of tropospheric ozone and are detrimental to human health and the ecosystem. NO x is released to the troposphere as a result of anthropogenic (e.g., fossil-fuel and biofuel combustion and human-induced biomass burning) and natural (e.g., soil emissions, wildfires and lightning) phenomena. During the past two decades, anthropogenic NO x emissions from China have surged simultaneously with the rapid growth in China's economy and hence attract the attention of scientists and policy makers. Coal-fired power plants are the largest coal consumer in China and are believed to be the largest contributor to China's NO x emissions (Hao et al., 2002; Zhang et al., 2007). Since 2005, hundreds of large electricity generator units have been constructed all over China. As a result, the total capacity of coal-fired power plants has increased by 49 %, from 328 GW in 2005 to 489 GW in 2007.
An understanding of the growth of power plant NO x emissions in China, and subsequently a reliable evaluation of their environmental effects using atmospheric chemical models, largely depends on how accurately we know the emission budget. NO x emission inventories are traditionally developed by integrating the emissions from all known source types using fuel consumption data and emission factors (e.g., Streets et al., 2003), which is the so-called bottom-up approach. China's coal-fired power plant NO x emissions have been estimated in many studies (Hao et al., 2002; Streets et al., 2003; Tian, 2003; Ohara et al., 2007; Zhang et al., 2007, 2009a; Zhao et al., 2008). However, inaccurate information on the locations of power plants (except for Zhao et al., 2008), due to limited access to specific information about point sources in China, is always a defect for studies on individual power plant emissions and seems to lead to intrinsic regional discrepancies between modeled NO 2 columns and satellite measurements over China (e.g., Zhao and Wang, 2009; Lin et al., 2010). Although the uncertainties in power plant emissions are believed to be far smaller than those for other sources (Zhang et al., 2009a), reliable validation of power plant NO x emissions with independent measurements is still a gap in China.
Remote sensing instruments provide valuable continuous observation data for tracing and evaluating NO x emissions from surface sources. During the past two decades, polar-orbiting satellite instruments such as the Global Ozone Monitoring Experiment (GOME), the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY), the Ozone Monitoring Instrument (OMI), and GOME-2 have sent back spatio-temporally continuous observations of the trace gases and aerosols in the atmosphere. These measurements greatly extended our insights into the temporal trends of atmospheric NO 2 concentrations (e.g., Richter et al., 2005; van der A et al., 2008) and their atmospheric transport (e.g., Wenig et al., 2003), and were applied to derive "top-down" constraints on surface NO x emissions (Martin et al., 2003, 2006; Jaeglé et al., 2005; Konovalov et al., 2006; Wang et al., 2007; Lin et al., 2010, 2012; Lamsal et al., 2011) with the aid of chemical transport models.
With the improvement of the spatio-temporal resolution of satellite instruments, especially OMI, they have proved capable of monitoring emissions from large point sources. Kim et al. (2006, 2009) found excellent correlations between satellite measurements (SCIAMACHY and OMI) and WRF-Chem simulations over grids dominated by large power plants in the western United States, benefiting from the Continuous Emission Monitoring System (CEMS) data used in their studies. Carn et al. (2007) observed dense SO 2 concentrations around the copper smelters in Peru using OMI and estimated their SO 2 emissions. Ghude et al. (2008) identified major NO x emission hot spots in India using GOME and SCIAMACHY and analyzed the emission trends and seasonal cycle. In our previous work, we found that the dramatic changes of OMI-derived summertime NO 2 and SO 2 columns in Inner Mongolia, China, could be attributed to power plant construction activities and the operation of flue-gas desulfurization (FGD) devices (Zhang et al., 2009b; Li et al., 2010), and that the growth rates of NO x emissions in the regions where new power plants were constructed could even be quantified by OMI observations (Wang et al., 2010). Lin and McElroy (2011) used thermal power generation (TPG) as a proxy for the economy and found that the changes in OMI NO 2 columns were consistent with changes in TPG. They further concluded that OMI NO 2 observations are capable of detecting the variations in NO x emissions stimulated by economic change. However, the contribution of power plants to the overall emission changes was not separated in their work.
In this work, we aim to portray an overall view of the changes of power plant NO x emissions in China during 2005-2007 based on a bottom-up emission inventory and satellite observations, and to evaluate their contributions to the growth of NO 2 concentrations in China. Section 2 presents the methodology of the unit-based power plant emission inventory and the chemical transport model, as well as the OMI retrievals used in this study. We present the power plant NO x emissions in China in 2005-2007 in Sect. 3 and validate their accuracy using OMI measurements in Sect. 4. Section 5 portrays the growth of power plant NO x emissions viewed by OMI and quantifies their contributions to the growth of regional NO 2 columns with GEOS-Chem. The impacts of the newly added power plant emissions on the a priori NO 2 profiles used in the satellite retrievals are discussed in Sect. 6. Section 7 summarizes the conclusions of this study.
Unit-based power plant emission inventory
We develop a unit-based power plant NO x emission inventory for the time period of 2005-2007 for mainland China. Detailed information on ∼5700 generator units was collected for this work, including geographical location, boiler size, coal consumption per unit electricity supply, emission control technology, and the exact month in which each unit formally came into operation or was closed.
Monthly NO x emissions are calculated for each unit according to the technology and operation information, following the equation of Wang et al. (2010):

E = 1.4 × 10 −6 × U × T × F × C × EF (1)

where the indices i, j, k, m (over which the quantities above vary) stand for province, generator unit, boiler size, and emission control technology; 1.4 is the mass scaling factor from standard coal to raw coal; E is the monthly NO x emissions (Mg); U is the unit size (MW); T is the annual operation hours; F is the monthly fraction of annual total electricity generation; C is the specific coal consumption per unit electricity supply (gram coal equivalent kWh −1 ); and EF is the emission factor (g kg −1 ). The dynamic NO x emission factors adopted from Zhang et al. (2007) vary between 5.6 and 10.5 g kg −1 coal burned, based on boiler size and the presence or absence of a low-NO x burner (LNB), and are comparable to the values of 4.0-11.5 g kg −1 used in Zhao et al. (2008).
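A minimal sketch of Eq. (1) in code, with the unit bookkeeping spelled out; the function name and the example parameter values (unit size, operating hours, monthly fraction, coal consumption, emission factor) are illustrative assumptions, not entries from the actual inventory.

```python
def monthly_nox_emission_mg(unit_size_mw: float,
                            annual_hours: float,
                            monthly_fraction: float,
                            coal_gce_per_kwh: float,
                            emission_factor_g_per_kg: float) -> float:
    """Monthly NOx emission (Mg) of one generator unit, following Eq. (1).

    MW * h * 1000 -> kWh; * gce/kWh -> g standard coal; * 1.4 -> g raw coal;
    / 1000 -> kg raw coal; * g NOx per kg coal -> g NOx; / 1e6 -> Mg NOx.
    The chained conversions collapse to the single 1e-6 factor used below.
    """
    return (1.4e-6 * unit_size_mw * annual_hours * monthly_fraction
            * coal_gce_per_kwh * emission_factor_g_per_kg)

# Example: a 600 MW unit running 5500 h per year, generating 9 % of its annual
# electricity in the month of interest, with 330 gce/kWh coal consumption and
# an 8 g/kg emission factor (all values assumed for illustration).
print(f"{monthly_nox_emission_mg(600, 5500, 0.09, 330, 8.0):.0f} Mg NOx in that month")
```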
GEOS-Chem Model
We simulate tropospheric NO 2 columns over China for the years 2005-2007 using the nested-grid GEOS-Chem model.
The GEOS-Chem model is a global 3-D chemical transport model (CTM) for atmospheric composition, including a detailed simulation of tropospheric ozone-NO x -hydrocarbon chemistry as well as of aerosols and their precursors (Bey et al., 2001). The chemical mechanism includes >80 species and >300 reactions. The GEOS-Chem model is driven by assimilated meteorological fields from the Goddard Earth Observing System (GEOS) at the NASA Global Modeling and Assimilation Office (GMAO: http://gmao.gsfc.nasa.gov/). In this paper we use the nested-grid GEOS-Chem model (v8-02-01) developed by Chen et al. (2009) with GEOS-5 at a native horizontal resolution of 0.5° × 0.667°. The nested-grid GEOS-Chem model is embedded into the coarse-resolution global model (4° × 5°) through a one-way nested approach, propagating the time-varying boundary conditions from the global model with consistent meteorology, dynamics, and chemistry. The nested domain stretches from 11° S to 55° N and from 70° E to 150° E, covering most of East/Southeast Asia. GEOS-5 meteorological data are provided every 3-6 h (3 h for surface fields and mixing depths) for 72 hybrid pressure-sigma levels in the vertical, extending up to 0.01 hPa. For computational expedience the vertical levels above the lower stratosphere are merged, retaining a total of 47 vertical levels, with 14 pure sigma levels resolved within 2 km altitude. In this work, we conduct 3-yr full-chemistry simulations for 2005-2007. The global anthropogenic emissions are from EDGAR (Olivier and Berdowski, 2001) for the base year of 2000 and scaled to 2006 following van Donkelaar et al. (2008). We then replaced the anthropogenic NO x emission inventory over China with our own estimates. For power plant emissions we use the unit-based inventory for 2005-2007 described in Sect. 2.1. Other anthropogenic NO x emissions and monthly variations were developed for the years 2005-2007 following the methodology described in Zhang et al. (2007), with dynamic emission factors to reflect technology innovations. Emissions for other parts of East/Southeast Asia are replaced by the INTEX-B inventory for 2006 (Zhang et al., 2009a). The GEOS-Chem model also includes NO x emissions from soils (Yienger and Levy, 1995; Wang et al., 1998), lightning (Sauvage et al., 2007), biomass burning (van der Werf et al., 2006), biofuel (Yevich and Logan, 2003), aircraft (Baughcum et al., 1996), and stratospheric flux. Table 1 summarizes the NO x emissions over China used in this work.
A 1-yr spin-up was conducted to remove the effects of the initial concentration fields. Monthly varying tropopause heights were used to derive the tropospheric NO 2 columns. Daily 2-h early-afternoon modeled tropospheric NO 2 columns were averaged over the local time of 13:00-15:00 h. To be consistent with the OMI observations, we sampled the model at grids coincident with the daily satellite pixels used in the final average columns.
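Purely as an illustration of this sampling step, a minimal sketch that converts hourly model columns from UTC to approximate local solar time using longitude, averages the 13:00-15:00 LT window, and masks cells without OMI coverage; the array shapes, the random placeholder fields and the 1-hour output frequency are assumptions of the sketch rather than the actual GEOS-Chem output.

```python
import numpy as np

# Placeholder hourly column field on a 0.5 x 0.667 degree nested grid
# (24 UTC hours x nlat x nlon); values are random stand-ins, molec cm-2.
lons = np.arange(70.0, 150.0, 0.667)
lats = np.arange(-11.0, 55.0, 0.5)
no2_utc = np.random.rand(24, lats.size, lons.size) * 1e16

# Approximate local solar time from longitude (1 hour per 15 degrees east).
utc_hours = np.arange(24)[:, None]                        # shape (24, 1)
local_time = (utc_hours + lons[None, :] / 15.0) % 24.0    # shape (24, nlon)
in_window = (local_time >= 13.0) & (local_time < 15.0)    # 13:00-15:00 LT

# Average the UTC hours falling in the window, separately for every longitude.
weights = in_window[:, None, :]                           # broadcast over latitude
no2_1315 = (no2_utc * weights).sum(axis=0) / weights.sum(axis=0)

# Keep only grid cells covered by valid OMI pixels that day (placeholder mask).
omi_coverage = np.random.rand(lats.size, lons.size) > 0.3
no2_sampled = np.where(omi_coverage, no2_1315, np.nan)
print(np.nanmean(no2_sampled))
```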
OMI tropospheric NO 2 column densities
The OMI aboard the Aura satellite is a nadir-viewing imaging spectrograph measuring the earthshine radiance and the solar irradiance in the ultraviolet-visible range from 264 to 504 nm (Levelt et al., 2006). The Aura spacecraft, the last of the EOS observatories, was launched on 15 July 2004 into a sun-synchronous polar orbit at 705 km altitude with a 98.2° inclination and a local equator-crossing time of 13:45 h in the ascending node. OMI measures the complete spectrum with a nadir pixel size of 24 × 13 km² and daily global coverage.
The NO 2 abundance is quantified along the viewing path (slant column) using DOAS (Differential Optical Absorption Spectroscopy) (Platt, 1994; Boersma et al., 2002; Bucsela et al., 2006) for each pixel. The air mass factor (AMF), defined as the ratio of the slant column abundance to the vertical column abundance, can be formulated as the integral of the relative vertical distribution (shape factors), weighted by altitude-dependent coefficients (scattering weight factors), for optically thin atmospheric species (Palmer et al., 2001).
In this work, the tropospheric slant NO 2 column densities are from the DOMINO product (version 1.0.2, collection 3) (Boersma et al., 2007) available from the Tropospheric Emission Monitoring Internet Service (TEMIS) (http://www.temis.nl/). The tropospheric slant column density is obtained by removing the stratospheric contribution, which is estimated by assimilating slant columns in a global CTM, the TM4 model (Dirksen et al., 2011). The cross-track biases were then determined using the average NO 2 slant column densities in the 5th to 95th percentile limits over less polluted areas (30 • S-5 • N) and removed from the tropospheric slant column densities for each orbit dataset, following the approach described by Celarier et al. (2008) and Lamsal et al. (2010). Correction of the cross-track bias is estimated to cause a ∼5 % decrease in the average tropospheric NO 2 column (Lamsal et al., 2010). The tropospheric vertical NO 2 column retrieval is sensitive to the a priori NO 2 shape factors. Lamsal et al. (2010) developed an alternative OMI product (DP GC) based on the DOMINO product and validated its accuracy in summer using in-situ measurements carried out in the United States. They used NO 2 shape factors generated from GEOS-Chem (2 • × 2.5 • with GEOS-4 meteorology fields) together with the averaging kernels (A k ) from the DOMINO product to reproduce the AMF for each OMI pixel in DP GC. This improves on the representation of the NO 2 shape factors generated by TM4, which have been found to be insufficiently mixed throughout the boundary layer owing to inconsistencies in model sampling (Hains et al., 2010; Boersma et al., 2011), and ensures self-consistency when comparing the OMI retrievals with GEOS-Chem modeled columns (Eskes and Boersma, 2003; Boersma et al., 2004).
In this work, we follow the method in Lamsal et al. ( 2010) but use NO 2 shape factors provided by the nested-grid GEOS-Chem simulation described in Sect.2.2 to calculate the local AMF.The high-resolution shape factors (0.5 • × 0.666 • ) simulated with year-by-year emission inputs improve the representation of the real vertical distributions in OMI pixels and also consider changes in the NO 2 shape factors related to the changes of NO x emissions.The effects of newly added power plant emissions on NO 2 shape factors and OMI retrievals will be discussed in Sect.6.
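A sketch of this a priori substitution is given below, following the averaging-kernel relation of Eskes and Boersma (2003) under the optically thin approximation (scattering weight of layer l recovered as A_l times the retrieval AMF). The variable names and numbers are illustrative assumptions; the handling of the actual DOMINO pixel fields may differ.

```python
import numpy as np

def recompute_amf(avg_kernel, amf_old, new_partial_columns):
    """Replace the a priori NO2 profile in the AMF using the averaging kernel.

    Under the optically thin approximation, w_l = A_l * AMF_old, so the AMF for
    a new a priori profile x is AMF_new = sum(w_l * x_l) / sum(x_l), per pixel.
    """
    scattering_w = avg_kernel * amf_old
    return np.sum(scattering_w * new_partial_columns) / np.sum(new_partial_columns)

# Illustrative (made-up) values for one OMI pixel
avg_kernel = np.array([0.4, 0.6, 0.8, 1.0, 1.2])   # tropospheric averaging kernel
amf_tm4 = 1.3                                      # AMF of the original retrieval
gc_profile = np.array([5.0, 2.0, 1.0, 0.5, 0.2])   # nested-grid GEOS-Chem partial columns

amf_gc = recompute_amf(avg_kernel, amf_tm4, gc_profile)
scd_trop = 4.0
print("old VCD:", scd_trop / amf_tm4, "new VCD:", scd_trop / amf_gc)
```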
We used only OMI pixels with solar zenith angle ≤70 • and cloud radiance fraction ≤0.3 in the final average columns. Pixels at the swath edges (five pixels on each side) were rejected to reduce spatial averaging. Since 25 June 2007, the cross-track positions 53-54 (0-based) in the OMI data have been flagged as a row anomaly due to a partial external blockage of the radiance port on the instrument (http://www.knmi.nl/omi/research/product/rowanomaly-background.php); these affected pixels were removed. Finally, each OMI pixel was allocated by area weights, using its corner coordinates, into 0.5 • × 0.667 • grids to obtain daily global tropospheric vertical NO 2 column maps.
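The pixel screening and area-weighted gridding described above might be sketched as follows. The per-pixel fields, the simplification that each pixel overlaps a single destination cell, and all numbers are placeholders rather than the actual DOMINO or gridding code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 1000
sza = rng.uniform(10, 85, n_pix)              # solar zenith angle (deg)
cloud_rad_frac = rng.uniform(0, 1, n_pix)     # cloud radiance fraction
row = rng.integers(0, 60, n_pix)              # 0-based cross-track position
vcd = rng.uniform(0.5, 20.0, n_pix)           # tropospheric vertical column

keep = (
    (sza <= 70.0)
    & (cloud_rad_frac <= 0.3)
    & (row >= 5) & (row <= 54)                # drop five pixels at each swath edge
    & ~np.isin(row, [53, 54])                 # row-anomaly positions (post 25 June 2007)
)

# Area-weighted allocation of the surviving pixels into a 0.5 x 0.667 deg grid.
nlat, nlon = 132, 121
lat_idx = rng.integers(0, nlat, n_pix)        # stand-in for the cell each pixel overlaps
lon_idx = rng.integers(0, nlon, n_pix)
weight = rng.uniform(0.2, 1.0, n_pix)         # stand-in for pixel/grid overlap area

num = np.zeros((nlat, nlon))
den = np.zeros((nlat, nlon))
np.add.at(num, (lat_idx[keep], lon_idx[keep]), weight[keep] * vcd[keep])
np.add.at(den, (lat_idx[keep], lon_idx[keep]), weight[keep])
daily_map = np.where(den > 0, num / den, np.nan)
```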
New power plants
We refer to the power plants with generator units coming into operation in 2005-2007 hereinafter as "new power plants" in this paper.The total capacities of coal-fired power generation have increased by 48.8 %, from 328.4 GW at the beginning of 2005 to 488.8 GW at the end of 2007.The new generator units are mainly concentrated in five provinces (see Table 2), which are Inner Mongolia (18.6 GW), Jiangsu (17.3 GW), Zhejiang (12.5 GW), Shandong (12.2 GW), and Henan (11.2 GW), accounting for 45.0 % of the total capacity additions.
Why have new power plants surged in just two years? Rapid development of the economy and of energy-consuming industry contributed to a shortage of electricity generation in the first few years of this century in China. As a result, as many as 22 provinces in China limited their electricity supply to some extent in 2003. In this context, a batch of large generator units was immediately licensed for construction and consequently came into operation during 2005-2007. Most of the new generator units are large. Figure 1 shows the changes of total capacities for different sizes of generator units in 2004-2007. The capacities of generator units with size <300 MW were 171.3 GW in 2004, and remained almost constant in magnitude for the following three years. In contrast, 92.2 % of the total capacity additions in 2005-2007 are from generator units with size ≥300 MW. This reflects the huge electricity demand and also corresponds to a structural readjustment in the power sector aimed at energy conservation and emission reduction. It is noteworthy that generator units with size ≥600 MW began to come into operation throughout the country in 2006, with total capacity increasing from 38.8 GW at the beginning of 2005 to 132.4 GW at the end of 2007, a factor of 3.4. As a result, the proportion of generator units with size <300 MW decreased from 52.2 % in 2004 to 37.6 % at the end of 2007.
Power plant NO x emissions
As a consequence of the new power plant construction, annual power plant NO x emissions in China increased from 8.11 Tg in 2005 to 9.58 Tg in 2007, based on our unit-based power plant NO x emission inventory. Figure 2 shows the spatial distributions of the annual coal-fired power plant NO x emissions for 2005 and 2007 and their changes. Our estimates are close to those in the INTEX-B inventory (Zhang et al., 2009a), because of the similar province activity data and emission factors used in these two inventories. Our annual power plant NO x emission estimates for 2005 are 16 % higher than the value of 6.97 Tg in another unit-based power plant inventory (Zhao et al., 2008). Using a Monte-Carlo approach described in Zhao et al. (2011), the average uncertainty of this inventory is estimated to be −20 % to 19 % (at 95 % confidence intervals). The uncertainties related to geographical location have also been significantly reduced in our unit-based power plant emission inventory compared to previous "bottom-up" estimates. However, the uncertainty for an individual unit could be larger because unified emission factors and annual operation hours were applied to groups of units. In the US, NO x emission rates for most power plants are measured by CEMS, which represent one of the most accurate parts of the US emission database, but this is not the case for China. Average emission factors from limited local measurements were used for all generator units with similar technologies, which ignores possible variations among individual units and introduces some uncertainty. Also, the monthly profile was calculated for each province using the monthly fraction of annual total electricity generation and applied to all generator units in the province. This algorithm does not affect the total NO x emission budget but downgrades the accuracy of individual plant estimates, as it misses the variations in operating conditions among individual generator units. We estimate a typical uncertainty level of −42 % to 51 % for an individual unit using the Monte-Carlo approach.
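A toy version of the Monte-Carlo uncertainty estimate described above is sketched below, perturbing only two unit-level parameters with assumed uncertainty ranges. The real inventory perturbs many more parameters (capacity, operating hours, heat rate, control efficiency), so the numbers here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Illustrative unit-level calculation: E = coal consumption x NOx emission factor.
coal_use_kt = 1500.0 * rng.normal(1.0, 0.05, n_draws)    # kt coal, assumed ~5 % uncertainty
ef_g_per_kg = 7.5 * rng.normal(1.0, 0.20, n_draws)       # g NOx per kg coal, assumed ~20 %
emissions_gg = coal_use_kt * ef_g_per_kg * 1e-3          # Gg NOx

best = 1500.0 * 7.5 * 1e-3
lo, hi = np.percentile(emissions_gg, [2.5, 97.5])
print(f"best estimate {best:.1f} Gg NOx, "
      f"95 % interval {lo / best - 1:+.0%} to {hi / best - 1:+.0%}")
```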
Evaluation of power plant NO x inventory by OMI observations
Power plants in China are often located in populated areas where there is a mix of various anthropogenic NO x sources, such as industrial complexes and vehicles, which complicates the validation of power plant emissions. To further investigate the impact of the uncertainties associated with other anthropogenic emissions on the evaluation of the power plant inventory, we compare the modeled and observed NO 2 columns over three categories of grids in China: grids dominated by power plant NO x emissions (Case A), all grids with power plants (Case B), and all grids in China (Case C). Case A is defined as grids with urban population <0.5 million and power plant NO x emissions >60 % of total NO x emissions. The urban population data are obtained by masking the LandScan 2006 1 km × 1 km resolution population density (Bhaduri et al., 2002) with Moderate Resolution Imaging Spectroradiometer (MODIS) urban land use fraction data (Schneider et al., 2009) and are then degraded to 0.5 • × 0.667 • resolution. Grids with urban population <0.5 million are generally associated with rural areas or small towns in China.
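A sketch of the grid classification into Cases A-C is shown below, assuming the gridded urban population, power plant NO x and total NO x emission fields are already available; the stand-in arrays are random placeholders, not the actual data sets.

```python
import numpy as np

nlat, nlon = 132, 121
rng = np.random.default_rng(2)
urban_pop = rng.uniform(0, 5e6, (nlat, nlon))            # urban population per grid
pp_nox = rng.uniform(0, 50.0, (nlat, nlon))              # power plant NOx emissions
total_nox = pp_nox + rng.uniform(0, 80.0, (nlat, nlon))  # total anthropogenic NOx
in_china = rng.random((nlat, nlon)) < 0.3                # stand-in country mask

pp_share = np.divide(pp_nox, total_nox, out=np.zeros_like(pp_nox), where=total_nox > 0)

case_a = in_china & (urban_pop < 0.5e6) & (pp_share > 0.6)   # power plant dominant grids
case_b = in_china & (pp_nox > 0)                             # all grids with power plants
case_c = in_china                                            # all grids in China
print(case_a.sum(), case_b.sum(), case_c.sum())
```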
The modeled summer average tropospheric NO 2 columns are compared with OMI measurements for the three cases in Figs.3-5 and Table 3.We only used summer data for comparison, as NO 2 columns have a closer relationship to local emissions in summer than other seasons due to the shorter lifetime of NO x in summer.However, South China and the Sichuan Basin are frequently covered by cloud in summer, resulting in insufficient observation samples in these regions (see Fig. 7).We use only grids with OMI sample number ≥10 to conduct the validation in those regions.
Figure 3 presents the relationship between model and OMI NO 2 columns over the grids where power plant emissions are dominant (Case A) for 2005 and 2007. The grids are colored by the regions defined in Fig. 2. Only ∼4 % of the total samples over China were left (see Table 3) after applying the filtering criterion described above. As presented in Table 3, modeled columns are 7-14 % lower than OMI retrievals, within the uncertainty range of the power plant emission estimates. The spatial correlations are high (R 2 = 0.79-0.82) with little scatter, lending support to the high accuracy of the unit-based power plant NO x emission inventory. It should also be noted that the power plant dominant grids are identified using total NO x emissions, which include other anthropogenic emissions, and remotely sensed urban extent, which may or may not be accurate. Previous work has concluded that anthropogenic NO x emission estimates for industry and transportation can be significantly underestimated for a specific grid (Zhang et al., 2007; Wang et al., 2010). This probably causes the significant low bias of the modeled NO 2 columns for several grids in North China (see Fig. 3b), where emissions from industries and vehicles are high. Using a threshold in which power plant NO x emissions exceed 80 % of total NO x emissions (instead of 60 %) further improves the R 2 to 0.77-0.91 and the slopes to 0.89-0.97, but with only <1 % of samples remaining.
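The linear fits reported here and in Table 3 use the Reduced Major Axis (RMA) algorithm, as stated in the figure captions. A compact sketch of that fit, applied to invented column data, is given below for reference only.

```python
import numpy as np

def rma_fit(x, y):
    """Reduced Major Axis regression: slope = sign(r) * sy/sx, intercept through the means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept, r ** 2

# Illustrative use with made-up OMI (x) and model (y) summer-mean columns
rng = np.random.default_rng(3)
omi = rng.uniform(1, 20, 200)
model = 0.9 * omi + rng.normal(0, 1.5, omi.size)
slope, intercept, r2 = rma_fit(omi, model)
print(f"slope {slope:.2f}, intercept {intercept:.2f}, R2 {r2:.2f}")
```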
Excellent correlations between WRF-Chem modeled and satellite-based NO 2 columns over power plant plumes have also been observed in western United States regions dominated by power plant NO x emissions (Kim et al., 2006, 2009). Those good agreements partly benefited from the highly accurate NO x emission data measured by CEMS. In this work, although our unit-based power plant NO x emission estimates for individual generator units would not be as accurate as CEMS measurements, the similarly excellent agreement between model and OMI NO 2 columns (slope = 0.86-0.93, R = 0.89-0.90) supports a reasonably high accuracy of our emission estimates. Considering the uncertainties in bottom-up inventories (Zhao et al., 2011) and satellite retrievals (Boersma et al., 2004), and the coarse resolution of GEOS-Chem compared to the original fine footprint of OMI, we conclude that the unit-based power plant NO x emissions developed in this work are reasonably reliable.
We next plotted the correlations between modeled and OMI-observed NO 2 columns for all grids with power plant NO x emissions (Case B) in Fig. 4, which include ∼20 % of the total samples over China (see Table 3). Compared with Case A, the R 2 values decreased to 0.62-0.78, with considerable scatter observed in grids with elevated NO 2 concentrations, indicating that total NO x emissions are relatively poorly understood over regions where power plant emissions are mixed with other anthropogenic sources. Other anthropogenic emission inventories used in this work are thought to be much more uncertain than the unit-based power plant inventory, as they were estimated at provincial level and then allocated to each grid using various spatial proxies such as population density and road networks (Streets et al., 2003). The uncertainties induced by this "top-down" assignment method could be large for a specific grid, which may result in relatively poor model performance over many grids. However, assessing the uncertainties introduced by the emission gridding process is beyond the scope of this paper and will need to be investigated in future work.
Tropospheric NO 2 columns over all grids in China (Case C) also show good spatial correlations between model and OMI, with R 2 of 0.85 and 0.74 for 2005 and 2007 respectively, as shown in Fig. 5. Modeled NO 2 columns are 17-25 % lower than OMI, with more significant biases and scatter over high-concentration regions. This bias in the modeled NO 2 columns is smaller than the biases of more than 50 % reported in previous studies that used the GEOS-Chem model (e.g., Martin et al., 2006; Lin et al., 2010). The improvement of model performance here can be partly attributed to the more accurate power plant NO x emissions: the underestimation of modeled columns increased to 29 % in a sensitivity run with power plant emissions from the INTEX-B inventory (Zhang et al., 2009a) instead of the unit-based inventory. The finer model resolution may also play an important role.
Figure 6 shows the OMI and GEOS-Chem tropospheric NO 2 columns for the summers of 2005 and 2007 over China and their differences. Both the model and OMI maps illustrate that NO x emissions are concentrated in areas with dense energy-consuming sources in eastern China. Isolated metropolitan areas in northeastern and southeastern parts of China can be identified easily by OMI. Some individual large power plants located in rural areas are also obvious. However, the modeled NO 2 columns are significantly lower than the OMI measurements, by a factor of 2-3, in the Shanxi-Shaanxi-Inner Mongolia region, where there are large coal reserves and many power plants and energy-consuming industrial complexes were built during the past decade. As power plant emissions can be well constrained by OMI, this difference possibly points to missing NO x emissions from other energy-intensive industries, which are widespread in these regions but are not well represented in the current bottom-up emission inventory and need to be further investigated (Zhang et al., 2009b). Another possible source of bias could come from errors in simulating NO x chemistry (Valin et al., 2011).

Figure 7a, b show the ratios of annual and summer average tropospheric NO 2 columns between 2007 and 2005 from OMI, respectively. The two small maps below each ratio map present the sample amounts used in the averages for the corresponding years. The new power plants are indicated as open circles in Fig. 7 (only the units coming into operation between June 2005 and August 2007 are plotted in Fig. 7b), with three symbol sizes from small to large indicating the total capacities of new generator units (<500 MW, 500-1200 MW, >1200 MW) in the corresponding power plants.
The ratios are unreliable in background regions due to noise in the satellite observations, so only those grids with average NO 2 columns >1.0 × 10 15 molecules cm −2 in both years are colored in the ratio maps. Most of the distinct increases of NO 2 columns viewed by OMI during the two years are found to be collocated with the construction of new power plants in Fig. 7. North China and Inner Mongolia, as the main coal-producing bases in China, contain a large number of new power plants and show the fastest growth rates of NO 2 columns in the entire country. East China and Southwest China also show large increases of NO 2 columns. In contrast, there are only a few new power plants in inland areas of South China, and the NO 2 columns have not significantly increased in this region. It is noteworthy that there are significant increases in some grids without new power plants, which could also be the result of atmospheric transport of NO x ; this is especially clear in the annual ratio map.

Figure 8 presents the ratios of annual and summer average tropospheric NO 2 columns between 2007 and 2005 from the GEOS-Chem model. The modeled annual and summer ratios show spatial distributions similar to those indicated by OMI, with the most significant increases in North China and Inner Mongolia and no significant increase in inland areas of South China. The growth rates of summer average columns in North China from OMI are higher than those from the model and have a broader spatial extent, possibly related to missing newly added industrial sources.
Many new power plant clusters are located in East China, but the ratios of NO 2 columns in this region are not as notable as those in North China and Inner Mongolia. This is because East China has the most intensive NO x emissions in the whole country, and consequently the signal of power plant emissions can be obscured by the large contributions of emissions from other source types. There is a chain of new power plants in Sichuan, Chongqing, and Guizhou in Southwest China, visible in both the OMI and model data. However, as the number of OMI samples is very small in summer in this region (see Fig. 7), owing to the typically rainy weather, the ratios of summer average columns are not as distinct as the ratios of annual average columns in some grids.
The model shows significant increases of summer NO 2 columns over the new power plants in coastal regions of South China, while OMI observed a decreasing trend in summer columns over those locations. The reasons for this discrepancy remain unclear, but it is possibly due to under-resolved a priori parameters in a large grid cell in the NO 2 retrievals (Heckel et al., 2011), or to an inaccurate representation of the coastal meteorological fields in GEOS-5. In Northeast China, OMI observed significant increases of summer average NO 2 columns during 2005-2007, surpassing the variations of all major anthropogenic indexes (shown in Fig. 9). Natural emissions may contribute to this increase, because summer temperature and biomass burning activities in 2007 were the highest among 2005-2009 in Northeast China. However, further investigation is needed to confirm this hypothesis.
In Fig. 10, we present five power plants as examples to show how OMI can identify the temporal evolution of NO x emissions over individual power plants with large new generator units. The NO 2 columns observed by OMI varied synchronously with the modeled columns, both surging dramatically (decreasing in Fig. 10e) after the new generator units came into operation. The successful identification of emission evolution over individual power plants using OMI measurements could be very useful for the Ministry of Environmental Protection in China to monitor the current emission status and the operation of pollutant control devices in power plants. However, this method depends greatly on the number of samples and the location of the power plants. Averaging and smoothing satellite observations over multiple years could provide more accurate top-down estimates of large point source emissions (Beirle et al., 2011; Fioletov et al., 2011), and we will extend the analysis to a longer period in our future work.
Contributions of power plant emissions to NO 2 columns
In order to quantify the contributions of power plant NO x emissions to NO 2 columns, three scenarios with different power plant NO x emissions were examined in the nested-grid GEOS-Chem model for the period 2005-2007: (1) with the complete unit-based power plant emission inventory (hereinafter referred to simply as GC PP); (2) without power plant emissions (hereinafter called GC NoPP); and (3) with the unit-based power plant emission inventory for the same period, but without emissions from the generator units that came into operation in 2005-2007 (hereinafter called GC NoNPP).
Figure 11 shows the relative contributions of the power plants to the annual average NO 2 columns in 2005 and 2007, which are defined as the relative differences between GC PP and GC NoPP. In Inner Mongolia, central North China and part of Southwest China, power plants contribute more than 60 % of the NO 2 columns. The share of power plant pollution increased in Inner Mongolia and Southwest China during 2005-2007, where power plants dominated the increases of NO x emissions over that period. The share of power plant pollution decreased in mega-cities (e.g., Beijing and Guangzhou), as no new power plants were built in mega-cities and emissions from industry and transportation grew rapidly.
Figure 12 plots the relative contributions of the new power plants to the annual and summer average NO 2 columns in 2007, defined as the ratio of the difference between the annual (or summer) average NO 2 columns in 2007 from the GC PP and GC NoNPP simulations to the GC PP columns:

Contribution of new power plants = (GC PP − GC NoNPP) / GC PP × 100 %   (2)

As shown in Fig. 12, the impact of the new power plants is well confined around the emitters in summer, due to the short lifetime of NO x , but expands over a wider area in the annual average map, as the longer NO x lifetime in other seasons allows NO x plumes from power plants to be transported over greater distances. This is also consistent with the satellite observations presented in Fig. 7. In Fig. 12, R1 and R2 are two major regions with dramatic increases of NO 2 columns due to the new power plant emissions, as mentioned in Sect. 5.1; R3 is a background region where there is scarcely any anthropogenic source. New power plants contributed 10 % and 18.5 % to the 2007 annual average NO 2 columns in R1 and R2, respectively, indicating the large environmental impact of the new power plant construction. New power plants make a higher contribution to NO 2 columns in R2 than in R1 because power plant emissions are more dominant in R2.
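A one-line implementation of the contribution metric as reconstructed in Eq. (2) above is sketched below; the regional column values in the example are invented, chosen only so that the outputs are close to the 10 % and 18.5 % quoted for R1 and R2.

```python
import numpy as np

def new_plant_contribution(col_gc_pp, col_gc_nonpp):
    """Relative contribution of new power plants to NO2 columns (Eq. 2 sketch):
    (GC_PP - GC_NoNPP) / GC_PP, expressed in percent."""
    col_gc_pp = np.asarray(col_gc_pp, float)
    col_gc_nonpp = np.asarray(col_gc_nonpp, float)
    return 100.0 * (col_gc_pp - col_gc_nonpp) / col_gc_pp

# Illustrative regional mean columns (10^15 molec cm-2); values are made up.
print(new_plant_contribution([8.0, 5.4], [7.2, 4.4]))   # roughly 10 % and 18.5 %
```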
We further examined the evolution of the NO 2 columns from the new power plants by season in the three selected regions, as shown in Fig. 13. In R1 and R2, the increase of NO 2 columns due to new power plants shows a clear upward trend with strong seasonal variations, reflecting the gradually increasing contribution from new power plants and the differences in NO x lifetime among the four seasons. The contributions of new power plants to NO 2 columns in R3 are very limited in summer, but could be up to 0.21 × 10 15 molecules cm −2 in winter through transport.
Impacts of new power plant NO x emissions on satellite retrievals
The OMI tropospheric NO 2 column retrievals are sensitive to the changes in the a priori NO 2 shape factors used in the calculation of AMF.A new large surface emitter such as a power plant would cause a considerable change in the local NO 2 shape factor.However, this potential impact has not been explicitly considered yet in the operational space-borne products, because NO 2 shape factors are usually generated by a CTM driven by fixed emissions for all years.
We choose three representative cases to demonstrate the impacts of the new power plant NO x emissions on NO 2 profiles in Fig. 14: new power plants in rural area (Shangdu), new power plants in urban area (Baotou), and no new power plant in urban area (Shanghai).July-average NO 2 profiles from the nested-grid GEOS-Chem model for these three sites are presented in Fig. 14a, c, d, three from GC PP for 2005-2007 and one from GC NoNPP for 2007.To understand the impacts of new power plant emissions on NO 2 profiles at a coarser resolution, which are typically used in the operational satellite products, we conducted the 2 • × 2.5 • GEOS-Chem simulations for 2005-2007 with the same emission inputs used in the nested-grid GEOS-Chem model.Four Julyaverage NO 2 profiles from the 2 • × 2.5 • GEOS-Chem model for Shangdu are presented in Fig. 14b.In Fig. 14a, the NO 2 concentrations in the lower atmosphere in Shangdu dramatically increased in 2007 due to the new power plant emissions added at the end of 2006.The differences between NO 2 profiles at 0.5 • × 0.667 • resolution from GC PP and GC NoNPP for 2007 are very significant up to 3 km in altitude, far above the PBL.In contrast, only minimal increases of NO 2 are found between the two profiles for 2007 in Fig. 14b, indicating that the impacts of new power plants on NO 2 profiles at the 2 • × 2.5 • resolution are not significant.In Baotou, an industrial city in Inner Mongolia, the surface NO 2 concentrations increased gradually in 2005-2007 along with the continuously added NO x emissions from both power plants and other anthropogenic sources.In Shanghai, the differences between NO 2 profiles from GC PP and GC NoNPP for 2007 are very small, while increases of NO x from other anthropogenic emissions contribute significantly to the changes of NO 2 profiles during 2005-2007.
Since the satellite is less sensitive to NO 2 in the lower atmosphere, an increase of surface NO 2 concentrations decreases the local AMF, and thus the retrieved tropospheric NO 2 columns would be underestimated in the grids with new power plants if no correction were applied to the NO 2 shape factors. We compare summer average OMI NO 2 columns derived with different a priori NO 2 profiles over eight sites in Table 4. The NO 2 columns in summer 2007 calculated using GC PP NO 2 profiles are 3.8-17.2 % higher than those calculated using GC NoNPP NO 2 profiles over sites with new power plants, more significantly so in rural areas and small towns. The simulation data at the nested-grid resolution (0.5 • × 0.667 • ) used in this study could also better capture the effects of new power plant emissions on the NO 2 profiles. A sensitivity analysis against OMI retrievals with NO 2 profiles generated from the global GEOS-Chem simulation (2 • × 2.5 • ) suggests that the use of NO 2 profiles at 0.5 • × 0.667 • resolution produces more pronounced growth rates over grids with new power plants (see Table 4). At sites isolated from populous regions, e.g., Shangdu and Lanxi, the effect of the resolution of the a priori NO 2 profiles used in the NO 2 retrievals can be up to 20 %, which clearly cannot be ignored in any trend analysis or quantification study.
It should be noted that the sensitivity analysis discussed here is subject to uncertainty. Fresh power plant plumes would not be fully laterally mixed within a 0.5 • × 0.667 • grid cell. The vertical distribution of emissions could also vary within one model grid as a function of wind speed and downstream distance (Weil et al., 2004). Chemical transport models with finer horizontal resolution, together with observational measurements, should be used in future studies to reduce this uncertainty. Other parameters such as the aerosol profiles could also be affected by the new power plant emissions (SO 2 and NO x ) and cause some biases in the satellite retrievals. Scattering sulfate aerosols could increase the satellite's sensitivity to NO 2 mixed in and above the aerosol layers (Leitão et al., 2010). However, there is no explicit correction for aerosol changes in the OMI product used in this work.
Concluding remarks
In this paper, we have demonstrated the rapid growth of power plant NO x emissions in 2005-2007 and their contributions to the increasing NO 2 columns in China, based on a unit-based power plant NO x emission inventory for mainland China, nested-grid GEOS-Chem model, and OMI observations.This inventory was based on a Chinese power plant database, and was validated through comparing the GEOS-Chem modeled NO 2 columns with OMI measurements in summers 2005 and 2007 over grids dominated by power plant NO x emissions.The major conclusions and implications can be drawn as follows.
The annual NO x emissions from coal-fired power plants were estimated to be 8.11 Tg for 2005 and 9.58 Tg for 2007. The rapid growth of the power plant NO x emissions was mainly due to the 161.4 GW of new generator units constructed in the period 2005-2007, which led to a 48.8 % increase in coal-fired power generation capacity over this period. Generator units with size ≥300 MW accounted for 92.2 % of the total capacity additions. It is worth emphasizing that the structural readjustment in the power sector aimed at energy conservation and emission reduction is still progressing rapidly and will have positive effects on NO x emissions in China in the future.
The unit-based power plant NO x emissions were validated using the improved OMI NO 2 retrievals and the nested-grid GEOS-Chem model. The OMI-derived and GEOS-Chem-modeled summer average tropospheric NO 2 columns for 2005 and 2007 were well correlated (R 2 = 0.79-0.82) over grids dominated by power plant NO x emissions, with a 7-14 % low bias in the modeled NO 2 columns. This bias was within the uncertainty range of the power plant emission estimates, lending support to the high accuracy of the unit-based power plant NO x emission inventory. The comparisons involving more grids produced more scatter over grids with elevated NO 2 concentrations, indicating that NO x emissions are relatively poorly understood over the regions where power plant emissions are mixed with other anthropogenic sources. This validated power plant inventory also facilitates further investigation of emissions from other anthropogenic sources by separating out the power plant contribution.

OMI observed dramatic increases of NO 2 columns during 2005-2007 in China attributed to the construction of new power plants. North China and Inner Mongolia showed the fastest growth rates of NO 2 columns in the country, followed by East China. Infrequent sampling in the Sichuan Basin and South China made it difficult to capture the signals of some new power plants in summer. The coarse-resolution a priori NO 2 shape factors used in the satellite retrievals also reduced the accuracy of NO 2 columns near the coastline, introducing an additional bias in the observations of new power plants there. We found that OMI had the capability to trace the changes of NO x emissions over individual power plants, e.g., the addition of new generator units, in cases where there was less interference from other NO x sources. This application can provide useful information to environmental officials to monitor emissions and to evaluate possible reductions due to the application of control devices in power plants in the future.
Sensitivity analysis with two scenarios of GEOS-Chem simulations, with and without new power plant emissions, suggested that the relative contributions of these new power plants to the annual average NO 2 columns in 2007 were 10 % in North China and 18.5 % in Inner Mongolia.The contribution of new power plants to NO 2 columns in North China showed a clear upward trend with strong seasonal variations, reflecting the gradually increased contribution from new power plants and differences of NO x lifetime in four seasons.
The new power plant NO x emissions can have a significant impact on the satellite retrieval by changing the NO 2 shape factor. The effects from new power plant emissions caused 3.8-17.2 % increases in the summer average OMI tropospheric NO 2 columns for the six selected sites, more significantly so in rural areas and small towns. The fine-resolution data used in this study improved the representation of the effects of new power plant emissions on the NO 2 profiles, especially in areas isolated from populous regions, resulting in up to 20 % increases of the summer average NO 2 column ratios between 2007 and 2005 compared to OMI retrievals with NO 2 profiles generated from a global GEOS-Chem simulation (2 • × 2.5 • ). It is worth considering the use of a priori shape factors generated by a CTM with temporally varying bottom-up emissions and at a reasonably high resolution in the operational satellite retrieval products. The changes of aerosols and plume chemistry over the new power plants should also be further investigated in the future.
Fig. 1 .
Fig. 1.Changes of total capacities for different sizes of coal-fired generator units in 2004-2007 in China.
Fig. 2 .
Fig. 2. Spatial distributions of annual coal-fired power plant NO x emissions for 2005 and 2007 and the changes.The maximum values and minimum values are indicated.Solid lines within the Chinese national boundaries denote the seven regions discussed in the text: NE-Northeast China, IN-Inner Mongolia, NC-North China, EC-East China, SC-South China, SW-Southwest China, and WC-West China.
Fig. 3 .
Fig. 3. Comparisons between GEOS-Chem and OMI summer average tropospheric NO 2 columns for (a) 2005 and (b) 2007 over power plant dominant grids in China.Grids are colored by the regions defined in Fig. 2. The linear fit regression (red line) is based on Reduced Major Axis (RMA) algorithm (Clarke, 1980).Error bars indicate the standard deviations in the summer average columns (only shown for minimum, quartiles, and maximum points in OMI datasets).
Fig. 4 .
Fig. 4. Comparisons between GEOS-Chem and OMI summer average tropospheric NO 2 columns for (a) 2005 and (b) 2007 over grids with power plants in China.RMA algorithm is used in the linear fitting (red line).
Fig. 5 .
Fig. 5. Comparisons between GEOS-Chem and OMI summer average tropospheric NO 2 columns for (a) 2005 and (b) 2007 over all grids in China.RMA algorithm is used in the linear fitting (red line).
Fig. 7 .
Fig. 7. Ratios of average OMI tropospheric NO 2 columns between 2007 and 2005 using (a) annual averages and (b) summer averages.Only grids inside Chinese boundary and with average NO 2 columns >1.0 × 10 15 molecules cm −2 in both years are colored.Open circles denote the new power plants coming into operation during 2005-2007 (only new power plants coming into operation during June 2005-August 2007 are plotted in (b)), with three sizes from small to large: <500 MW, 500-1200 MW, >1200 MW.Small maps below each ratio map present the sample amounts used in the averages for the corresponding years (note change of scale).
Fig. 8 .
Fig. 8. Ratios of GEOS-Chem modeled tropospheric NO 2 columns between 2007 and 2005 using (a) annual averages and (b) summer averages.The cartography is same as Fig. 7.
Fig. 10 .
Fig. 10.Changes of summer average OMI and GEOS-Chem tropospheric NO 2 columns and power plant NO x emissions during 2005-2007 (2005 reference year) over five large new power plants.Numbers in the parentheses in the subtitles denote the new generation capacities added during June 2005-August 2007.Locations of the power plants are indicated in (f).
Fig. 11 .
Fig. 11.Relative contributions of the power plant NO x emissions to the annual average NO 2 columns in (a) 2005 and (b) 2007, defined as the relative differences between GC PP and GC NoPP.Only grids inside Chinese boundary and with average NO 2 columns >1.0 × 10 15 molecules cm −2 in GC PP are colored.
Fig. 12 .
Fig. 12. Relative contributions of the new power plant NO x emissions to the (a) annual average NO 2 columns and (b) summer average NO 2 columns in 2007, defined as Eq. (2). Only grids inside the Chinese boundary and with average NO 2 columns >1.0 × 10 15 molecules cm −2 in GC PP are colored. Domains of the three studied regions are indicated by green rectangles.
Fig. 13 .
Fig. 13. Seasonal evolution of the new power plant contributions to NO 2 columns in the three studied regions during 2005-2007.
Fig. 14 .
Fig. 14.NO 2 profiles from surface to 6 km for July in 2005-2007 generated from the nested-grid (0.5 • × 0.667 • ) GEOS-Chem model over Shangdu (a), Baotou (c), and Shanghai (d), and from the 2 • × 2.5 • GEOS-Chem model over Shangdu (b).GC PP denotes the GEOS-Chem simulations with all unit-based power plant emissions, and GC NoNPP denotes the GEOS-Chem simulations without new power plant emissions.Dash lines indicate the average PBL heights.Numbers in the parentheses in the subtitles denote the new generation capacities added during June 2005-August 2007.
Table 1 .
A priori annual GEOS-Chem NO x emissions in 2005-2007 in mainland China.
Table 2 .
Annual and summer (June to August) NO x emissions from coal-fired power plants in 2005-2007 in mainland China. NO x emissions are given in Gg NO 2 a −1 . * Capacities of new generator units which came into operation in 2005-2007. The ≥600 MW generator units contribute 27.1 % to capacity, consume 20.7 % of the coal, and release 16.0 % of the power plant NO x to the atmosphere in 2007.
Table 3 .
Reduced Major Axis regression analysis between OMI (x-axis) and GEOS-Chem (y-axis) summer (June to August) average NO 2 columns in 2005 and 2007.
*Only grids with OMI sample number ≥10 are used.
Table 4 .
Summer average OMI NO 2 columns derived with different a priori NO 2 profiles over various sites a . Values in parentheses indicate the ratios of NO 2 columns in 2007 to the corresponding NO 2 columns in 2005.
a b Capacities of new generator units which came into operation in June 2005-August 2007.
|
v3-fos-license
|
2019-01-22T22:24:54.540Z
|
2019-01-30T00:00:00.000
|
58547690
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.1113/JP277292",
"pdf_hash": "c9be0b57df1b98b877308dd81928d0b3c8fb6d34",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44183",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "c9be0b57df1b98b877308dd81928d0b3c8fb6d34",
"year": 2019
}
|
pes2o/s2orc
|
Functional assessment of triheteromeric NMDA receptors containing a human variant associated with epilepsy
Key points NMDA receptors are neurotransmitter‐gated ion channels that are critically involved in brain cell communication Variations in genes encoding NMDA receptor subunits have been found in a range of neurodevelopmental disorders. We investigated a de novo genetic variant found in patients with epileptic encephalopathy that changes a residue located in the ion channel pore of the GluN2A NMDA receptor subunit. We found that this variant (GluN2AN615K) impairs physiologically important receptor properties: it markedly reduces Mg2+ blockade and channel conductance, even for receptors in which one GluN2AN615K is co‐assembled with one wild‐type GluN2A subunit. Our findings are consistent with the GluN2AN615K mutation being the primary cause of the severe neurodevelopmental disorder in carriers. Abstract NMDA receptors are ionotropic calcium‐permeable glutamate receptors with a voltage‐dependence mediated by blockade by Mg2+. Their activation is important in signal transduction, as well as synapse formation and maintenance. Two unrelated individuals with epileptic encephalopathy carry a de novo variant in the gene encoding the GluN2A NMDA receptor subunit: a N615K missense variant in the M2 pore helix (GRIN2A C1845A). We hypothesized that this variant underlies the neurodevelopmental disorders in carriers and explored its functional consequences by electrophysiological analysis in heterologous systems. We focused on GluN2AN615K co‐expressed with wild‐type GluN2 subunits in physiologically relevant triheteromeric NMDA receptors containing two GluN1 and two distinct GluN2 subunits, whereas previous studies have investigated the impact of the variant in diheteromeric NMDA receptors with two GluN1 and two identical GluN2 subunits. We found that GluN2AN615K‐containing triheteromers showed markedly reduced Mg2+ blockade, with a value intermediate between GluN2AN615K diheteromers and wild‐type NMDA receptors. Single‐channel conductance was reduced by four‐fold in GluN2AN615K diheteromers, again with an intermediate value in GluN2AN615K‐containing triheteromers. Glutamate deactivation rates were unaffected. Furthermore, we expressed GluN2AN615K in cultured primary mouse cortical neurons, observing a decrease in Mg2+ blockade and reduction in current density, confirming that the variant continues to have significant functional impact in neuronal systems. Our results demonstrate that the GluN2AN615K variant has substantial effects on NMDA receptor properties fundamental to the roles of the receptor in synaptic plasticity, even when expressed alongside wild‐type subunits. This work strengthens the evidence indicating that the GluN2AN615K variant underlies the disabling neurodevelopmental phenotype in carriers.
Introduction
The NMDA receptor is an ionotropic voltage-dependent glutamate receptor with important roles in synaptic signal transduction and plasticity. NMDA receptor dysfunction has been identified in diverse neuropsychiatric disorders (Paoletti et al. 2013). Unlike AMPA- and kainate-type glutamate receptors, it shows high Ca 2+ permeability (MacDermott et al. 1986) and voltage-dependent Mg 2+ blockade, which allows it to act as a molecular coincidence detector (Mayer et al. 1984; Nowak et al. 1984). NMDA receptors are heterotetramers comprising two GluN1 and two GluN2 subunits, of which four different GluN2 subunits have been cloned (GluN2A-D) (Watanabe et al. 1992; Monyer et al. 1994). The identity of the GluN2 subunits in the receptor impacts on receptor properties important in synaptic plasticity (Wyllie et al. 2013). Diheteromeric GluN1/GluN2B receptors probably form the commonest forebrain receptor composition prenatally but, after GluN2A expression increases postnatally (Bar-Shira et al. 2015), triheteromeric GluN1/GluN2A/GluN2B receptors comprise a major proportion of NMDA receptors, particularly in the hippocampus (Gray et al. 2011; Rauner & Köhr, 2011; Tovar et al. 2013).
The rapid expansion of whole-exome sequencing over the last decade has shown that many regions of the gene encoding GluN2A (GRIN2A) are intolerant of variation in healthy controls (Ogden et al. 2017) and, instead, a large number of rare and de novo variants (>100) have been identified in patients with a range of neurodevelopmental disorders, most commonly epilepsy aphasia syndromes or other epileptic disorders, but also intellectual disability, autism, attention deficit hyperactivity disorder and schizophrenia (XiangWei et al. 2018). At the most severe end of the phenotypic spectrum are epileptic encephalopathies, where epileptic activity is considered to contribute to severe cognitive and behavioural impairment (Scheffer et al. 2017). Around half of disease-associated GRIN2A variants are gene-disrupting and around half are missense variants, potentially resulting in NMDA receptors with altered function, requiring electrophysiological interrogation for confirmation. This has been a clinically useful strategy, with personalized treatment being offered based on a functional analysis of the variant (Pierson et al. 2014). However, it is improbable that all variants are disease-causing: some are inherited from phenotypically normal parents and some are in regions of the protein that probably exhibit variation without deleterious functional impact (less highly conserved, or more variation present in healthy controls). Furthermore, of those variants where functional consequences have been assessed, a range of effects have been found, some predominantly 'gain-of-function' , some predominantly 'loss-of-function' (Swanger et al. 2016) and some with no effect (Marwick et al. 2017). For these reasons, it is important to identify the functional consequences, if any, of a given variant before giving a genetic diagnosis or embarking on targeted treatment.
The large and increasing number of GRIN2A variants (and variants in other genes) found in patients with neurodevelopmental disorders means that some prioritization for functional investigation needs to be applied. The variant that we selected for intensive further study was GluN2A N615K . This variant was selected because it affects a residue that is already known to be crucial in interacting with Mg 2+ ions, the 'N+1' asparagine (Wollmuth et al. 1998). Missense mutations affecting this residue have a high probability of influencing receptor function. Second, the variant has good genetic evidence of disease-association: it has arisen de novo in two unrelated individuals with similar phenotypes (Endele et al. 2010;Allen et al. 2016). Both individuals presented with early-onset epileptic encephalopathy, associated with severe or profound intellectual disability and electroencephalogram abnormalities. Furthermore, so far, the residue has not been found to be mutated in 60 706 people without severe paediatric disease (Exome Aggregation Consortium database, accessed 3 September 2018).
The GluN2A N615K variant was the first missense GRIN2A variant identified as potentially disease causing and some functional work on its consequences has already been performed when expressed as diheteromers: GluN2A N615K has been found to reduce Mg 2+ blockade and Ca 2+ permeability and impact channel blocker potency but not to affect glutamate or glycine potency (Endele et al. 2010;Pierson et al. 2014). However, no previous work has investigated the impact of this variant when expressed in triheteromers with one wild-type and one mutated GluN2 subunit. This insight would be clinically relevant, both because the variant is present heterozygously and because the majority of NMDA receptors in key regions of the adolescent and adult brain are probably GluN1/GluN2A/GluN2B triheteromers. Moreover, no previous work has investigated the impact of the variant on NMDA receptors expressed in neurons, where neuron-specific trafficking and regulation could potentially negate or compensate for the impact of the variant.
In the present study, we employed a recently developed technique to express triheteromeric NMDA receptors containing only one variant subunit in HEK293T cells and assessed the impact of the variant on physiologically relevant receptor properties important for synaptic transmission using electrophysiological recordings. In addition, we expressed the mutant subunit in primary cultured neurons and found that the GluN2A N615K mutation had a marked impact on key physiological properties of NMDA receptors even in the presence of wild-type subunits, which is consistent with a role in the pathogenesis of the epileptic encephalopathies experienced by its carriers.
Ethical approval
Experiments conducted during the course of this study received approval from the University of Edinburgh's Animal Welfare Ethical Review Board. Animal breeding and maintenance and experimental procedures were performed in accordance with the UK Animals (Scientific Procedures) Act 1986 under the authority of Project Licence 60/4290 (D.J.A.W.). Animal experiments adhered to the ethical principles required by The Journal of Physiology (Grundy, 2015). Mice were housed under a standard 12:12 h light/dark cycle and received food and water ad libitum. E17.5 CD1 mice (sex not determined) supplied by Charles River (Margate, UK), were culled by decapitation (a Schedule 1 Method) shortly following culling of the dam by cervical dislocation (a Schedule 1 method). Dams were typically 12-22 weeks old.
Mutagenesis
The cDNA for wild-type human NMDA subunit GluN1-1a (hereafter GluN1) and GluN2A (GenBank accession codes: NP 015566, NP 000824) (Hedegaard et al. 2012) were gifts from Dr Hongjie Yuan (University of Emory, Atlanta, GA, USA). The GluN2 cDNAs for triheteromer experiments have been described previously: wild-type rat GluN2A (D13211) and GluN2B (U11419), GluN2A C1 , GluN2A C2 , GluN2B AC1 and GluN2B AC2 (Hansen et al. 2014). Expression of GluN1 in HEK293T cells was achieved as described previously (Yi et al. 2018) using a plasmid DNA construct with enhanced green fluorescent protein (eGFP) inserted between the cytomegalovirus promoter in pCI-neo and the open reading frame of rat GluN1 (U08261) (i.e. eGFP and GluN1 were not expressed as a fusion protein). This DNA construct produces high expression of eGFP for identification of transfected cells and maintains a linear relationship between eGFP and GluN1 expression. All cDNAs were in pCI-neo. Site-directed mutagenesis was performed via PCR with overlapping mutagenizing oligonucleotides using a thermostable Pfu high fidelity DNA polymerase (New England Biolabs, Ipswich, MA, USA). The PCR product was recircularized into a viable plasmid using an InFusion HD kit (Clontech, Mountain View, CA, USA). The double-stranded mutant DNA was transformed into TOP10 Competent Cells (Life Tech, Grand Island, NY, USA). Clones were amplified then DNA extracted using QIAPrep Spin MiniPrep Kit (Qiagen, Venlo, The Netherlands) in accordance with the manufacturer's instructions. The mutations were verified by Sanger sequencing through the mutated region.
Preparation and transfection of HEK293T cells
For whole-cell experiments, human embryonic kidney 293 cells containing the SV40 large T-antigen (HEK293T) were cultured in Dulbecco's modified Eagle's medium supplemented with 10% dialysed fetal bovine serum, 10 U mL -1 penicillin and 10 mg mL -1 streptomycin. HEK293T cells were chosen for their propensity to grow singly rather than in clumps, facilitating electrophysiological experiments involving rapid solution application. Cells were passaged twice weekly and plated onto cover slips precoated in poly-D-lysine (0.1 mg mL -1 ) to give a density of 10-30% on the day of recording. HEK293T cells were transfected using the calcium phosphate precipitation method. Plasmids containing rGluN1-1a eGFP and the rGluN2 subunits of interest were mixed in a 1:1 mass ratio and diluted to 200 g L -1 with water. To transfect four wells of a 24-well plate, 10 μL of cDNA was mixed with 25 μL of 1 M CaCl 2 and 100 μL of 2 × Bes (50 mM Bes, 280 mM NaCl and 1.5 mM Na 2 HPO 4 , pH 6.95), then mixed by pipetting. After 10-15 min, 50 μL of this mixture was added dropwise to each well. After 4-6 h, the media was replaced with fresh media supplemented with NMDA receptor antagonists [D-2-amino-5-phosphonopentanoic acid (200 μM) and 7-chlorokynurenic acid (200 μM)]. Recordings were made ∼24 h post transfection. HEK293T cells for single-channel experiments were prepared similarly with minor differences: Dulbecco's modified Eagle's medium was supplemented with Glutamax-I (Thermo Fisher, Waltham, MA, USA) and 1% antibiotic/antimycotic, and the NMDA receptor antagonists used to avoid excitotoxicity post transfection were 100 μM D-2-amino-5-phosphonopentanoic acid and 10 μM 5,7-dichlorokynurenic acid.
Whole-cell, voltage clamp recordings in HEK293T cells
Transfected HEK293T cells were identified by eGFP expression (excitation at 470 nm, coolLED pE-100; coolLED Ltd, Andover, UK). Whole-cell, patch clamp recordings were performed using an Axopatch 200B amplifier (Molecular Devices, Sunnyvale, CA, USA) at room temperature. The signal was filtered using an 8 kHz 8-pole low-pass filter (-3 dB Bessel; Frequency Devices, Ottawa, IL, USA) and digitized at 20 kHz using a Digidata 1440A analogue-digital interface (Molecular Devices) with Clampex software (Molecular Devices). Patch pipettes were made from thin-walled borosilicate glass (TW150F-4; World Precision Instruments, Sarasota, FL, USA) using a P-1000 puller (Sutter Instruments, Novato, CA, USA) to give a resistance of 4-5 MΩ when filled with an internal solution containing (in mM): 141 K-gluconate, 2.5 NaCl, 10 Hepes and 11 EGTA (pH 7.3 with KOH) (300 mosmol L −1 ). The extracellular solution was composed of (in mM): 150 NaCl, 10 Hepes, 3 KCl, 0.5 CaCl 2 , 0.01 EDTA and 20 D-mannitol (pH 7.4 with NaOH). Rapid solution exchange (open tip solution exchange with 10-90% rise times of 0.7-0.9 ms) was achieved using a two-barrel theta-glass pipette controlled by a piezoelectric translator (Siskiyou Corporation, Grants Pass, OR, USA). Cells with glutamate-evoked responses of less than 100 pA or greater than 1000 pA were rejected, to reduce error from noise and to avoid leak of undesired subunit pairings (Hansen et al. 2014). To calculate glutamate deactivation rates, multicomponent exponential decay curves were fitted to macroscopic response time courses in ChanneLab software (Synaptosoft, Fort Lee, NJ, USA). τ weighted was calculated as: (% amplitude fast × τ fast ) + (% amplitude slow × τ slow ).
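Purely as an illustration of this analysis step (the authors performed the fitting in ChanneLab, not in the code below), the sketch fits a two-component exponential to a synthetic deactivation time course and computes the weighted time constant; all amplitudes and time constants are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp_decay(t, a_fast, tau_fast, a_slow, tau_slow):
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

# Synthetic macroscopic deactivation time course (illustrative values only)
t = np.linspace(0, 2.0, 2000)                       # s
ideal = biexp_decay(t, 300.0, 0.05, 100.0, 0.4)     # pA
rng = np.random.default_rng(4)
current = ideal + rng.normal(0, 5.0, t.size)

popt, _ = curve_fit(biexp_decay, t, current, p0=(200.0, 0.02, 50.0, 0.2))
a_fast, tau_fast, a_slow, tau_slow = popt

# tau_weighted = (fractional fast amplitude x tau_fast) + (fractional slow amplitude x tau_slow)
frac_fast = a_fast / (a_fast + a_slow)
tau_weighted = frac_fast * tau_fast + (1 - frac_fast) * tau_slow
print(f"tau_weighted = {tau_weighted * 1e3:.1f} ms")
```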
Single-channel, voltage clamp recordings in HEK293T cells
Single-channel, voltage clamp recordings were made at room temperature from cell-attached patches formed on HEK293T cells placed in a solution that contained (in mM): 150 NaCl, 2.8 KCl, 10 Hepes, 2 CaCl 2 , 10 glucose and 0.1 glycine (pH 7.35 with NaOH) (300-330 mosmol L −1 ). Patches were recorded at a range of holding potentials. Transfected cells were identified by eGFP expression as above. Patch pipettes were made from thick-walled borosilicate glass (GC150F-7.5; Harvard Apparatus, Cambridge, MA, USA) using a P-87 puller (Sutter Instruments, Novato, CA, USA) and fire-polished to give a resistance of 6-12 MΩ when filled with the external solution plus agonist (glutamate 1 mM). Currents were recorded using an Axopatch 200B amplifier (Molecular Devices). Data were filtered at 2 kHz and digitized at 20 kHz via a BNC-2090A analogue-digital interface (National Instruments, Newbury, UK) using WinEDR software (Strathclyde Electrophysiology Software, Strathclyde, UK).
WinEDR v3 (Strathclyde Electrophysiology Software) was used to idealize the traces (using a transition threshold of 50% of the predominant conductance level and a 100 μs open and shut resolution) and to fit Gaussian curves to amplitude histograms. The relative proportion of time spent in a given conductance state was calculated by fitting areas under the amplitude histograms. To improve the accuracy of amplitude estimates, a 1 ms minimum open time duration was applied. Because the cell's resting membrane potential was not known in the cell-attached configuration, the absolute potential across the patch was unknown. Therefore, recordings were made at a range of pipette potentials (+40 to +180 mV) and the current was plotted against voltage: the slope (fitted by linear regression) gave the conductance. Mean open times were calculated using openings at one pipette potential, with a 1 ms minimum open time duration applied, and fitted using single exponentials in WinEDR. Transitions between subconductance states were manually coded from inspection of all openings with duration >1 ms at a single pipette potential. Traces were excluded if: R 2 < 0.9 for the linear regression, fewer than three pipette potentials were recorded from, or fewer than 20 openings occurred at a given pipette potential.
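A minimal sketch of the conductance estimate described above (slope of the fitted single-channel current amplitudes against pipette potential) is given below with invented amplitudes; the authors performed this fitting within WinEDR rather than with this code.

```python
import numpy as np

# Fitted single-channel current amplitudes at several pipette potentials
# (cell-attached patch, so only relative potentials are known; values are illustrative).
pipette_mv = np.array([40.0, 80.0, 120.0, 160.0, 180.0])
amplitude_pa = np.array([2.1, 3.9, 5.8, 7.6, 8.5])

slope, intercept = np.polyfit(pipette_mv, amplitude_pa, 1)
conductance_ps = slope * 1e3          # pA/mV = nS, converted to pS
r2 = np.corrcoef(pipette_mv, amplitude_pa)[0, 1] ** 2

# Apply the quality criterion from the text: reject patches with R2 < 0.9
if r2 >= 0.9:
    print(f"slope conductance ~ {conductance_ps:.0f} pS (R2 = {r2:.2f})")
```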
Whole-cell, voltage clamp recordings in cultured neurons
Electrophysiological recordings in cultured neurons were performed as described previously (Marwick et al. 2017). Recordings were made at room temperature with neurons superfused (flow rate of 2 mL min -1 ) with external recording solution composed of (in mM) 150 NaCl, 2.8 KCl, 10 Hepes, 2 CaCl 2 , 10 glucose, 0.1 glycine and 0.003 TTX (pH 7.35 using NaOH) (300-330 mosmol L −1 ). Transfected cells were identified using eGFP expression (excitation at 470 nm, coolLED pE-100; coolLED Ltd). Then, 150 μM NMDA was applied briefly twice to trigger desensitization until a steady response was achieved (∼10 s), at which point 1 mM MgCl 2 was co-applied until the response plateaued. Cells were then perfused with 3 μM ifenprodil for 1 min before the experiments were repeated. Solution application was manually controlled. Patch pipettes were made from thick-walled borosilicate glass (GC150F-7.5; Harvard Apparatus) using a P-87 puller (Sutter Instruments) to give a resistance of 2-4 MΩ when filled with internal solution containing (in mM): 141 K-gluconate, 2.5 NaCl, 10 Hepes and 11 EGTA (pH 7.3 with KOH) (300-330 mosmol L −1 ). Currents were recorded using an Axopatch 200B amplifier (Molecular Devices). Data were filtered at 2 kHz and digitized at 20 kHz via a National Instruments BNC-2090A analogue-digital interface (National Instruments) using WinEDR software. Neurons were voltage clamped at -65 mV, and recordings were rejected if the holding current was greater than 150 pA or if the series resistance was greater than 30 MΩ, or increased by greater than 20% during the course of the recording. Capacitance was calculated as the area under the current response to a 5 mV test pulse plotted against time (i.e. the charge) divided by the voltage of the test pulse. Current density was then calculated as current/capacitance.
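The capacitance and current-density calculation described above can be sketched as follows; the capacitive transient and the NMDA-evoked current value are invented for illustration and are not taken from the recordings.

```python
import numpy as np

# Hypothetical capacitive transient evoked by a 5 mV test pulse (illustrative values)
fs = 20000.0                                     # Hz, sampling rate used in the text
t = np.arange(0, 0.01, 1.0 / fs)                 # s
i_transient_pa = 400.0 * np.exp(-t / 0.001)      # pA

test_pulse_mv = 5.0
charge_pc = (i_transient_pa * (1.0 / fs)).sum()          # pA * s = pC (area under the transient)
capacitance_pf = charge_pc / test_pulse_mv * 1e3         # pC / mV = nF, converted to pF

# Current density = agonist-evoked current / capacitance
nmda_current_pa = -800.0                                 # illustrative NMDA-evoked current
current_density = nmda_current_pa / capacitance_pf       # pA/pF
print(f"Cm ~ {capacitance_pf:.0f} pF, current density ~ {current_density:.1f} pA/pF")
```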
Statistical analysis
Data are presented as the mean ± SEM. Graphs depict individual cells (circles), means (columns) and SEM (error bars). N refers to the number of cells. R, version 3.1.2 (R Core Team, 2014) was used to perform statistical tests. Comparisons between multiple means were performed by ANOVA, with post hoc tests performed if the F test was significant for a main effect. Comparisons between two means were performed by independent, two-tailed, Welch t tests (which do not assume equal variance between groups) unless otherwise stated. Correction for multiple comparisons was made using the Bonferroni method. P < 0.05 was considered statistically significant ( * P < 0.05, * * P < 0.01 and * * * P < 0.001).
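The statistical comparisons were run in R; purely as an illustration of the approach (and not the authors' analysis script), an equivalent Welch t test with Bonferroni correction might look like this in Python, with invented group values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
wildtype = rng.normal(55.0, 6.0, 12)     # e.g. single-channel conductance (pS), invented
mutant = rng.normal(14.0, 3.0, 10)
tri = rng.normal(44.0, 5.0, 11)

# Welch two-tailed t tests (unequal variances), Bonferroni-corrected for three comparisons
pairs = [("WT vs N615K diheteromer", wildtype, mutant),
         ("WT vs triheteromer", wildtype, tri),
         ("N615K diheteromer vs triheteromer", mutant, tri)]
n_comparisons = len(pairs)
for name, a, b in pairs:
    t_stat, p = stats.ttest_ind(a, b, equal_var=False)
    p_adj = min(1.0, p * n_comparisons)
    print(f"{name}: t = {t_stat:.2f}, Bonferroni-adjusted p = {p_adj:.3g}")
```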
GluN2A N615K reduces Mg 2+ blockade in triheteromeric NMDA receptors
We first investigated Mg 2+ blockade in GluN1/GluN2A N615K diheteromers expressed in HEK293T cells (Fig. 1). Consistent with previous work (Endele et al. 2010), we found that the GluN2A N615K variant resulted in negligible Mg 2+ blockade at approximately physiological concentrations. We next investigated the impact of the GluN2A N615K variant when expressed as part of triheteromeric NMDA receptors containing wild-type GluN2 subunits (Fig. 1). Accordingly, we employed receptor subunits with modified endoplasmic reticulum retention signals as developed by Hansen et al. (2014) to express receptors containing zero, one or two GluN2A N615K subunits partnered with wild-type GluN2A or GluN2B subunits. Receptors containing only one GluN2A N615K subunit continued to exhibit markedly reduced Mg 2+ blockade, with a blockade intermediate between wild-type and GluN2A N615K diheteromers (Fig. 1A and B). Unexpectedly, the identity of the wild-type subunit partnered with GluN2A N615K impacted on Mg 2+ blockade, with a GluN2B WT partner showing lower Mg 2+ blockade than a GluN2A WT partner (Fig. 1A and B). We confirmed that the modified C-termini did not themselves influence Mg 2+ blockade (Fig. 1C).
GluN2A N615K reduces single-channel conductance in diheteromeric and triheteromeric NMDA receptors
The 'N+1' residue altered by the GluN2A N615K variant is an ion-channel lining residue and contributes to forming one of the narrowest regions of the receptor pore (Song et al. 2018). We therefore hypothesized that the variant would influence ion permeation in addition to Mg 2+ blockade, and investigated single-channel conductance. We found that the GluN2A N615K variant led to a substantial four-fold reduction in single-channel conductance in GluN2A N615K -containing diheteromers (Fig. 3A, B, E and F). As a control, we confirmed that the modified C-termini did not influence conductance (Fig. 3D).
In view of the marked effect of only one GluN2A N615K subunit on Mg 2+ blockade in NMDA receptors, we hypothesized that GluN2A N615K -containing triheteromeric receptors would also show a conductance intermediate between GluN2A WT and GluN2A N615K diheteromers. Our recordings from cells expressing triheteromeric receptors revealed a more complex picture than expected: the vast majority of GluN2A N615K triheteromeric patches showed channels with two conductances (Fig. 3C, E and F). One conductance was indeed intermediate between that of GluN2A WT and GluN2A N615K diheteromers; the other was a prominent subconductance close in value to the primary conductance of GluN2A N615K diheteromers.
We conducted additional analyses to investigate two alternative explanations for what appears to be a prominent subconductance in GluN2A N615K -containing triheteromeric NMDA receptors. One alternative explanation is that the higher conductance represents simultaneous openings of two triheteromeric channels. However, the higher conductance was more than double the lower conductance (44 pS vs. 15 pS and 44 pS vs. 16 pS) (Fig. 3F), rather than exactly twice its value. We therefore concluded that the upper conductance did not represent the simultaneous opening of two identical lower-conductance triheteromeric channels.
We also assessed mean open times, finding that the GluN2A N615K /2A triheteromeric subconductance openings were briefer than those observed for GluN2A N615K diheteromers (Fig. 3G). This argues against the possibility that the similar conductance is mediated by GluN2A N615K diheteromeric receptors present in triheteromer-expressing patches. Such diheteromeric 'escape currents' can potentially arise when expression levels are high (Hansen et al. 2014) and, indeed, we did observe a small number of putative triheteromeric receptor patches (five out of 27 patches) in which only one conductance was observed (four with conductances close to wild-type diheteromers; one with a conductance close to 2A N615K diheteromers). Because these patches presumably contained escaped diheteromeric receptors, they were excluded from the analysis. Overall, our findings suggest that the GluN2A N615K variant results in a marked reduction in NMDA receptor single channel conductance when one or two copies are present. When two copies are present in a receptor, a single low conductance is seen. When one copy is present, two conductances are seen: low and intermediate.
GluN2A N615K reduces Mg 2+ blockade and current density in cultured neurons
To complete our investigation, we wished to explore whether the pronounced effects of GluN2A N615K seen in NMDA receptors expressed in a human cell line would also be seen in receptors expressed in primary neurons, subject to neuronal specific trafficking and regulation. Accordingly, we used transient transfection to overexpress GluN2A WT and GluN2A N615K subunits in mouse primary cortical neurons. The resulting NMDA receptor population probably comprised a mixture of diheteromeric and triheteromeric GluN2B and GluN2A-containing receptors, formed from both endogenous GluN2B subunits and transfected subunits. We first confirmed the expression of transfected GluN2A subunits (wild-type or GluN2A N615K ) by demonstrating a reduction in inhibition by ifenprodil (a selective GluN2B negative allosteric modulator) compared to neurons transfected with an inert control, which express predominantly GluN2B at DIV 9 (McKay et al. 2012) (Fig. 4A-D). We next assessed the impact of the GluN2A N615K mutation on Mg 2+ blockade, in the presence and absence of ifenprodil (Fig. 4A-C and E). We found that Mg 2+ blockade was reduced in neurons transfected with GluN2A N615K and that this effect was more pronounced in the presence of ifenprodil, when a higher proportion of activated receptors contain GluN2A subunits. Third, we assessed the impact of GluN2A N615K on current density (Fig. 4A-C and F). We found that current density was reduced in neurons transfected with GluN2A N615K and this effect was again more pronounced in the presence of ifenprodil. This reduction in current density is consistent with the reduction in conductance we observed previously (Fig. 3). Taken together, these results show that, when the GluN2A N615K mutation is expressed in cells which endogenously express NMDA receptors, it continues to have a profound influence on Mg 2+ blockade and on current density.

Figure 3. GluN2A N615K reduces single-channel conductance in triheteromeric NMDA receptors. A-C, representative voltage clamp recordings made from cell-attached patches from HEK293T cells expressing NMDA receptors containing wild-type GluN2A diheteromers, GluN2A/2B triheteromers and GluN2A N615K -containing diheteromers and triheteromers, partnered with either GluN2A WT or GluN2B WT . All subunits, including wild-type, had modified endoplasmic reticulum signals. The traces show single-channel currents in the presence of glutamate (1 mM) and glycine (100 μM). The pipette potentials used for the traces illustrated in (A) to (C) were +120, +140 and +140 mV, respectively. 'C' = closed; 'g1' = first conductance fitted; 'g2' = second conductance fitted. A number of transitions between the two conductance states can be seen in (C). D, summary data showing single channel conductance for receptors with and without modified C-termini.
Discussion
In the present study, we investigated the functional consequences of GluN2A N615K , a heterozygous missense variant found to have arisen de novo in two unrelated people with early onset epileptic encephalopathy. Using heterologous systems, we showed that the GluN2A N615K variant results in major alterations to physiologically crucial aspects of NMDA receptor function: an almost complete loss of Mg 2+ blockade and a four-fold reduction in conductance. Importantly, the variant continues to markedly reduce Mg 2+ blockade and conductance when expressed in NMDA receptor triheteromers with one wild-type GluN2 subunit, and also continues to have an effect when expressed in cortical neurons. These findings strengthen the evidence that the GluN2A N615K variant causes the severe neurodevelopmental disorder experienced by its carriers.
GluN2A N615K reduces Mg 2+ blockade
The marked reduction in Mg 2+ blockade that we observed with the GluN2A N615K mutation is in keeping with previous work identifying the affected residue as important for Mg 2+ blockade (Wollmuth et al. 1998) and with the disruptive replacement of an asparagine with a positively-charged lysine. Our finding is also consistent with previous work reporting minimal Mg 2+ blockade in GluN2A N615K diheteromers (Endele et al. 2010). We additionally demonstrated a reduction in Mg 2+ blockade when GluN2A N615K is expressed in primary neurons. The smaller magnitude of reduction in Mg 2+ blockade seen in neurons probably reflects the presence of endogenous GluN2B subunits. We directly addressed the impact of GluN2A N615K when expressed as part of triheteromeric NMDA receptors with wild-type partner subunits and found that Mg 2+ blockade was reduced to an intermediate extent. This is an important finding because disease-associated mutations in GRIN2A have so far only been found heterozygously. Observing an effect of the variant despite the presence of wild-type subunits further supports a role for GluN2A N615K in disease causation. This finding indirectly supports the disease relevance of other NMDA receptor pore mutations where functional consequences have been established in diheteromers (Fedele et al. 2018;Fernández-Marmiesse et al. 2018).
A reduction in Mg 2+ blockade would be hypothesized to have substantial impact on neuronal physiology and synaptic plasticity because voltage-dependent Mg 2+ blockade is essential to the role of the NMDA receptor as a molecular coincidence detector (Huganir & Nicoll, 2013). Alterations to this property could therefore be anticipated to result in deficits in learning and memory, as may be reflected in the severe and profound intellectual disability of carriers. However, plasticity is dependent on many interacting factors: additional receptor properties such as calcium permeability, expression of other NMDA receptor subunits and expression of other receptor types. Furthermore, deficits in plasticity caused by reduced or increased expression of NMDA receptor subunits have been associated with behavioural consequences in several (Sakimura et al. 1995;Kiyama et al. 1998;Roberts et al. 2009) but not all studies (Okabe et al. 1998). Further studies are needed to assess the potential impact of the GluN2A N615K variant on synaptic plasticity, which would be aided by a transgenic animal model.
GluN2A N615K reduces conductance
We found that the GluN2A N615K variant reduced single-channel conductance by around four-fold when expressed as diheteromers. Triheteromers containing only one GluN2A N615K subunit showed two conductances: one similar to GluN2A N615K diheteromers and one intermediate conductance equivalent to around two-thirds of wild-type. We also found that NMDA-evoked current density was reduced in neurons expressing GluN2A N615K , suggesting that any acute compensation by other receptor subunits was insufficient to overcome the reduction in conductance associated with the GluN2A N615K variant. Our finding of reduced conductance adds to previous work reporting similarly marked reductions in conductance following mutations of equivalent residues neighbouring the location of the GluN2A N615K variant in GluN1 and GluN2B subunits (Behe et al. 1995;Premkumar et al. 1997;Schneggenburger & Ascher, 1997). In addition to this probable direct impact of the altered residue on ion permeation through the narrow internal region of the pore (Song et al. 2018), it is possible that other charged regions that strongly influence ion permeation (e.g. the DRPEER motif in the GluN1 subunit) (Wollmuth, 2018) may also be indirectly affected by the GluN2A N615K variant, contributing to effects on single-channel conductance. Our finding that glutamate deactivation rates are unaffected by the mutation shows that the overall time course of receptor openings is similar. A reduction in conductance is of physiological importance because it implies that less charge is passed by activated NMDA receptors containing GluN2A N615K subunits. This would reduce the ability of a receptor to contribute to neuronal excitability, and also potentially alter the receptor's ionotropic signaling pathways.
In summary, the present study has strengthened the evidence indicating that the disease-associated variant GluN2A N615K is associated with substantial changes in physiologically crucial properties of NMDA receptors. This information can be used to inform genetic counselling. Our work highlights NMDA receptor-related synaptic transmission as a probable candidate for disruption in the pathogenesis of neurodevelopmental disorders. Future work could usefully explore the impact of this mutation in vivo on synaptic plasticity at circuit and behavioural levels.
|
v3-fos-license
|
2018-04-03T02:15:38.760Z
|
2000-12-29T00:00:00.000
|
21471504
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/275/52/41476.full.pdf",
"pdf_hash": "49421d97b5c77c520afbe48eab0cd8b2c325dcb4",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44185",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "897b66450fed897bf5112cc0774d33bcd35aef6c",
"year": 2000
}
|
pes2o/s2orc
|
Disulfide Bonds of GM2 Synthase Homodimers
GM2 synthase is a homodimer in which the subunits are joined by lumenal domain disulfide bond(s). To define the disulfide bond pattern of this enzyme, we analyzed a soluble form by chemical fragmentation, enzymatic digestion, and mass spectrometry and a full-length form by site-directed mutagenesis. All Cys residues of the lumenal domain of GM2 synthase are disulfide bonded with Cys429 and Cys476 forming a disulfide-bonded pair while Cys80 and Cys82 are disulfide bonded in combination with Cys412 and Cys529. Partial reduction to produce monomers converted Cys80 and Cys82 to free thiols while the Cys429 to Cys476 disulfide remained intact. CNBr cleavage at amino acid 330 produced a monomer-sized band under nonreducing conditions which was converted upon reduction to a 40-kDa fragment and a 24-kDa myc-positive fragment. Double mutation of Cys80 and Cys82 to Ser produced monomers but not dimers. In summary these results demonstrate that Cys429 and Cys476 form an intrasubunit disulfide while the intersubunit disulfides formed by both Cys80 and Cys82 with Cys412 and Cys529 are responsible for formation of the homodimer. This disulfide bond arrangement results in an antiparallel orientation of the catalytic domains of the GM2 synthase homodimer.
Ganglioside synthesis is regulated during differentiation, development, and malignant transformation (1)(2)(3) and occurs in the Golgi apparatus by the stepwise addition of monosaccharides to glycolipid acceptors by membrane-bound glycosyltransferases. The simple gangliosides GM3, 1 GD3, and GT3 are the precursors of the a-, b-, and c-series of gangliosides, respectively, and are synthesized by the addition of 1, 2, or 3 molecules of sialic acid to lactosylceramide (LacCer). Complex gangliosides are formed by the addition of GalNAc to simple gangliosides by the action of UDP-GalNAc:lactosylceramide/GM3/GD3 β-1,4-N-acetylgalactosaminyltransferase (GM2 synthase) followed by the attachment of Gal and additional sialic acid residues (1). Thus, GM2 synthase is a key enzyme in ganglioside biosynthesis, controlling the balance between the expression of simple and complex gangliosides (4). Genetic ablation of GM2 synthase in mice resulted in male sterility (5) as well as decreased myelination and axonal degeneration of the central and peripheral nervous system (6).
Previously we showed that GM2 synthase is a homodimer formed by disulfide bond(s) in the lumenal domain (7). The importance of Cys residues of glycosyltransferases has been shown in several functional and structural studies (8,9) and also by the fact that the Cys residues of each glycosyltransferase family are conserved in spacing (10,11). In addition a few glycosyltransferases have been shown to be disulfide bonded dimers although the majority are monomeric (see "Discussion").
In this report we have utilized protein chemistry experiments, coupled with mass spectrometric analyses, to determine: 1) that all Cys of the soluble form of GM2 synthase are involved in disulfide bonds; and 2) which disulfides are responsible for dimer formation. These results demonstrate that in the dimer the NH 2 terminus of one subunit is close to the COOH terminus of the other subunit in space.
EXPERIMENTAL PROCEDURES
GM2 Synthase-CHO cell clone GTm1 which stably expresses a soluble form of myc-tagged GM2 synthase was described previously (12). Large scale cell culture was achieved in roller bottles using the serum-free medium CHO-S-SFM II (Life Technologies) according to Kolhekar et al. (13) and in bioreactors at the National Cell Culture Center, Minneapolis, MN. The soluble form of GM2 synthase was partially purified from culture supernatants by SP-Sepharose chromatography using a 0 -0.25 M NaCl gradient in 50 mM Hepes, pH 7.6, 5 mM MnCl 2 .
Liquid Chromatography/Electrospray Ionization-Tandem Mass Spectrometry (LC/ESI-MS/MS) Analyses-A sample of the concentrated soluble form of GM2 synthase (0.05 nmol) was treated with a 20-fold molar excess of PEO-maleimide-activated biotin and immediately denatured with 8 M urea. The sample was incubated for 60 min in the dark at room temperature. The concentration of urea in the sample was reduced to 2 M by adding water followed by the addition of trypsin (1/10 ratio, w/w, of trypsin/protein), and the mixture (50 μl) was incubated overnight at 37 °C. Oligosaccharides were released from GM2 synthase by PNGase F treatment. PNGase F (Roche Molecular Biochemicals) was dissolved in 100 mM sodium phosphate, 25 mM EDTA at pH 7.2 at a concentration of 200 units/ml. PNGase F digestion was performed on tryptic digests of GM2 synthase by adding PNGase F to a final concentration of 20 units/ml and incubating overnight at 37 °C.
GM2 synthase digests were separated using a capillary C18 column (150 × 0.18 mm; Nucleosil, 5-μm particle size) and analyzed on a Finnigan LCQ ion trap mass spectrometer (San Jose, CA) with a modified electrospray ionization (ESI) source. A detailed scheme of the experimental setup for the LC/ESI-MS/MS analyses is described elsewhere (14). Briefly, a positive voltage of 3 kV was applied to the electrospray needle, and a N 2 sheath flow (65 scale) was applied to stabilize the ESI signal. The LC/MS analysis was conducted using a Hewlett-Packard 1050 HPLC system (Palo Alto, CA) coupled to the LCQ. The mobile phase was split before the injector by a Tee-connector, and a flow rate of 2 μl/min was established through the capillary C18 column. The enzymatically digested peptides were eluted from the column using 0.5% formic acid in water (mobile phase A) and 0.5% formic acid in acetonitrile (mobile phase B) with a three-step linear gradient of 5 to 10% B in the first 10 min, 10 to 35% B in the next 40 min, and 35 to 40% B in the last 5 min. The LC/ESI-MS/MS analysis was accomplished using an automated data acquisition procedure, in which a cyclic series of three different scan modes were performed. Data acquisition was conducted using the full scan mode (m/z 300-2000) to obtain the most intense peak (signal > 1.5 × 10 5 counts) as the precursor ion, followed by a high resolution zoom scan mode to determine the charge state of the precursor ion and an MS/MS scan mode (with a relative collision energy of 38%) to determine the structural fragment ions of the precursor ion. The resulting MS/MS spectra were then searched against a protein data base (Owl) by Sequest to confirm the sequence of tryptic peptides.
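For illustration only, the three-step elution gradient described above can be expressed as a piecewise-linear program of %B against run time (breakpoint values taken from the text; this is a sketch, not instrument method code):

```python
# Piecewise-linear representation of the three-step gradient (5-10% B over 0-10 min,
# 10-35% B over 10-50 min, 35-40% B over 50-55 min).
import numpy as np

time_breakpoints_min = [0.0, 10.0, 50.0, 55.0]
percent_B = [5.0, 10.0, 35.0, 40.0]

def gradient_percent_B(t_min: float) -> float:
    """Return the programmed %B (0.5% formic acid in acetonitrile) at time t_min."""
    return float(np.interp(t_min, time_breakpoints_min, percent_B))

for t in (0, 5, 10, 30, 50, 55):
    print(f"t = {t:2d} min -> {gradient_percent_B(t):.1f}% B")
```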
After analyzing for free Cys-containing peptides, a fraction of the digested protein (40 pmol) was reduced with DTT (dithiothreitol, 200-fold molar excess over protein) at 65 °C for 20 min and alkylated with iodoacetamide (500-fold molar excess) in the dark for 30 min to detect peptides with disulfide-linked Cys residues.
Partial Reduction-The soluble form of GM2 synthase (0.25 nmol) was incubated with 100 mM DTT in 100 mM Tris, pH 8.0, 0.5 mM EDTA under nitrogen for 2 h at room temperature after which a 10-fold excess of iodoacetamide (in 0.4 M Tris, pH 8.0) over the DTT content was added, and incubation was continued under nitrogen for an additional 2 h at room temperature in the dark. Samples were separated on SDS-PAGE gels, Western blotted with anti-myc or stained with Coomassie Blue, bands cut out, and analyzed by in-gel digestion as described below.
In-gel Digestion and MALDI-TOF-Coomassie Blue-stained regions from SDS-PAGE were cut from the gel in ~1-mm 3 sections and were taken for tryptic hydrolysis using a modification of the method of Jensen et al. (15). Briefly, the gel was washed using NH 4 HCO 3 and CH 3 CN, then proteins were reduced where indicated with DTT and alkylated where indicated with iodoacetamide. After washing, proteins were hydrolyzed using modified trypsin (Promega). Differences from the method of Jensen et al. (15) were the use of a higher (20 mM) DTT concentration, larger volume (0.1 ml) washes following alkylation and exclusion of calcium from the trypsin mixture.
Where indicated, samples following trypsin in-gel digestion were deglycosylated with PNGase F (Calbiochem 362185, 2.5 milliunits/μl). PNGase F was diluted to 0.2 milliunits/μl with 50 mM NH 4 HCO 3 , added to in-gel digests at a ratio of 1 to 4 (v/v), and incubated at 37 °C overnight.
Peptides were then taken for thin film spotting for MALDI using α-cyanohydroxycinnamic acid as matrix and a nitrocellulose film on stainless steel targets with 1-2 μl spots. Mass spectral data were obtained using a Tof-Spec 2E (Micromass) and a 337-nm N 2 laser at 20-35% power in the positive ion linear or reflectron mode, as appropriate. Spectral data were obtained by averaging 10 spectra each of which was the composite of 10 laser firings. Mass axis calibration was accomplished using peaks from tryptic autohydrolysis. Data were analyzed using MassLynx ProteinProbe software and the EMBL SwissProt data base.
CNBr Cleavage-The soluble form of GM2 synthase (0.1 nmol) was organic-solvent precipitated (16). The dried protein pellet was resuspended in 0.04 ml of cleavage solution (20 mM sodium citrate, pH 4.5, 0.2% SDS) with or without CNBr (0.5 M stock, Aldrich; final 1.2 mM which was at least a 50-fold molar excess over the Met content) followed by flushing of the reaction tubes with nitrogen, and incubation overnight in the dark at room temperature. Cleavage reactions were diluted 10-fold with water, frozen, and solvent removed by sublimation in a centrifugal vacuum concentrator. The remaining residue was solubilized in nonreducing SDS-PAGE sample buffer by heating at 100°C for 5 min, separated on SDS-PAGE gels under reducing and nonreducing conditions, and stained with Coomassie Blue or Western blotted with anti-myc. Bands of interest were cut out and subjected to in-gel digestion and analysis by MALDI-TOF. Bands from nonreducing gels were also cut out of the gel, minced, boiled in reducing SDS-PAGE sample buffer, and the suspension of gel pieces loaded in the wells of new SDS-PAGE gels.
Site-directed Mutagenesis-Site-directed mutagenesis of full-length GM2 synthase was performed on a pcDNA3 plasmid containing GM2 synthase/myc cDNA using the Transformer Site-directed Mutagenesis Kit (CLONTECH), according to the manufacturer's instructions. The sequence of the mutated construct at the mutation sites was confirmed by DNA sequencing (EPSCoR Sequencing Center, University of Louisville and Biomolecular Research Facility, University of Virginia). Prior to cell transfection, the construct was tested for production of full-length GM2 synthase in vitro using the TnT coupled transcription/ translation system (Promega, Madison, WI). Wild-type CHO cells were transfected, and cells stably expressing mutated GM2 synthase/myc were analyzed by anti-myc immunofluorescence screening as described previously (7,17). Transfected cells were analyzed by flow cytometry with anti-GM2 and cholera toxin for in vivo GM2 synthase activity, and cell extracts were analyzed by Western blotting with anti-myc and in vitro assay for GM2 synthase activity as described elsewhere (7,17).
RESULTS

Determination of the Disulfide-thiol Status of All Cys Residues-The strategy for determining the oxidation state of Cys residues was to alkylate all free Cys with a biotinylated form of maleimide under denaturing conditions, reduce and alkylate with iodoacetamide all other Cys, digest with trypsin and PNGase F, and analyze by LC/MS (14). When the soluble form of GM2 synthase was analyzed in this way, no biotinylated peptides were found, suggesting that no Cys residues with free thiol groups were present. Instead, five peptides were identified which contained alkylated Cys, and the sequence of each tryptic peptide was validated by MS/MS analysis. Thus, the following peptides were identified with reduced and alkylated Cys residues (all residue numbers refer to the position in full-length GM2 synthase): amino acids 69-95 containing Cys residues 80 and 82, 398-414 containing Cys residue 412, 418-442 containing Cys residue 429, 473-486 containing Cys residue 476, and 527-533 (plus the residues P and EQK of the myc tag) containing Cys residue 529. Fig. 1 shows the MS/MS spectrum for the reduced and alkylated peptide 398-414 as an example. As shown in this figure, fragment ions were produced at various sites along the peptide via peptide bond cleavage. Both COOH-terminal y n (n = 8-15) and NH 2 -terminal b n (n = 3, 5, 6, and 12) ions can clearly be identified in the spectrum. Fragment ion assignments are shown in the peptide structure of Fig. 1.
Identification of the Disulfide Bond Pairs-LC/MS analysis of a tryptic digest of unreduced GM2 synthase produced a triply charged ion at m/z 1373.5 and a quadruply charged ion at m/z 1030.8 (Fig. 2A). These ions correspond to a disulfide-bonded tryptic dipeptide containing amino acids 418-442 and 473-486. The observed mass of the dipeptide (4118.4) was derived from these ions: each m/z corresponds to the combined mass of the two peptides, minus the mass of the two hydrogens lost as the result of the disulfide bond formed, plus the mass of the charging protons, divided by the charge state (i.e. 3 or 4) of the ion. Thus, the molecular weight of peptide 418-442 is equal to 2718.1, and that of peptide 473-486 is 1402.6. Linking these peptides via a disulfide bond generates a dipeptide with an average mass of 4118.7, in good agreement with the measured mass of 4118.4. MS/MS analysis of the triply charged ion at m/z 1373.5 (Fig. 2B) produced a spectrum with dominant fragments of (Y 14 y 16 ) 2+ at m/z = 1548.7, (Y 13 y 22 ) 2+ at m/z = 1838.9, (Y 13 y 19 ) 2+ at m/z = 1649.9, and y 9 at m/z = 1022.6. These ions were generated either from the COOH terminus of the peptide containing amino acids 418-442 alone (denoted as y 9 ), or from the COOH terminus of peptide 418-442 (denoted as y 16 , y 19 , and y 22 ) in combination with COOH-terminal fragments of the peptide containing amino acids 473-486 (denoted as Y 13 and Y 14 ). The observed fragment peaks at m/z 1367.9 and 1305.7 in Fig. 2B also matched well with the calculated masses for the fragments (M + 3 H − 18) 3+ and (Y 14 y 23 ) 3+ , respectively. These data provided a complete verification of the proposed disulfide-bonded tryptic peptide pair.
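The mass bookkeeping behind this assignment can be sketched as follows (a worked check using the average masses quoted in the text; not part of the original analysis pipeline):

```python
# Expected mass and m/z of a disulfide-linked tryptic dipeptide: sum the two peptide masses,
# subtract two hydrogens for disulfide formation, then add charging protons per charge state.
H_MASS = 1.008          # average mass of a hydrogen atom
PROTON_MASS = 1.007

mass_418_442 = 2718.1   # tryptic peptide containing Cys 429 (value from the text)
mass_473_486 = 1402.6   # tryptic peptide containing Cys 476 (value from the text)

dipeptide_mass = mass_418_442 + mass_473_486 - 2 * H_MASS   # 2 H lost on disulfide formation

def expected_mz(neutral_mass: float, charge: int) -> float:
    """m/z of a peptide ion carrying `charge` protons."""
    return (neutral_mass + charge * PROTON_MASS) / charge

print(f"dipeptide mass ~ {dipeptide_mass:.1f}")             # ~4118.7, cf. measured 4118.4
print(f"3+ ion: m/z ~ {expected_mz(dipeptide_mass, 3):.1f}") # ~1373.9, cf. observed 1373.5
print(f"4+ ion: m/z ~ {expected_mz(dipeptide_mass, 4):.1f}") # ~1030.7, cf. observed 1030.8
```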
To confirm this disulfide assignment, the GM2 synthase dimer separated on a nonreducing SDS-PAGE gel was cut from the gel, in-gel digested with trypsin followed by PNGase F, and analyzed by MALDI-TOF. The dipeptide containing the Cys 429 -Cys 476 disulfide was detected at 4118 (Fig. 3A). In addition a small peak consistent with a free thiol for the peptide containing Cys 412 was also detectable (data not shown). However, that ion could be explained by the report of Patterson and Katta (18) that disulfide-linked peptides could be fragmented to reduced forms of peptides during MALDI-TOF. To prove that point, the sample was alkylated with iodoacetamide and reanalyzed. The same small peak for free thiol at Cys 412 was still present indicating that it was not present prior to MALDI-TOF but instead was generated during MALDI-TOF. Thus, Cys 429 and Cys 476 form one disulfide bond.
Evidence for a second disulfide-bonded set of tryptic peptides came from the presence of ions with 3 and 4 positive charges at m/z 1950.

Occupancy of Asn-linked Glycosylation Sites-Haraguchi et al. (19) found that all three potential N-glycosylation sites of GM2 synthase were utilized during in vitro transcription/translation. Similarly, our data indicate that all three sites are occupied in the soluble form of GM2 synthase. Specifically, the measured mass (M + H) for the doubly charged ion at m/z 1475.1 and the triply charged ion at m/z 983.9 of the peptide (amino acids 69-95, containing alkylated Cys residues 80 and 82) was 2950 (Fig. 5A), rather than 2949, demonstrating that Asn 79 was converted to Asp 79 by PNGase F. Moreover, the MS/MS spectrum of the doubly charged ion at m/z 1475.1 (Fig. 5B) for this peptide confirmed that Asn 79 had been converted to Asp, since the difference between the mass of the NH 2 -terminal fragments b 10 at m/z 1110.7 (containing amino acids 69-78) and b 11 at m/z 1225.6 (containing amino acids 69-79) is 114.9 (mass for Asp) rather than 114 (mass for Asn). In addition, MALDI-TOF analyses after PNGase F treatment also demonstrated the appearance of peptides containing Asn 179 and Asn 274 , indicating that those residues were glycosylated (Fig. 6). Moreover, in the reflectron mode the detected mass for the latter peptide was 2503.28 rather than the calculated mass of 2502.36, demonstrating that Asn 274 was converted to Asp 274 by PNGase F (data not shown).
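A small sketch of the deamidation check underlying these occupancy assignments (values taken from the text; the tolerance is an assumption appropriate for average-mass data):

```python
# PNGase F converts a glycosylated Asn to Asp, adding ~1 Da to the deglycosylated peptide,
# so an observed mass one unit above the Asn-containing prediction indicates site occupancy.
ASN_RESIDUE_MASS = 114.04   # average residue masses
ASP_RESIDUE_MASS = 115.03
DEAMIDATION_SHIFT = ASP_RESIDUE_MASS - ASN_RESIDUE_MASS   # ~0.99 Da

predicted_with_asn = 2949.0     # peptide 69-95 calculated with Asn 79 (value from the text)
observed_mass = 2950.0          # measured (M + H) after PNGase F treatment (value from the text)

shift = observed_mass - predicted_with_asn
site_occupied = abs(shift - DEAMIDATION_SHIFT) < 0.5   # generous tolerance for average-mass data

status = "glycosylated (converted to Asp)" if site_occupied else "not modified"
print(f"mass shift = {shift:.2f} Da -> Asn 79 {status}")
```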
Identification of the Intersubunit and Intrasubunit Disulfide Bonds-Based on the disulfide pairs identified above, there are seven possible models for the intersubunit disulfide bonds responsible for dimer formation. These models include the following combinations of intersubunit bonds: 1) Cys 429 to Cys 476 ; 2) one or both of the disulfides connecting Cys 80 and Cys 82 with Cys 412 and Cys 529 ; or 3) both 1 and 2. To distinguish inter-from intrasubunit disulfides, initially we tested specific proteases to generate diagnostic fragments such as caspase 3 cleavage at residue 315 and furin cleavage at 414, but neither protease cleaved GM2 synthase (data not shown). Therefore, we resorted to two alternative strategies, partial reduction and CNBr cleavage.
Our rationale for partial reduction was to reduce GM2 synthase without denaturation so that the more accessible intersubunit disulfide(s) would be reduced whereas the less accessible intrasubunit disulfide(s) would remain intact. Treatment with 100 mM DTT followed by alkylation converted all of the GM2 synthase dimer to the monomer (Fig. 7E). LC/MS of the tryptic digest of this partially reduced sample (data not shown) revealed the triply charged ion at m/z 1373.5 (monoisotopic (M + H) + ion at 4117.3) described above (Fig. 2A) which is the result of a disulfide-bonded tryptic dipeptide containing amino acids 418-442 and 473-486. MS/MS analysis of this species (data not shown) verified the structure of this tryptic peptide pair as described above (Fig. 2B). Furthermore, we cut out the monomer band produced by partial reduction and performed in-gel digestion followed by MALDI-TOF (Fig. 3B). The Cys 429 -476 dipeptide was detected at 4118.8 in agreement with the LC/MS data. Following complete reduction, this dipeptide band disappeared (Fig. 3C). In the partially reduced sample there were also ions at 4175.7 and 4232.8. Since these differed from the dipeptide peak and from each other by an average of 57 atomic mass units, they are due to the addition of acetamide. Since these two peptides contain three His, it is likely that these bands are due to alkylation of these His as described previously (20,21). Thus, at least a portion of the disulfide bond between Cys 429 and Cys 476 remained intact under conditions in which dimer was converted to monomer, meaning that the Cys 429 -476 disulfide must be an intrasubunit bond.
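As a quick arithmetic check of the 57-Da spacing noted above (peak values from the text; carbamidomethylation by iodoacetamide adds approximately 57.02 Da per modified residue):

```python
# Successive peaks separated by ~57 Da correspond to one and two acetamide
# (carbamidomethyl) adducts from iodoacetamide.
CARBAMIDOMETHYL_DA = 57.02

peaks = [4118.8, 4175.7, 4232.8]   # dipeptide, mono- and dialkylated species (values from the text)
diffs = [round(b - a, 1) for a, b in zip(peaks, peaks[1:])]
print(diffs)   # [56.9, 57.1] -> each close to 57.02, i.e. one acetamide adduct per step
```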
LC/MS of the partially reduced and alkylated sample also detected a doubly charged ion at m/z 1475.4 (monoisotopic (M + H) + ion at 2948.4) which is the tryptic peptide containing amino acids 69-95 as verified by MS/MS analysis (data not shown; the results were identical to those shown in Fig. 5, A and B). Thus, Cys 80 and Cys 82 were both reduced and alkylated by the partial reduction conditions, indicating the likelihood that both participate in intersubunit disulfide bonds. Ions for peptides containing alkylated Cys 412 and Cys 529 were not detected following partial reduction and alkylation. In summary, based on the partial reduction experiments, we eliminated any models for GM2 synthase intersubunit disulfide bonds that included Cys 429 to Cys 476 disulfides as intersubunit bonds.
FIG. 3. Partial reduction studies of the disulfide-coupled peptides 418-442 and 473-486. A, the MALDI-TOF (reflectron) mass spectrum from m/z 3400 to 5000 for the disulfide-linked dipeptide obtained from a GM2 synthase dimer band that was in-gel digested with trypsin and analyzed without reduction. B, the MALDI-TOF (reflectron) mass spectrum for the same dipeptide following partial reduction and alkylation. m/z = 4118 represents the dipeptide + H +, while masses at 4175.7 and 4232.8 represent mono- and dialkylated species, respectively. C, the MALDI-TOF (reflectron) mass spectrum for the same peptide following complete reduction using DTT and alkylation with iodoacetamide.

We next used cleavage with CNBr to further identify the intersubunit disulfide bonds. The soluble form of GM2 synthase contains three Met residues at positions 330, 515, and 530 (numbers refer to the position in the full-length enzyme with the amino terminus of the soluble form corresponding to residue 21 of the full-length enzyme). Treatment with CNBr when analyzed under nonreducing conditions produced a monomer-sized band while considerable dimer remained (Fig. 7B, lane 3). When the sample was run under reducing conditions (Fig. 7B, lane 6), considerable monomer remained indicating that cleavage with CNBr had not been efficient. The low efficiency of cleavage may have been due to the unusual conditions used for fragmentation, which were originally chosen to ensure retention of a photoaffinity probe used for other studies (J. Li, manuscript in preparation). CNBr cleavage when analyzed under reducing conditions should produce fragments of GM2 synthase of about 40 kDa (amino acids 21-330, with the precise molecular mass depending on the actual Asn-linked oligosaccharide structures attached at residues 79, 179, and 274), 20.5 kDa (amino acids 331-515), 1.9 kDa (amino acids 516-530), and 1.6 kDa (amino acids 531-544, which includes the myc epitope). However, if CNBr only cut at residue 330, then a myc-positive, 24-kDa fragment consisting of residues 331-544 plus a 40-kDa fragment consisting of residues 21-330 would be present under reducing conditions. In fact CNBr did produce a set of bands ranging from about 20 to 24 kDa plus a 40-kDa band when analyzed under reducing conditions (Fig. 7B, lane 6). However, the most abundant band among the set of 20-24-kDa fragments was the 24-kDa fragment, which was myc-positive (Fig. 7C, lane 3) by Western blotting, indicating that it contained an intact COOH terminus. It was this 24-kDa myc-positive fragment, resulting from single cleavage at 330, that enabled us to define the intersubunit disulfide pattern as described below. We should also note that we attribute the preferential CNBr cleavage at residue 330 to the generally inefficient cleavage conditions that we used as well as to the documented sluggish cleavage that occurs at Met-Thr (23) such as that at position 530.
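The fragment bookkeeping used in this argument can be sketched as follows (an illustrative helper, not the authors' code; only residue ranges are computed because the exact masses depend on sequence and glycosylation):

```python
# CNBr cleaves C-terminal to Met, splitting the soluble protein (residues 21-544 of the
# full-length enzyme) into predictable pieces.
def cnbr_fragments(start: int, end: int, met_positions: list) -> list:
    """Residue ranges produced by cleavage after each Met within [start, end]."""
    cuts = [p for p in met_positions if start <= p < end]
    bounds = [start - 1] + cuts + [end]
    return [(bounds[i] + 1, bounds[i + 1]) for i in range(len(bounds) - 1)]

met_positions = [330, 515, 530]     # Met residues of the soluble form (full-length numbering)

print("complete cleavage:", cnbr_fragments(21, 544, met_positions))
# -> [(21, 330), (331, 515), (516, 530), (531, 544)]: the ~40, ~20.5, ~1.9 and ~1.6 kDa pieces

print("cleavage at Met 330 only:", cnbr_fragments(21, 544, [330]))
# -> [(21, 330), (331, 544)]: the ~40 kDa and myc-positive ~24 kDa fragments seen on reduction
```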
The acidic conditions used for CNBr cleavage produced a small amount of monomer when analyzed under nonreducing conditions (Fig. 7B, lane 2) which was less than that produced in the presence of CNBr (Fig. 7B, lane 3). When analyzed under reducing conditions the acidic buffer produced several fragments (Fig. 7B, lane 5), some of which were detected by anti-myc Western blotting and which migrated at 5, 10, 27, and 50 kDa (Fig. 7C, lane 2). The nature of this fragmentation in citrate buffer, pH 4.5, is unknown. CNBr cleavage in the presence of formic acid has been cautioned against because the reducing power of that acid can convert disulfides to free thiols (22). However, the fragments seen in our control sample do not appear to be simply the result of reduction of dimers to monomers. Rather they appear to result from proteolysis, perhaps by a contaminating acid protease. Nevertheless, the fragments produced by CNBr could be clearly distinguished under reducing conditions from the fragments produced by the acidic buffer alone.
The 24- and 40-kDa CNBr fragments were cut from a reducing gel (Fig. 7B, lane 6) and subjected to in-gel digestion with trypsin followed by MALDI-TOF. Nearly all of the peptides were identified for the 24-kDa piece (Fig. 8A and Table I) and for the 40-kDa piece (Table I and Fig. 9B). These MALDI patterns combined with the myc-positive staining of the 24-kDa fragment confirmed that the 40-kDa fragment consists of residues 21-330 and the 24-kDa fragment consists of residues 331-544.
Based on the disulfide pairs we identified above, there are seven possible combinations by which disulfides could link two monomers to form a dimer. However, only one of these combinations would allow CNBr cleavage at 330 to produce the pattern of fragments described above; namely, a monomer-sized fragment under nonreducing conditions that is converted to 24- and 40-kDa fragments upon reduction. That combination consists of intersubunit disulfides linking both Cys 80 and Cys 82 to Cys 412 and Cys 529 for a total of four intersubunit disulfides per dimer (Fig. 10). The Cys 429 to Cys 476 disulfide is an intrasubunit disulfide in this combination, which is consistent with the partial reduction results. All other intersubunit disulfide combinations would remain as dimers under nonreducing conditions following cleavage at 330. To confirm that in fact CNBr was producing a monomer-sized fragment which upon reduction was converted to the 40-kDa fragment plus the 20-24 kDa set of fragments, we cut out from a nonreducing gel the monomer band produced by CNBr cleavage (Fig. 7B, lane 3), boiled it in reducing sample buffer, and reran it under reducing conditions (Fig. 7D). The monomer band was in fact converted into 40- and 24-kDa bands (Fig. 7D, lane 5), and the latter band was myc positive (data not shown). Neither the 40-kDa nor the 24-kDa bands were produced from the dimer band of an untreated sample (Fig. 7D, lane 1), the dimer or monomer band from a sample treated under control conditions (Fig. 7D, lanes 2 and 4, respectively), or the dimer band from a CNBr-treated sample (Fig. 7D, lane 3). Therefore, these results confirm that in fact the monomer band produced by CNBr cleavage was the source of these fragments. In summary our CNBr results demonstrate that intersubunit disulfides join Cys 80 and Cys 82 of one monomer in combination with Cys 412 and Cys 529 of the other subunit (Fig. 10).

FIG. 8. MALDI-TOF spectra of tryptic peptides generated from the CNBr fragments. CNBr cleavage produced major fragments at 24 and 40 kDa as a result of preferential cleavage at Met 330. Following electrophoretic separation, bands were taken for in-gel tryptic hydrolysis, and the resulting peptides were analyzed using MALDI-TOF (reflectron) analysis.
Site-directed Mutagenesis-The results obtained from chemical analyses demonstrate that GM2 synthase exists as a dimer in which disulfide bonds connect Cys 80 and Cys 82 with Cys 412 and Cys 529 . Site-directed mutagenesis was used to further support these findings. Thus, a double mutant (C80S/C82S) of full-length GM2 synthase was created to eliminate the interchain disulfide bonding pattern observed in the wild-type enzyme. These mutations should result in GM2 synthase existing only as a monomer. As shown in Fig. 11, the anti-myc immunofluorescence staining pattern of cells expressing this mutated enzyme was similar to that of clone C5 cells expressing full-length, wild-type GM2 synthase/myc in that the most intensely stained region consisted of a cluster of punctate, perinuclear structures characteristic of the Golgi of CHO cells (Fig. 11) as we have described previously (17). In addition there was some staining of the endoplasmic reticulum in both cell populations which was stronger in the cells expressing the mutated enzyme. These findings indicate that the mutated enzyme folded sufficiently to pass through the endoplasmic reticulum quality control system. Cells expressing this mutated protein did not possess in vitro GM2 synthase activity above the background level of untransfected CHO cells (<0.004 nmol/mg/h) nor did they exhibit cell surface staining with anti-GM2 and cholera toxin greater than that of CHO cells as determined by flow cytometry (data not shown). We tested for cholera toxin staining because we and others found previously that when low levels of GM2 were produced by some mutants of GM2 synthase the GM2 did not accumulate but instead was quantitatively converted to GM1 which could be detected by cholera toxin (24,25). Interestingly, when cell extracts were Western blotted with anti-myc, this mutated protein was detected only as a monomer (Fig. 11). This result is consistent with Cys 80 and Cys 82 being necessary for formation of the disulfide bonds responsible for dimerization, in agreement with the structural evidence described above. In addition these results indicate that individual monomers lack catalytic activity.

DISCUSSION

Our results demonstrate that the monomers of GM2 synthase homodimers are joined by intersubunit disulfide bonds that pair Cys 80 and Cys 82 of one subunit with Cys 412 and Cys 529 of the other subunit. We have not attempted to determine whether it is Cys 80 or Cys 82 that is joined to Cys 412 or which is paired with Cys 529 . Because a Ser is at position 81, there is no protease that can cleave between Cys 80 and Cys 82 . Future efforts using MALDI post-source decay and partial reduction, cyanylation, and fragmentation in basic solution (26) will be required to answer that question.
Most glycosyltransferases described to date are monomeric. These monomeric forms include the enzyme responsible for N-deacetylation and N-sulfation of heparan sulfate (27) from Bacillus subtilis (29), heparan sulfate 6-sulfotransferase (34), and α1,3-galactosyltransferase (30,31). Two Golgi enzymes which have monomeric lumenal domains but which dimerize through intersubunit disulfides in their transmembrane or cytoplasmic domains are β1,4-galactosyltransferase and α1,3-fucosyltransferase VI (8,32,33,35). In the present study we analyzed a soluble form of GM2 synthase. The full-length form of this enzyme has two additional Cys residues in the transmembrane domain (36). However, it is unlikely that these two Cys residues are involved in dimerization because: 1) when the lumenal domain is proteolytically cleaved intracellularly and then secreted from the cell, it is a homodimer (7); and 2) mutation of Cys 80 and Cys 82 to Ser of full-length GM2 synthase resulted in only monomers being produced (Fig. 11).
Golgi proteins that are dimeric include the nucleotide sulfate transporter (37), ERGIC-53 intermediate compartment marker protein (38), GDPase (39), α-mannosidase II (40), and the GDP-mannose transporter (41). Two other glycosyltransferases in addition to GM2 synthase that have been shown to be dimers are the α1,2-fucosyltransferase (H enzyme) (42) and GlcAT-I (43). Interestingly, the intersubunit disulfide bond of this latter enzyme occurs in the stem region at Cys 33 ; thus, the enzyme forms a Y shape with two separate catalytic domains at each arm of the Y, similar to the structure of IgG.
The structure of the GM2 synthase dimer that we report here is entirely different from that of GlcAT-I in that the disulfide bond pattern results in an antiparallel orientation of the lumenal domains of the two monomers (Fig. 10). There are precedents for the antiparallel orientation of the monomeric chains of a dimer although there are none among the glycosyltransferases. Cys 42 and Cys 84 of one subunit of interleukin 5 are joined in an antiparallel manner to Cys 84 and Cys 42 of the other subunit (44,45). Similarly, the monomeric subunits of platelet-derived growth factor are joined by two intermolecular disulfides between Cys 43 and Cys 52 to form an antiparallel arrangement (46,47). Finally, homodimers of the small glycoprotein sGP of Ebola virus (48) are formed by intermolecular disulfides between Cys 53 and Cys 306 which results in an antiparallel orientation of the monomers and brings the amino terminus of one chain close to the carboxyl terminus of the other monomer, similar to the arrangement for GM2 synthase we describe here (Fig. 10).
To date we have not been able to generate a monomeric form of GM2 synthase that retains catalytic activity. Stepwise reduction and alkylation of GM2 synthase resulted in a parallel decrease in enzyme activity and conversion of dimers to monomers. 3 Also, the double C80S/C82S mutation described here formed an inactive monomer. Therefore, it seems likely that the homodimeric structure of GM2 synthase may be necessary for catalytic activity to be achieved. For example, the antiparallel orientation of the two lumenal domains may force them to face each other to form an active catalytic domain. Oligomerization of Golgi proteins has been proposed to be involved in Golgi targeting and retention (42,49,50). Beyond that proposal, the purposes for dimeric forms of Golgi enzymes have not been defined although it has been speculated that the formation of dimers and oligomers may be necessary to convert monomers of low activity to multimers having higher affinity or avidity (51). The functional relationship between the monomeric subunits of dimeric glycosyltransferases is known in only a few cases. At the one extreme is α2,6-sialyltransferase which exists as an active monomer in the endoplasmic reticulum and then moves to the Golgi where a portion forms an inactive homodimer (52). At the other extreme is the endoplasmic reticulum-resident UDP-GlcNAc:dolichol-P GlcNAc-1-P transferase (53) in which the separated monomers are inactive but cooperate to form an active dimer.
With the goal of being able to use sequence similarities to predict folding similarities, a total of 553 glycosyltransferases were classified into 26 families based on sequence similarities (54). GM2 synthase was placed in family 12 which has only two members, GM2 synthase and the closely related GALGT2 enzyme (55). Therefore, family 12 may have a novel fold. Furthermore, it is possible that the antiparallel orientation of the monomeric lumenal domains may be unique to family 12. Alternatively, we may speculate that in the full-length, membrane bound form of GM2 synthase this antiparallel arrangement forces the catalytic domain closer to the ganglioside GM3 substrate embedded in the Golgi membrane than if the polypeptide chains were extended away from the membrane. Therefore, it is possible that this antiparallel arrangement may be found in other membrane bound enzymes that act on lipid substrates.
|
v3-fos-license
|
2021-08-28T06:17:17.792Z
|
2021-08-01T00:00:00.000
|
237326962
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2227-9059/9/8/1056/pdf",
"pdf_hash": "de367f255689486d280babef2b8675c04320f3f8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44186",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"sha1": "bef9f52aec63971ecd9044281572f9cddddb18de",
"year": 2021
}
|
pes2o/s2orc
|
African Trypanosomiasis: Extracellular Vesicles Shed by Trypanosoma brucei brucei Manipulate Host Mononuclear Cells
African trypanosomiasis or sleeping sickness is a zoonotic disease caused by Trypanosoma brucei, a protozoan parasite transmitted by Glossina spp. (tsetse fly). Parasite introduction into mammal hosts triggers a succession of events, involving both innate and adaptive immunity. Macrophages (MΦ) have a key role in innate defence since they are antigen-presenting cells and have a microbicidal function essential for trypanosome clearance. Adaptive immune defence is carried out by lymphocytes, especially by T cells that promote an integrated immune response. Like mammal cells, T. b. brucei parasites release extracellular vesicles (TbEVs), which carry macromolecules that can be transferred to host cells, transmitting biological information able to manipulate cell immune response. However, the exact role of TbEVs in host immune response remains poorly understood. Thus, the current study examined the effect elicited by TbEVs on MΦ and T lymphocytes. A combined approach of microscopy, nanoparticle tracking analysis, multiparametric flow cytometry, colourimetric assays and detailed statistical analyses was used to evaluate the influence of TbEVs in mouse mononuclear cells. It was shown that TbEVs can establish direct communication with cells of innate and adaptive immunity. TbEVs induce the differentiation of both M1- and M2-MΦ and elicit the expansion of MHCI+, MHCII+ and MHCI+MHCII+ MΦ subpopulations. In T lymphocytes, TbEVs drive the overexpression of cell-surface CD3 and the nuclear factor FoxP3, which lead to the differentiation of regulatory CD4+ and CD8+ T cells. Moreover, this study indicates that T. b. brucei and TbEVs seem to display opposite but complementary effects in the host, establishing a balance between parasite growth and controlled immune response, at least during the early phase of infection.
Introduction
African trypanosomiasis (AT), also known as sleeping sickness in humans and Nagana in cattle, is a vector-borne disease caused by an extracellular kinetoplastid parasite of the family Trypanosomatidae, genus Trypanosoma, species Trypanosoma brucei, which is transmitted by the hematophagous dipteran Glossina spp. (tsetse fly). This parasitosis is considered a neglected tropical disease restricted to the intertropical region of Africa, following the geographical distribution of its vector. Parasite transmission to mammals occurs by the inoculation of metacyclic trypomastigote forms during the insect blood meal.
After being introduced into the host dermis, parasites start to replicate, giving rise to the initial lesion or inoculation chancre [1,2]. After two or three weeks, the chancre tends to disappear, and the disease can evolve through two distinct successive phases [2]: phase I, or the hemolymphatic stage, is characterized by successive waves of invasion of the blood and lymphatic system by trypanosomes, which cause intermittent fever [2,3]; and phase II, or the meningoencephalitic stage, in which the typical symptoms of sleeping sickness (dementia, cachexia, coma, and death) may become evident [2][3][4]. Trypanosome parasites have a dense surface coat constituted by the variant surface glycoprotein (VSG). These immunogenic coat proteins, presented at the cell surface as homodimers and anchored in the membrane through glycosylphosphatidylinositol, constitute pathogen-associated molecular patterns (PAMPs) that are recognized by pattern-recognition receptors (PRRs) of innate immunity [2,4].
The innate immune response includes macrophages (MΦ), which can engulf foreign antigens through endocytosis. When soluble parasite factors and VSG interact with MΦ, they become classically activated (M1-MΦ), synthesize reactive oxygen intermediates (ROI), produce nitric oxide (NO), and release proinflammatory cytokines such as tumor necrosis factor (TNF)-α and interleukin (IL)-6 (type I cytokines) [2,5]. To reduce inflammation, trypanosome blood forms release components such as Trypanosoma brucei-derived kinesin heavy chain (TbKHC-1) and adenylate cyclase (ADC), which upregulate IL-10 production and thereby prevent TNF-α expression [6,7], leading to the differentiation of alternatively activated MΦ (M2-MΦ) and the production of anti-inflammatory cytokines [5]. The inflammatory response is critical for parasite control in the early stage of infection [5], and the switch from M1-MΦ to M2-MΦ seems to occur about four weeks after infection, when the patient is in the late stage of disease [8].
Together with dendritic cells, MΦ are antigen-presenting cells (APC), establishing a bridge with adaptive immunity. Parasite antigens complexed with class I molecules of the major histocompatibility complex (MHCI) are presented to CD8 + T cells, and parasite antigens bound to class II molecules of the major histocompatibility complex (MHCII) are recognized by CD4 + T cells. Both CD4 + and CD8 + T cells are crucial in orchestrating the host adaptive immune response against T. b. brucei parasites. However, the successive peaks of parasitemia, each exhibiting a specific VSG, induce continuous activation of T cell clones, which leads to cell exhaustion and then to immunosuppression, and consequently to impaired parasite control [9].
Like mammal cells, trypanosomes also secrete extracellular vesicles (EV), nanovesicles that differ in size and carry parasite molecular components such as proteins, lipids, and nucleic acids [10]. The parasite T. brucei releases EV (TbEVs) that seem to play a role in both parasite-parasite and parasite-host interactions, influencing the host immune response [11]. Moreover, TbEVs appear to incorporate different flagellar proteins, acting as virulence factors, and to be highly enriched in oligomers such as tetraspanins (a broadly expressed superfamily of transmembrane glycoproteins), known to be involved in antigen presentation, T cell signalling and activation, and in MHCI and MHCII generation. It has also been demonstrated that, upon fusion with erythrocytes, these TbEVs are responsible for the removal of red cells from the host bloodstream [12].
For the parasite to survive in the mammal host and be successfully transmitted to the insect vector, ensuring completion of the T. brucei life cycle, the infected host must develop a balanced immune response that prevents killing of the host while still allowing parasite replication. Despite all the studies carried out to understand the immune mechanisms associated with this parasitic disease, the exact role played by TbEVs in the host immune response remains elusive. Thus, this work explores the effect of TbEVs on the immune activation of mouse mononuclear cells.
Experimental Design
To explore the effect of TbEVs on the host immune response, an experimental design comprising three main steps was established. The first step aimed to examine the shape, size, and density of purified TbEVs using electron microscopy and nanoparticle tracking analysis. The second step aimed to investigate the innate activity of MΦ stimulated by TbEVs, including their microbicidal and antigen-presentation potential, and the third step aimed to assess the differentiation of T cell subsets related to effector and regulatory adaptive immune responses.
Mouse MΦ-like cells were exposed to T. b. brucei trypomastigotes and stimulated with TbEVs, and BALB/c mouse peripheral blood mononuclear cells (PBMC) were exposed to T. b. brucei trypomastigotes and stimulated with TbEVs and parasite antigen (Ag). Type I and type II MΦ activity was analyzed by NO and urea production using colourimetric assays. The potential of MΦ to present parasite antigens to lymphocytes was indirectly evaluated through the expression and density of the surface molecules MHCI and MHCII by flow cytometry. T lymphocyte subsets were immunophenotyped by flow cytometry through the surface expression and density of CD3 and CD25 molecules and the intracellular expression of FoxP3. In parallel, resting cells, MΦ stimulated by phorbol myristate acetate (PMA) and PBMC stimulated by concanavalin A (ConA) were also evaluated. Furthermore, the data obtained were subjected to statistical analysis.
BALB/c Mice
Six to eight-week-old male BALB/c Mus musculus mice were purchased from the Instituto Gulbenkian de Ciências (IGC, Lisbon, Portugal) and maintained in the IHMT animal facility, in sterile cabinets with sterile food and water ad libitum. Mice were used to recover Trypanosome blood forms and to isolate PBMC. The animals were handled according to the Portuguese National Authority for Animal Health (Ref. 0421/000/000/2020, 23 September 2020, DGAV-Direção Geral de Alimentação e Veterinária), in conformity with the institutional guidelines and the experiments performed in compliance with Portuguese law (Decree-Law 113/2013), EU requirements (2010/63/EU), and following the recommendations of the Federation of European Laboratory Animal Science Associations (FELASA).
Trypanosoma brucei brucei Parasites
Peripheral blood of BALB/c mice infected with T. brucei brucei strain G.V.R. 35 was collected (Supplementary Videos S1 and S2) and treated with ammonium-chloride-potassium lysis buffer. Trypanosoma blood forms were then purified using diethylaminoethyl (DEAE)-cellulose columns [13] and maintained in Schneider Drosophila medium (Sigma-Aldrich, Hamburg, Germany) supplemented with 10% (v/v) heat-inactivated fetal bovine serum (hiFBS) free of extracellular vesicles (FBS-exofree, Thermo Fisher Scientific, Waltham, MA, USA) at 24 °C. Parasite morphology and motility were checked every day by direct microscopy, and parasite topography was evaluated by scanning electron microscopy (SEM, SEM-UR-LBDB Hitachi SU8010, High-Technologies Corporation, Ibaraki, Japan). TbEVs were purified from the supernatant of cultures exhibiting motile trypomastigotes.
T. b. brucei Extracellular Nanovesicles and Parasite Crude Antigen
Trypomastigotes harvested from cultures by centrifugation at 2000× g for 10 min at 4 °C were used to produce parasite antigen (Ag), and TbEVs were obtained from culture supernatants. Parasites were washed twice in phosphate-buffered saline (PBS) with 2 mM ethylenediaminetetraacetic acid (EDTA), resuspended in PBS, and then disrupted by eight freeze-thawing cycles ranging from −20 °C to room temperature. After centrifugation, the protein content (mg·mL−1) was determined using a Nanodrop 1000 spectrophotometer (Thermo Scientific, Waltham, MA, USA), and Ag was preserved at −20 °C. Supernatants of trypomastigote cultures incubated for 48 h were filtered through 0.2 µm syringe filters. PBMC were isolated from healthy (non-infected) BALB/c mice by density gradient [14]. Briefly, cells at the Histopaque-1077 (Sigma-Aldrich) and plasma interface were removed and washed three times in PBS by centrifugation at 370× g for 10 min at 4 °C. Then, the supernatant was discarded and the PBMC-containing pellet was resuspended in PBS. Cell viability was evaluated using the trypan blue staining method, and cell concentration was estimated in a Neubauer counting chamber by direct microscopy.
Stimulation of Macrophages and PBMC by TbEVs
MΦ (2 × 10⁶ cells per well) and PBMC (1 × 10⁵ cells per well) were plated in a sterile 96-well plate with complete RPMI medium and separately exposed to viable parasites (3 parasites per cell) or stimulated with TbEVs (10 µg·mL−1). PBMC were also stimulated with T. b. brucei soluble antigen (10 µg·mL−1). MΦ plates were incubated for 24 h and PBMC plates for 72 h at 37 °C in a humidified atmosphere with 5% CO2. In parallel, unstimulated cells, used as the negative control, and PMA (Promega, Madison, WI, USA)-stimulated MΦ and ConA (Sigma-Aldrich)-stimulated lymphocytes, used as the positive controls, were also incubated.
Nitric Oxide and Urea Production by Macrophages
Supernatants of MΦ exposed to viable parasites, stimulated by TbEVs or PMA, or left non-stimulated (resting MΦ) were collected and used for the quantification of urea with the commercial QuantiChrom™ Urea Assay Kit DIUR-100 (BioAssay Systems, Hayward, CA, USA) and for the indirect measurement of NO levels through the detection of NO2− and NO3− using the Nitrate/Nitrite Colorimetric Assay (Abnova, Walnut, CA, USA). Both assays were performed according to the manufacturers' instructions.
Macrophage and Lymphocyte Immunophenotyping
Cells (MΦ and PBMC) exposed to motile parasites or stimulated as described above were harvested from the plates and washed with PBS. MΦ were labelled with a FITC-conjugated mouse anti-MHCI (H-2Kb) monoclonal antibody and a PE-conjugated mouse anti-MHCII (I-A/I-E) monoclonal antibody (BioLegend, San Diego, CA, USA).
PBMC were resuspended in PBS with 0.5% hiFBS and 2 mM EDTA, and magnetic microbeads coated with a mouse anti-CD8a (Ly-2) monoclonal antibody (Miltenyi Biotec, Bergisch Gladbach, Germany) were added. Cells were incubated for 15 min at 4 °C protected from light and then washed. CD8+ cells were sorted by positive selection using a MACS® system (Miltenyi Biotec), while CD8− cells were eluted.
Cell acquisition was performed in a 4-colour flow cytometer (BD FACSCalibur, BD Biosciences, USA) and data were analyzed using Flowjo V10 (Tree Star Inc., Ashland, OR, USA). FSC-H vs. SSC-H gate was used to remove debris and pyknotic cells in the lower left quadrant of the plot as well as the large (off-scale) debris found in the upper right quadrant. Singlet gate was used to define the non-clumping cells based on pulse geometry FSC-H vs. FSC-A, eliminating the doublets. CD3 + , CD25 + , and FoxP3 + cell subsets were defined using fluorescence minus one control (FMOs).
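For readers less familiar with cytometry workflows, the debris and doublet exclusion described above amounts to a pair of boolean filters on the scatter signals. The sketch below illustrates that logic in Python on exported per-event data; it is not the FlowJo workflow used in the study, and the threshold values are illustrative placeholders rather than the gates actually applied.

```python
import numpy as np

def gate_singlet_cells(fsc_h, fsc_a, ssc_h,
                       fsc_min=200.0, ssc_min=150.0, doublet_ratio_max=1.15):
    """Boolean-mask version of the gating steps described above.

    fsc_h, fsc_a, ssc_h: 1-D arrays of per-event scatter signals.
    All threshold values are illustrative, not those used in the study.
    """
    fsc_h, fsc_a, ssc_h = map(np.asarray, (fsc_h, fsc_a, ssc_h))
    # Step 1: FSC-H vs SSC-H gate removes debris/pyknotic events (lower left)
    # and off-scale large debris (upper right).
    not_debris = (fsc_h > fsc_min) & (ssc_h > ssc_min)
    on_scale = (fsc_h < 0.99 * fsc_h.max()) & (ssc_h < 0.99 * ssc_h.max())
    # Step 2: singlet gate on pulse geometry (FSC-A vs FSC-H); doublets show a
    # disproportionately large area relative to their height.
    singlets = (fsc_a / np.maximum(fsc_h, 1e-9)) < doublet_ratio_max
    return not_debris & on_scale & singlets  # keep-mask for downstream analysis
```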
To validate the composition of the magnetically separated cell fractions, samples of unprimed CD8+ and CD8− cell fractions were stained with an anti-CD3 monoclonal antibody and a PerCP-conjugated anti-CD4 monoclonal antibody (BioLegend) and evaluated by flow cytometry. In the CD8+ cell fraction, less than 20% of cells presented a CD3+CD8− phenotype, indicating that this fraction was mainly constituted of CD8+ T cells, and in the CD8− cell fraction ≈85% of the cells showed a CD3+CD4+ phenotype, consistent with a predominance of CD4+ T cells.
Statistical Analysis
Data analysis was performed using GraphPad Prism version 8 (GraphPad Software, San Diego, CA, USA). After verification by the Kolmogorov-Smirnov test that the data of the current study did not follow a normal distribution, significant differences were determined using the non-parametric Wilcoxon matched-pairs signed-rank test. A 5% (p < 0.05) significance level was used to evaluate statistical significance. The surface area and perimeter of TbEVs are represented by violin plots (median, interquartile ranges and distribution). Cell results from at least three independent experiments evaluated in triplicate are expressed as whiskered box plots, indicating the median, maximum and minimum values, or as bar graphs (mean and standard error). The relative importance of TbEVs in cell activity was assessed by principal component analysis (PCA), an exploratory multivariate statistical method, using Past 4.03 (Natural History Museum, University of Oslo, Oslo, Norway). PCA organizes the data into principal components, which can be visualized graphically with minimal loss of information, making visible the differences between the activation of cells exposed to T. b. brucei parasites and cells stimulated with TbEVs or Ag. K-means cluster analysis was also used for cluster validation.
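As a rough illustration of this analysis pipeline, the sketch below pairs the Wilcoxon matched-pairs test with a PCA/K-means exploration in Python (scipy and scikit-learn) rather than GraphPad Prism and Past; the arrays are placeholder values, not study data, and the readout names are only examples.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Paired readouts for one variable (e.g., NO levels in resting vs. TbEV-stimulated
# macrophages); the numbers are illustrative placeholders, not study data.
resting = np.array([1.2, 0.9, 1.1, 1.0, 1.3, 1.1])
tbev_stimulated = np.array([2.4, 2.1, 2.6, 2.2, 2.8, 2.5])

# Non-parametric Wilcoxon matched-pairs signed-rank test (alpha = 0.05),
# chosen because the data did not pass the normality check.
stat, p = wilcoxon(resting, tbev_stimulated)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")

# Exploratory multivariate view of cell activity: rows = samples, columns =
# readouts (NO, urea, %MHCI+, %MHCII+, ...); random placeholder matrix here.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 4))
pc_scores = PCA(n_components=2).fit_transform(X)                            # PCA projection
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)   # cluster validation
```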
T. b. brucei Trypomastigote Forms Release Extracellular Vesicles
Cultured-derived T. b. brucei parasites observed by direct microscopy and by scanning electron microscopy showed an elongated body, a flagellum (Figure 1A), and an undulating membrane (Figure 1B), which are recognised as morphological characteristics of the trypomastigote form. Besides, cultured-derived parasites exhibited nanovesicles that seem to bud from the cell surface and flagellum (Figure 2A,B).
The suspension of purified nanovesicles derived from trypomastigote forms of T. b. brucei evidenced a mean protein concentration of 5.107 µg·mL−1 (≈35.28), and NTA analysis revealed that the size of TbEVs was within a range of 50 nm to 350 nm.
Two peaks of higher TbEV concentration, with sizes around 100 and 170 nm, were detected, while TbEVs bigger than 200 nm seem to be rare (Figure 2C). The topographic analysis placed in evidence vesicles with a spherical shape and a smooth surface (Figure 3A), with perimeters ranging between 335 and 3311 nm (mean 1634 nm ± 94.07) (Figure 3B) and surface areas between 9 and 872 nm² (mean 255.2 nm² ± 25.40) (Figure 3C). These data also indicate that cultured-derived T. b. brucei parasites release two main classes of TbEVs: small TbEVs (with surface area and perimeter around 65 nm² and 905 nm, respectively) and large TbEVs (with surface area and perimeter around 346 nm² and 2088 nm, respectively).
T. b. brucei and TbEVs Induced Mouse MΦ to Produce NO and Urea
MΦ activity after stimulation with TbEVs was examined by the ability of cells to metabolize arginine ( Figure 4).
Resting MΦ incubated for 24 h (negative control) exhibited residual NO synthesis (Figure 4B). However, after PMA stimulation (positive control), cells revealed a significant NO increase (p = 0.0078), confirming the viability and functionality of these cells, which could transform arginine into NO through the enzymatic activity of NOS2. MΦ stimulated by TbEVs (p = 0.0313) or exposed to T. b. brucei parasites (p = 0.0078) (Figure 4A) showed significant increases in NO levels when compared with resting MΦ. MΦ exposed to parasites exhibited the highest NO production, which was significantly different from that of TbEV-exposed MΦ (p = 0.0313).
Low levels of urea were detected in resting MΦ (Figure 4C). However, PMA-stimulated cells showed a significant increase in urea production (p = 0.0156), indicating that these cells can convert arginine into urea through arginase enzymatic activity. MΦ stimulated by TbEVs or exposed to parasites also exhibited significantly higher levels of urea when compared with resting MΦ (p = 0.0313).
Although both T. b. brucei parasites and TbEVs promote mouse MΦ to produce NO and release urea, parasites elicit the highest secretion.
Figure 4. Production of nitric oxide and urea by macrophages exposed to TbEVs. MΦ (arrows) exposed to trypomastigote forms were observed under an inverted light microscope ((A), size bar 20 µm, ×100 magnification). NO (B) and urea (C) production were evaluated in MΦ exposed to T. b. brucei (Tbb) parasites, in MΦ stimulated by TbEVs and PMA, and in resting MΦ (negative control, NC). Results of three independent experiments (n = 12) and two replicates per sample are represented by whiskered box plots, including median, minimum, and maximum values. The non-parametric Wilcoxon matched-pairs signed-rank test was used for statistical comparisons (p < 0.05). * (p < 0.05) and ** (p < 0.01) indicate statistical significance.
TbEVs Direct the Differentiation of MHCI + , MHCII + and MHCI + MHCII + Macrophage Subsets
To indirectly evaluate the possible presentation of parasite antigens by mouse MΦ, the expression of MHC molecules by MΦ exposed to TbEVs for 24 h and the frequency of MHCI+ and MHCII+ MΦ subsets were analyzed by flow cytometry (Supplementary Figure S1 and Figure 5).
TbEVs seemed to be responsible for a significant expansion of the MHCI+MHCII+ MΦ subset (p = 0.0313) (Figure 5C). Also in this case, exposure to parasites caused a significant reduction of this cell subpopulation when compared with resting MΦ and TbEV-stimulated MΦ (p = 0.0313).
When analyzing the levels of fluorescence intensity, no differences were observed between resting MΦ, MΦ exposed to parasites and TbEV-stimulated MΦ. Only PMA-stimulated MΦ evidenced a considerable increase of MHCI (p = 0.0313).
Altogether, the results indicated that TbEVs enhance the expression of MHC class I and class II in MΦ, favouring antigen presentation, but did not increase the surface density of these molecules on the cell surface. On the other hand, T. b. brucei parasites promote a reduction of the MHCI+, MHCII+ and MHCI+MHCII+ MΦ subsets, possibly avoiding antigen presentation and the activation of cytotoxic and helper T cells.
TbEVs Promoted Specific Mouse Macrophage Activation
To identify correlations between the influence of TbEVs and T. b. brucei parasites on mouse MΦ, the data were analysed by PCA. This statistical analysis indicated that the overall influence of TbEVs and PMA on MΦ activity is correlated (Figure 6A). On the contrary, the effects of T. b. brucei on mouse MΦ were distinct from those of TbEVs and from resting MΦ. These results were confirmed by cluster analysis, which aggregated the effects of TbEVs and PMA on MΦ in the same cluster (cluster 3). MΦ activity caused by parasite exposure was spread across clusters 2 and 3, with most of the effects in cluster 2, whereas resting MΦ were mainly localized in cluster 1 (Figure 6B).
Therefore, this analysis highlights that TbEVs activation of rodent MΦ can be different from cell activation induced by T. b. brucei.
TbEVs Favored the Differentiation of CD4+ T Cells and Incremented the Surface Expression of CD3 Molecules
The frequency of CD3 + cells ( Figure 7) and expression level of CD3 molecules were examined in mononuclear blood cells exposed to parasites or stimulated by TbEVs or parasite Ag (Supplementary Figure S2).
In both the CD4+ (Figure 7A) and CD8+ (Figure 7B) cell fractions, the frequency of CD3+ cells was significantly lower in cells exposed to T. b. brucei than in unprimed cells (p = 0.0078). However, ConA stimulation caused a significant increase in CD3+ cells in both cell fractions (p = 0.0078). In both cell fractions, cells exposed to parasites were also statistically different from TbEV- and Ag-stimulated cells (p = 0.0313). TbEVs were responsible for a significant increase of the CD3+ cell subset in the CD4+ cell fraction (p = 0.0078) in comparison with unprimed and parasite-exposed cells.
The fluorescence intensity of CD3-labelled cells in both cell fractions significantly increased (p = 0.0313) in Ag and TbEVs stimulated PBMC when compared to unprimed cells. Stimulation by ConA also induced a significant increase (p = 0.0313) in CD3-fluorescence intensity of CD4 + cell fraction. On the other hand, when compared to unprimed cells and TbEVs and Ag-stimulated cells, exposure to T. b. brucei cultured-derived parasites weakened CD3 fluorescence in CD4 + and CD8 + cell fractions (p = 0.0313).
Altogether, these results indicated that TbEV stimulation seemed to trigger the expansion of the CD4+ (CD3hi) T cell subset and to induce the expression of CD3 molecules on CD8+ T cells. In contrast, T. b. brucei parasites impaired the differentiation of CD8+ and CD4+ T cells and promoted the downregulation of CD3 molecules at the cell surface.
TbEVs Led the Expansion of Regulatory CD4 + T Cells and FoxP3 + CD4 + T Cell Subset and Enhanced Surface CD25 + and Intracellular FoxP3 + Molecules
To assess the effect of TbEVs on CD4+ and CD8+ T cell subsets, mononuclear blood cells were exposed to cultured parasites or stimulated by TbEVs or parasite Ag. The frequency of CD4+ T cells expressing CD25 (Figure 8) was evaluated, as well as the density of CD25 molecules on the cell surface and of intracellular FoxP3 molecules in stimulated cells. Since parasite-exposed cells evidenced a low frequency of CD3+ cells, only the expression and density of CD25 were examined for them.
Stimulation of PBMC by ConA led to a significant expansion (p = 0.0313) of CD4 + T cell subset ( Figure 8B). On the other hand, parasites, TbEVs or Ag did not seem to affect the CD4 + (FoxP3 − CD25 − ) T cell subset.
TbEVs and parasites promoted a higher fluorescence intensity of CD25-labeled cells when compared with unprimed lymphocytes (p = 0.0313). Furthermore, T. b. brucei parasites induced a higher expression of CD25 molecules in comparison to TbEVs and Ag stimulated cells (p = 0.0313). Significant overexpression of intracellular FoxP3 molecules was also found in CD4 + T cells after stimulation with TbEVs or Ag (p = 0.0331).
These results indicated that TbEVs and parasite Ag favoured expansion of regulatory CD4 + T cell subset (FoxP3 + CD25 + CD3 + CD4 + , T reg cells) and FoxP3 + CD4 + T cells. Moreover, T. b. brucei parasites enhanced the frequency of CD25 + CD4 + T cells. Among CD4 + T cells, the overexpression of α-chain of IL2-receptor (CD25) was induced by parasites and TbEVs, and the increased expression of FoxP3 was elicited by TbEVs and Ag.
TbEVs Direct the Expansion of Effector CD8 + T Cells and CD8 + T Cells Expressing FoxP3 + Phenotype and Increase Surface CD25 and Intracellular FoxP3 Molecules
CD8 + T cell subsets were examined by estimating the frequency of CD8 + T cells expressing CD25 in mononuclear blood cells exposed to parasites and stimulated by TbEVs or parasite Ag (Figure 9). The expression of FoxP3 was evaluated in stimulated cells. Furthermore, the potential of these cells to recognize IL-2 and regulate the cell immune activity was indirectly assessed by the density of surface CD25 and intracellular FoxP3 molecules.
TbEVs and Ag induced a significant expansion of the CD8+ (CD25−FoxP3−) T cell subset (p = 0.0156) and promoted the retraction of the CD25−FoxP3+ CD8+ T cell subset.
CD8 + T cells exposed to T. b. brucei parasites or stimulated by ConA, TbEVs, and parasite Ag showed a significant increase of CD25 molecules (p = 0.0313) when comparing with unprimed cells. Furthermore, FoxP3 fluorescent intensity also increased in CD8 + T cells stimulated by TbEVs (p = 0.0313).
Altogether, the results indicated that TbEVs mainly triggered the expansion of effector CD8+ T cells and of CD8+ T cell subsets expressing FoxP3, associated with a higher density of FoxP3 molecules. The overexpression of the α-chain of the IL-2 receptor (CD25) induced by parasites can be mainly associated with the expansion of the CD25+CD8+ T cell subset.
Figure 9. CD25+ and FoxP3+ CD8+ T cell subsets induced by T. b. brucei EVs. Mouse lymphocytes exposed to T. b. brucei parasites (Tbb), stimulated by Ag, TbEVs, and ConA, and unprimed cells (negative control, NC) were labelled with CD3, CD25 and FoxP3 monoclonal antibodies and evaluated by flow cytometry. CD3+ cells were gated and the frequencies of the CD25+FoxP3+ (A), CD25−FoxP3− (B), CD25+FoxP3− (C), and CD25−FoxP3+ (D) cell subsets were estimated. In consequence of the low levels of CD3+ cells, the frequency and density of FoxP3 were not considered in T. b. brucei-exposed PBMC. Results of at least three independent experiments (n = 6) performed in duplicate are represented by whiskered box plots with median, minimum and maximum values. The non-parametric Wilcoxon matched-pairs signed-rank test was used for statistical comparisons. * (p < 0.05) and ** (p < 0.01) indicate significant differences.
TbEVs and Parasite Ag Triggered Related Influence on the Phenotype of BALB/c Mice T Cells
PCA analysis of CD4+ and CD8+ T cells indicated a positive correlation between the effects of TbEVs and parasite Ag on T cells (Figure 10A), which contrasted with the effects of T. b. brucei. These results were confirmed by cluster analysis, which showed the effects of Ag and TbEVs grouped in cluster 3. On the other hand, the influence of parasites on T cells was grouped in cluster 1. Furthermore, ConA stimulation outcomes were localized in cluster 2, and the intrinsic activity of unprimed T cells fell mainly within cluster 3 (Figure 10B). Overall, the effect of TbEVs on BALB/c mouse T cells seemed to be similar to the stimulation caused by parasite Ag and slightly different from unprimed cells, but completely different from that of T. b. brucei parasites.
Discussion
African trypanosomes have been extensively studied since they are responsible for severe diseases in both medical and veterinary contexts. To survive, this extracellular protozoan deploys several mechanisms to evade the mammalian immune system. Also, to ensure the completion of its life cycle, the parasite needs to avoid host mortality during the hemolymphatic phase. Therefore, a balance between the level of infection (parasitemia) and the intensity of the inflammatory immune response must be achieved in the host. Since safe and efficient anti-trypanosomal drugs and vaccines are lacking, many studies have been performed to understand the mechanisms associated with the host immune response directed against T. brucei infection. Although different approaches have been applied and numerous findings reported, some questions remain unanswered, and some mechanisms are not entirely understood or remain controversial.
Since EVs released by eukaryotic cells seem to play a role in intercellular communication, interfering with several cellular processes such as the activation of microbicidal processes and the generation of immune mediators by changing gene expression and affecting signalling pathways, the potential of trypomastigote-derived EVs to influence the immune activity of innate and adaptive immune cells was explored. The findings of the current study indicate that the TbEVs shed by trypomastigotes are enriched in proteins and constitute a heterogeneous population of spherical vesicles mainly composed of smaller (<0.17 µm) and larger (>0.17 µm) EVs, which is in line with previous findings [12].
Since expansion of the MΦ population in different organs and tissues, such as the liver, spleen, and bone marrow, has been described in T. brucei-infected mice, the intercommunication of TbEVs with innate immune cells was examined. It has been reported that inducible nitric oxide synthase (iNOS) peaks six days post-infection and that oxidized L-arginine generates L-citrulline and NO [15]. In the current study, TbEVs, like cultured T. brucei parasites, triggered mouse MΦ to produce NO, suggesting a role for TbEVs at the early stage of sleeping sickness. NO is a highly regulated effector molecule, which participates in several physiological and immune processes and can inactivate pathogens, including T. brucei parasites [15]. However, high NO levels can be a disadvantage to the host, given the large spectrum of cell disorders associated with NO activity. In sleeping sickness, the persistence of M1-MΦ is related to anaemia and the systemic immune response [16][17][18][19].
To counteract the possible harmful effect of NO in the host and perpetuate parasite replication, African trypanosomes induce the differentiation of M2-MΦ. L-Arginine can be hydrolyzed by arginase, generating ornithine, urea, proline and polyamines [20]. A previous study performed in T. b. brucei-infected mice indicated that M1-MΦ predominate in the early stage of infection while M2-MΦ prevail at a later infection stage (1-4 weeks) [18]. In the current study, it was found that TbEVs and cultured parasites induced urea and NO production by mouse MΦ. These findings reinforce the role of TbEVs as a MΦ modulator in the early stage of infection. By inducing the differentiation of mouse MΦ expressing M1 and M2 phenotypes, TbEVs seem to strengthen parasite activity in sustaining a balance between pro-inflammatory and anti-inflammatory responses. Moreover, by simultaneously ensuring the sustained production of polyamines, which are essential nutrients for parasite survival and replication [7,21], TbEVs can facilitate the establishment of a chronic infection. The kinesin-1 heavy chain (TbKHC-1) released by African trypanosomes has been described as an activator of the arginine pathway, leading to the production of polyamines [7]. Taking the above considerations into account, TbKHC-1 may form part of the TbEV cargo.
Contrary to T. b. brucei parasites, which cause a marked reduction in the frequency of MHCI+ and MHCII+ MΦ, affecting the recognition of parasite antigens by T lymphocytes and compromising the adaptive immune response, TbEVs induce the differentiation of MHCI+ and MHCII+ MΦ and of double-positive MΦ (MHCI+MHCII+), indicating that these cells can establish a link with acquired immunity by presenting parasite antigens to helper and cytotoxic T cells. Even so, the density of MHCI and MHCII molecules on the membrane surface of MΦ remained similar to that of resting MΦ. Though not fully understood, a similar inhibition of MHC expression avoiding antigen presentation was reported in a study carried out on T. cruzi, an intracellular parasite responsible for Chagas disease or American trypanosomiasis [22]. Therefore, TbEVs, which appear to exert an action contrary to the parasite, may cause the activation of adaptive immunity at least to some extent, activating both CD4+ and CD8+ T cells.
During the course of infection, T. brucei releases the stumpy induction factor (TSIF), which has been associated with the impairment of T cell proliferation [23]. A previous study performed in patients infected with T. b. gambiense showed that T cell counts were considerably lower when compared with controls [24]. Thus, considering the high importance of cellular activation in orchestrating an integrated host immune response against sleeping sickness, the effect of TbEVs on T cell expansion was also examined. In agreement with previous findings, a marked reduction of both CD3+CD4+ and CD3+CD8+ T cells caused by T. b. brucei parasites was found in the current study. However, TbEVs favoured the expansion of the CD4+ T cell subpopulation and induced an increase of CD3 expression in both the CD4+ and CD8+ subsets (CD3hi CD4+ and CD3hi CD8+ T cells). The CD3 complex is a T cell co-receptor responsible for intracellular signalling. Altogether, these findings indicate that, contrary to T. b. brucei, TbEVs may have the necessary conditions to stimulate T cells. However, in a previous study, Boda and colleagues [25] reported that effector CD8+ T cells were significantly lower in patients with African trypanosomiasis, which corroborates our findings.
In the current study, T. b. brucei parasites induced the expansion of CD4+ and CD8+ T cell subpopulations expressing a CD25 phenotype and favoured the expression of CD25 molecules (CD25hi CD4+ and CD25hi CD8+ T cells). CD25 is the α chain of the IL-2 receptor, which is expressed by regulatory T (Treg) cells. CD25 is required for interleukin (IL)-2 signalling, which mediates T cell activation and proliferation. To become fully activated, CD8+ T cells require the help of CD4+ T cells mediated by IL-2. However, Kalia et al. [26] found that CD25hi CD8+ T cells can perceive proliferation signals (IL-2) and differentiate into short-lived effector cells, and Olson et al. [27] reported that the lymphocyte triggering factor (TLTF) released by T. b. brucei specifically elicits CD8+ T cells to generate IFN-γ. Moreover, it has been reported that activated CD4+ T cells secrete IFN-γ, and it is recognised that this cytokine functions as a T. b. brucei growth factor [28]. Thus, T. b. brucei parasites may favour the expansion of CD25+ T cells to ensure their survival.
The FoxP3 nuclear factor is a regulator of gene expression that suppresses the function of the NFAT and NF-κB nuclear factors, preventing the expression of pro-inflammatory cytokine genes, including IL-2. CD4+ Treg cells, which constitutively express FoxP3, can suppress leukocyte effector activity, contributing to the maintenance of immune homeostasis. These regulatory cells also exhibit a high expression of CD25 molecules. In African trypanosomiasis, the expansion of Treg cells seems to occur from parasite establishment to the chronic stage and is associated with parasite tolerance (reviewed in [29]); in trypanotolerant C57BL/6 mice, the expansion of the CD25+FoxP3+CD4+ T cell subset was demonstrated after the first peak of parasitemia [30]. On the other hand, the lack of expansion of Treg cells is associated with tissue damage and impaired survival of infected mice [31]. In the current study, TbEVs promoted the expansion of CD4+ and CD8+ Treg cells and of the FoxP3+CD4+ T cell subset. According to Zelenay et al. [32], CD4+ T cells expressing a FoxP3+ phenotype are committed Treg cells able to regain CD25 expression. Thus, TbEVs seem to induce the differentiation of CD4+ and CD8+ Treg cells and of unconventional CD4+ Treg cells. Previous studies have reported that unconventional Treg cells can also exert regulatory functions [33]. During chronic parasite infection, uncontrolled inflammation is one of the most noticeable clinical features and can become lethal if not controlled by Treg cells [34,35]. Therefore, during infection, TbEVs can lead to the establishment of a pool of T cells with a regulatory phenotype able to balance excessive inflammation.
Altogether, these findings indicate that T. b. brucei parasites can activate the metabolization of arginine by MΦ [36] and modulate T cells, ensuring the production of polyamines and IFN-γ, both critical for parasite survival. At the same time, this parasite avoids antigen presentation by MΦ and T cell activation.
However, the findings of the present study place in evidence that TbEVs can establish direct communication with cells of innate and adaptive immunity, and that the effect of TbEVs on these cells is different from that of the parasite itself. TbEVs can elicit parasite antigen presentation to CD4+ and CD8+ T cells, possibly leading to the activation of the cellular immune response, in addition to the bidirectional activation of mouse MΦ, contributing to the release of factors essential to parasite growth and also to parasite destruction. Moreover, TbEVs also seem to have a direct effect on CD4+ T lymphocytes, triggering the expression of T cell co-receptors, which are key players in intracellular signalling. The expression of the nuclear factor FoxP3 was also stimulated by TbEVs, guiding the differentiation of CD4+ and CD8+ Treg cells and imposing regulation on the host inflammatory immune response, which is a hallmark of sleeping sickness. Since an unbalanced inflammatory response against the parasite can become lethal to the host, the role of Treg cells in protecting from collateral tissue damage is crucial. On the other hand, TbEVs induced the differentiation of effector CD8+ T cells and drove the overexpression of FoxP3 molecules in CD4+ T cells, which are involved in maintaining immune homeostasis. Therefore, parasites and TbEVs seem to display complementary effects on the host immune response that can ensure parasite survival, delaying disease severity and the eventual death of the host. Furthermore, TbEVs represent a source of biomarkers that can open new avenues for a better diagnosis of sleeping sickness and for the development of prophylactic measures to control the disease caused by T. brucei in sub-Saharan Africa.
|
v3-fos-license
|
2022-06-26T15:08:41.630Z
|
2022-06-24T00:00:00.000
|
250028952
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/1405134.pdf",
"pdf_hash": "0b0b11e63c1fcf74e54b03c724120e914379c08d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44188",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a81930055168d999b87f6c6395dced8c6c2ae63f",
"year": 2022
}
|
pes2o/s2orc
|
Intervention Effect of Evidence-Based Nursing on Postoperative Recovery of Vacuum Sealing Drainage in Patients with High Perianal Abscess with Magnetic Resonance Imaging Sequence Images
The purpose of this study was to evaluate the intervention effect of evidence-based nursing (EBN) on the postoperative recovery of patients with high perianal abscess after vacuum sealing drainage (VSD), based on magnetic resonance imaging (MRI) sequence images. Sixty patients with high perianal abscess were selected: 30 patients before VSD were selected as the control group and received routine nursing, and 30 patients after VSD were selected as the observation group and received EBN. The detection rates of various types of perianal abscess with different sequence combinations were studied, and the effects of EBN on the pain and anal function scores of perianal abscess patients were analyzed. Anal function and defecation were assessed, and postoperative complications were calculated. Different combinations of MRI sequences achieved high detection rates for intersphincter abscess and ischioanal abscess. The observation group had better pain relief and anal function recovery. The complication rate of the observation group was 16.67%, which was significantly lower than that of the control group (P < 0.05). It was confirmed that different MRI sequence combinations had high detection rates for intersphincter abscess and ischioanal fossa abscess, and that EBN can promote the recovery of anal function and reduce complications in patients with perianal abscess.
Introduction
Perianal abscess is an acute suppurative infection of the anal glands which develops into the perianal space through the anal gland duct to form an abscess, and it is prone to form an anal fistula after the abscess ruptures [1]. Traditional antibiotics can relieve the clinical symptoms of patients in the treatment of perianal abscesses, but cannot effectively prevent the formation of pus and the expansion of the abscess cavity [2]. Magnetic resonance imaging (MRI) is one of the most important examination methods for perianal abscesses. It has the advantages of being noninvasive, convenient, and accurate, and is widely accepted by patients. In clinical medicine, MRI, an advanced medical imaging technology, has been widely used and has become an indispensable means of clinical diagnosis, treatment, and research [3]. However, the lesion area is mainly found by observing two-dimensional image sequences, which to a certain extent depends on the doctor's experience. The combination of different MRI sequences can display the structure and characteristics of the lesions more comprehensively and intuitively, improving the detection rate and the accuracy of diagnosis. The main cause of perianal abscess is anal gland infection, and incision and drainage are the key to a cure. To ensure smooth drainage, the internal opening of the abscess must be left open to effectively prevent microbial invasion and reduce infection [4,5]. The most effective and common method for the clinical treatment of perianal abscess is surgery [6]. Vacuum sealing drainage (VSD) is common in the treatment of inflammatory diseases and has a good therapeutic effect, so it has been rapidly promoted and applied in clinical practice [7]. In recent years, VSD has been widely used for large-cavity drainage after thoracic, abdominal, and orthopaedic surgery, as well as for the drainage of infected wounds. In particular, VSD is highly effective in perianal abscess surgery, as it can reduce tissue edema, promote blood circulation, avoid wound contamination, reduce the probability of infection, and promote early healing of the wound [8]. The disadvantage of VSD is that no uniform optimal pressure value during treatment has been established, so its clinical implementation is somewhat arbitrary; poor control of the pressure value will affect the outcome. If the negative pressure is too high, it will cause bleeding and pain. If the negative pressure is too low to drain fully, the coagulable substances in the secretions tend to agglutinate and the drainage tube is prone to blockage. Appropriate negative pressure is therefore an important condition for ensuring the therapeutic effect.
Evidence-based nursing (EBN) is a new nursing model that provides high-quality nursing care through problem finding, observation, and active application [9]. In the EBN model, nursing staff formulate detailed and comprehensive care plans by combining scientific research conclusions with clinical experience and patient needs, and the resulting conclusions serve as the basis for clinical nursing decision-making. This process is an essential link in evidence-based medicine [10] and has shown good application effects in various areas of medical practice [11,12]. EBN can not only improve the efficiency of daily nursing operations but also improve the quality of nursing, promote patient recovery, and relieve pain and negative emotions, giving patients better comfort and satisfaction [13][14][15]. In recent years, EBN has developed rapidly, and clinical nurses have gradually developed evidence-based awareness and carried out EBN practice.
In this study, the effects of different MRI sequence combinations on the detection rates of various types of perianal abscesses were investigated. The pain, anal function, defecation, and complications of the two groups were compared, providing guidance and a reference for the nursing of patients with perianal abscess.
Research Objects
A total of 60 patients with high perianal abscess who received treatment in the hospital from April 2020 to March 2022 were selected as the research objects. Thirty patients before VSD were selected as the control group, including 18 males and 12 females, aged 18.43 to 49.75 years, with an average age of 31.78 ± 7.98 years. Thirty patients after VSD surgery were selected as the observation group and received EBN after surgery. In the observation group, there were 17 males and 13 females, aged between 18.22 and 50.17 years, with an average age of 31.12 ± 7.33 years. There was no statistical difference in the general data of patients between the groups (P > 0.05), indicating comparability. This study was approved by the ethics committee of the hospital, and all the enrolled patients were informed and consented. The inclusion criteria were as follows: the patients met the relevant diagnostic criteria of Ding's Coloproctology; the patients and their families voluntarily signed the informed consent; and ultrasound of the rectal cavity showed a high abscess. The exclusion criteria were as follows: the patients had systemic diseases or an allergic constitution, or were complicated with rectal or anal canal tumors or inflammatory bowel disease.
MRI Examination
A 1.5 T magnetic resonance scanner was used for image collection at 5-20 images per second using the Cartesian full-sampling method. The acquisition parameters were as follows: the time of repetition/time of echo was 155.35/1.04 ms, the field of view was 300 × 360 mm², the flip angle was 52°, the slice thickness was 7 mm, the matrix dimension was 160 × 192, and the imaging time was 300 s. First, sagittal T2-weighted imaging (T2WI) was performed through the midline of the patient's body to determine the position of the anal canal. Then, coronal and axial scanning of the anal canal was performed in axial T1-weighted imaging (T1WI), T2WI, and fat-suppressed T2WI (T2WI-FS); coronal T2WI and T2WI-FS; and sagittal T1WI and T2WI.
Methods
In the control group, routine nursing was adopted: the vital signs of the patients were monitored, the drainage tube was regularly checked to ensure that it remained unobstructed, the color and properties of the drainage fluid were closely observed, and drugs were administered as prescribed by the doctor.
In the observation group, the patients with perianal abscess were treated with EBN, which was carried out as follows. (1) Effective negative pressure was maintained. A negative pressure value between 200 and 300 mmHg is generally considered appropriate. Regular ward rounds were performed to check the dressing and drainage of the VSD: whether the negative pressure value was normal, whether the VSD dressing had collapsed, and whether the drainage tube kept its shape. The connections and sealing of the drainage tube and the tube itself were also checked, listening carefully for the hissing sound of an air leak so as to exclude air leakage; the seal was reinforced with a semipermeable membrane if necessary. (2) For drainage nursing, the vacuum connection tube should be fixed effectively to avoid kinking and compression. The patients were told to keep the tube draining smoothly when turning over. The disposable closed negative-pressure drainage bottle was replaced every day or when the drainage fluid exceeded 2/3 of its volume, and it was checked whether the drainage bottle was damaged and whether the tube was cracked. When replacing the drainage bottle, strict aseptic technique should be followed: the negative pressure should be turned off first and the drainage tube clamped with hemostatic forceps; the tube was then disconnected, and the connection of the drainage tube should be sterilized with 75% alcohol before reconnecting. The negative pressure was readjusted after the replacement, and it was observed whether the drainage was effective. The date and time of the replacement were marked. The drainage bottle should be fixed no higher than the wound, so as to avoid retrograde infection. The color, character, and quantity of the drainage fluid were closely observed on each shift and recorded in the nursing record sheet in time. When fresh blood was drained, active bleeding was suspected and should be reported to the doctor for timely treatment.
(3) Pain care was provided. Analgesia was given according to the duration and nature of the pain. The principles of VSD were explained to the patients, supplemented with stories of successful cases, to encourage them to build confidence. (4) For psychological care, the patients were cared for and comforted. Knowledge about perianal abscess and the influence of constipation on the body and on the postoperative incision was explained to the patients. Patients were instructed to take deep breaths during defecation to effectively relieve pain and eliminate their fear of defecation. (5) For rehabilitation, health education was carried out in classroom sessions, in which the etiology of perianal abscess, precautions during and after surgery, and postoperative complications were explained. Patients were guided to cultivate a healthy lifestyle and behavior. A diet plan was made on the basis of each patient's postoperative condition. The patients were instructed to eat more high-fiber foods, to avoid fishy, spicy, and stimulating foods, and to drink about 1000 mL of water every day to ensure smooth defecation. They were encouraged to move in bed 12 hours after surgery to avoid abdominal distension and constipation caused by long-term bed rest. The importance of regular bowel movements was explained to the patients to cultivate an awareness of regular defecation. The patients should concentrate during defecation, avoiding distraction, and should not squat for a long time, so as to avoid bleeding or edema. They should also not restrain or endure the urge to defecate; for those who could not defecate on their own, 20 mL of Kaisailu glycerine enema could be injected into the anus to help defecation. If the stools were hard in the rectum, a glycerin or soapy water enema could be used. According to the defecation situation of the patients, medicine could be taken as prescribed by the doctor, and the defecation situation after medication should be observed.
Observation Indicators
The detection rates of various types of perianal abscesses scanned by different sequence combinations were calculated. The sequence combinations mainly included axial T1WI + T2WI-FS, coronal T1WI + T2WI, coronal T1WI + T2WI-FS, axial T1WI + T2WI, and sagittal T1WI + T2WI. The main types of perianal abscess were intersphincter abscess, abscess of the ischioanal space, pelvic-rectal abscess, and abscess of the rectal wall. The visual analogue scale (VAS) was adopted for evaluating the pain degree of patients, with a score of 0-10; the higher the score, the more severe the pain. The anal functions of patients were evaluated using the Wexner anal incontinence score, which ranges from 0 to 20 points; the higher the score, the worse the anal function. The postoperative Verbal Rating Scale (VRS)-5 pain scores were also compared, where 0 indicated no pain and 5 indicated unbearable pain. The anxiety of the two groups of patients was compared using the Self-Rating Anxiety Scale (SAS): less than 50 points meant the patient had no anxiety, 50-59 points were assessed as mild anxiety, 60-69 points as moderate anxiety, and more than 70 points as severe anxiety; the lower the score, the better the psychological status of the patient. The defecation situation of the patients was also recorded and scored as 0 for smooth, self-managed defecation, 1 point for unsmooth defecation not requiring the aid of drugs, and 2 points for unsmooth defecation or defecation only with the aid of drugs. The degree of edema, anal wetness, infection, anal fistula, and other complications of the patients were observed.
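To make the grading rules above unambiguous, the small helper below encodes the SAS categories and the defecation score as plain functions; this is only an illustrative restatement of the criteria (the handling of a boundary score of exactly 70 is an assumption), not software used in the study.

```python
def sas_category(score: float) -> str:
    """Map a Self-Rating Anxiety Scale score to the categories described above."""
    if score < 50:
        return "no anxiety"
    if score <= 59:
        return "mild anxiety"
    if score <= 69:
        return "moderate anxiety"
    return "severe anxiety"  # scores of 70 and above (boundary treatment assumed)

def defecation_score(smooth: bool, needs_drugs: bool) -> int:
    """0 = smooth, self-managed; 1 = unsmooth but drug-free; 2 = needs drug assistance."""
    if needs_drugs:
        return 2
    return 0 if smooth else 1

assert sas_category(48) == "no anxiety"
assert sas_category(72) == "severe anxiety"
assert defecation_score(smooth=False, needs_drugs=True) == 2
```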
Statistical Methods. SPSS 21.0 was used for statistical analysis of the data. Measurement data were expressed as mean ± standard deviation and compared between the two groups using the independent-samples t-test. Enumeration data were expressed as percentages (%) and compared between the two groups using the chi-square test. P < 0.05 indicated that a difference was statistically significant. Figure 1 displays different MRI-sequence images of patient 1. The patient had a horseshoe-shaped abscess in the sphincter space that extended to the left sciatic-anal fossa.
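As a hedged illustration of the analysis pipeline described above (the study itself used SPSS 21.0), the same two comparisons could be reproduced with SciPy; the score arrays and complication counts below are purely hypothetical and are not the study data.

```python
# Minimal sketch of the statistical comparisons described above, using SciPy.
import numpy as np
from scipy import stats

# Hypothetical post-nursing scores for the two groups (measurement data).
observation = np.array([2.1, 1.8, 2.4, 1.9, 2.2])
control = np.array([3.0, 2.7, 3.3, 2.9, 3.1])

# Mean ± standard deviation and independent-samples t-test.
t_stat, p_val = stats.ttest_ind(observation, control)
print(f"observation: {observation.mean():.2f} ± {observation.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")  # p < 0.05 taken as significant

# Enumeration data (e.g., complications vs. none per group) with the chi-square test.
table = np.array([[5, 25],    # observation group (hypothetical counts)
                  [12, 18]])  # control group (hypothetical counts)
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")
```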
Comparison of Detection Rate of Different Sequence Combinations for Various Perianal Abscesses. Figure 2 represents the detection rate of the various types of perianal abscesses scanned by the different sequence combinations, in which Figures 2(a)-2(d) show intersphincter abscess, abscess of the ischioanal space, pelvic-rectal abscess, and rectal wall abscess, respectively. The detection rate of intersphincter abscess was relatively high for every sequence combination; among them, the detection rate of axial T1WI + T2WI-FS was the highest, reaching 96.88%. The detection rate of ischioanal abscesses was also high, as that of axial T1WI + T2WI-FS, coronal T1WI + T2WI, coronal T1WI + T2WI-FS, and axial T1WI + T2WI reached 100%. Scanned by T1WI + T2WI in the coronal position, the detection rate of pelvic-rectal abscess was the highest (100%). Coronal T1WI + T2WI, coronal T1WI + T2WI-FS, and sagittal T1WI + T2WI gave a detection rate of 100% for abscess of the rectal wall, whereas the detection rate of axial T1WI + T2WI-FS was 0%. Figure 3 displays the comparison of the VAS, VRS-5, Wexner, and SAS scores before and after nursing; Figures 3(a)-3(d) represent the VAS, VRS-5, Wexner, and SAS scores, respectively. After nursing, the scores of all four items decreased in both groups, among which the VAS score decreased the most. The decreases in these four scores in the observation group were significantly greater than those in the control group (P < 0.05).
Comparison of Stool Scores between Two Groups after Nursing. Figure 4 shows the comparison of stool scores between the two groups after nursing. In the observation group, 77% of patients had a defecation score of 0 after nursing, and 17% and 6% had defecation scores of 1 and 2, respectively. The proportion of patients with a defecation score of 0 was significantly higher in the observation group than in the control group (P < 0.05).
Comparison of Postoperative Complications between Two Groups. The incidence of post-nursing complications is shown in Figure 5. After nursing care, there was 1 patient with anal wetting in the observation group, giving an incidence of 3.33%. There was 1 case of infection (3.33%), edema occurred in 2 cases (6.67%), and anal fistula was found in 1 case (3.33%). The total incidence of complications in the observation group was 16.67%, which was significantly lower than that in the control group (P < 0.05).
Discussion
Perianal abscess is one of the common diseases in the department of anorectal surgery. It mainly occurs in young and middle-aged men of 20-40 years old, and the main cause is infection of the anal sinuses and anal glands [16][17][18]. Surgery is the most common and effective treatment for perianal abscesses [19]. According to the type and condition of the abscess, there are various surgical options. The main methods of clinical treatment of perianal abscess include simple incision and drainage, one-stage radical incision, thread-drawing therapy, and various drainage techniques [20,21]. VSD is a new wound-surface treatment method with a smaller incision, requiring only a small radial incision of 1-2 cm, which can significantly reduce the degree of pain in patients [22]. The approach of VSD is outside the sphincter or at the sphincter tendon, reducing the damage to the sphincter during surgery and protecting anal function to the greatest extent. Continuous negative pressure can collapse the abscess cavity, which is beneficial to the closure of the abscess cavity and the healing of the wound, reducing the length of hospital stay and promoting the early recovery of patients [23]. Xue et al. [24] explored the therapeutic effect of VSD on wound repair time and inflammation in patients with soft tissue trauma compared with traditional treatment. They found that the wound cleaning time, wound healing time, and hospital stay of patients after VSD treatment were all shorter than those in the conventional dressing group, with the differences statistically significant (P < 0.05). VSD thus has an obvious therapeutic effect on patients with soft tissue wounds and can effectively shorten the wound healing time and reduce inflammation-related indicators. MRI has been widely used in the diagnosis of perianal abscesses, with good diagnostic results and high patient acceptance. Yang et al. [18] studied the diagnosis and prognosis of perianal abscess by MRI under a multimodal feature fusion algorithm and found that MRI image feature analysis under this algorithm had a higher diagnostic performance, with a positive effect on improving the detection rate, detection accuracy, and disease classification of perianal abscesses. In this study, the value of MRI-sequence images in evaluating the postoperative recovery of patients with perianal abscess after VSD was explored, and the effect of EBN on the VAS score, VRS-5 score, Wexner score, and SAS score of patients with perianal abscess was also explored. The anal functions of the perianal abscess patients who received VSD were evaluated, the defecation situation was counted, and the incidence of various complications after EBN was also counted. The detection rate of intersphincter abscesses was the highest in the axial T1WI + T2WI-FS combination, reaching 96.88%. The axial T1WI + T2WI-FS, coronal T1WI + T2WI, coronal T1WI + T2WI-FS, and axial T1WI + T2WI combinations had a 100% detection rate for abscess of the ischioanal space. The detection rate of coronal T1WI + T2WI also reached 100% for pelvic-rectal abscess.
All of the coronal T1WI + T2WI, coronal T1WI + T2WI-FS, and sagittal T1WI + T2WI combinations likewise had a 100% detection rate for abscess of the rectal wall. After EBN, the VAS, VRS-5, Wexner, and SAS scores of the patients in the observation group decreased, and the decreases were greater than those in the control group. This suggested that the pain of patients was greatly relieved and anal function recovered well when EBN was given. The patients with a defecation score of 0 accounted for 77% after EBN, a larger proportion than in the control group after nursing (P < 0.05). Thus, EBN could help improve the defecation of patients more quickly. The incidence of all complications was 16.67% in the observation group, notably lower than that in the control group (P < 0.05). This indicated that, under EBN, complications became significantly less frequent. EBN could therefore promote the early recovery of patients with perianal abscess and reduce the incidence of complications.
Conclusion
The MRI images showed a good effect on the evaluation of postoperative recovery of patients with perianal abscess. Different MRI sequence combinations had a high detection rate for intersphincter abscess and ischioanal abscess. EBN was beneficial in reducing the VAS score, VRS-5 score, Wexner score, and SAS score of patients with perianal abscess. Therefore, EBN could relieve pain, improve anal function, promote defecation, and reduce the probability of complications in patients, and it is worthy of clinical application. The shortcoming of this research lay in the small sample size, which requires further verification. The sample size could be increased in the future to investigate the effect of EBN on the quality of life of patients with perianal abscess.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This work was supported by the Chenzhou Science and Technology Bureau Project (no. lcyl2021062).
|
v3-fos-license
|
2019-04-14T03:09:22.709Z
|
2005-09-30T00:00:00.000
|
119415310
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.physletb.2006.01.070",
"pdf_hash": "78596281a72a7489faa30590d57c44f53104e3d6",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44189",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "e099547de924e2d56e090bc250f14ed22d93a524",
"year": 2005
}
|
pes2o/s2orc
|
Selectivity of the nucleon-induced deuteron breakup and relativistic effects
Theoretical predictions for the nucleon-induced deuteron breakup process based on solutions of the three-nucleon Faddeev equation including such relativistic features as the relativistic kinematics and boost effects are presented. Large changes of the breakup cross section in some complete configurations are found at higher energies. The predicted relativistic effects, which are mostly of dynamical origin, seem to be supported by existing data.
Recent studies of elastic nucleon-deuteron (Nd) scattering and nucleon-induced deuteron breakup revealed a number of cases where the nonrelativistic description based only on pairwise nucleon-nucleon (NN) forces is insufficient to explain the three-nucleon (3N) data. This happens in spite of the fact, that these high precision NN potentials describe very well the NN data set to about 350 MeV laboratory energy. Those findings extended exploration of the properties of three-nucleon forces (3NFs) to the reactions in the 3N continuum. Such forces appear for the first time in the 3N system where they provide an additional contribution to a predominantly pairwise potential energy of three nucleons. Generally speaking, the studied discrepancies between a theory based only on NN potentials and experiment become larger for increasing energy of the 3N system. Adding a 3N force to the pairwise interactions leads in some cases to a better description of the data. The best studied example is the discrepancy for the elastic angular distribution in the region of its minimum and at backward angles [1][2][3]. This clear discrepancy can be removed at energies below ≈100 MeV by adding 3NFs to the nuclear Hamiltonian. Such a 3NF, mostly of the 2π -exchange character, must be adjusted individually with each NN potential to the experimental binding energy of 3 H and 3 He. At energies higher than ≈100 MeV current 3NFs improve only partially the description of cross section data and the remaining discrepancies, which increase with energy, indicate the possibility of relativistic effects. The need for a relativistic description of 3N scattering was also raised when precise measurements of the total cross section for neutron-deuteron (nd) interaction [4] were analyzed within the framework of the nonrelativistic Faddeev calculations [5]. Also there the NN forces alone were insufficient to describe the data above ≈100 MeV. The effects due to relativistic kinematics considered in Ref. [5] were comparable at higher energies to the effects due to 3NFs. This demonstrates the importance of a study taking relativistic effects in the 3N continuum into account.
Investigation of relativistic effects was focused up to now only on the bound state of three nucleons. However, even the sign of the relativistic contribution to the 3N binding energy is uncertain (see [13] and references therein). Recently, first results of relativistic 3N Faddeev calculations for elastic Nd scattering have become available [6]. The relativistic formulation applied was of the instant form of relativistic dynamics [7]. A starting point of this formulation for 3N scattering is the Lorentz-boosted NN potential V(k, k′; P), which generates the two-nucleon (2N) boosted t matrix t(k, k′; P) in a moving frame. The NN potential in an arbitrary moving frame, V(P), is obtained from the interaction v defined in the two-nucleon c.m. system by the boost relation of Ref. [8]. The relativistic kinetic energy of three equal-mass (m) nucleons in their c.m. system can be expressed through the relative momentum k in one of the two-body subsystems and the momentum of the third nucleon q (the total momentum of the two-body subsystem is then P = −q), where 2ω(k) ≡ 2√(m² + k²) is the momentum-dependent 2N mass operator. The Nd scattering with neutrons and protons interacting through an NN potential V alone is described in terms of a breakup operator T satisfying the Faddeev-type integral equation [9,10]

T|φ⟩ = tP|φ⟩ + tP G₀ T|φ⟩.   (4)
The permutation operator P = P₁₂P₂₃ + P₁₃P₂₃ is given in terms of the transpositions P_ij, which interchange nucleons i and j. The incoming state |φ⟩ ≡ |q₀⟩|φ_d⟩ describes the free nucleon-deuteron motion with the relative momentum q₀ and the deuteron wave function |φ_d⟩. G₀ ≡ 1/(E + iε − H₀) is the free 3N propagator, with the total 3N c.m. energy E expressed in terms of the initial neutron momentum q₀ relative to the deuteron. The transition operators for elastic scattering, U, and breakup, U₀, are given in terms of T by the standard relations of Refs. [9,10]. The state U₀|φ⟩ is projected onto the state |φ₀⟩, which describes the free motion of the three outgoing nucleons in the 3N c.m. system in terms of the relative momentum of the 2N subsystem, defined in the 3N c.m. system, and the momentum of the spectator nucleon q defined above; this projection leads to the breakup transition amplitude. The choice of the relative momentum k in the NN c.m. subsystem and the momentum q of the spectator nucleon in the 3N c.m. system to describe the configuration of three nucleons is the most convenient one in the relativistic case. In the nonrelativistic limit the momentum k reduces to the standard Jacobi momentum p [10]. To solve Eq. (4) numerically, a partial wave decomposition is still required. The standard partial wave states |pqα⟩ ≡ |pq(ls)j(λ½)I J(t½)T⟩ [10], however, are generalized in the relativistic case due to the choice of the NN-subsystem momentum k and the total spin s, both defined in the NN c.m. system. This leads to Wigner spin rotations when boosting to the 3N c.m. system [6,7], resulting in a more complex form of the permutation matrix element [6] than used in the nonrelativistic case [10]. A restricted relativistic calculation with j < 2 partial wave states showed that the Wigner spin rotations have only negligible effects [6]; due to this we neglected the Wigner rotations completely in the present study. To achieve converged results at energies up to ≈250 MeV, all partial wave states with total angular momenta of the 2N subsystem up to j ≤ 5 have to be used and all total angular momenta of the 3N system up to J = 25/2 taken into account. This leads to a system of up to 143 coupled integral equations in two continuous variables for a given total angular momentum J and total parity π = (−1)^(l+λ) of the 3N system. For the details of our relativistic formulation and of the numerical performance in the relativistic and nonrelativistic cases we refer to Refs. [6,9,10].
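As a rough numerical illustration of the kinematics described above, the following sketch evaluates the 2N mass operator 2ω(k) = 2√(m² + k²) quoted in the text and compares an assumed standard instant-form expression for the relativistic 3N c.m. kinetic energy with its nonrelativistic counterpart. The explicit kinetic-energy formula, the nucleon mass value, and the momentum grid are assumptions, since the corresponding equations are not reproduced here.

```python
# Sketch only: assumed instant-form 3N c.m. kinetic energy
#   E_rel = sqrt((2*omega(k))**2 + q**2) + sqrt(m**2 + q**2) - 3*m,
# versus the nonrelativistic Jacobi-momentum form E_nr = k**2/m + 3*q**2/(4*m).
import numpy as np

M_N = 938.918  # average nucleon mass in MeV (assumed value)

def omega(k):
    """Single-nucleon relativistic energy sqrt(m^2 + k^2) in MeV."""
    return np.sqrt(M_N**2 + k**2)

def e_kin_rel(k, q):
    """Assumed relativistic 3N c.m. kinetic energy (MeV)."""
    return np.sqrt((2.0 * omega(k))**2 + q**2) + np.sqrt(M_N**2 + q**2) - 3.0 * M_N

def e_kin_nonrel(k, q):
    """Nonrelativistic 3N c.m. kinetic energy with Jacobi momenta (MeV)."""
    return k**2 / M_N + 3.0 * q**2 / (4.0 * M_N)

# Compare the two expressions over a few illustrative momenta (MeV/c).
for k, q in [(100.0, 100.0), (300.0, 300.0), (500.0, 400.0)]:
    print(f"k={k:.0f}, q={q:.0f}: rel={e_kin_rel(k, q):.1f}, nonrel={e_kin_nonrel(k, q):.1f}")
```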
In the present study we applied as a dynamical input a relativistic interaction v generated from the nonrelativistic NN potential CDBonn [11] according to the analytical prescription of Ref. [12]. This analytical transformation allows one to obtain a relativistic potential v exactly on-shell equivalent to CDBonn, which provides the corresponding relativistic t matrix. The boosted potential was not treated in all its complexity as given in Ref. [13]; instead, a restriction to the leading-order term in a P/ω and v/ω expansion was made. The quality of such an approximation has been checked by calculating the wave function φ_d(k) of the deuteron moving with momentum P for a number of values corresponding to an incoming nucleon lab energy of 250 MeV. The resulting deuteron binding energies and D-state probabilities for the deuteron in motion are close to the values for the deuteron at rest. In Fig. 1 we show the nucleon angular distribution for elastic nucleon-deuteron scattering at E^N_lab = 250 MeV. It is seen that, as in the study of Nd elastic scattering of Ref. [6], where the AV18 [14] NN potential was used instead of CDBonn, relativistic effects for the cross section are restricted to the backward angles, where relativity increases the nonrelativistic cross section. At other angles the effects are small. In spite of the fact that the relativistic phase-space factor increases with energy faster than the nonrelativistic one (at 250 MeV their ratio amounts to 1.175), the relativistic nuclear matrix element outweighs this increase and leads, for the cross section in a wide angular range, to a relatively small relativistic effect. The breakup reaction with three free outgoing nucleons in the final state provides a unique possibility to access the matrix elements of the breakup operator T with specific values of the momenta |k| and |q| in a pointwise manner. Each exclusive breakup configuration, specified completely by the 3N c.m. momenta k_i of the outgoing nucleons, requires three matrix elements ⟨k(k_j, k_k), q = k_i|T|φ⟩ with (i, j, k) = (1, 2, 3) and cyclic permutations, with k and q providing the total 3N c.m. energy. This is entirely different from elastic scattering where, due to the continuum momentum distribution of the nucleons inside the deuteron, a broad range of |k| and |q| values contributes to the elastic scattering transition matrix element. That particular selectivity of the breakup singles out this reaction as a tool to look for localized effects which, when averaged, are difficult to see in elastic scattering.
This selectivity of breakup helps to reveal relativistic effects in the 3N continuum. Even at the relatively low incoming nucleon energy E^N_lab = 65 MeV they can be clearly seen in the cross sections of some exclusive breakup configurations, as exemplified in Figs. 2 and 3. For the configuration of Fig. 2 the angles of the two outgoing protons detected in coincidence were chosen in such a way that at the arc-length S ≈ 30 MeV all three nucleons have equal momenta which, in the 3N c.m. system, lie in the plane perpendicular to the beam direction (symmetrical space star (SSS) condition). For the configuration of Fig. 3, at the value S ≈ 46 MeV the third, unobserved nucleon is at rest in the lab system (quasi-free scattering (QFS) geometry). In these two breakup configurations the inclusion of relativity lowers the cross section: by ≈8% in the case of SSS and by ≈10% in the case of QFS. In the lower parts of Figs. 2 and 3 the contributions to this effect due to kinematics and dynamics are shown. The five-fold differential cross section can be written as the product of a kinematical factor ρ_kin, containing the phase-space factor and the initial flux, and the transition probability for breakup |⟨φ₀|U₀|φ⟩|², averaged over the initial (m_in) and summed over the final (m_out) sets of particle spin projections, which forms the dynamical part of the cross section. In the lower parts of the figures the ratio of the relativistic to the nonrelativistic kinematical factor, ρ_kin^rel/ρ_kin^nrel, as a function of S is shown by the dashed line; the corresponding ratio for the dynamical parts of the cross section is shown by the solid line. As seen in Fig. 3, for the QFS configuration the whole effect is due to a dynamical change of the transition matrix element: for this configuration the nonrelativistic and relativistic kinematical factors are practically equal over a large region of S values. For SSS, about 30% of the total effect is due to a decrease of the relativistic kinematical factor with respect to the nonrelativistic one (see Fig. 2).
The cross sections in these particular configurations are rather stable with respect to exchanging modern NN forces and to combining them or not with three-nucleon forces [15]. Because of that, relativistic effects seem to explain the small and up to now puzzling overestimation of the 65 MeV SSS cross section data [16] by modern nuclear forces, and can account for the experimental width of this QFS peak [17].
At higher energies the selectivity of breakup allows us to find configurations with significantly larger relativistic effects. In Fig. 4 this is exemplified at E^N_lab = 200 MeV, and the predicted effects of up to ≈60%, which are mostly of dynamical origin, seem to be supported by the data of Ref. [18].
The selectivity of complete breakup is gradually lost when incomplete reactions are considered. In the total nd breakup cross section the effects disappear: integrating over all available complete breakup configurations provides nearly equal relativistic (90.25 mb at 65 MeV and 43.37 mb at 250 MeV) and nonrelativistic (91.12 mb and 45.41 mb) total breakup cross sections. Also the integrated elastic scattering angular distribution (71.25 mb and 9.33 mb relativistic, and 71.40 mb and 9.57 mb nonrelativistic) and the total cross section for the nd interaction do not reveal significant relativistic effects. This shows that the discrepancies between theory and data found in previous studies at higher energies for the total cross section and the elastic scattering angular distributions, which remain even after combining NN potentials with 3NFs, have to result from additional contributions to the 3N force of a character different from the 2π exchange.
Summarizing, we have shown that the selectivity of the complete breakup reaction enables us to reveal clear signals of relativistic effects in the 3N continuum. Existing breakup data seem to support the predicted effects when relativity is included in the instant form of relativistic dynamics. Precise complete breakup data at energies around 200 MeV are welcome to further test these predictions. The QFS breakup configurations, due to their large cross sections and insensitivity to the details of nuclear forces, are favored for this purpose.
|
v3-fos-license
|
2017-02-09T19:17:08.725Z
|
2017-11-01T00:00:00.000
|
43749459
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://ria.ua.pt/bitstream/10773/26672/1/Tavares_etal_2017_IEEE.pdf",
"pdf_hash": "2919aa55b6c1c65d48b53ac8ec842bafed88ef8e",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44190",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"sha1": "2919aa55b6c1c65d48b53ac8ec842bafed88ef8e",
"year": 2017
}
|
pes2o/s2orc
|
A preliminary proposal of a conceptual Educational Data Mining framework for Science Education: scientific competences development and self-regulated learning
— The present paper is part of a wider study, focussed on the development of a digital educational resource for Science Education in primary school, integrating an Educational Data Mining framework. The proposed conceptual framework aims to infer the impact of the adopted learning approach for the development of scientific competences and students’ self-regulated learning. Thus, students’ exploration of learning sequences and students' behaviour towards available help, formative feedback and recommendations will be analysed. The framework derives from the proposed learning approach, as well as from the literature review. Before introducing it, the authors present an overview of the digital educational resource learning approach and the adopted Educational Data Mining methods. Finally, we present the proposed conceptual Educational Data Mining framework for Science Education, focussing its relevance on the development of students' scientific competences and self-regulated learning.
I. INTRODUCTION
In recent years, as a result of technological advances in network systems and intelligent tutoring systems, new methods based on human-computer interaction have emerged in Educational research, namely Educational Data Mining (EDM) and Learning Analytics (LA) [1]- [3].According to John Behrens at the LAK 2012 conference, mentioned by Baker & Inventado [1], EDM has a greater focus on learning, aiming to design/develop educational solutions that automatically adapt to the user [2].Essentially, LA seeks to find ways to report user performance in a certain system -analysis and report of educational data [4].EDM is an interdisciplinary and emerging research area that results from the application of data mining methods to educational technological systems, showing great potential in various educational subjects and for different stakeholders [3].When focussed on the students, EDM methods can be used to customise their learning paths, and help them according to their needs/difficulties.When focussed on the teachers, these methods can be used to identify which students need more educational support, and help teachers analyse and reflect about adopted teaching and learning approaches.When used by course designers and Educational researchers (Cf.[5]) they can be used to evaluate the effectiveness of learning using different environments, and to infer about EDM methods' potential for different research objectives.These methods can also be used by schools/universities and training entities, to suggest new training courses/units, according to students' profiles, and to find class patterns and thus design educational strategies accordingly.
From the exposed, EDM methods can be used in diverse systems, including (i) Learning (Content) Management Systems (e.g., online courses participants' monitoring, supporting, and real-time evaluation); (ii) intelligent tutoring systems (e.g., provide recommendations and feedback through system modelling of students' performance/behaviour); (iii) adaptive and intelligent hypermedia systems (e.g., build students' goals, preferences, and knowledge models based on their interaction); and (iv) test and quiz systems (e.g., assessing students' knowledge in a particular concept/subject, using a sequence of items and storing data related to students' scores and statistics) [3].The present study is focussed on the use of EDM methods for students, teachers and Educational researchers, aiming at the development of a Digital Educational Resource (DER) for Science Education, integrating a framework to infer the impact of the proposed learning approach on students' scientific competences development and self-regulated learning.Thus, students' exploration of learning sequences -correlated (interactive) digital educational contents -and students' behaviour towards available help, formative feedback, and recommendations will be analysed.
Once the wider study predicts the conception and development of components grounded in theory and data collection (e.g., primary school teachers' questionnaire to define DER target), the authors adopted the Educational Design Research (EDR) methodological approach [6].EDR is focussed on "real world" educational problems, aiming to solve them through scientific knowledge deepening and the development of educational solutions.This approach predicts interactive and iterative phases (Preliminary research phase, Development or Prototyping phase, and Assessment phase), developed according to the ADDIE model [ibid.].This paper is part of the Preliminary research, that predicts seven moments related to analysis and design processes.The design of the conceptual EDM framework for Science Education is the last moment of this phase, based on previous moments, namely Moment 6 related to the design of DER learning approach.In this regard, the present paper is structured to first present the DER learning approach (section ii), then clarify the adoption of EDM methods (section iv) and, finally, present the proposed conceptual framework (section v).
II. PROPOSED DER LEARNING APPROACH
The proposed DER learning approach crosses the Inquiry-Based Science Education methodology with the BSCS 5Es instructional model [7], [8].Inquiry-Based Science Education methodology (IBSE) proposes five phases: Orientation, Conceptualization, Investigation, Conclusion, and Discussion [7].The Orientation phase aims to stimulate students' curiosity about a certain scientific concept/subject.The Conceptualization phase aims to confront and/or inquire students' (pre-)concepts and promote new ideas generation and/or assumptions related to a presented problem/challenge.The Investigation phase aims that students plan and apply exploration and investigation processes, collecting, analysing and interpreting data to test the assumptions of the previous phase (e.g., experimental activities).The Conclusion phase proposes that the students draw conclusions about the previous phase, comparing/confronting their (pre-)concepts with the collected evidences.Finally, the Discussion phase is transversal to the previous phases, aiming at students' ideas and/or results confrontation, promoting students' reflection and (self-)evaluation of the learning process.
In the last decade, several authors have approached the IBSE methodology according to the BSCS 5Es instructional model (5Es) [9]- [11].This model also proposes five phases: Engage, Explore, Explain, Elaborate, and Evaluate [8].The Engage phase aims to stimulate students' interest and promote their personal and active involvement in learning.This phase should be of short duration, relating and/or confronting students' previous knowledge.The Explore phase proposes that the students, once involved in the concept/subject, build their own understanding about it, by confronting and experimenting with scientific phenomena (e.g., experimental activities).This phase should foresee moments for the students to inquire, collect and analyse data, and reflect about the processes and results.The Explain phase aims to promote the opportunity for the students to communicate their own findings and establish a theoretical framework about their meaning, predicting students' prior knowledge confrontation.The Elaborate phase aims at students' new knowledge application, to deepen scientific concepts/subjects and/or proceed towards new learning paths.Finally, the Evaluate phase is transversal to the previous phases, and aims to help students realize how much they have learned and how their conceptual frameworks have evolved according to expectations.As much as possible, in this phase formative feedback about the students' learning performance and results should be provided.
Besides the intrinsic relationship between these two approaches, by crossing them we aim to underline and theorize on opportunities to promote primary school students' (a) interest, personal and active involvement in learning (objects); (b) ideas communication, confrontation and inquiry; and (c) real-time reflection and (self-)evaluation about their learning paths.Thus, the proposed DER is designed to first provide the possibility for students to contact with (new) scientific concepts/subjects and/or confronting themselves with their previous knowledge, and then promote students' involvement in active, reflective, exploratory and (self-)evaluative activities [7].For that, DER provides a set of correlated (interactive) digital educational contents, aiming at scientific concepts/subjects' contextualization, exploration, application and deepening, allowing students to go through the five phases of the adopted approaches (the IBSE and the 5Es).Regarding the development of scientific competences and the promotion of self-regulation, it is important to briefly clarify both constructs.In the last years, literature has underlined the holistic character of competences development, predicting not only knowledge, but also skills and attitudes as part of human comprehensive development [10], [12]- [14].Thus, scientific knowledge is the ability to understand and establish relationships, meanings, appreciations and interrelations when confronted with (new) information (factual, conceptual and procedural knowledge of (inter)disciplinary nature) [ibid.].Scientific skills are the cognitive, social, emotional, physical and practical abilities in a certain scientific subject [ibid.].That is, the ability to establish complex and organized schemes of thought and/or action to reach a (personal) goal (e.g., to be able to analyse and critically evaluate (new) information and meanings).Finally, scientific attitudes are the dispositions to use scientific knowledge, to understand and reflect about scientific subjects, and adopt competent, critical and reflexive behaviours regarding Science (e.g., to adopt responsible behaviour towards a problem) [ibid.].Thus, scientific competences are knowledge in action, and so, the three components should be looked at from a holistic point of view.
In line with the exposed, the proposed DER learning approach was designed to promote the development of scientific competences (scientific knowledge, skills and attitudes), highlighting self-regulation as a responsible, critical and reflexive attitude regarding the learning process [15], [16].Self-regulation is a mindful process in which is used a variety of strategies, resulting in the students' ability to think and act in an active, organized, articulated, critical, reflexive and motivated way, regarding learning [16], [17].Some selfregulation abilities include: (a) to identify personal interests and learning needs; (b) to set learning objectives and pathways according to personal interests and needs; and (c) to search for personal skills consolidation and deepening opportunities [12]- [14].In the present study, the emphasis on self-regulation is related to the authors' willingness to promote opportunities for students' self-awareness learning and autonomy, as well as to improve students' ability to adopt informed decision-making, develop self-confidence and remain motivated to learn.To clarify how the intersection of the adopted approaches (IBSE and 5Es) facilitates these aspects, in the following sections we summarize the DER learning approach proposed, exemplifying its operationalization, as well as highlighting its potential for students' scientific competences development and selfregulated learning.
A. Orientation and Engage
For the Orientation and Engage phases, one proposes that the students watch and explore interactive animations (e.g., answering questions about fluctuation so the animation can proceed).This type of animations aims to (i) stimulate students' curiosity about a particular concept/subject, addressing a problem/challenge (Orientation -IBSE).Interactive animations also represent an opportunity for (ii) students' self-evaluation about previous knowledge (e.g., establish relationships between previous learning; contact with new stimuli about a concept/subject).The proposed animations will have a short duration, aiming to (iii) draw students' attention/interest; (iv) involve them in a personal way; and (iv) stimulate them to predict, relate and evaluate their previous knowledge (Engage -5Es).In terms of students' scientific competences development, it is expected that interactive animations will help the students develop factual scientific knowledge (e.g., concept/subject-specific details); scientific skills (e.g., identify or formulate criteria to draw possible answers); and attitudes (e.g., access available help to solve a problem) [12], [14].
B. Conceptualization and Explore
For the Conceptualization and Explore phases, we propose that the students explore games (e.g., catch falling objects that do not float and prevent them from sinking).Games aim to lead the students to form assumptions related to the presented problem/challenge and to test them according to the established dynamics through inquiring (Conceptualization -IBSE).
Games also aim at students' knowledge mobilization, representing an opportunity for them to (i) actively learn; (ii) stimulate them to analyse information, observe and compare phenomena, variables and concepts; (iii) identify requirements and variables that influence outcomes; (iv) interpret results; and (v) draw and confront conclusions (Explore -5Es).In terms of students' scientific competences development, it is expected that games will help students develop conceptual scientific knowledge (e.g., classes, categories, principles, systems and scientific phenomena); scientific skills (e.g., decide (by attempting) about the best action/procedure); and attitudes (e.g., follow recommendations of learning reinforcement and/or deepening) [12], [14].
C. Investigation and Explain
For the Investigation and Explain phases, one proposes that the students explore simulations (e.g., perform experimental activities related to fluctuation controlling variables).Starting from a research question, simulations aim to (i) lead students to form assumptions; (ii) plan processes; (iii) test assumptions; and (iv) collect, analyse and interpret data (Investigation -IBSE).Simulations also aim to (v) stimulate students' reflection about how they structure their conceptual framework and the designed research path; (vi) help students draw conclusions and structure their knowledge; (vii) confront their initial ideas with the results of the experimental activity; (viii) establish a theoretical framework about their meaning; and (ix) establish relationships between their choices and the initial research question (Explain -5Es).In terms of students' scientific competences development, it is expected that simulations will help students develop procedural scientific knowledge (e.g., define and/or interpret experimental procedures); scientific skills (e.g., observe scientific systems and/or phenomenon variations); and attitudes (e.g., find alternatives to validate the set criteria) [12], [14].
D. Conclusion and Elaborate
For the Conclusion and Elaborate phases, we propose that students answer knowledge tests without the possibility to access help.Before proceeding with a knowledge test, the DER recommends that the students access information areas to reinforce and/or deepen knowledge (e.g., access an information area related to the application of the "Archimedes' principle" in ships and submarines design, and then answer a knowledge test addressing fluctuation, predicting questions relating the principle to everyday situations).Information areas aim to (i) lead the students to deepen and (ii) expand their knowledge, as well as (iii) help them clarify doubts (Conclusion -IBSE).Information areas, in addition to these phases, can be accessed at any time during the DER's exploration (a "Help" icon is always available in content screens so that the students can dissipate doubts and/or deepen knowledge) (Elaborate -5Es).In terms of students' scientific competences development, it is expected that information areas will help students develop conceptual scientific knowledge (e.g., deepen scientific phenomena); scientific skills (e.g., identify necessary assumptions to understand scientific concepts/subjects); and attitudes (e.g., find ways to be well informed about scientific concepts/subjects) [12], [14].Knowledge tests, according to these phases, aim to lead the students to (iv) draw conclusions and (v) reflect about how they construct their knowledge in a particular scientific concept/subject (Conclusion -IBSE).Knowledge tests also aim at the (vi) students' knowledge mobilization; (vii) to help them discover and understand the implications of the phenomena explored; and (vii) to establish relationships with other concepts/subjects (Elaborate -5Es).In terms of students' scientific competences development, it is expected that knowledge tests will help students develop conceptual scientific knowledge to deepen their knowledge (e.g., deepen scientific concepts and/or specific details related to the concept/subject addressed); scientific skills (e.g., identify or formulate criteria for possible answers); and attitudes (e.g., analyse statements and (ir)relevant information) [ibid.].
E. Discussion and Evaluate
For the Discussion and Evaluate phases, one proposes the integration of formative feedback about students' results and learning paths and, simultaneously, the availability of recommendations to (i) reinforce or deepen students' knowledge, helping them to (ii) self-regulate their learning (e.g., what content to (re-)explore).Formative feedback and recommendations also aim at (iii) students' reflection on knowledge construction (e.g., decide to access an information area to learn more about a particular concept/subject and, thus, improve their performance); and (iv) self-awareness of their learning (e.g., performance level) [10].In these phases, knowledge tests are also proposed as a knowledge assessment strategy.In this regard, knowledge tests are aimed at (v) the evaluation of the students' understanding of a particular scientific concept/subject.Knowledge tests also aim at leading students to (vi) apply their new knowledge, and (vii) deepen their conceptual framework or advance towards new research paths (Evaluate -5Es).In terms of students' scientific competences development, it is expected that knowledge tests will help students develop conceptual scientific knowledge in order to assess knowledge (e.g., verify the domain of scientific concepts); scientific skills (e.g., interpret statements and answer questions); and attitudes (e.g., use their knowledge to analyse statements, relevant information, and answer correctly and critically) [12], [14].Formative feedback and recommendations aim to lead the students (viii) to constantly and continuously be aware about how much they have learned and how their conceptual framework evolved; (ix) to a greater understanding of the scientific competences developed; and (x) to find ways of self-correction and readjustment according to what is expected (Evaluate -5Es).It is, therefore, a formative and immediate assessment, provided under a simulated studentteacher and peer communication approach, as well as teacher, self-and peer assessment environment (Discussion -IBSE).
Aiming to infer about the potential of the proposed DER learning approach on students' scientific competences development and self-regulated learning, the integration of an EDM framework for Science Education in the DER is proposed and presented in the following sections.
III. EDM METHODS
EDM methods application, as a possibility of knowledge discovery, requires the establishment of clear objectives, so that data collection, processing, analysis and interpretation can result in relevant inferences [1]-[3], [18]. In the last years, several authors have guided their research in order to (a) predict students' learning behaviours (knowledge, motivation and attitudes); (b) study the effects of different types of pedagogical support to improve students' learning; and (c) infer about the optimal (sequences of) contents/subjects for each student, based on their difficulties, gains and preferences (cf. [19]). According to the intended objective, there are several methods (1, 2, ...) and techniques (a, b, ...) for data mining in Educational research, as briefly presented below [1], [3], [20]:
1) Prediction: the goal is to infer about a single aspect of the collected data (the predicted variable, like the dependent variable in statistical analysis) by combining other aspects of the data (predictor variables, similar to independent variables in statistical analysis). As the name indicates, it is a method that predicts what will happen in the future, and it can be used to predict students' educational success rates and students' behaviour in response to certain stimuli.
2) Relationship Mining: the goal is to identify relationships between variables and to codify them into rules for later use, trying to find out which variables are most strongly correlated with a particular variable of interest, or what the correlation between two variables of interest is. It can be used to identify students' behaviour patterns and difficulties, or learning mistakes that frequently occur at the same time.
a) Association Rule Mining: a technique used to find any relationship between variables, aiming to find "if-then" rules. It can be used to find relationships such as "if the students intend to improve their performance, then they will frequently use the available help".
b) Sequential Pattern Mining: a technique used to find temporal associations between variables or events. It can be used to find patterns in students' requests for help over time during software exploration.
c) Correlation Mining: a technique used to find linear correlations between variables (positive or negative). It can be used to find relationships between students' attitudes towards an activity (positive, they try to finish; negative, they leave the activity) and help request frequency.
d) Causal Data Mining: a technique used to find the causes of relationships between variables, i.e., to find out whether an event is caused/originated by another. It can be used to predict which factors influence students' performance in an activity, such as acceptance of software recommendations.
3) Structure Discovery: the goal is to find data structure (relationships) without any predefined idea/premise about what should be found. This method is therefore opposed to prediction methods, since it does not define variable correlations before the data mining method is applied.
a) Clustering: a technique used to group similar data into clusters, to discover data groups. It can be used to map students' preferences in the exploration of different types of educational contents, and to find interaction-learning patterns.
b) Factor Analysis: a technique used to find correlated variables, dividing the set of variables into a set of latent factors (i.e., not directly observable). It can be used to determine correlated contents in an online course, and to find which events result in other events.
c) Domain Structure Discovery: a technique used to discover which factors influence students' specific competences development. It can be used to map students' performance and interactions during the exploration of an intelligent tutoring system.
Attending to the study goal, to infer about the potential of the proposed DER learning approach (i) to promote students' scientific competences development through the exploration of learning sequences and the available recommendations to reinforce or deepen students' knowledge, and (ii) to promote students' self-regulated learning through recommendations, formative feedback and available help, the Prediction, Relationship Mining, and Structure Discovery methods will be adopted in the proposed EDM framework for Science Education. The following section presents the aspects that underpin the authors' options and the proposed conceptual framework.
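As a minimal, hypothetical sketch of the Relationship Mining idea (not part of the proposed DER itself), the support and confidence of a simple "if-then" rule, such as "if a student follows a recommendation, then the next test score improves", can be estimated from an interaction log; the data frame below is invented for illustration only.

```python
# Hypothetical event log: one row per student, binary event indicators.
import pandas as pd

log = pd.DataFrame({
    "student": ["s1", "s2", "s3", "s4", "s5", "s6"],
    "followed_recommendation": [1, 1, 0, 1, 0, 1],
    "score_improved":          [1, 1, 0, 0, 1, 1],
})

antecedent = log["followed_recommendation"] == 1
consequent = log["score_improved"] == 1

support = (antecedent & consequent).mean()                        # P(A and B)
confidence = (antecedent & consequent).sum() / antecedent.sum()   # P(B | A)
print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```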
IV. CONCEPTUAL EDM FRAMEWORK FOR SCIENCE EDUCATION
As previously mentioned, to infer about the potential of the proposed DER learning approach in students' scientific competences development and self-regulated learning, the integration of an EDM framework for Science Education in the DER is proposed.The choice of EDM methods and techniques emerged from the set of questions presented in Fig. 1, as well as the need to collect, analyse and draw inferences about the data resulting from the represented events.
Regarding Q1) What is the impact of correlated (interactive) digital educational content sequences on students' scientific knowledge and skills development?, we intend (i) to infer about the increase of students' scientific knowledge and skills levels through correctness patterns (mapping students' correct and incorrect answers), according to the defined objectives; and (ii) to find event patterns that influence knowledge and skills development. In other words, we intend to infer about the positive impact of correlated (interactive) digital educational contents on students' educational performance in the learning sequences, using tests to verify knowledge construction. To infer about the data collected and analysed from Q1) we propose the use of Prediction (Latent Knowledge Estimation), to estimate students' scientific knowledge and skills levels through correctness patterns, and Relationship Mining (Causal Data Mining), to find causal relationships between the "complete learning sequences" and "educational performance improvement" events.
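The framework does not prescribe a particular latent-knowledge-estimation model; as one hedged illustration, Bayesian Knowledge Tracing (a common choice in EDM) updates an estimated probability that a skill has been mastered from a correctness pattern. The parameter values below are assumptions for demonstration only.

```python
# Bayesian Knowledge Tracing sketch: update P(skill learned) per observation.
def bkt_trace(observations, p_init=0.2, p_transit=0.15, p_slip=0.1, p_guess=0.25):
    """Return the estimated knowledge level after each correct (1) / incorrect (0) answer."""
    p_learned = p_init
    trace = []
    for correct in observations:
        if correct:
            cond = (p_learned * (1 - p_slip)) / (
                p_learned * (1 - p_slip) + (1 - p_learned) * p_guess)
        else:
            cond = (p_learned * p_slip) / (
                p_learned * p_slip + (1 - p_learned) * (1 - p_guess))
        # Learning opportunity after the observation.
        p_learned = cond + (1 - cond) * p_transit
        trace.append(round(p_learned, 3))
    return trace


print(bkt_trace([0, 1, 1, 0, 1, 1]))  # estimated knowledge level after each answer
```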
Regarding Q2) What is the impact of formative feedback and recommendations on students' self-regulated learning? and Q2a) In what situations do the students accept DER recommendations?, we intend (i) to infer about the increase of students' self-regulation through awareness of the learning path and the availability of recommendations (Q2: proceed / do not proceed according to the recommendation); and (ii) to infer about the situations in which the students accept the recommendations (Q2a: reinforcement / deepening). Simultaneously, we intend (iii) to infer about events caused by other events, that is, to infer about the impact of formative feedback and recommendations on students' scientific knowledge and skills development (Q3). In this regard, we also intend (iv) to infer whether the fact that the students accept the recommendations (Q2a) promotes students' educational performance improvement in the learning sequences and in the tests (Q3). To infer about the data collected and analysed from Q2) and Q2a) we propose the use of Relationship Mining (Causal Data Mining) to find causal relationships between the "proceed according to the recommendation / do not proceed according to the recommendation", "learning reinforcement / no learning reinforcement", and "learning deepening / no learning deepening" events. To infer about the data collected and analysed from Q2a) and Q3) we also propose the use of Relationship Mining (Causal Data Mining) to find causal relationships between the "learning reinforcement / deepening" and "educational performance improvement" events.

Fig. 1. Relational structure: questions and events that result in the conceptual EDM framework for Science Education.

Finally, regarding Q4) How is available help accessed?, Q4a) What is the impact of available help on students' scientific knowledge and skills development?, and Q4b) What is the impact of available help on students' self-regulated learning?, we intend to infer (i) whether the students access available help autonomously or by suggestion (Q4); (ii) whether students accept DER help suggestions (Q4b); and (iii) whether the available help has an impact on students' scientific knowledge and skills development (Q4a), that is, to infer about events caused by other events. To infer about the data collected and analysed from Q4), Q4a) and Q4b) we propose the use of Relationship Mining (Causal Data Mining) to find causal relationships between the "access the available help autonomously or by suggestion" and "self-regulated learning levels" events, and between the "access the available help autonomously or by suggestion" and "educational performance improvement in activity/learning sequence" events.
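As an assumed, simplified illustration for Q4/Q4a, a contingency test can check whether the way help is accessed (autonomously vs. by suggestion) is associated with performance improvement. The counts below are hypothetical, and the causal data mining step described above would require more than such an association test.

```python
# Chi-square test of independence between help-access mode and improvement.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: help accessed autonomously / by suggestion; columns: improved / not improved.
events = np.array([[18, 7],
                   [9, 14]])  # hypothetical counts

chi2, p_value, dof, expected = chi2_contingency(events)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```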
In addition to the above, and attending to the potential of EDM methods, we also propose to explore "Other events" that will result in a deeper understanding of the potential of the proposed DER learning approach for students' scientific competences development and self-regulated learning, among others: (i) students' most accessed content type; (ii) students' educational performance in each content type; (iii) students' educational performance in a learning sequence each time they explore it; (iv) students' global educational performance; (v) students' time spent in content/sequence exploration each time they repeat them; (vi) students' most accessed scientific concepts/contents/subjects; (vii) students' total autonomous and suggested accesses to available help; (viii) students' total acceptances of DER recommendations; and (ix) the total number of times students complete and abandon a content/learning sequence. To infer about the data collected and analysed from these "Other events" we propose the use of Structure Discovery (Domain Structure Discovery), to unveil which unpredicted correlated events influence educational performance improvement and, therefore, students' scientific competences development and self-regulated learning.
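As a hypothetical sketch of a Structure Discovery starting point for such "Other events" data, students could be clustered by simple interaction features (time spent, help accesses, mean performance). The feature values below are invented, and clustering is only one possible technique; the framework itself names Domain Structure Discovery rather than a specific algorithm.

```python
# Cluster students by interaction profile with k-means (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

# Columns: minutes spent, total help accesses, mean test score (0-100).
features = np.array([
    [35, 2, 82],
    [50, 9, 61],
    [28, 1, 88],
    [47, 8, 58],
    [40, 5, 70],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(model.labels_)           # cluster assignment per student
print(model.cluster_centers_)  # average interaction profile per cluster
```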
V. CONSIDERATIONS
The proposed conceptual EDM framework for Science Education presents a holistic approach, attending to its application potential, as well as to the knowledge that may emerge from it.Once it allows to infer about students' scientific competences development and find events' patterns that influence scientific knowledge and skills development, this framework presents potential benefits for students (e.g., learning personalization); teachers (e.g., identify learning needs/gaps); and Educational researchers (e.g., evaluate the effectiveness of students' exploration of learning sequences to reinforce/deepen Science learning).
Regarding the possibility to find causal relationship between events, inferring about the impact of formative feedback, help and recommendations as to students' selfregulated learning and scientific competences development, the framework presents potential increments for students (e.g., recommend students' most appropriate contents to improve their educational performance); teachers (e.g., identify which students need more educational support); and Educational researchers (e.g., investigate new approaches to improve students' Science learning).
Not less important, deriving from "Other events" data, the framework offers great potential to infer about several aspects that influence students' learning and pedagogical approaches (e.g., students' most accessed content type; and students' most accessed scientific concepts/contents/subjects).
From the exposed, the proposed framework will allow us to (1) infer about the DER learning approach potential on students' scientific competences development and selfregulated learning; (2) validate the proposed DER for Science Education in primary school; (3) improve future developments (e.g., improvement of the (interactive) digital educational contents); and (4) conduct new studies based on the data collected and analysed (e.g., extended implementation of the proposed DER learning approach).
|
v3-fos-license
|
2020-07-30T02:05:34.195Z
|
2020-07-29T00:00:00.000
|
224922564
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://erar.springeropen.com/track/pdf/10.1186/s43166-020-00015-4",
"pdf_hash": "c95733447951f175a4ebd4c022d3a3790a51704b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44191",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "9062feee2fcac12ff1e71368ffa4dda822f8d0e4",
"year": 2020
}
|
pes2o/s2orc
|
Serum sclerostin in rheumatoid-induced osteoporosis
Rheumatoid arthritis (RA) is characterized by presence of localized and generalized osteoporosis. The mechanism of decreased bone mass is complex and multifactorial, a possible mechanism behind increased bone loss in RA is upregulation of sclerostin. The aim of this work was to evaluate serum sclerostin level in RA patients and its relation with bone mineral density (BMD) and disease activity. Serum sclerostin level in RA patients was significantly higher than the controls (p < 0.001). Osteopenia and osteoporosis were more prevalent in RA patients (22.5% and 7.5% respectively) compared to controls (15% and 2.5% respectively) (p = 0.006). Serum sclerostin level was significantly correlated with tender joint count (p = 0.014), swollen joint count (p = 0.036), erythrocytes sedimentation rate (p = 0.010), C reactive protein serum level (p = 0.025), disease activity score (DAS) 28-ESR (p = 0.018), DAS28-CRP (p = 0.005), and radiological modified Sharp erosion score (p = 0.049). The correlation of serum sclerostin level in RA patients with BMD and with T-score in all sites revealed an inverse relationship with p values insignificant. Serum sclerostin is a major player in bone metabolism as a negative regulator of bone growth through inhibition of Wnt signaling that is largely influenced by the disease activity. Controlling the disease activity is a major factor for prevention of local as well as generalized osteoporosis and is essential for the reparative local and systemic bone health.
Background
Rheumatoid arthritis (RA) is a chronic inflammatory autoimmune disease characterized by chronic synovial inflammation with subsequent joint destruction and deformity [1]. Osteoporosis is more common in RA patients than in the general population [2], with estimates revealing that approximately 32% and 26% of patients with longstanding RA have evidence of osteoporosis in the lumbar spine and femoral neck respectively, almost twice the prevalence found in matched controls [3][4][5]. Interestingly, osteoporosis was observed in RA patients who were never exposed to corticosteroids, indicating that RA by itself is a significant risk factor for accelerated bone loss [6].
Sclerostin, encoded by the SOST gene, is a glycoprotein secreted by osteocytes [7]. Sclerostin is upregulated during inflammation; it interacts with low-density lipoprotein receptor-related protein 5/6 (LRP5/6) and displaces Wnt/β-catenin proteins, hence inhibiting osteoblast differentiation and proliferation [8,9]. Given that RA is characterized by an increased rate of localized and generalized osteoporosis, it is hypothesized that the inflammatory background of the disease may contribute to the upregulation of sclerostin, resulting in increased bone loss in RA.
However, studies investigating the association of sclerostin with RA have yielded conflicting results. For example, serum sclerostin levels have been reported to be significantly higher than [10], or not statistically different from [11], those of matched controls. Moreover, studies investigating the relationship between sclerostin and bone loss in RA have produced controversial results: one study demonstrated that elevated serum sclerostin levels in RA were not associated with bone loss [12], whereas another observed that sclerostin-neutralizing antibodies prevented or decreased the rate of bone mass loss in RA murine models [13]. A review of Egyptian studies on psoriatic arthritis (PsA) and ankylosing spondylitis revealed that sclerostin expression is impaired in early ankylosing spondylitis and is linked to disease activity [14]; a significant role of sclerostin has also been found in the development of inflammation-associated bone damage in PsA [15].
In view of these conflicting data, the aim of this work is to evaluate serum sclerostin level in RA patients and its correlation with bone mineral density (BMD) and disease activity.
Methods
The present study included 40 RA patients recruited from the outpatient clinic of the Rheumatology and Rehabilitation Department and 40 age- and sex-matched apparently healthy controls. The study was conducted in the period from June 2016 to January 2018. All patients fulfilled the 2010 American College of Rheumatology/European League Against Rheumatism classification criteria for RA [16]. Prior to inclusion, all participants were adequately informed about the study aim and procedures and provided informed written consent. The study was approved by the local ethics committee.
Subjects with other autoimmune diseases, secondary causes of osteoporosis (such as diabetes, celiac disease, and thyroid disease), vitamin D deficiency, chronic liver or kidney diseases, and those who were on medications that possibly alter BMD (e.g., estrogen, steroids, calcitonin, bisphosphonates, anticonvulsants, and thyroxin) were excluded from the study.
Clinical assessment
All participants enrolled in the study were subjected to full history taking and thorough clinical examination, and the medical records of the RA patients were reviewed. Demographic characteristics including age, gender and body mass index (BMI) were obtained from all participants. History taking for RA patients included recording of the duration of RA, medications and any history suggestive of other autoimmune disorders. Clinical evaluation included the 28-joint tender (TJC) and swollen joint counts (SJC), joint pain assessment on a 100 mm visual analog scale (VAS-pain), and assessment of functional disability using the health assessment questionnaire-disability index (HAQ-DI) score [17]. RA activity was assessed using the disease activity score 28 (DAS28) [18].
Laboratory investigations
Venous blood samples were withdrawn from all participants for the determination of serum sclerostin using an enzyme-linked immunosorbent assay (ELISA) (TECO Medical®, USA) following the manufacturer's protocol. In RA patients, rheumatoid factor (RF) was measured by immuno-turbidimetry using Cobas Integra RFII (Roche Diagnostics GmbH, Mannheim, Germany), and serum anti-cyclic citrullinated peptide (anti-CCP) antibodies were measured by ELISA kits (DIASTAT Axis-Shield, Dundee, UK) following the instructions of the manufacturer. RF values > 15 IU/mL and ACPA values > 20 IU/mL were considered positive [19,20]. In addition, ESR by the Westergren method and C reactive protein (CRP) were assessed in RA patients.
Determination of bone mineral density
For measurement of BMD, dual-energy X-ray absorptiometry (DEXA) scanning was performed for all participants using the GE Lunar Prodigy Primo Bone Densitometer (General Electric). All DEXA scans were performed by the same operator. The BMD values were presented as g/cm2. Cutoffs of the T-score were determined based on the definitions of the World Health Organization [21].
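As an illustration only (this is not code or material from the study), the WHO-based T-score cutoffs referenced above can be written as a simple classification rule; the thresholds below are the commonly cited WHO definitions.

```python
# Illustrative sketch only: WHO T-score cutoffs as commonly cited
# (normal: T >= -1.0; osteopenia: -2.5 < T < -1.0; osteoporosis: T <= -2.5).
def classify_tscore(t: float) -> str:
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

print(classify_tscore(-2.7), classify_tscore(-1.8), classify_tscore(0.3))
# -> osteoporosis osteopenia normal
```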
Plain radiography
Plain radiographs for hands, wrists, and feet were performed for all patients. Radiological scoring was performed based on the modified Sharp erosion score [22].
Statistical analysis
All statistical calculations were done using SPSS version 20.0. All continuous data were normally distributed and are presented as mean ± SD; categorical data are presented as numbers and percentages. Comparisons of continuous data between two groups were performed using the independent-samples Student's t test, and the chi-square test was used for comparisons of categorical data. The 95% confidence interval of the mean difference in sclerostin serum level between patients and controls was calculated. The correlation coefficient test was used to explore the relationship between two continuous variables. Statistical significance was set at p ≤ 0.05.
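For illustration only: the study's analyses were run in SPSS, but the same classes of tests can be sketched with SciPy as below. All array values are hypothetical placeholders, not the study data.

```python
# Hypothetical sketch reproducing the classes of tests described above with
# SciPy; the arrays are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

sclerostin_ra = np.array([0.46, 0.51, 0.39, 0.44, 0.48])     # ng/ml, RA patients
sclerostin_ctrl = np.array([0.37, 0.33, 0.40, 0.35, 0.31])   # ng/ml, controls

# Independent-samples Student's t test for continuous data
t_stat, p_cont = stats.ttest_ind(sclerostin_ra, sclerostin_ctrl)

# Chi-square test for categorical data (e.g., low BMD yes/no by group)
contingency = np.array([[12, 7],     # low BMD: RA vs controls
                        [28, 33]])   # normal BMD
chi2, p_cat, dof, _ = stats.chi2_contingency(contingency)

# Correlation between sclerostin and a disease-activity index
das28 = np.array([4.1, 5.2, 3.0, 3.8, 4.6])
r, p_corr = stats.pearsonr(sclerostin_ra, das28)

print(p_cont <= 0.05, p_cat <= 0.05, round(r, 2))
```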
General characteristics of patients and controls
As shown in Table 1, no significant difference was found between the RA group and the control group. Table 2 demonstrates the clinical, treatment, laboratory, and radiographic findings of the RA patients.
DEXA findings and serum sclerostin level in patients and controls
The BMD at the lumbar spine region, neck of femur, and wrist were significantly lower in RA patients than controls; p = 0.016, p = 0.020, and p = 0.007 respectively. In addition, T-score of RA patients at the lumbar spine, neck of femur, and wrist were significantly lower than controls; p = 0.019, p < 0.001, and p = 0.007 respectively ( Table 3).
Serum sclerostin level in RA patients ranged between 0.37 and 0.54 ng/ml with a mean of 0.46 ± 0.05 ng/ml, while serum sclerostin level in controls ranged between 0.27 and 0.50 ng/ml with a mean of 0.37 ± 0.06 ng/ml. This difference was significant (p < 0.001) (Table 3).
In RA patients, 22.5% and 7.5% had osteopenia and osteoporosis respectively, compared to 15% and 2.5% of controls respectively. This difference was significant (p = 0.006) (Fig. 1).
Relationship of serum sclerostin level with clinical and laboratory findings in the patients
No significant association was observed between sclerostin serum level and sex, postmenopausal state, family history of osteoporosis, medication intake, RF positivity, or anti-CCP positivity in RA patients (Table 4). No significant correlation was observed between serum sclerostin and age, BMI, duration of RA, duration of morning stiffness, HAQ-DI, or VAS-pain in RA patients either (Table 5).
On the other hand, serum sclerostin level was significantly correlated with TJC (p = 0.014), SJC (p = 0.036), ESR (p = 0.010), CRP serum level (p = 0.025), DAS28-ESR (p = 0.018), and DAS28-CRP (p = 0.005). The correlation of serum sclerostin level in RA patients with the BMD and with the T-score at all sites revealed an inverse relationship, although the p values approached but did not reach the threshold of significance. Serum sclerostin level was significantly correlated with the radiological modified Sharp erosion score (p = 0.049) (Table 5).
Discussion
The main findings of this study were that (a) RA patients had significantly higher serum sclerostin levels than controls; (b) serum sclerostin level in RA patients showed an inverse relationship with the BMD and with the T-score at all sites, although the p values approached but did not reach the threshold of significance; (c) serum sclerostin level was significantly correlated with parameters of disease activity in RA patients; and (d) serum sclerostin was significantly correlated with the modified Sharp X-ray score of joint damage in RA patients.
The prevalence of osteoporosis in RA patients varies greatly among studies. The previously reported prevalence of osteoporosis in RA patients was 22.0% [23], 25.0% [24], 30.0% [25,26], 32.4% [27], and 40.4% [28], while in one study it was as high as 79.8% [29]. In patients with recent-onset RA (< 6 months duration), osteoporosis was found in 25.0% of the patients [30]. The findings of this study support the observation that RA patients have significantly lower BMD than healthy controls. In the present study, 30% of RA patients had low BMD (22.5% had osteopenia and 7.5% had osteoporosis).
A major finding of this study is that RA patients had a significantly higher sclerostin serum level than the controls. This finding agrees with the findings of several previous studies [28,29]. In Egypt, El-Bakry et al. [13] enrolled 31 RA patients (3 males and 28 females) and 10 healthy controls and reported significantly higher median serum sclerostin in RA patients than in controls, a finding that is consistent with the findings of the present study.
In contrast, other studies reported no difference in serum sclerostin level between RA patients and controls [12,31]. This discrepancy can be attributed to the characteristics of the RA patients included in these two studies, as most of the patients were in clinical remission or had low disease activity, which indicates a very low degree of inflammation. Sclerostin expression is upregulated by proinflammatory cytokines during inflammation [9]. Sclerostin serum level depends on genetic factors, age, sex, adiposity, kidney function, and the presence of diabetes mellitus, which may account for the discrepancy of the results among studies [32]. Mehaney et al. [33] studied 40 Egyptian RA patients (12 males and 28 females) with an average age of 48.9 ± 11.6 years and 40 healthy controls and found no significant difference in serum sclerostin level between RA patients and controls. This difference may be explained by differences in the study population, as > 62% of the females in the control group in their study were postmenopausal compared to 38.7% in our study.
In the current study, no association was found between sclerostin serum level and age, sex, or menopausal state. These findings are consistent with the results of previous studies [33][34][35][36].
Regarding the association between serum sclerostin concentration and RA activity indices, previous studies have revealed conflicting results. The present study showed that serum sclerostin level was significantly correlated with indices of disease activity, which is in agreement with the findings of a previous study on Egyptian patients [37]. Moreover, Brabnikova-Maresova et al. [38] reported that serum sclerostin level is significantly correlated with TJC, CRP, and DAS in juvenile patients with RA. The authors of that study concluded that the association between RA activity indices and serum sclerostin levels indicates inhibition of new bone formation during the active inflammation process in juvenile idiopathic arthritis patients. Conversely, El-Bakry et al. [13] found that serum sclerostin level was significantly correlated with TJC but inversely correlated with CRP and DAS.
In the present study, serum sclerostin level showed a significant correlation with the radiological score. El-Bakry et al. [13] and Ibrahim et al. [37] reported a significant positive correlation between serum sclerostin and the modified Larsen score, while Eissa et al. [11] and Mehaney et al. [34] reported no significant correlation between serum sclerostin and radiological grading. This discrepancy can be explained by the different radiological scores chosen. The correlation between the serum sclerostin level and the erosion score seems reasonable. Impaired repair of bone erosion in RA can be attributed to mechanisms that inhibit new bone formation. Upregulation of sclerostin, the Wnt antagonist, is enhanced during inflammation, leading to suppression of the repair of bone erosions [39]. Blocking of Wnt antagonists such as sclerostin may induce repair or even healing of bone erosions [12]. Several studies assessing the relationship between serum sclerostin level and BMD and T-score at the lumbar spine and at the neck of the femur have found conflicting results. On the one hand, Mehaney et al. [33] found that although the serum sclerostin level was higher in RA patients with low BMD in comparison to those with normal BMD, these differences were statistically insignificant. On the other hand, other studies found that serum sclerostin is inversely correlated with BMD in healthy and osteoporotic women [40,41]. Moreover, it was previously reported that sclerostin serum level is positively correlated with BMD at the lumbar spine [31] and at the neck of the femur [38]. In the present study, serum sclerostin level in RA patients had an inverse relationship with the BMD and with the T-score at all sites, with p values approaching but not reaching the threshold of significance, in agreement with the results of Mehaney et al. [33].
In the two studies by Ardawi et al. [40,41], who observed a significant inverse correlation between the serum sclerostin level and the BMD, the enrolled women were healthy (non-RA) pre- and post-menopausal women in one study and exclusively postmenopausal women in the other, which may account for the discrepancy with the current study. On the other hand, the positive association between BMD and serum sclerostin is difficult to explain; however, Paccou et al. [31] suggested that the sclerostin level may simply reflect the bone mass and the number of active osteocytes.
More studies on sclerostin, in larger RA populations, are required to better delineate the role of sclerostin in patients with RA. In addition, it will be necessary to determine whether anti-sclerostin antibodies could slow the progression of bone mineral loss in RA patients.
Conclusion
Serum sclerostin is a major player in bone metabolism as a negative regulator of bone growth through inhibition of Wnt signaling that is largely influenced by the disease activity. Controlling the disease activity is a major factor for prevention of local as well as generalized osteoporosis and is essential for the reparative local and systemic bone health. More follow-up longitudinal studies are recommended to detect the effect of control of disease activity on serum sclerostin and the validity of this serum marker to reflect bone health in RA patients.
|
v3-fos-license
|
2017-05-03T15:59:14.184Z
|
2016-01-01T00:00:00.000
|
26869446
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://thesai.org/Downloads/Volume7No4/Paper_52-A_Format_Compliant_Selective_Encryption_Scheme.pdf",
"pdf_hash": "58537b19fdc3646897cec7df0463e50782206d23",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44192",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "58537b19fdc3646897cec7df0463e50782206d23",
"year": 2016
}
|
pes2o/s2orc
|
A Format-Compliant Selective Encryption Scheme for Real-Time Video Streaming of the H.264/AVC
The H.264 video coding standard is one of the most promising techniques for future video communications. In fact, it supports a broad range of applications. Accordingly, with the continuous promotion of multimedia services, H.264 has been widely used in real-world applications. A major concern in the design of H.264 encryption algorithms is how to achieve a sufficiently high security level while maintaining the efficiency of the underlying compression process. In this paper a new selective encryption scheme for the H.264 standard is presented. The aim of this work is to study the security of the H.264 standard in order to propose the appropriate design of a hardware cryptoprocessor based on a stream cipher algorithm. Since the proposed cryptosystem is mainly dedicated to multimedia applications, it provides multiple security levels in order to satisfy the requirements of various applications for different purposes while ensuring high coding efficiency. Different performance analyses were made in order to evaluate the new encryption system. The experimental results showed the reliability and the robustness of the proposed technique. Keywords—Video coding; Data encryption; Data compression; H.264/AVC
INTRODUCTION
Different multimedia applications have become increasingly popular due to the fast development of communication technologies. Since communications across public networks can easily be intercepted, privacy becomes a major concern for commercial uses of multimedia communication. Encryption is an important tool for providing security services in different fields of application. Thus, since the 1990s, many research efforts have been devoted to the development of video encryption algorithms, and many algorithms have been proposed to ensure the confidentiality of video data.
Multimedia data requires either full encryption or selective encryption depending on the application requirements [1]. For example, military and law enforcement applications require full encryption. However, there is a large spectrum of applications that demands a lower security level; these applications call for a cryptosystem based on selective encryption.
To clearly identify the characteristics of video encryption algorithms, they can be divided according to their association (or not) with the video compression process. We distinguish between encryption algorithms that are joint with compression and those that are independent of it. In fact, there are three different approaches to combining encryption and compression. As shown in "Fig. 1", an encryption algorithm can be placed before, during, or after the compression process [2][3]. Video encryption algorithms placed before or after compression are called compression-independent encryption algorithms, while those executed during the compression process are called joint compression-encryption algorithms.
The direct approach consists in encrypting the entire compressed video stream using a conventional cryptographic method, such as the Advanced Encryption Standard (AES); this is called the naive approach. However, conventional cryptographic algorithms, which are generally designed to encrypt text data, are not well suited for video encryption because they cannot treat the large volume of video data in real time. In addition, it is almost impossible to adapt them to the specific paradigms of video applications, which pose specific requirements that are never encountered during the encryption of text data. These requirements are related to the efficiency of encryption, the security needs, the code conformity of the video stream, the compression efficiency, the respect for the syntax, and the perception. They can be ensured using selective encryption. In fact, this kind of encryption treats only a part of the plaintext and presents two main advantages. First, it reduces the computational requirements, since only a part of the plain data is encrypted. Second, the encrypted bitstream maintains the essential properties of the original bitstream; it just prevents abuse of the data. In the context of video encryption, this refers to destroying the commercial value of the video stream to a degree which prevents satisfactory viewing. H.264/AVC-based selective encryption schemes have already been presented for CAVLC and CABAC [3]. These two previous methods fulfill real-time constraints by keeping the same bitrate and by generating completely compliant bitstreams. This paper presents a new selective encryption method for H.264/AVC videos. The second section introduces the H.264/AVC standard and the related encryption schemes. The third section discusses the system specification, the choice of algorithms and the cryptographic techniques (scenarios). The fourth section is devoted to the design and the implementation of the proposed cryptosystem. The final step is the hardware/software validation on an FPGA platform taking into account the real-time aspect.
II. H.264/AVC-BASED VIDEO ENCRYPTION
In this section, we will present the H.264/AVC video coding standard as well as its bitstream syntax structure. Then, we will discuss some key parameters which are imperative to the design of a format-compliant encryption scheme. Finally, some related works will be evaluated.
A. Overview of H.264/AVC
In terms of classification, video encryption algorithms satisfy, to varying degrees, certain criteria such as the efficiency of encryption, the security level, the conformity to standard video codecs and the compression efficiency. The latter two are closely related to the video compression process and to standardized video compression technologies such as MPEG-1 (ISO/IEC, 1993) [5] and MPEG-2 (ISO/IEC, 2000) [6]. Most video coding standards use a hybrid coding approach that compresses the video data using both "intra" and "inter" encoding. Although there are differences among the applied coding algorithms, compression standards are built on the same set of basic operating elements.
H.264/AVC, also known as MPEG-4 Part 10, offers an enormous improvement in terms of compression performance: the compressed sequence is usually 30 to 50% shorter than with the previous MPEG-4 Part 2 standard [4]. The block diagram of the H.264/AVC encoder/decoder is presented in "Fig. 2".
B. H.264/AVC Bitstream Syntax Structure
The main aim of the present research is to find a compromise between the speed of transfer and the preservation of a significant security level for multimedia data, while respecting the constraints imposed by the dedicated application (occupation, time, consumption, ...). Accordingly, a mixed approach of encryption and compression is chosen in the present work. The cryptosystem must therefore ensure not only confidentiality but also low power consumption and a very small occupation on the FPGA. Furthermore, to ensure its integration into a compression sequence, different key parameters of the compression standard must be evaluated. This section is devoted to studying the design constraints and various properties of the H.264 standard.
In fact, in a video stream, the data is presented in a hierarchical way: the video begins with a start code sequence (header), contains one or more groups of pictures (GOP), and ends with an end code sequence.
The group of pictures (GOP) consists of a periodic sequence of compressed images. There are three types of compressed images. The I-image (Intra) is compressed independently of the other pictures. The P-image (Predictive) is coded using prediction from a previous image of type I or P. Finally, the B-images (Bidirectional) are encoded by double prediction, using as references a previous and a next image of type I or P. A group of pictures starts with an I-frame and contains a periodic sequence of P-frames separated by a constant number of B-frames (see "Fig. 3") [8][9]. A GOP structure is defined by two parameters: the number of images and the distance between I-images and P-images. In fact, an I-image is inserted every 12 frames. An image consists of three matrices where each matrix element represents a pixel. The YUV model defines a color space with three components: the first is the luminance and the others represent the chrominance. The U and V matrices have smaller dimensions than the Y matrix (depending on the format used). The most important information of the picture is stored in the Y matrix [8][9].
The image is cut into slices whose purpose is to limit error propagation during image transmission/storage. A slice is a sequence of macro-blocks. A macro-block represents a 16 × 16 pixel portion of the image. A block is a 4 × 4 matrix of coefficients, each of which represents one of the three components of a pixel, Y, U or V [8][9]. "Fig. 4" below describes the hierarchical structure of a video sequence from the GOP down to the 4 × 4 blocks.
C. Key Parameters for a Selective Video Encyption
Video compression involves three processes: the Discrete Cosine Transform (DCT), quantization and coding. To choose the best location for the designed cryptosystem in the compression chain, it is indispensable to take into consideration the execution time, the level of security, and the complexity of the system.
Observing the structure of a video encoder, we realize that if the proposed cryptosystem is placed after the DCT transformation, a decryption system must be added in the decoder, which relies on the temporal redundancies of a video stream: the principle is to predict the content of an image and to encode only the error made in this prediction. The existence of such a cryptosystem therefore increases the processing time and the complexity of the encoder. However, a cryptosystem inserted after the quantization step does not require additional time for a decryption process.
In fact, the DCT is used to move from the spatial domain to the frequency domain and to concentrate as much information as possible in a small number of frequency coefficients. The DC coefficient represents the average of the processed samples and carries the most important details of the image (lower spatial frequency). The AC coefficients represent the fine details of the image (higher spatial frequencies) [10]. Thus, the DC coefficients carry more useful information than the high-frequency components: moving away from the DC component, the coefficients not only tend to have low values but also become less important for the description of the image. "Fig. 5" shows that the DC coefficients represent 1/16 of all coefficients in a macro-block (24 DC coefficients out of 384 coefficients in total). Therefore, the DC coefficients of the I-images represent 1/192 of the total coefficients. Consequently, if TG denotes the time required to encrypt the whole video stream, the time required to encrypt only the I-frames of this flow is reduced to TG/12 while maintaining a considerable security level. Moreover, if only the DC coefficients of the I-frames are encrypted, the time required for the encryption process becomes TG/192.
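A minimal sketch (not the authors' implementation) of the argument above: the DC coefficient is the first entry of each quantized 4 × 4 block, and encrypting only the DC coefficients of I-frames therefore touches roughly 1/192 of the stream's coefficients.

```python
# Minimal sketch of the DC/AC split and of the fraction of data touched when
# only the DC coefficients of I-frames are encrypted (assumptions as in text:
# DC = 1/16 of the coefficients of a macro-block, one I-frame every 12 frames).
import numpy as np

def split_dc_ac(block4x4):
    """Return (DC coefficient, AC coefficients) of a quantized 4x4 block."""
    flat = block4x4.flatten()
    return flat[0], flat[1:]

block = np.arange(16).reshape(4, 4)     # hypothetical quantized block
dc, ac = split_dc_ac(block)
assert ac.size == 15                    # 1 DC + 15 AC coefficients per block

dc_fraction = 1 / 16                    # DC share of a macro-block
gop_length = 12                         # one I-image every 12 frames
print(f"Scenario 'DC of I-frames only': {dc_fraction / gop_length:.5f} of the coefficients")  # ~1/192
```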
D. Review of the Related Work
In this section, we will describe the currently known encryption algorithms for MPEG video streams in order to evaluate them with respect to three metrics: security level, encryption speed, and encrypted MPEG stream size.
In fact, several selective encryption schemes have been proposed in the recent past. In [11], an efficient encryption system for the H.264/Scalable Video Coding (SVC) codec is presented. The proposed selective encryption scheme is suitable for video distribution to users who have subscribed to differing video qualities on medium-to-high computationally capable digital devices. Another idea for selective encryption of SVC is proposed in [12]. It involves the encryption of the signs of coefficients and of motion vectors, and the alteration of DC values, to ensure three different security levels. Although the sign encryption has no effect on the compression efficiency and the bitrate, the alteration of the DC values changes the video statistics and affects the compression efficiency.
In [13], the proposed scheme encrypts the video by scrambling the Intra-Prediction Mode (IPM) of intra macroblocks. The main limit of this scheme is that it offers a lower security level due to the length of the pseudo-random number sequence. In [14], twofold video encryption techniques applicable to H.264/AVC are presented. The authors proposed an encryption of the DCT coefficients which affects the statistical characteristics of the data; in addition, the compression ratio is affected, which consequently increases the bitrate. This paper proposes a combination of a pseudo-random key generator and a permutation code algorithm. The main objective is to enhance the security of H.264 video. In the next section, the proposed scheme is discussed in detail along with the generation of the pseudo-random keys.
III. THE PROPOSED SELECTIVE ENCRYPTION SCHEME
The purpose of this work is the design of a cryptographic processor mainly dedicated to multimedia applications. The resulting cryptosystem will be placed on an FPGA-based prototyping platform to encrypt video transmissions under real-time conditions. In this context, the H.264/AVC Part 10 standard is chosen: it is used in most multimedia applications such as video conferencing, Internet video, media players, mobile video, and some satellite channels.
The design of the cryptosystem can be studied in two directions. The first consists in proposing cryptographic protocols appropriate for applications with time and security constraints. The second is the implementation of the system within a compression sequence that presents the constraints of the target application.
A. Design Flow
Designing systems with high architectural performance requires choosing the most appropriate algorithms. Similarly, the definition of the design flow from the functional level to the physical level is a crucial step, as it greatly affects the design time and the realization of the target system.
The proposed design flow is based on five strategic points. First, the definition of the requirements and the specification of the encryption techniques is an important step that consists in setting the goals of the project and studying the various constraints; the latter are related to the target applications in order to ensure design coherence. Secondly, according to the study of the constraints imposed by the target applications, different cryptographic protocols are proposed in order to achieve a hierarchy of security levels. Third, modeling the security IP requires architectural optimizations in order to adapt the cryptosystem to both the application needs and the platform used. Fourth, the logic synthesis and the performance evaluation of the designed cryptosystem ensure the validation of the proper functioning of the IP under real-time constraints. Finally, the hardware/software validation (co-simulation) of the proposed cryptosystem verifies the architecture of the final prototype in a hardware environment. This enables us to achieve a real-time evaluation of system performance in terms of execution time and throughput. The tools provided by the reconfigurable platform and electrical measurements allow us to evaluate the energy consumed by the proposed cryptosystem.
B. Proposed Cryptographic Scenarios
As mentioned before, encrypting the entire video is not always reasonable, mainly because of the large size of videos; this kind of encryption approach is not recommended for embedded systems where the energy capabilities are limited. In such cases, saving time and energy consumption becomes an important issue, and selective encryption is therefore compulsory. Accordingly, in this paper, four different encryption scenarios are proposed. They consist in encrypting only the most important data. In order to deal with the constraints of real-time transmission, the least significant information is swapped while the most important data is encrypted using a sufficiently secure algorithm. The proposed scenarios are described below. The first scenario consists in encrypting the DC coefficients of the I-frames using an algorithm A. As shown previously, the I-images carry the most useful information of the video stream; hence, this scenario guarantees a high security level.
The second scenario encrypts the I-frames: the DC coefficients of the I-frames are encrypted using an algorithm A while the AC coefficients are enciphered using an algorithm B. This scenario therefore offers a greater security level than the first scenario, although it requires more execution time.
The third scenario encrypts all the DC coefficients in the video stream using an algorithm A. Since the DC coefficients carry the most important information of an image, this scenario provides a better security level. The fourth scenario consists in the encryption of the DC coefficients of all the images by an algorithm A and the AC coefficients of the I-frames by an algorithm B. This scenario provides a very high security level; however, it needs much more execution time due to the large number of coefficients to be treated.
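The four scenarios can be summarized as a coefficient-routing rule, sketched below for clarity; 'cipher' stands for algorithm A and 'permute' for algorithm B / the permutation tables. This is an illustrative reformulation, not the authors' code.

```python
# Illustrative routing rule for the four proposed scenarios ('cipher' = algorithm A,
# 'permute' = algorithm B / permutation tables, 'plain' = left untouched).
def route(coeff_type: str, frame_type: str, scenario: int) -> str:
    is_dc, is_i = coeff_type == "DC", frame_type == "I"
    if scenario == 1:                                   # DC of I-frames only
        return "cipher" if (is_dc and is_i) else "plain"
    if scenario == 2:                                   # whole I-frames
        return ("cipher" if is_dc else "permute") if is_i else "plain"
    if scenario == 3:                                   # all DC coefficients
        return "cipher" if is_dc else "plain"
    if scenario == 4:                                   # all DC + AC of I-frames
        return "cipher" if is_dc else ("permute" if is_i else "plain")
    raise ValueError("unknown scenario")

print(route("DC", "P", 3), route("AC", "I", 4))         # -> cipher permute
```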
Table II summarizes the different proposed scenarios and illustrates their speed, security level, and influence on the compression ratio.
Since the influence of the encryption on the compression ratio depends only on the quantity of the encrypted data, the choice of encryption algorithms does not affect this parameter. However, while selecting the encryption algorithms, it is indispensable to take into consideration the nature of the coefficients and the desired security level, which affect the encryption time and the compression ratio. Thus, in order to respect the constraints imposed by the characteristics of the different levels and profiles, the choice of the encryption algorithms (A and B) must consider the processing speed. The goal is therefore to guarantee a balance between the speed, the compression ratio, and the security level.
Table III shows the minimum speed needed to support each scenario. The minimum speed required for each treatment is equal to the maximum number of coefficients to encrypt multiplied by the size of a single coefficient (in bits).
C. Choice of the Encryption Algorithms
When encrypting a video stream, the transmission speed is a fundamental criterion, so symmetric-key algorithms are preferred. The main disadvantage of asymmetric algorithms is that their processing is slow and computationally heavy, which makes them impractical for real-time applications. In terms of security, they also present problems related to the structure of public-key systems: to ensure adequate security, the generated keys must be larger than symmetric keys.
The main types of private-key cryptosystems used today fall into two categories: block ciphers, which process data blocks of fixed size, and stream ciphers, which process the data bit by bit. For block ciphers, good security requires long keys, which implies some drawbacks: larger blocks are safer but heavier to implement. Stream ciphers, however, are very fast; their hardware implementation needs few gates, so they are suitable for real-time applications and are often used to protect multimedia data. Generally, they are built around a pseudo-random number generator: a bitwise XOR is applied between the generator output and a bit of the data, although XOR is not the only possible operation.
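A generic keystream-XOR sketch of the principle just described is given below; the keystream source is a placeholder (os.urandom), not Grain or any specific generator.

```python
# Generic stream-cipher principle: XOR the data with a pseudo-random keystream.
# The keystream source below is only a stand-in for a real generator.
import os

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

plain = b"quantized coefficients"
keystream = os.urandom(len(plain))
cipher = xor_stream(plain, keystream)
assert xor_stream(cipher, keystream) == plain    # the same operation decrypts
```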
In order to choose the appropriate key generator, a comparison between the best-known stream ciphers was made. Synthesis was carried out using the "Synplify Pro" tool suite with the Virtex2 XC2v2000-6ff896 as the target component. Table IV below summarizes the obtained results, from which we note the following observations: A5/1 has an acceptable speed and occupation rate (2%), and a relatively low consumption ratio; these results justify the use of this generator in GSM applications.
The W7 frequency is the lowest, whereas its period is greater than that of the other generators; thus, it ensures a good security level.
Grain's consumption is the lowest among the compared pseudo-random generators, and its frequency and occupation values are acceptable for real-time applications. However, its security level has to be checked.
Randomness is thus very important for evaluating the quality of the generated keys, and it presents one of the most critical points of configuring a crypto-processor. In fact, to test quantitatively the randomness of the generated keys, the National Institute of Standards and Technology (NIST) announced, in 2001, a standard called FIPS 140-2. It covers four types of tests, namely the Monobit test, the frequency test, the Runs test and the Longest runs test. A sequence is considered to be random if the P-value for each test is greater than 1% (0.01). The results of the various tests applied to the algorithms A5/1, W7, CA and Grain are presented in Table V. The obtained results show that Grain provides a higher security level compared to the other well-known ciphers such as A5/1, W7, and CA, while maintaining a small hardware complexity. Accordingly, Grain-80 will be used to ensure the key generation in the proposed cryptosystem.
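As an illustration of this kind of check, the sketch below applies a monobit-style frequency test with the P-value criterion mentioned above; it follows the NIST SP 800-22 formulation of the frequency test rather than the exact FIPS 140-2 pass intervals, and the bit source is a placeholder rather than a real keystream generator.

```python
# Illustrative monobit (frequency) check on a 20,000-bit sample, using an
# SP 800-22-style P-value; the bit source is a placeholder, not Grain.
import math, os

bits = "".join(f"{b:08b}" for b in os.urandom(2500))          # 20,000 bits
n = len(bits)
s = sum(1 if b == "1" else -1 for b in bits)                  # +1 for 1, -1 for 0
p_value = math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))
print("PASS" if p_value > 0.01 else "FAIL", round(p_value, 4))
```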
As mentioned before, the cryptosystem will be integrated after the quantization step. The AC and DC coefficients resulting from quantization will therefore be treated with the appropriate systems and crypto-coded in order to obtain a crypto-compressed video: the DC coefficients will be encrypted using the key generated by Grain-80 while the AC coefficients will be swapped. Fig. 6 illustrates the new crypto-compression process.
IV. DESIGN AND IMPLEMENTATION OF THE PROPOSED CRYPTOSYSTEM
The system specification and cryptographic techniques developed above lead to the selection of the appropriate cryptographic algorithms as well as to the location of the proposed cryptosystem in the compression process.
The implementation of the designed system is based on the complementarity of the following blocks: the algorithm A (key generator: Grain-80), the configuration processor, the encryption processor, the re-configuration unit, and the permutation tables.
This structure allows for a good distribution of tasks between the blocks, so that the proposed system can be adapted to various applications. First, the key generation algorithm A and the permutation tables are defined with respect to the need. In addition, the function performed by the encryption module can be easily modified. "Fig. 7" shows the general structure of the proposed system. In order to achieve the scenarios described above, Grain has been chosen as the encryption algorithm to process the DC coefficients; the AC coefficients will be swapped using predefined permutation tables.
A. Grain Implementation
Grain is a stream cipher algorithm that appeared in 2005. It is designed to be very small and efficient in hardware implementation [15]. The Grain family currently consists of two variants: the first uses an 80-bit key while the other uses a 128-bit key. Grain uses two registers, an LFSR (Linear Feedback Shift Register) and an NFSR (Nonlinear Feedback Shift Register). The output is generated through a non-linear filter that takes inputs from both shift registers. "Fig. 8" describes the structure of the Grain stream cipher. The implementation and simulation of the Grain algorithm were carried out in VHDL. The key initialization phase initializes the cipher using the initial key and the init-IV vector; this step is required before generating the key stream.
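To make the register-based construction concrete, the toy sketch below shows a Fibonacci LFSR producing output bits. It is emphatically not the Grain-80 specification: the register length and feedback taps are arbitrary, and the NFSR and the non-linear output filter are omitted.

```python
# Toy Fibonacci LFSR only: arbitrary length and taps, NOT the Grain-80 design
# (Grain additionally uses an NFSR and a non-linear output filter).
def lfsr_stream(state, taps, nbits):
    """Return nbits output bits from an initial state (list of 0/1 values)."""
    out = []
    for _ in range(nbits):
        out.append(state[-1])                  # output the last register bit
        fb = 0
        for t in taps:                         # feedback = XOR of tapped bits
            fb ^= state[t]
        state = [fb] + state[:-1]              # shift, inserting the feedback
    return out

print(lfsr_stream([1, 0, 0, 1, 1, 0, 1, 0], taps=[0, 2, 3, 5], nbits=16))
```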
Grain is intended to be used in environments where the gate count, the power consumption and the memory need to be very small. Several ciphers are designed with better software efficiency than Grain; they are more appropriate when high speed in software is required.
In reality, the basic implementation has a rate of 1 bit/clock. The speed of a word-oriented cipher is typically higher since the rate is 1 word/clock. Grain is a bit-oriented cipher, but it compensates for this by offering the possibility to increase the speed; accordingly, a designer can choose the appropriate speed of the cipher according to the amount of hardware available. "Fig. 9" illustrates the cipher process when the speed is doubled. We implemented all the possible versions of Grain-80 in order to choose the appropriate speed and performance for the target application. The synthesis results presented in Table VI show that the speed changes proportionally to the occupancy. In addition, the consumption ratio becomes increasingly significant from one version to another. For example, from the standard version of Grain to the Grain-16 version (where the speed is multiplied by 16), the change in consumption is negligible compared to the evolution of the speed (≈7x230.9= 1652.8 Mb/s). The generic version gives the opportunity to choose the version that is compatible with dedicated applications, but it suffers a loss in speed, frequency and occupation. For example, compared to the original version, the frequency of the generic version (with N=1) decreases from 230.9 to 39.9 MHz, while the occupancy reaches a value equal to 4533 LUTs (≈12x 336). According to these results, it is clear that each version has its own characteristics, and choosing the appropriate version is based on the constraints of the target application. The proposed cryptosystem is dedicated to real-time video applications; thus, the Grain-V4 version was chosen, where the speed is multiplied by four.
B. Configuration and Re-Configuration Units
The configuration and assignment of the scenarios are carried out by the configuration module. It ensures three important functions: the level identification, the scenario specification, and the classification of images and coefficients. "Fig. 10" illustrates the process of this unit. The configuration module is fundamental in order to ensure synchronization between the other modules. Similarly, the re-configuration module's role is to restore the flow of input coefficients and to reconstitute the encrypted video stream.
C. Encryption Unit
This block is responsible for performing an XOR or an XNOR operation between the key generated by Grain-V4 and the coefficients to be encrypted (the DC coefficients). The AC coefficients are swapped using predefined permutation tables.
The encryption key generated by Grain is 80 bits in size and can therefore serve to encrypt 6 different DC coefficients. To improve the robustness of the proposed cryptosystem, two different functions were chosen: the XOR and the XNOR.
Since Grain takes 20 clock cycles to produce its first key, the first coefficients reaching this block must be handled before the cipher key is generated. After 20 clock cycles, however, only 2 DC coefficients and 18 AC coefficients are ready for encryption; thus, two registers have been defined to ensure this task.
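A hedged sketch of the operation of this unit is given below: one 80-bit key is cut into slices and successive DC coefficients are combined with them by XOR or XNOR. The 13-bit slice width and the strict alternation rule are illustrative assumptions, since the exact mapping is not detailed here.

```python
# Hedged sketch of the encryption unit: an 80-bit key serves 6 DC coefficients,
# combined alternately by XOR and XNOR. The slice width (13 bits) and the
# alternation rule are illustrative assumptions, not taken from the paper.
KEY_BITS, SLICE = 80, 13
MASK = (1 << SLICE) - 1

def encrypt_dc(dc_values, key80):
    out = []
    for i, dc in enumerate(dc_values):
        k = (key80 >> (i * SLICE)) & MASK      # i-th key slice
        x = (dc & MASK) ^ k                    # XOR with the slice
        if i % 2:                              # every other coefficient: XNOR
            x = (~x) & MASK
        out.append(x)
    return out

print(encrypt_dc([512, 300, 77, 1023, 4, 250], key80=0x1234567890ABCDEF1234))
```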
D. Permutation Unit
As previously mentioned, the AC coefficients are swapped following permutation tables that were defined for this purpose. Only 16 permutation tables were chosen, to meet the design requirements. First, it is important to reduce the memory used in order to consume less in terms of occupation. Secondly, the key generated by Grain can be used to define only 6 different addresses (if the number of tables increases, more than 4 bits will be needed to represent the table number). In fact, 50 different tables were generated (based on Grain keys). Then, four different cryptographic tests were applied in order to evaluate the cryptographic properties of the generated tables: the nonlinearity, the strict avalanche criterion (SAC), bijection, and the BIC (output bits independence criterion). The generated tables satisfy the requirement of bijectivity since they have distinct output values. In addition, the average nonlinearity of the 16 generated tables is equal to 102. Furthermore, the mean value of the dependence matrix (SAC) of the chosen tables is equal to 0.5281, which is very close to the expected value 0.5. All these results justify the choice of the permutation tables used.
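The swapping itself can be sketched as below: a bijective table reorders a block of AC coefficients and its inverse restores the original order. The table shown is a random example, not one of the 16 tables evaluated above.

```python
# Minimal sketch of AC-coefficient swapping with a bijective permutation table.
# The table here is a random example, not one of the paper's 16 tables.
import random

random.seed(0)
table = random.sample(range(16), 16)                  # one bijective permutation

def permute_ac(ac_block, table):
    """Reorder a block of AC coefficients according to the permutation table."""
    return [ac_block[table[i]] for i in range(len(ac_block))]

def inverse(table):
    inv = [0] * len(table)
    for i, t in enumerate(table):
        inv[t] = i
    return inv

ac = list(range(100, 116))                            # hypothetical AC values
scrambled = permute_ac(ac, table)
assert permute_ac(scrambled, inverse(table)) == ac    # the swap is reversible
```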
The following "Fig.11" illustrates how the Grain key is used to choose the permutation table for the encryption process.In fact, the same table cannot be used to encrypt two successive blocks of data.
E. Synthesis of the Proposed Cryptosystem
Synchronization between the system units is imperative. In fact, the management of the clock plays a fundamental role in system performance, including its total consumption. In this context, the Grain cipher is activated throughout the processing, although it is used only at specific times to encrypt DC coefficients. This gave us the idea of designing a second version in order to optimize the resources used.
The Grain process was revised so that it is activated only when a key is needed. Managing the activation and deactivation of this generator allows us to use all the produced keys and to benefit from the provided security level. In the same way, the "Encryption" block can be activated on demand. To manage the activation and deactivation of these two blocks, we used a clock generation processor implemented in the configuration block.
Moreover, different improvements were carried out in order to optimize the resources used in the proposed cryptosystem. To evaluate the impact of these modifications, the synthesis of the proposed cryptosystem was performed using "Synplify Pro 9.6" with the Virtex5-XC5VLX50-FF676 as the target component. The obtained results are presented in Table VII. "Fig. 12" illustrates the design flow of a real-time design for the proposed cryptosystem. The System Generator provides hardware co-simulation to incorporate an architecture running on the FPGA directly in a Simulink simulation. The video model tested and verified in the previous step must be compiled for hardware co-simulation, and the target platform for the compilation must be selected. In fact, the Spartan 3A DSP 3400 platform offers us the opportunity to implement and verify the hardware implementation results.
A. Integration of the Proposed Cryptosystem in the H.264 Encoder
Zexia provided an H.264 encoder implemented in VHDL [20]. It is designed as a modular system with small and efficient components using low-power resources. The proposed cryptosystem was integrated into the Zexia H.264 encoder in order to validate its operation. "Fig. 13" shows the structure of the obtained crypto-encoder. The proposed cryptosystem was adapted in order to be integrated into the compression process. "Fig. 14" shows the simulation results of the obtained crypto-compression system; it presents the major signals of the different blocks when the fourth scenario is applied to encrypt the video stream.
B. Integration of the Cryptosystem into the Camera Design Model
This section presents the integration of the proposed cryptosystem, developed in VHDL, into the camera design model using the System Generator Black Box. In fact, the reference design was used; it includes a VSK-Camera-VOP Bayer filter to restore the image in RGB format. The generated PCORE is exported as a new EDK-PCORE in the proposed project. The design shown in "Fig. 15" is based on the Video Starter Kit (VSK) Spartan 3A DSP FPGA XCSD3400A. This card is used to decode the data that arrives through the LVDS camera serial port interface.
C. Real-Time Validation on the Spartan 3A DSP Platform
In the hardware co-simulation of the real-time cryptosystem, the chain contains the entire cycle of acquisition, processing and retrieval of a video signal from a video source (camera). The results of the hardware co-simulation, presented in "Fig. 18", allow us to verify the efficiency and the robustness of the proposed HDL model. Image processing in real time requires fast electronic circuits capable of handling the large amounts of information generated by the video source; that is why FPGAs are ideal for this kind of application.
D. Security analysis
In order to analyze the security of the proposed cryptosystem against the best-known attacks, security tests were conducted on the Foreman video (352x288, 164 frames). The entropy values, the PSNR (Peak Signal-to-Noise Ratio), and the horizontal and vertical correlation coefficients were then observed.
The correlation provides a quantitative measure of the similarity between the original and the encrypted frames: a low correlation coefficient indicates little similarity between the original and encrypted video, which reflects the efficiency of the encryption scheme.
The PSNR is the most widely used metric to estimate image distortion. It compares the visual quality of the plain image and the ciphered one, and is based on the Mean Squared Error (MSE), which measures the error between two images.
Information entropy is one of the most important measures of randomness: the source is considered to be truly random if the information entropy of the ciphered image is close to eight bits.
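For clarity, the three metrics can be sketched as follows on hypothetical frames; the frame data below is random and only illustrates how correlation, PSNR (via the MSE) and entropy are computed, not the paper's results.

```python
# Hedged sketch of the three security metrics (correlation, PSNR from the MSE,
# information entropy) computed on hypothetical random frames.
import numpy as np

def psnr(original, encrypted, peak=255.0):
    mse = np.mean((original.astype(float) - encrypted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def entropy(frame):
    hist = np.bincount(frame.flatten(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))       # close to 8 bits for a random 8-bit image

rng = np.random.default_rng(0)
plain = rng.integers(0, 256, (288, 352), dtype=np.uint8)      # CIF-sized frame
cipher = rng.integers(0, 256, (288, 352), dtype=np.uint8)

corr = np.corrcoef(plain.flatten(), cipher.flatten())[0, 1]   # should be near 0
print(round(corr, 4), round(psnr(plain, cipher), 2), round(entropy(cipher), 3))
```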
Table IX presents the different analysis results; they confirm the efficiency of the proposed cryptosystem.
VI. CONCLUSION
In this paper, a new cryptosystem dedicated to multimedia applications is proposed. It is designed to be integrated into the H.264 encoder and provides four different encryption scenarios. The proposed structure is essentially based on a pseudo-random generator, a configuration unit and an operator performing an XOR/XNOR between the generated keys and the appropriate data identified by the configuration processor. This operator is also responsible for the data swapping based on highly nonlinear permutation tables.
The choice of cryptographic algorithms was based on the study of the environmental constraints imposed by the targeted applications, such as real-time transmission, speed, the influence on the compression ratio and the desired security level. Hence, Grain-80-V4 was chosen to encrypt the DC coefficients, which carry the most important information of the video stream. The permutation was chosen for the AC coefficients, which are more numerous than the DC coefficients.
In order to deal with real-time multimedia applications, we chose the joint compression and encryption approach, which does not require too much time for the encryption/decryption process while maintaining a considerable compression ratio.
Several perspectives emerge as a result of the present research. In fact, it is important to study the resistance of the proposed cryptosystem against certain types of attacks, such as fault injection attacks, and appropriate counter-measures should be proposed if necessary. In addition, chaos-based selective encryption is a new and efficient approach used for multimedia applications. It is attracting increasing research effort due to its favorable properties, such as good pseudo-randomness and high sensitivity to initial values.
Fig. 4. Data hierarchy in a video stream
Fig. 5. The structure of 4:2:0 macro-blocks
Before defining the encryption scenarios, it is necessary to know the maximum number of each type of coefficient processed per second, which helps in choosing the most appropriate cryptographic algorithm. In this context, all the calculations necessary for the design of the proposed cryptosystem were performed; Table I summarizes the obtained results.
Fig. 9. The cipher process when the speed is doubled
Fig. 11. The choice of the permutation tables
Fig. 12. The choice of the permutation tables
Fig. 13. Architecture of the new crypto-compression system
Fig. 14. Simulation results of the optimized crypto-compression system
Fig. 15. Architecture of the integration of the hardware cryptosystem in the camera frame buffer design. "Fig. 16" shows the external structure of the VSK-Camera-VOP model and "Fig. 17" details its internal structure.
TABLE I. KEY PARAMETERS FOR VIDEO ENCRYPTION
TABLE II. THE PROPOSED ENCRYPTION SCENARIOS
TABLE IV. COMPARISON BETWEEN PSEUDO-RANDOM GENERATORS
TABLE V. SECURITY TESTS OF PSEUDO-RANDOM GENERATORS
TABLE VI. SYNTHESIS RESULTS OF GRAIN STREAM CIPHER
TABLE VII. SYNTHESIS RESULTS OF THE DIFFERENT UNITS OF THE PROPOSED CRYPTOSYSTEM
TABLE IX. SECURITY ANALYSIS OF THE DIFFERENT PROPOSED SCENARIOS
|
v3-fos-license
|
2019-03-04T14:45:53.898Z
|
2017-09-26T00:00:00.000
|
71143440
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2308-3417/2/4/31/pdf",
"pdf_hash": "d1fe65d2c4ecea6f6d4448affad42c420d0fa98b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44194",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d1fe65d2c4ecea6f6d4448affad42c420d0fa98b",
"year": 2017
}
|
pes2o/s2orc
|
Healthcare Cost Reductions after Moving into a Wet Nursing Home Stay—A Case Series
Serious alcohol dependence is associated with high healthcare costs, especially when patients have chronic alcohol problems and dementia and exhibit externalizing behavior. One option is to offer a wet nursing home for seriously ill patients for whom abstinence from alcohol is not a feasible option. In this case series, we present the healthcare costs 18 months before moving into a "wet nursing home" and in the first 18 months of the stay for three cases: one with low needs of care, one with medium needs, and one with high needs. Results: for all three patients, hospital costs were reduced by between 83.7 and 97.9%. For patients with dementia, externalizing behavior, and chronic alcohol problems, a wet nursing home can thus produce substantial cost reductions in other parts of the healthcare sector.
Introduction
Although the course of alcohol problems over a lifetime is highly heterogeneous, a significant proportion of people with alcohol problems experience serious problems well into old age [1]. Prolonged heavy drinking is associated with considerable health problems [2,3], which in turn leads heavy drinkers to seek a considerable amount of healthcare services [4].
At the same time, alcohol problems, as well as many of the problems associated with it, may lead patients to decline healthcare until the situation becomes unbearable or dangerous, which ends up requiring more intensive treatment for longer periods. Conversely, general healthcare systems may try to avoid admitting patients with current alcohol problems to treatment, or may release patients prematurely due to behavioral problems in medical settings.
This state of affairs has in recent decades led to the view that treatment for substance use disorders should adopt a "chronic care" perspective [5]. Under such a perspective, clinicians need to adjust services and interventions to the course of the illness, rather than assume that the condition can be managed without regard to the duration of illness, complications, and other factors such as the age of the patient or the socioeconomic status of the patient.
In general, attending low-threshold services can improve functioning in socially marginalized alcohol-dependent people, even without any requirement that they decrease their use of substances [6]. Some experts within the broader addictions field refer to such pragmatic strategies as "harm reduction" [7]. This approach entails reaching patients where they are, rather than assuming an ideal state for patients, and goes beyond treatment, into the broader management of risk, including legislation, outreach programs, patient education, and providing services in a non-judgmental fashion.
There is some evidence that shelter-based administration of alcohol can help seriously alcohol-dependent homeless people cope better and function better [8]. This suggests that some form of "supported controlled drinking" can help a substantial proportion of seriously alcohol-addicted people function better and obtain a better quality of life.
For severely alcohol-dependent patients who are older or physically impaired, the next logical step is the wet nursing home. The wet nursing home is a nursing home for alcohol-dependent people who need an amount of care that cannot be provided in their own home, but whose alcohol use and associated problems are an obstacle to living in a general nursing home.
Setting
The setting is "E-huset", a wet nursing home for patients with alcohol problems, dementia, and externalizing behavior. Patients referred to the nursing home must be in contact with healthcare, or have a home nurse provided by social services. The patient must also have a severe alcohol use disorder as evaluated by the referring help system, and associated behavioral problems. The nursing home does not require residents to become alcohol abstinent or to reduce their alcohol use. If the resident stops drinking he or she will be referred to an ordinary nursing home. There are no restrictions on alcohol use, but staff members consistently attempt to direct the use into lower-percent alcohol beverages and reduce episodes of binge-drinking.
In most other ways, the home provides the care and activities of a typical nursing home, although the residents are considerably younger than in the typical nursing home.
During the months from September to December of 2006, three cases were selected by staff at the home to assess cost reductions in healthcare, representing a "high needs", a "medium needs" and a "low needs" case. No data were collected on healthcare utilization prior to selecting the cases. The cases were selected to represent varying degrees of need for care and for their ability to provide formal consent.
Healthcare costs for inpatient treatment were estimated based on data from the patients' medical files. Additional information about the costs of particular types of healthcare services was sought from regional hospital services. Outpatient treatment and treatment in emergency rooms are registered but it has not been possible to estimate the price of these services. Thus, the hospital costs in the following cases are based exclusively on inpatient services.
Changes in costs were estimated as the difference between the total costs of inpatient services for the patient in the eighteen months prior to moving into the nursing home and the total costs of inpatient services during the eighteen months following admission to the home. Further, the types of services used by each patient are described, along with information about the residents' medical histories.
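The cost-change and percentage-reduction figures reported below follow directly from this definition; a minimal Python sketch is given here for illustration only, using the inpatient totals quoted in the three case descriptions (the dictionary keys and variable names are ours, not part of the original study).

```python
# Minimal sketch of the cost-change calculation described above.
# The DKK totals are the inpatient costs reported for the three cases;
# key and variable names are illustrative only.

cases = {
    "case_1_low_need":    {"before_dkk": 154_649,   "after_dkk": 25_226},
    "case_2_high_need":   {"before_dkk": 1_023_830, "after_dkk": 21_564},
    "case_3_medium_need": {"before_dkk": 328_579,   "after_dkk": 9_458},
}

for name, c in cases.items():
    absolute_saving = c["before_dkk"] - c["after_dkk"]          # DKK saved over 18 months
    relative_saving = 100 * absolute_saving / c["before_dkk"]   # percentage reduction
    print(f"{name}: saved {absolute_saving:,} DKK ({relative_saving:.1f}% reduction)")
```

Running this reproduces the 83.7-97.9% range of reductions cited in the abstract.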
Ethics
Consent to participate was sought from each patient and presented to the health authorities.
Results
The costs before and after moving into the home are summarized in Table 1. The following provides a narrative description of the three cases.
Case 1. Low Level of Need
Case 1 was a 60-year-old woman. She was raised in a coastal area in a small town, in what is described as a well-functioning family, and has been trained as a healthcare assistant. Prior to moving into the nursing home, she had been married and had two adult children, with whom she had no contact. She had gradually increased her consumption of alcohol over the years, and in spite of several contacts with outpatient services for alcohol dependence, her drinking had steadily increased. After losing her job, she became increasingly socially isolated. Prior to moving into the nursing home, her home nurses visited several times per day, and they often found her in severe withdrawal, occasionally convulsing. She was underweight and incontinent. Her apartment was untidy and rarely cleaned, smelled of urine and feces and evinced her lack of personal hygiene. She was depressed and talked about suicide.
After moving into the home, she gradually became stable, and was able to manage her personal hygiene with minimal assistance. She ate at meals and began to look better. She was still drinking, but at a level that did not cause problems with other residents. Occasionally, she drank heavily for 1-2 weeks. Her contact with other residents and staff stabilized, and she participated in simple practical activities. She seemed less anxious, and did not go through serious withdrawal.
During the 18-month period before moving into the nursing home, she had been hospitalized nine times for periods ranging from 1 to 19 days; in total, she spent 43 days in hospital, had one outpatient visit and several ER visits. The total cost of her hospital-based care was estimated to be 154,649 DKK (20,798.74 Euros).
After moving into the nursing home, she was admitted to inpatient treatment on two occasions for a total of 4 days. The total cost was 25,226 DKK (3392.64 Euros). She also had two visits at an emergency room (ER) and four outpatient visits.
Case 2. High Need of Care
Case 2 was a 62-year-old man. He had been raised in what is described as a well-functioning family. He had no formal training, but had been working for most of his life as an unskilled worker. He had been married three times, and had three adult children, with whom he had no contact. After his last divorce, his consumption of alcohol increased rapidly, causing him to lose his job. Because of alcohol problems and depression, he was repeatedly hospitalized. After an acute cerebral infarction at age 60, he was left with brain damage that rendered him unable to take care of himself. Apart from alcoholic drinks, he started drinking chlorine, denatured alcohol and toilet cleaner. He was described as depressed, lonely and completely without initiative.
After moving into the nursing home, he started to eat, consumed alcohol in an acceptable manner, and his health condition improved considerably. He also made contact with the other residents and staff, reducing his loneliness.
During the 18-month period prior to moving into the nursing home, he was admitted to inpatient wards eleven times, and spent a total of 237 days in hospital. The total cost of these hospitalizations was 1,023,830 DKK (137,694.90 Euros). Due to aggressive/psychotic behavior during intensive care, he also had to be closely supervised by extra staff, but the costs associated with this extra staff could not be estimated by the unit. Further, he had four emergency room visits, one psychiatric emergency and eight outpatient visits.
During his first 18 months in the nursing home, he was hospitalized once for three days. The total healthcare cost was 21,564 DKK (2900.57 Euros). In that period, he had two outpatient visits and no ER visits.
Case 3. Medium Level of Severity
Case 3 was a 70-year-old man, who had a diverse history of employment, including military service, working as a plumber, and running his own business. He had been married four times and had two daughters. After his fourth divorce, he stated that he intended to drink himself to death. By age 50, he obtained a disability pension due to rheumatism, and developed a serious prescription opioid dependence. He would increasingly leave his home and walk around drinking until he would pass out on a bench or in a park. He was unable to cook meals for himself, and repeatedly forgot to turn off his stove. He had a number of somatic complaints, and asked doctors and nurses for painkillers. Additionally, he had serious financial problems, and was often aggressive and dissatisfied.
After moving into the wet nursing home, he became able to manage his personal hygiene, and made and maintained contact with his sister. His response to pain medication improved, and he appeared to be satisfied with living in the home. He continued to drink, but was almost never seen intoxicated.
In the 18 months prior to moving into the home, he had been hospitalized nine times for a total of 77 days, had one ER visit and 5 outpatient visits. The total cost was 328,579 DKK (44,190.59 Euros). After moving into the home, he was hospitalized once for two days, and had three visits to a general ER and four outpatient visits. The total cost of inpatient care during this period was 9458 DKK (1273 Euros).
Discussion
This case series illustrates the clinical value and the potential cost-benefit of the implementation of the wet nursing home. Given the high amount of healthcare services needed for these patients, the implementation of wet nursing homes is likely to reduce healthcare costs for such patients considerably.
Some important limitations must be acknowledged. First, the absence of a control group means that the patients in the study may have been at a lifetime peak level of healthcare costs when they first moved into the nursing home, and that their level of healthcare service use would have declined regardless of whether they had stayed in their homes or not. We consider this a possibility, but given the serious nature of their alcohol problems and other health issues, we doubt very much that the decline would be comparable to a similarly affected control group outside a specialized nursing home. Also, we did not have an alcohol-free alternative to compare with the wet nursing home. It is uncertain whether the subjects in our study would have been willing to move into a nursing home where they were required to be abstinent or to never be intoxicated, or if they could have complied with those rules. However, our experience from other settings is that a general alcohol-abstinence rule for severely demented people with addiction is an object of constant conflict and results in much less control of drinking.
However, regardless of whether such an alternative would have been feasible for this group of patients, the main focus of this case series was not to assess reductions in alcohol consumption for elderly heavy drinkers who were admitted to a wet nursing home. The significant finding is that the subjects improved considerably in several areas although they were all allowed to continue consumption of alcohol.
A further limitation is that we did not have access to standardized measures of the residents' degree of dementia, such as the Mini Mental State Examination [9], or to measures of the severity of their alcohol problems.
Future studies should test the wet nursing home model against other forms of care, including continuing care in the home or traditional nursing homes. If it is not feasible to conduct randomized controlled trials, residents in new nursing homes can be compared to historical control groups or to residents in nursing homes outside of the uptake area of the wet nursing home.
Additionally, future research could look into the social costs associated with people with severe alcohol dependence, including social services costs and impact on others, such as neighbors, adult children, and others.
Conclusions
This case series illustrates how in a wet nursing home, patients with severe alcohol problems can be moved from unacceptable living conditions to conditions that are stable, where immediate risks can be removed from the patients, and where the patients gain access to social contacts. Alcohol harm reduction is potentially important in the context of the most severely affected patients with alcohol use disorders. These targeted services for long-term alcohol users with dementia cause a significant rise in quality of life, just as nursing homes serving the needs of Alzheimer patients elevate the general quality of life for those patients. Another important finding, for policy makers in particular, is that this targeted service also produces a very significant reduction in healthcare costs (Table 1).
|
v3-fos-license
|
2023-01-06T15:35:00.211Z
|
2022-01-21T00:00:00.000
|
255449234
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12015-021-10319-3.pdf",
"pdf_hash": "e8359ddf35127aa479012f0562df5fc7468b0d01",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44195",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "e8359ddf35127aa479012f0562df5fc7468b0d01",
"year": 2022
}
|
pes2o/s2orc
|
Downregulated Calcium-Binding Protein S100A16 and HSP27 in Placenta-Derived Multipotent Cells Induce Functional Astrocyte Differentiation
Little is known about the genes that induce stem cell differentiation into astrocytes. We previously described that heat shock protein 27 (HSP27) downregulation is directly related to neural differentiation under chemical induction in placenta-derived multipotent stem cells (PDMCs). Using this neural differentiation cell model, we cross-compared transcriptomic and proteomic data and selected 26 candidate genes with the same expression trends in both omics analyses. Those genes were further compared with a transcriptomic database derived from Alzheimer’s disease (AD). Eighteen out of 26 candidates showed opposite expression trends between our data and the AD database. The mRNA and protein expression levels of those candidates showed downregulation of HSP27, S100 calcium-binding protein A16 (S100A16) and two other genes in our neural differentiation cell model. Silencing these four genes in various combinations showed that co-silencing HSP27 and S100A16 has a stronger effect on astrocyte differentiation than the other combinations. The induced astrocytes showed a typical astrocytic star shape and developed ramified, stringy and filamentous processes as well as differentiated endfoot structures. Also, some of them connected with each other and formed a continuous network. Immunofluorescence quantification of various neural markers indicated that HSP27 and S100A16 downregulation mainly drives PDMC differentiation into astrocytes. Immunofluorescence and confocal microscopic images showed the classical star-like morphology and co-expression of crucial astrocyte markers in induced astrocytes, while electrophysiology and Ca2+ influx examination further confirmed their functional characteristics. In conclusion, co-silencing of S100A16 and HSP27 without chemical induction leads to PDMC differentiation into functional astrocytes.
Introduction
Due to their properties of self-renewal and differentiation, stem cells, including mesenchymal stem cells and induced pluripotent stem cells (iPSCs), are promising for regenerative medicine [1]. Previously, our group isolated stem cells from human placenta, termed placenta-derived multipotent stem cells (PDMCs), and found that they can be induced and differentiated into hepatocytes, bone cells and neurons [2,3]. We also found that, under chemical induction, heat shock protein 27 (HSP27) downregulation leads to a highly efficient PDMC differentiation into glutamatergic neurons [4]. Despite iPSCs' potential in biomedical research and personalized regenerative medicine, however, there remain several challenges such as genomic instability [5], the epigenetic memory from the somatic cell source during reprogramming [6] and the altered iPSC characteristics and differentiation potentials by expression of reprogramming factors [7]. Thus, their availability and naïve background make PDMCs a good resource for the study and application of regenerative medicine.
Unwanted proliferation or differentiation has restricted the clinical application of iPSCs [8], as the tumorigenicity of these cells has long been recognized [9]. Understanding the mechanisms that control the differentiation and proliferation of implanted stem cells is crucial for future implantation of the tissues or organs derived from these cells. To better understand the underlying mechanisms, we applied a double-cross-comparison screening strategy to match the most consistently expressed molecules in genomic and proteomic databases with the genes found expressed in clinical tissue.
Astrocytes, the most abundant cells in the CNS, play critical roles in the maintenance of neural homeostasis; they are involved in neurotransmitter trafficking and recycling [10], clearance of neuronal waste [11], and protection against oxidative stress [12]. Astrocyte dysfunction has been found to be involved in some neurological disorders, such as sporadic amyotrophic lateral sclerosis, epilepsy, autism, lysosomal storage diseases, and Alzheimer's disease (AD) [13,14]. Despite the effectiveness of some methods for the differentiation of embryonic stem cells [15] and human iPSCs [16] into astrocytes, either the differentiation efficiency was low or the process duration was long (on the order of months), which limits their clinical applications. In addition, in most cases, this differentiation was induced by defined chemical environments [17,18]. For example, overexpression of the transcription factors SOX9 and NFIB in human pluripotent stem cells can induce astrocyte differentiation [19,20]. In contrast, herein we provide another approach involving co-silencing of HSP27 and S100A16, selected through a double-cross-comparison screening strategy, to efficiently induce PDMC differentiation into functional astrocytes within three weeks, thus largely reducing the time for astrocyte differentiation.
PDMC Cell Culture
PDMCs were obtained as previously described, with some modifications [21]. Healthy donors provided fully informed consent, and the study protocol was approved by the Institutional Review Board of Cathay General Hospital under approval number CGH-P105098. After arriving in the laboratory, placentas were cut into pieces using sterilized scissors and washed with phosphate-buffered saline. Next, adequate volumes of normal saline with 0.25% trypsin-EDTA (Gibco/Life Technologies, Carlsbad, CA, USA) were poured over the tissue samples, which were then incubated for 10 min at 37 °C. Then, the samples were centrifuged, and the cells resuspended in DMEM (Gibco/Life Technologies, Carlsbad, CA, USA) with 10% FBS (HyClone/GE Healthcare, Novato, CA, USA), 100 U/mL penicillin and 100 μg/mL streptomycin (MilliporeSigma, St. Louis, MO, USA). Cell cultures were maintained at 37 °C with 5% CO2.

Fig. 1 Double-cross-comparison for screening neural regeneration-related genes. (A) PDMCs were induced to differentiate into neural cells by 0.4 mM IBMX. mRNAs and proteins were extracted and used for mRNA expression microarray and shotgun proteomic analyses, respectively. The results from the two high-throughput omics approaches were compared and plotted (blue dots). In order to investigate the crucial genes with the same expression trends, we set the exclusion criterion at 1.28 in log2 notation. Using this strategy, we narrowed the list of gene candidates to nine upregulated genes and 17 downregulated genes at both the mRNA and protein levels. The selected genes are shown as red dots and indicated with names. (B) To address Alzheimer's disease (AD)-relevant genes, we compared the expression of these 26 genes with expression array data originating from an AD patient. In this analysis, we selected genes with opposite expression patterns because AD is a neurodegenerative disease. (C and D) mRNA expression of the selected genes from the double-cross-comparison strategy verified by qPCR in the PDMC-induced neural cell model (C for upregulation and D for downregulation). We removed KCTD12, SRXN1 and AKR1C1 from the upregulated candidate genes in C, and HMOX1 and BLM from the downregulated candidate genes in D, because their mRNA expression showed opposite regulation patterns or because their expression showed too much variation. (E) The remaining genes were further tested for protein expression by immunoblotting. From those results, PDGFRA, S100A16, PLCB3, HSP27, and MT1E showed the same trends for protein expression as for mRNA expression; therefore, we kept these genes in the candidate list. The other proteins showed opposite regulation patterns or exhibited no changes in protein expression during differentiation. (F) The band intensities were digitized from the immunoblotting results in (E). The results for each protein were divided by the GAPDH intensity at the corresponding time point to show the relative expression fold.
Abbreviations used: methyltransferase-like 7A, METTL7A; aldo-keto reductase family 1 member C1, AKR1C1; serine/threonine-protein phosphatase 2A 65 kDa regulatory subunit A beta isoform, PPP2R1B; platelet-derived growth factor receptor A, PDGFRA; BTB/POZ domain-containing protein KCTD12, KCTD12; aldehyde dehydrogenase 1 family, ALDH1A1; FK506 binding protein 7, FKBP7; ras-related protein rab-31, RAB31; sulfiredoxin 1, SRXN1; caspase 3, CASP3; metallothionein-IE, MT-IE; bloom syndrome protein, BLM; nocturnin, CCRN4L; histone H1.5, HIST1H1B; S100 calcium-binding protein A16, S100A16; tropomyosin alpha-3 chain, TPM3; heme oxygenase (decycling) 1, HMOX1; heat shock protein 27, HSP27; coactosin-like protein, COTL1; 1-phosphatidylinositol-4,5-bisphosphate phosphodiesterase beta-3, PLCB3; ras-related C3 botulinum toxin substrate 3, RAC3; protein enabled homolog, ENAH; ankyrin repeat domain-containing protein 13A, ANKRD13A; A-kinase anchor protein 2, AKAP2; splicing factor, arginine/serine-rich 2, SFRS2; PR domain zinc finger protein 1, PRDM1
Neural Differentiation
For chemical induction of PDMC neural differentiation, cells were cultured in complete DMEM with 0.4 mM 3-Isobutyl-1-methylxanthine (IBMX) without FBS, as previously reported [4]. For gene silencing-mediated induction of neural differentiation, HSP27 and S100A16 were co-silenced with shRNAs as described below.
Microarray and Cross-Comparison of Microarray and Proteomics Data
To investigate the differentially expressed genes in response to IBMX treatment, microarray hybridization (Agilent SurePrint G3 Human GE 8 × 60 K oligonucleotide microarrays; Agilent Technologies, USA) was performed as described in our previous report [22]. All raw microarray data were uploaded to the Gene Expression Omnibus (GEO)/NCBI (accession number: GSE139656). Briefly, the log ratio was defined as log2(Y/N), where Y was the gene expression level in PDMCs after 12 h or 24 h of IBMX treatment and N the gene expression level in PDMCs without IBMX treatment (0 h). Genes with log2 ratios > 1 or < −1 were defined as significantly differentially expressed. In addition, cell lysates from PDMCs with or without IBMX treatment at different times (12 h, 24 h, and 48 h) were subjected to proteomics analysis. We used the standard deviation (SD) to evaluate protein level differences in response to IBMX treatment [23]. The proteins in the highest 10% (SD > 1.28) and lowest 10% (SD < −1.28) of expression in IBMX-treated PDMCs were plotted and assessed relative to those from cells without IBMX treatment. Candidates with similar significant expression patterns in microarray hybridization and proteomics analysis were subjected to further validation.
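For illustration, the selection rule described above (microarray log2 ratios beyond ±1, proteomic SD scores beyond ±1.28, and agreement in direction between the two data sets) can be expressed as a simple filter; the sketch below uses hypothetical column names and example values rather than the original data files.

```python
import pandas as pd

# Hypothetical input: one row per gene with a microarray log2 ratio
# (IBMX-treated vs. untreated) and a proteomic SD score.
omics = pd.DataFrame({
    "gene":       ["HSP27", "S100A16", "PDGFRA", "GENE_X"],
    "log2_ratio": [-1.6,    -1.3,       1.4,      0.2],
    "protein_sd": [-1.9,    -1.5,       1.6,      0.4],
})

mrna_hit    = omics["log2_ratio"].abs() > 1        # significantly changed mRNA
protein_hit = omics["protein_sd"].abs() > 1.28     # top/bottom 10% of protein changes
same_trend  = (omics["log2_ratio"] * omics["protein_sd"]) > 0  # same direction in both

candidates = omics[mrna_hit & protein_hit & same_trend]
print(candidates["gene"].tolist())   # e.g. ['HSP27', 'S100A16', 'PDGFRA']
```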
Calcium Influx
The induced astrocyte-like cells and control cells were first cultured in a 96-well microplate and incubated with 2 μM Calbryte 520 calcium dye (#20651, AAT Bioquest, CA, USA) in phenol red-free DMEM (#30153028, Gibco/Thermo Fisher) for 1 h at 37 °C. The Calbryte 520 calcium dye-containing medium was replaced by low-K+ Tyrode's solution (2 mM CaCl2, 140 mM NaCl, 1 mM MgCl2, 4 mM KCl, 10 mM glucose, and 10 mM HEPES, pH 7.2), the cells were incubated for another 30 min, and the absorbance was read at an excitation/emission wavelength of 490/525 nm at room temperature with a Synergy HT Multi-Mode microplate reader (BioTek, VT, USA) to obtain the baseline. After a 90 s baseline recording, 3.5 U/ml human thrombin in low-K+ Tyrode's solution was added, and the experiments were run at an excitation/emission wavelength of 490/525 nm at room temperature. The results were normalized by the initial baseline values and plotted.

Fig. 2 (A) PDMCs infected with viruses containing shRNAs specific to HSP27 (shHSP27), MT1E (shMT1E), S100A16 (shS100A16) and PLCB3 (shPLCB3). Cells infected with viruses containing Luciferase shRNA (shLuc) were used as infection controls. We performed single gene silencing (upper row) and double gene silencing (middle and lower rows). The cell images were taken 12 days after virus infection. The cells with HSP27 and S100A16 double silencing showed the highest neural cell differentiation among the groups (middle row, shHSP27 + shS100A16). For better characterization of the induced neural cells, an enlarged image is shown in the rightmost panel of the middle row. The relative position of the enlarged rectangle is indicated by a black frame in the original image. Scale bar: 100 μm. (B and C) Percentage of astrocyte-like cells under various combinations of gene silencing. Phase contrast images of PDMCs with single silencing or various double-silencing combinations were taken 12 days after infection. Six images were randomly selected from each experimental group and the astrocyte-like cells were counted according to their morphology. (D and E) Determination of HSP27 and S100A16 mRNA and protein expression levels. PDMCs were co-infected with viruses containing shRNAs specific to HSP27 (shHSP27) and S100A16 (shS100A16). Cells infected with viruses containing shRNA specific for silencing of luciferase (shLuc) were used as controls. After infection, cells were incubated for 12 (D12), 18 (D18) or 24 days (D24). Cells were harvested and mRNA extracted for qRT-PCR at those time points. The proteins in the cells at D18 were used for Western blotting.
Electrophysiology
Ten days after co-silencing of S100A16 and/or HSP27, cells were plated onto 12 mm coverslips at a density of 1 × 10^4. After 2-3 days, electrophysiological recordings were performed. Membrane currents were recorded in the whole-cell patch-clamp configuration. Patch pipettes were prepared in-house from glass capillaries (Kimble). The capillaries were pulled and fire-polished in order to obtain a tip resistance of 3-4 MΩ with solution. Data were recorded at 10 kHz (Axon MultiClamp 700B amplifier with Pulse software; Signal 4.08). The series resistance was maintained < 10 MΩ. Recordings with leak currents > 100 pA were discarded. The voltage stimulation protocol, from a holding potential (Vh) of −60 mV, stepped the membrane potential to −100 mV for 200 ms before a 100 ms long depolarizing ramp to +120 mV. In voltage-step measurements, the Vh was −60 mV, with steps from −100 mV to +120 mV in 20 mV increments over 200 ms. Ten seconds was used as the interval between pulses in both protocols. After establishing the whole-cell recording configuration, the resting membrane potential was obtained for 120 s using the amplifier analog circuit. Electrophysiological experiments were performed by perfusing the clamped cell with a standard bath saline solution (119 mM NaCl, 2.5 mM KCl, 2.5 mM CaCl2, 26.2 mM NaHCO3, 1 mM NaH2PO4, 1.3 mM MgSO4, 11 mM glucose, pH 7.4), with osmolality adjusted by mannitol. The intracellular pipette solution was prepared as follows: 145 mM KCl, 1 mM MgCl2, 2 mM EGTA, 0.2 mM CaCl2, 2 mM Mg-ATP, 0.3 mM Na3-GTP, 10 mM HEPES (pH 7.2), with an osmolality of 300-310 mOsm kg−1 adjusted by mannitol. A gravity-driven microperfusion system with a 2 ml/min flow rate was used to apply the extracellular solution.

Fig. 3 PDMC morphology with HSP27 and S100A16 silencing at different time points. (A and B) Phase contrast images of cells with HSP27 and S100A16 double silencing. In the cells silenced with Luciferase (shLuc control, left column), there were nearly no induced astrocytes. The induced cells showed many dendritic processes at D12 and reached their optimum at D18 without chemical inducers (shHSP27 + shS100A16, right column). For better characterization of the induced astrocytes, a part of the image at D18 was selected and enlarged to observe the formed astroglial network. The relative position of the enlarged rectangle is indicated by a black frame in the original image. Scale bar: 100 μm. (C) Percentage of induced astrocytes with HSP27 and S100A16 co-silencing. The bar chart shows the percentage of induced astrocytes quantified from the above conditions. The induced astrocytes were counted according to their morphology. ***: p < 0.001. (D and E) High resolution images of induced astrocytes. Induced astrocytes (D18) showed a typical star-shaped morphology. The descriptive image, derived from the middle image, was processed with Photoshop and the specific fine structure of filamentous processes is indicated by arrows. The images were taken using an Olympus BX51W Scientifica system coupled with a DAGE-MTI IR-1000 CCD. Scale bar: 20 μm.
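The ramp and voltage-step protocols described in the Electrophysiology section above can be summarised as command-voltage waveforms; the NumPy sketch below reconstructs their timing at the stated 10 kHz sampling rate for illustration only (the length of the brief pre-stimulus holding segment is our assumption, and this is not the acquisition code used for the recordings).

```python
import numpy as np

FS = 10_000          # sampling rate (Hz), as stated for the recordings
V_HOLD = -60e-3      # holding potential (V)

def ramp_protocol():
    """-60 mV hold -> -100 mV for 200 ms -> 100 ms ramp to +120 mV."""
    hold = np.full(int(0.100 * FS), V_HOLD)      # brief hold segment (length illustrative)
    pre  = np.full(int(0.200 * FS), -100e-3)
    ramp = np.linspace(-100e-3, 120e-3, int(0.100 * FS))
    return np.concatenate([hold, pre, ramp])

def step_protocol():
    """Family of 200 ms steps from -100 mV to +120 mV in 20 mV increments."""
    steps = np.arange(-100e-3, 120e-3 + 1e-9, 20e-3)
    hold  = np.full(int(0.100 * FS), V_HOLD)     # brief hold segment (length illustrative)
    return [np.concatenate([hold, np.full(int(0.200 * FS), v)]) for v in steps]

print(ramp_protocol().shape, len(step_protocol()))   # (4000,) 12
```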
Statistical Analysis
Data were analyzed using SPSS, version 21.0 (IBM, New York, NY, USA). Quantitative variables are described as means ± SD. One-way ANOVA was used to compare the differences between groups, followed by the Bonferroni post hoc test. All tests were two-sided. For pairwise comparisons, Student's t test was used. P < 0.05 was considered statistically significant.
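The comparisons above were carried out in SPSS; purely as an illustration of the analysis design, an equivalent one-way ANOVA with Bonferroni-corrected pairwise t tests could be set up in Python as sketched below (the group values are placeholders, not study data).

```python
import numpy as np
from scipy import stats

# Placeholder measurements for three experimental groups (not study data).
rng = np.random.default_rng(0)
groups = {
    "shLuc":             rng.normal(1.0, 0.2, 6),
    "shHSP27":           rng.normal(1.2, 0.2, 6),
    "shHSP27+shS100A16": rng.normal(2.0, 0.2, 6),
}

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni-corrected pairwise two-sided t tests (post hoc).
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")
```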
Shotgun proteomics methods, including protein digestion, dimethyl labeling, strong cation exchange separation, and MS data processing, are listed in the Supplements. Immunoblotting procedures (including the antibodies employed), total RNA extraction and the quantitative real-time PCR method, immunofluorescence, and laser scanning confocal fluorescence microscopy procedures and the antibodies used are listed in Supplement S1.
Double-Cross-Comparison Screening Strategy for Identification of Crucial Genes in Neural Differentiation
We used a neural differentiation model derived from PDMCs as the cell model [4]. mRNAs and proteins were collected from the differentiated cells. The mRNAs were used for probing of a human oligonucleotide microarray. The proteins were digested with trypsin and then labeled with dimethyl groups for a shotgun proteomic approach. The microarray results were cross-compared with the shotgun proteomic data at the expression level, and the results plotted in Fig. 1A. We considered that the crucial genes in neural differentiation, our candidate genes, would be those with the same expression trends at the transcriptional and translational levels. For the results in Fig. 1A, we set the exclusion criterion at 1.28 in log2 notation. We found 26 gene candidates fitting our criteria (red dots in Fig. 1A). Later, to screen neural regeneration-related genes, we cross-compared these 26 candidate genes with a publicly available AD microarray database. The goal of the second cross-comparison was to search for candidate genes with opposite expression patterns between the regeneration and degeneration databases. We removed the METTL7A, RAB31, PPP2R1B, PRDM1, HIST1H1B, AKAP2, SFRS2 and RAC3 genes from our candidate list because the expression trends of these genes in AD were the same as in our two types of omics data (Fig. 1B, all red or all blue boxes in the three databases). We performed qPCR to verify the expression of the remaining 18 candidate genes in our list (Fig. 1C, upregulated genes, and 1D, downregulated genes). We found that the mRNA expression levels of KCTD12, SRXN1, AKR1C1, HMOX1 and BLM were not in accordance with our transcriptomic results, and these genes were thus removed from our candidate list. Next, we immunoblotted the remaining 13 candidate genes (Fig. 1E), and the band intensities were quantified. All band intensities were normalized to GAPDH expression at the corresponding time point (Fig. 1F). The results showed that in the upregulation group, only platelet-derived growth factor receptor alpha (PDGFRA) fit our expectations, and it may have some effect in promoting differentiation. In the downregulation group, HSP27 and 1-phosphatidylinositol 4,5-bisphosphate phosphodiesterase beta-3 (PLCB3) were expressed as expected.

Table 1 Fluorescence quantification of induced neural cells by various neuron and astroglial markers. PDMCs with HSP27 and S100A16 co-silencing were probed 18 days post co-silencing with various neuron and astroglial marker antibodies, and appropriate secondary antibodies with FITC conjugation were applied thereafter. DAPI was used to stain the cell nucleus. Cells with immunofluorescence signals were subjected to fluorescence quantification in an NC 3000 image cytometer. The percentage of fluorescence-positive cells over DAPI-positive cells was calculated for each marker. Data were collected from three independent experiments. The mean values of each marker in each experimental group were compared to the mean values in the shLuc group for statistical calculation. * p < 0.05; ** p < 0.01; *** p < 0.001.
As for S100A16 and metallothionein-1E (MT1E), apart from the expression at 0 h, all other time points showed a gradually downregulated pattern.
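The second cross-comparison described above, which retains only candidates whose direction of change is opposite to that seen in the AD expression data, amounts to a simple sign check per gene; a minimal sketch with invented trend values follows.

```python
# Minimal sketch of the second cross-comparison: keep candidates whose
# expression trend in the differentiation data is opposite to the trend in
# the AD expression array. Trend values here are invented
# (+1 = upregulated, -1 = downregulated) and serve only to illustrate.

differentiation_trend = {"HSP27": -1, "S100A16": -1, "PDGFRA": +1, "METTL7A": +1}
ad_trend              = {"HSP27": +1, "S100A16": +1, "PDGFRA": -1, "METTL7A": +1}

kept = [g for g, t in differentiation_trend.items() if t * ad_trend[g] < 0]
removed = sorted(set(differentiation_trend) - set(kept))
print("kept:", kept, "| removed (same trend as AD):", removed)
```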
HSP27 and S100A16 Downregulation is Crucial for Neural Differentiation
To evaluate the physiological roles played by these four downregulated genes, we silenced them alone or in various double combinations via lentivirus-mediated silencing. By observing cell morphology, we found that silencing each of the four candidate genes individually had little effect (Fig. 2A, row 1, shHSP27, shMT1E, shS100A16 and shPLCB3), as did luciferase silencing (used as a control; Fig. 2A, row 3, very right panel, shLuc). However, in the double silencing experiments, several combinations elicited neural-like morphology, especially in the shHSP27 plus shS100A16-treated group (Fig. 2A, row 2, middle panel, shHSP27 + shS100A16). The induced cells could be better identified in the enlarged rectangle image (Fig. 2A, row 2, right panel, shHSP27 + shS100A16 enlarged rectangle). The percentage of astrocyte-like cells out of total cells was quantified and compared across all gene silencing groups (Fig. 2B for single gene silencing; Fig. 2C for double gene silencing). The mRNA and protein expression levels of HSP27 and S100A16 were examined in the cells treated with shHSP27 plus shS100A16, and the results confirmed the downregulation of HSP27 and S100A16 in these cells (Fig. 2D for mRNA and 2E for protein expression levels). Some of the cells induced by silencing HSP27 and S100A16 showed a star-shaped morphology and resembled astrocytes, indicating that downregulation of these two genes might lead stem cells to differentiate into astrocytes. It is worth noting that there were no chemical inducers, such as IBMX, in these experiments; only the HSP27 and S100A16 genes were silenced. We also conducted 4-gene silencing and various combinations of 3-gene silencing; the results are presented in Supplement S2. However, none of them showed better differentiation ability than co-silencing of HSP27 and S100A16. We therefore applied co-silencing of HSP27 and S100A16 in PDMCs and observed cell morphology at 12, 18, and 24 days after the cells were infected with lentiviruses containing shHSP27 and shS100A16. We found that many differentiated cells showed star shapes, a typical astrocytic shape, and developed ramified, stringy and filamentous processes (Fig. 3A). Also, some of them connected with each other and formed a continuous network (Fig. 3A, D18, enlarged rectangle). For each experimental condition, the induced astrocytes were normalized and quantified (Fig. 3C). According to the cell images and quantification results, 18 days was the optimal time point for astrocytic differentiation.

Fig. 4 Co-silencing of HSP27 and S100A16 directs PDMC differentiation into astrocytes. (A) Immunofluorescence imaging of induced astrocytes. PDMCs with co-silencing of HSP27 and S100A16 (shHSP27 + shS100A16), 18 days after virus infection. The induced astrocytes were probed with primary antibodies specific against MAP2, TUJ1, vGLUT1, GFAP, ALDH1L1, and GS, followed by the appropriate FITC-conjugated secondary antibody. PDMCs with silencing of luciferase (shLuc) were also stained with the specific antibodies to demonstrate the specificity of each immunostaining. Cell nuclei were stained with DAPI. Scale bar: 50 μm. (B) Laser scanning confocal fluorescence microscopy images of induced astrocytes. PDMCs with co-silencing of HSP27 and S100A16 to induce astrocyte differentiation, double stained with anti-GFAP antibody (green) combined with other markers (red) including vGLUT1, ALDH1L1, GS, S100B, SOX9 and KIR4.1. Scale bar: 72.7 μm.
Thereafter, we captured images of the induced cells using higher resolution microscopy with an IR CCD, and the cells showed a typical stellate morphology (Fig. 3D). We also drew a descriptive figure based on the IR image (Fig. 3E). From the descriptive image, we found that the induced astrocytes contained astrocytic main and small processes and well differentiated endfoot structures, and we also noticed that some process fibers extended from a particular endfoot, probably to explore and touch neighboring neurons or endothelial cells. All this evidence led us to consider the differentiated cells to be astrocytes.
Characterization of Differentiated Cell Types
We next tried to characterize the types of differentiated cells derived from HSP27 and S100A16 co-silencing. We used the optimal conditions for astrocyte differentiation mentioned above, that is, 18 days post co-silencing, and stained the differentiated astrocytes with various neural and glial markers. From the fluorescence quantification results shown in Table 1, we found that silencing of either HSP27 or S100A16 alone exerted little effect on astrocyte or neuron differentiation markers, except for neuron-specific class III β-tubulin (TUJ1), with a 13.88% positive rate in shHSP27-treated cells, and glial fibrillary acidic protein (GFAP), with a 6.89% positive rate in shS100A16-treated cells. However, in the case of double silencing, we found that shHSP27 and shS100A16 together significantly enhanced astrocytic marker expression, such as microtubule-associated protein 2 (MAP2) with a 7.12% positive rate, vesicular glutamate transporter 1 (vGLUT1) with a 17.28% positive rate, GFAP with a 24.94% positive rate, glutamine synthetase (GS) with a 9.57% positive rate, and aldehyde dehydrogenase 1 family member L1 (ALDH1L1) with a 13.19% positive rate. In control cells, there was nearly no expression of the astrocytic markers mentioned above. Other markers specific to cholinergic neurons (choline acetyltransferase, ChAT), dopaminergic neurons (tyrosine hydroxylase, TH), GABAergic neurons (glutamate decarboxylase 65, GAD65) and glutamatergic neurons (N-methyl-d-aspartate receptor 2B, NMDAR2B, and synaptosomal-associated protein of 25 kDa, SNAP25) did not exhibit significant expression changes. The results indicate that silencing of HSP27 and S100A16 leads to PDMC differentiation into astrocytes.
Confirmation of Astroglial Differentiation with Astrocyte Markers
We next performed immunofluorescence staining of the induced astrocytes (D18) to confirm cell lineage with astrocytic markers including MAP2, TUJ1, vGLUT1, GFAP, ALDH1L1, and GS (Fig. 4A). The results, in accordance with the previous immunofluorescence intensity counts, showed slightly faint MAP2 and TUJ1 immunofluorescence but intense vGLUT1, GFAP, ALDH1L1, and GS immunofluorescence. The results demonstrated that the differentiated cells not only stained for these astrocyte markers but also exhibited a typical astrocytic morphology, with their processes arranged in filamentous star-shaped patterns. Next, to confirm the co-expression of astroglial-specific markers, we performed double staining in induced astrocytes with GFAP and other astroglial markers. The images were captured using laser scanning confocal fluorescence microscopy (Fig. 4B). GFAP staining served to locate induced astrocytes and to better observe the other markers. The induced cells showed the typical star-shaped astrocyte morphology (Fig. 4B, Phase column and GFAP column, green), with some astrocytic end-feet and 4 to 6 main processes in the phase contrast images. Meanwhile, positive astrocytic markers including vGLUT1, ALDH1L1, GS and S100 calcium-binding protein B (S100B) (Fig. 4B, middle column, red) demonstrated the astrocytic nature of the differentiated cells. An astrocyte-specific nuclear marker, SOX9, showed dense staining in the nucleus, which is in agreement with previous reports [24]. We also performed immunostaining of an inwardly rectifying potassium channel, KIR4.1 (Fig. 4B, lowest images in the middle column, red), which is highly expressed in astrocytes. We observed that KIR4.1 was expressed in the cell membrane, consistent with the fact that KIR4.1 is a potassium channel [25]. We also observed that KIR4.1 was enriched in the endfeet of the induced astrocytes, in accordance with other reports [26]. All these results indicated that the PDMCs had differentiated into astrocytes.
Functional Characterization of Induced Astrocytes
We also investigated the function of the astrocytes derived from co-silencing of S100A16 and HSP27 in PDMCs. For the functional experiments, astrocytes were induced by co-silencing of HSP27 and S100A16 in PDMCs for 18 days (D18), while cells silenced with shLuc were used as controls.
First, we assessed their capacity to generate intracellular calcium waves after thrombin application. This well-established method to characterize the function of astrocytes involves a protease-activated receptor (PAR) expressed in functional astrocytes, which is activated by thrombin and enables Ca2+ entry into cells [20]. After differentiating PDMCs into astrocytes by HSP27 and S100A16 co-silencing, we added thrombin to activate PAR and examined Ca2+ entry with Calbryte 520 calcium dye. The results showed that thrombin addition led to more robust Ca2+ mobilization in the astrocyte-like cells than in control cells (Fig. 5B). Across three independent experiments, Ca2+ entry showed a nearly 17.9-fold increase on average in the differentiated astrocytes compared with control cells (Fig. 5C). We next examined the electrophysiological properties of the induced astrocytes by patch-clamp recordings, first by ramping (Fig. 5D) and then by voltage steps (Fig. 5E), to investigate their ability to generate potassium currents. With respect to the former, induced astrocytes showed a large outwardly rectifying potassium conductance (Fig. 5D, left panel). In contrast, control cells showed a linear conductance (Fig. 5D, right panel). When 10 mM tetraethylammonium (TEA), a potassium channel blocker, was added to the buffer, the induced astrocytes displayed a reduced outwardly rectifying potassium conductance (Fig. 5D, middle panel), indicating the existence of potassium channels. Finally, when the voltage-step protocol was applied to the induced astrocytes, we found typical outward potassium currents with rapid activation kinetics, partial slow inactivation, and a response magnitude proportional to the applied voltage (Fig. 5E, left panel). In control cells, nearly no responsive activation phase was evoked (Fig. 5E, right panel). Likewise, adding 10 mM TEA drastically reduced the response magnitude (Fig. 5E, middle panel), indicating the existence of potassium channels in the induced astrocytes. At +100 mV, induced astrocytes exhibited a nearly 66% decrease in response magnitude, while control cells showed only an 8% decrease on average (Fig. 5F). We also recorded their resting membrane potential (Fig. 5G). The resting membrane potential of induced astrocytes was more negative than that of control cells, in agreement with the properties of astrocytes [26]. The results of the functional characterization showed that the cells derived from co-silencing of HSP27 and S100A16 in PDMCs were able to respond to external stimuli and behaved as astrocytes.
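The two functional read-outs quantified above, the thrombin-evoked Ca2+ response relative to control and the fractional TEA block of the outward current, are simple ratios; the short sketch below illustrates the arithmetic with invented trace values (the exact averaging used in the study may differ).

```python
import numpy as np

# Illustrative, baseline-normalised fluorescence traces (not the recorded data);
# each trace is already divided by its own pre-thrombin baseline value.
trace_induced = np.array([1.0, 1.0, 1.1, 9.5, 17.2, 15.8, 12.0])
trace_control = np.array([1.0, 1.0, 1.0, 1.1, 1.0, 0.9, 1.0])

# One simple definition of the fold change: peak of the induced trace
# relative to the peak of the control trace.
print(f"Ca2+ response, induced vs control: ~{trace_induced.max() / trace_control.max():.0f}-fold")

# Fractional block of the outward current at +100 mV by 10 mM TEA
# (amplitudes normalised to the pre-TEA current; values illustrative).
i_pre_tea, i_post_tea = 1.0, 0.34
print(f"TEA block: {100 * (1 - i_post_tea / i_pre_tea):.0f}% reduction")
```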
Discussion
We previously demonstrated that HSP27 downregulation enhances PDMC differentiation into glutamatergic neurons under chemical induction [4]. Herein, we describe how co-silencing of HSP27 and S100A16 directs PDMCs to spontaneously differentiate into astrocytes without chemical induction. The main role of HSP27, regarded as a differentiation pathway keeper, is to prevent neural differentiation by interacting with procaspase 3, which needs to be activated at the beginning of differentiation. Therefore, a downregulated HSP27 expression level is beneficial for neural differentiation. Currently, little is known about S100A16, and almost all publications focus on its association with cancer. S100A16, a calcium-binding protein belonging to the S100 superfamily, is mostly involved in tumor progression [27]. It forms a homodimer, with two Ca2+ ions interacting in the EF hand of each subunit [28]. To date, there is no other report depicting the role of S100A16 in astrocyte differentiation. This is the first report on S100A16's involvement in neural differentiation.
The identification of crucial molecules and the mechanisms controlling neural differentiation is a major step toward the successful treatment of neurodegenerative diseases. One of the major age-related neural degenerative diseases, AD accounts for the largest proportion (65%-70%) of dementia cases in the aged population [29,30]. In this study, we used a double-cross-screening strategy with omics data derived from our stem cell differentiation model and an AD database to successfully identify a crucial combination of downregulated genes involved in astrocyte generation. The main function of astrocytes in the CNS is to maintain homeostasis in processes such as neurotransmitter uptake and recycling, synaptic activity modulation and ionic balance [31]. Many studies have shown that astrocyte function is altered in brains of patients with neurodegeneration [32]. For example, the presence of amyloid beta abnormally regulates gliotransmission and neurotransmitter uptake and alters calcium signaling in astrocytes [33]. The astrogliosis in AD is thought to be responsible for changes in critical molecule expression and morphology in astrocytes [34], changes that result in scar formation and inhibition of axon regeneration [35]. Through our screening strategy, we successfully found S100A16 in the AD dataset which was downregulated in the neuron-neogenesis dataset.
The double-cross-comparison strategy used here was proven to be an effective approach for the identification of molecules crucial for neural differentiation. The genes revealed through this strategy were sufficient to induce PDMCs to differentiate into cells that possess the functional and morphological characteristics of astrocytes. In conclusion, our findings provide molecular insights as well as a good astrocyte differentiation cell model for potential future application in human regenerative medicine.
|
v3-fos-license
|
2020-09-03T13:51:47.183Z
|
2020-09-03T00:00:00.000
|
221463272
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41612-020-00137-8.pdf",
"pdf_hash": "8d1abe8829ef812cbee85a43e86be988be80e39c",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44196",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "8d1abe8829ef812cbee85a43e86be988be80e39c",
"year": 2020
}
|
pes2o/s2orc
|
Constraint on precipitation response to climate change by combination of atmospheric energy and water budgets
Global mean precipitation is expected to increase with increasing temperatures, a process which is fairly well understood. In contrast, local precipitation changes, which are key for society and ecosystems, demonstrate a large spread in predictions by climate models, can be of both signs and have much larger magnitude than the global mean change. Previously, two top-down approaches to constrain precipitation changes were proposed, using either the atmospheric water or energy budget. Here, using an ensemble of 27 climate models, we study the relative importance of these two budgetary constraints and present analysis of the spatial scales at which they hold. We show that specific geographical locations are more constrained by either one of the budgets and that the combination of water and energy budgets provides a significantly stronger constraint on the spatial scale of precipitation changes under anthropogenic climate change (on average about 3000 km, above which changes in precipitation approach the global mean change). These results could also provide an objective way to define the scale of ‘regional’ climate change.
INTRODUCTION
Improving our understanding of the response of the hydrological cycle to climate change is key to effective adaptation strategies and remains a major scientific challenge. As the global mean temperature increases with changing climate, the global mean precipitation rate is predicted to increase by about 1-3% K−1. This rate of increase in precipitation is slower than the rate of increase in humidity in the atmosphere due to thermodynamic considerations (which is predicted to be ~7% K−1 from the Clausius-Clapeyron relation). The slower rate of precipitation increase compared to the humidity increase is due to energetic constraints [1][2][3][4], i.e. the ability of the atmosphere to radiatively cool, and must impose a decrease in convective mass fluxes 1. Compared to the global mean response, regional changes in precipitation remain poorly understood 5. At what scale do precipitation changes transition from global (or large-scale) to regional precipitation change values? We note that previously 'regional' and 'global' have generally not been objectively defined in this context. Any global or local precipitation change is constrained by both the atmospheric energy budget 4,6,7 and the atmospheric water budget 8,9. The atmospheric energy budget forces any global mean precipitation increase (which increases the latent heat release) to be balanced by an increase in the radiative cooling of the atmosphere and/or by a decrease in the surface sensible heat flux. Locally, the increased latent heating could be compensated by changes in the divergence of dry static energy 6,10,11, which was shown to exhibit contrasting behaviour for tropical and extra-tropical perturbations 12. A similar argument could be presented for the atmospheric water budget, i.e. globally, any increase in precipitation must be accompanied by a similar increase in evaporation. Again, local precipitation changes could be compensated by changes in the divergence of water vapour 8. Changes in the divergence of water vapour could be induced either by changes in atmospheric circulation, driving changes in air mass divergence (referred to as the dynamical contribution), or by changes in the water vapour capacity, driving changes in the divergence of water vapour even for a given air mass divergence (referred to as the thermodynamic contribution 13). The latter is expected to follow the Clausius-Clapeyron relation.
On long time-scales (for which atmospheric storage terms can be neglected), the vertically integrated energy and water budgets are given, respectively, by:

P + Q = div(s)    (1)

P − E = −div(q_v)    (2)

where P is the precipitation, E is the evaporation and div(s) and div(q_v) are the divergence of dry static energy (s) and water vapour (q_v), respectively (all in units of W m−2). Q is the sum of the surface sensible heat flux (Q_SH) and the atmospheric radiative heating (Q_R) due to radiative shortwave (SW) and longwave (LW) fluxes (F). Q_R can be expressed as the difference between the top of the atmosphere (TOA) and the surface (SFC) fluxes as follows:

Q_R = (F_TOA^SW − F_TOA^LW) − (F_SFC^SW − F_SFC^LW)    (3)

where LW fluxes are positive upward and SW fluxes are positive downward. Recent research has demonstrated that under our current climate conditions the atmospheric water and energy budgets are locally closed on scales of the order of 4000-5000 km 8,14.
That means that, based on observations in the tropics, once averaged over ~5000 km, P ≅ −Q and the atmosphere is close to radiative-convective equilibrium 14. In addition, based on both climate model and reanalysis data-sets, it was shown that a similar averaging scale (~4000 km) is required to close the water budget (i.e. P ≅ E) 8. Beyond these scales, the divergence terms become inefficient in compensating the energy/water imbalance.
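For concreteness, the budget terms in Eqs. (1)-(3) can be evaluated from gridded model output as in the sketch below; the function and argument names are illustrative, and all inputs are assumed to be long-term means already expressed in W m−2 (precipitation converted with the latent heat of vaporisation), so that storage terms can be neglected.

```python
import numpy as np

def budget_residuals(P, E, Q_sh, F_sw_toa, F_lw_toa, F_sw_sfc, F_lw_sfc):
    """Water and energy budget imbalances following Eqs. (1)-(3); inputs in W m-2.

    SW fluxes are positive downward, LW fluxes positive upward, so the
    atmospheric radiative heating is the TOA minus surface net flux.
    """
    Q_r = (F_sw_toa - F_lw_toa) - (F_sw_sfc - F_lw_sfc)   # Eq. (3)
    Q = Q_sh + Q_r
    water_imbalance  = P - E    # = -div(q_v) by Eq. (2); ~0 where the water budget closes
    energy_imbalance = P + Q    # =  div(s)   by Eq. (1); ~0 where the energy budget closes
    return water_imbalance, energy_imbalance

# Example with single (area-mean) values, W m-2:
w, e = budget_residuals(P=80.0, E=78.0, Q_sh=20.0,
                        F_sw_toa=240.0, F_lw_toa=240.0,
                        F_sw_sfc=165.0, F_lw_sfc=60.0)
print(w, e)   # small residuals indicate near-closure at this averaging scale
```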
Shifting our perspective from the current climate to a changing climate, Eqs. (1) and (2) become:

δP + δQ = δdiv(s)    (4)

δP − δE = −δdiv(q_v)    (5)

where δ represents the difference between a future climate and the current climate. Previous work demonstrated that the correlation between δP and δE 8 and between δP and δQ 6 increases with the spatial scale of averaging and becomes larger than 0.5 for scales of a few thousand kilometres. This again demonstrates that the ability of divergence to compensate for changes in precipitation decreases with the spatial scale. Once the divergence terms become inefficient, the precipitation changes approach the global mean change, which is known to be relatively small (1-3% K−1) compared to local precipitation changes 1-3. However, precipitation changes on regional scales, for which the divergence terms remain efficient, could be much larger than the global mean. Hence, identifying the 'break-down' scale between these two regimes can help in understanding and predicting future changes in precipitation. A priori, the decrease in the efficiency of divergence with averaging scale does not have to be similar for the water and energy budgets. In addition, at different geographical locations the relative magnitude of the two divergence terms could change. Thus, it is possible that the characteristic spatial scale of changes in precipitation under global warming is constrained by a combination of the atmospheric energy and water budgets, with a changing relative importance between them. We note that the relative role of the different budgets has not been quantified before. The aim of this study is to examine the differences and commonalities between the water 8 and energy 6 budget controls on precipitation and their role in determining the spatial scale of changes in precipitation under climate change. We demonstrate that combining the water and energy budget constraints results in improved predictions of the scale of precipitation changes.
RESULTS
Energy and water budgets control on precipitation
Figure 1 presents the multi-model mean divergence terms of both the water and the energy budgets, averaged over different spatial scales. In the tropics the two divergence terms are strongly anticorrelated and show the same spatial structure (please note that div(q_v) is presented with a minus sign to be consistent with Eqs. (1) and (2)). The multi-model mean spatial correlation between −div(q_v) and div(s) in the tropics (−30° to 30°) is 0.94 (see Supplementary Fig. S1 for the zonal mean behaviour of all models). The high correlation in the tropics demonstrates that any convergence of water vapour that generates precipitation will be accompanied by production and divergence of dry static energy. The opposite is true in the sub-tropics, where there is a net divergence of water vapour and a net convergence of dry static energy. Moving poleward, div(s) becomes negative and large (in absolute magnitude), while −div(q_v) becomes positive and small. At high latitudes, there is a net convergence of both water vapour and dry static energy, as both are being advected from lower latitudes by eddies. However, as the amount of water vapour decreases with the decrease in temperature towards the poles, the convergence of water vapour decreases from the midlatitude storm tracks to the poles. In contrast, the dry static energy convergence increases polewards. The zonal mean divergence terms of water vapour and dry static energy reflect the meridional advection of moist static energy 15. We note that at the native resolution (upper row) the divergent terms appear small at many locations, especially over land; however, they are not negligible compared to the local precipitation (see Supplementary Fig. S2, presenting the normalised divergent terms). Averaging the water vapour and dry static energy divergence terms over increasingly larger scales (Fig. 1), we note that the spatial pattern becomes weaker and almost completely vanishes at 3000 km. This weakening of the spatial pattern occurs on smaller scales for the water budget (cf. Dagan et al. 8) than for the energy budget.

Fig. 1 Multi-model mean divergence of water vapour (−div(q_v), left column a-e, calculated as the precipitation minus evaporation) and of dry static energy (div(s), right column f-j, calculated as the precipitation plus the atmospheric radiative terms plus the surface sensible heat flux), at different spatial scales. The top row (a, f) shows the native model resolution; subsequent rows are averaged over a circle centred at each grid point with the radius indicated in the title.
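The circular averaging used in Fig. 1 can be implemented directly on a latitude-longitude grid, for example as in the deliberately simple, unoptimised sketch below, which averages a field over all grid cells whose great-circle distance from a given point is smaller than the chosen radius, with cos(latitude) weighting as an approximation of grid-cell area.

```python
import numpy as np

R_EARTH = 6371.0  # km

def circular_mean(field, lat, lon, lat0, lon0, radius_km):
    """Area-weighted mean of `field` over grid cells within `radius_km`
    of (lat0, lon0). `lat`, `lon` are 1-D coordinate arrays in degrees,
    `field` has shape (len(lat), len(lon)). Simple and unoptimised.
    """
    lat2d, lon2d = np.meshgrid(np.radians(lat), np.radians(lon), indexing="ij")
    p0, l0 = np.radians(lat0), np.radians(lon0)
    # great-circle (haversine) distance from (lat0, lon0) to every grid cell
    d = 2 * R_EARTH * np.arcsin(np.sqrt(
        np.sin((lat2d - p0) / 2) ** 2
        + np.cos(lat2d) * np.cos(p0) * np.sin((lon2d - l0) / 2) ** 2))
    mask = d <= radius_km
    weights = np.cos(lat2d)            # approximate grid-cell area weighting
    return np.average(field[mask], weights=weights[mask])

# Example: average a synthetic field over a 3000 km circle centred on the equator.
lat = np.arange(-89.0, 90.0, 2.0)
lon = np.arange(0.0, 360.0, 2.0)
field = np.random.default_rng(1).normal(size=(lat.size, lon.size))
print(circular_mean(field, lat, lon, lat0=0.0, lon0=180.0, radius_km=3000.0))
```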
Following Dagan et al. 8, in Fig. 2a, b we present the length scales L for which the water budget and the energy budget are locally closed to within 10%, respectively (L_WB(10%), L_EB(10%)), i.e.:

|P − E| / P < 0.1    (6)

for the water budget, and

|P + Q| / P < 0.1    (7)

for the energy budget. L_WB and L_EB exhibit similar spatial features, such as those evident at the eastern parts of the subtropical oceans and a sharp transition around ±40° 8. These patterns emerge due to the fact that at the centre of a region of negative/positive −div(q_v) or div(s) (such as the eastern parts of the subtropical oceans) the required averaging scale for closure of the relevant budget is larger. In addition, at high latitudes beyond 40°, the averaging scale required to close both the water and the energy budgets is large due to the large role of advection of water and energy from lower latitudes.
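Given fields that have already been averaged over circles of several radii (for example with a routine like the one sketched above), the closure scales defined by Eqs. (6) and (7) can be obtained per grid point as the smallest radius at which the normalised imbalance drops below the threshold; the following minimal sketch assumes such pre-averaged arrays (names and synthetic values are illustrative, not from the original analysis).

```python
import numpy as np

def closure_scale(imbalance_by_radius, precip_by_radius, radii_km, threshold=0.1):
    """Smallest radius (km) at which |imbalance| / precipitation < threshold.

    `imbalance_by_radius` and `precip_by_radius`: arrays of shape
    (n_radii, nlat, nlon) holding P - E (water) or P + Q (energy) and P,
    each pre-averaged over circles of the corresponding radius.
    Returns an (nlat, nlon) array; NaN where closure is never reached.
    """
    closed = np.abs(imbalance_by_radius) / precip_by_radius < threshold
    first = np.argmax(closed, axis=0)                 # index of first True per grid point
    scale = np.asarray(radii_km, dtype=float)[first]
    scale[~closed.any(axis=0)] = np.nan               # never closes within tested radii
    return scale

# Example with synthetic data: 5 radii on a tiny 3 x 4 grid.
radii = [500, 1000, 2000, 3000, 4000]
rng = np.random.default_rng(2)
P = np.full((5, 3, 4), 3.0)                                           # placeholder precipitation
imb = rng.normal(0, 1, (5, 3, 4)) / np.arange(1, 6)[:, None, None]    # imbalance shrinking with radius
L_WB = closure_scale(imb, P, radii)
print(L_WB)
```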
On average L EB > L WB due to the larger variance in E compared to Q, which more effectively counteracts the large variance in P (see Supplementary Fig. S3). However, in different regions the ratio between L EB and L WB changes (Fig. 2c). For example, at high latitudes L EB > L WB everywhere, while at mid-latitudes (around ±40°) L EB is generally smaller than L WB over the oceans but less so over land (where at almost all latitudes L EB > L WB ). Over the tropical oceans we note a difference between the Atlantic and the eastern part of the Pacific, for which L EB < L WB , and the west Pacific and the Indian Ocean, for which L EB > L WB . For the latter, the warm pool and the associated cloud cover release a significant amount of latent heat by precipitation, which cannot be compensated locally by the relatively small radiative term.
Although both budgets contribute to constraining precipitation, we expect that the smaller of L EB and L WB (locally) will be the limiting factor and will determine the spatial scale of changes in precipitation in a future climate. Hence, we combine L EB and L WB into a single scale (L EB+WB ), defined as L EB+WB = min(L EB , L WB ) (Fig. 2d).
As was shown in Dagan et al. 8 , the average scale required for closure of the water budget depends on the definition of closure, i.e. to within what percentage P is close to E. The same is true for the energy budget (Fig. 3). A stricter closure requires a larger scale of averaging for both the water and the energy budgets. We also note that for both budgets the scale for closure in the tropics is smaller than the global mean scale. This is again due to the large contribution of advection of water vapour and dry static energy from low to high latitudes. In addition, in the global mean L EB > L WB for all levels of closure. As expected, the combined scale, L EB+WB , has a smaller mean for all levels of closure (as it is defined as the minimum of L EB and L WB for each location, Fig. 3). We note that the mean values of L EB and L WB presented here, based on climate models, are consistent with previous estimates based on observations 8,14 .

The average scale of precipitation changes under climate change (L δP ) is also presented in Fig. 3. The scale of precipitation changes is calculated as the scale for which the relative precipitation change (in absolute value) is smaller than a given value, R:

|δP/P| < R, (8)

for R in the range of 7.5-15%. We note that the level of imbalance in the different budgets for a given closure threshold is similar (in terms of water amount) to the magnitude of the relative precipitation change R, as all are normalised by the (same) local historical precipitation (Eqs. (6)-(8)). Figure 3 demonstrates that the combined scale L WB+EB is more similar to (but slightly larger than) L δP than either L EB or L WB separately. This is also demonstrated in Fig. 4, which presents the multi-model mean L δP vs. L EB , L WB and L WB+EB for different levels of closure. Figure 4 demonstrates that L EB is much larger than L δP for all levels of closure. The same is true for L WB , but to a lesser extent, and the combined scale (L WB+EB ) is the closest to L δP . These results demonstrate that the combination of the water and energy budgets provides a better constraint on the scale of precipitation changes under climate change (2500-3000 km for a 10% combined budget imbalance or relative precipitation change). Above this scale, precipitation changes approach the global mean change (usually of the order of a few percent), while below it they could be substantially larger.
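In the same illustrative framework, and reusing the hypothetical budget_closure_scale() helper from the earlier sketch, the three scales compared in Figs. 3 and 4 could be computed at a grid point roughly as follows (dP denotes the δP field; all object names are placeholders rather than the authors' code):

```r
L_WB   <- budget_closure_scale(i, j, imbalance = P - E, P, lat, lon, threshold = 0.10)  # Eq. (6)
L_EB   <- budget_closure_scale(i, j, imbalance = P + Q, P, lat, lon, threshold = 0.10)  # Eq. (7)
L_dP   <- budget_closure_scale(i, j, imbalance = dP,    P, lat, lon, threshold = R)     # Eq. (8)
L_comb <- min(L_EB, L_WB)   # combined scale L_EB+WB, compared with L_dP in Figs. 3-4
```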
DISCUSSION
Global mean precipitation changes due to global warming are predicted to be relatively small 1-3 (1-3% K −1 ) compared to the rate of increase in atmospheric water vapour (~7% K −1 ). Any global mean precipitation change must be consistent with both the atmospheric energy and water budgets, meaning that precipitation must change such that the atmospheric energy and water budgets remain in balance 4,[7][8][9]16,17 . Local precipitation changes could be compensated for by divergence of water vapour or dry static energy 10,11 and hence could be much larger than the global mean change. Previous studies have accounted for the changes in the divergence term of the energy budget to understand precipitation changes due to different drivers 10,11 . However, these divergence terms are expected to become less efficient with increasing scales and must vanish on the global scale. While both the energy and water budget constraints on precipitation are well studied individually 8,10,11,16,17 , their relative importance in different regions has not been well evaluated. In addition, the spatial scales at which each constraint holds have not been thoroughly quantified. Hence, most previous studies made arbitrary definitions of 'regional' vs. 'large-scale' precipitation change.
Using 27 CMIP5 models, we identify the scale at which the divergence terms become inefficient, and above which the changes in precipitation are expected to approach the global mean change, to be about 2500-3000 km. This could provide an objective way to define the scale of 'regional' climate change. We note that a shift in the precipitation spatial pattern under climate change (such as a shift in the location of the inter-tropical convergence zone or a widening of the Hadley cells) could also be interpreted as a shift in the water and energy divergence terms. For example, in the tropics the scale of closure of the different budgets is roughly determined by the scale of the Hadley cells (averaging the sub-tropical net evaporation/radiative cooling regions with the net precipitation regions of the deep tropics; Fig. 1). Hence, the predicted future widening of the Hadley cell 18 is expected to enlarge the budget closure scales. However, we note that the Hadley cell is expected to widen by about 100-200 km 18 , while the scale of closure of the different budgets is of the order of 4000-5000 km. Hence, we do not expect this widening to significantly affect our results. This can also be seen from Dagan et al. 8 , which showed that the scale of closure of the water budget is not expected to change significantly in a future climate compared to the inter-model spread. In addition, we note that the change in the mean location of the inter-tropical convergence zone due to aerosol forcing is expected to occur on much smaller scales 19 than the closure scales presented here.
Here we show that the characteristic scale of precipitation changes under anthropogenic climate change is better constrained by a combination of the water and energy budgets than by each one separately. This demonstrates that combining the water and energy budget perspectives will improve our understanding of the drivers behind, and the scale of separation of, local and large-scale precipitation changes.

Fig. 3 (caption): Average scale of closure of the water and energy budgets and of precipitation changes. The multi-model mean spatial scale for local water budget closure (L WB , for which precipitation roughly equals evaporation), energy budget closure (L EB , for which precipitation roughly equals the sum of the atmospheric radiative heating rate and surface sensible heat flux), the combined water and energy budget scale (L WB+EB ) and the scale of changes in precipitation (L δP ) as a function of the degree of closure or relative precipitation change. As all quantities are normalised locally by historical P (Eqs. (6)-(8)), the x-axis represents a similar amount of water for all. The global mean and the tropical mean are presented for each scale. The vertical lines represent the standard deviation of the 27 different CMIP5 models.

Fig. 4 (caption): Average scale of closure of the water and energy budgets vs. the scale of precipitation changes. The multi-model mean spatial scale for local water budget closure (L WB ), energy budget closure (L EB ), and the combined water and energy budgets scale (L WB+EB ) vs. the scale of changes in precipitation (L δP , y-axis). The size of the dots represents the level of closure or relative precipitation change, from 15% (the largest dots) to 7.5% (the smallest dots) in increments of 2.5%. The black dotted line represents the 1:1 line.
CMIP5 data
The analysis is based on data from 27 CMIP5 (phase 5 of the Coupled Model Intercomparison Project 20 ) models (listed in Supplementary Table 1) for the following two protocols: historical and RCP8.5 (Representative Concentration Pathway 8.5 21 , a scenario with a relatively fast increase in greenhouse gas concentrations) simulations. From the historical runs we average the data over the last 20 years of the 20th century, while from the RCP8.5 runs we average the data over the last 20 years of the 21st century. Changes in precipitation (δP) are determined based on the difference between the RCP8.5 and the historical runs. All data are remapped to T63 resolution (about 1.8°). The divergence terms (of either the water or the energy budget) are calculated as the residual of the other terms. We note that, in climate models, the water and energy budget constraints (Eqs. (4) and (5)) hold to the degree that the models conserve water/energy [22][23][24] . However, small 'leaks' of water or energy from the models are not expected to significantly affect the results presented here.
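As a concrete illustration of this processing chain, the sketch below computes δP and the water-budget divergence residual for one model in R; it is not the authors' pipeline. The CMIP variable names pr and evspsbl are standard, but the array layout, the time-index vectors and the 20-year windows shown are assumptions, and the remapping step is omitted.

```r
# Illustrative sketch for one model, assuming monthly-mean fields already read
# into arrays of dimension [lon, lat, time]: 'pr_hist'/'pr_rcp85' (precipitation)
# and 'evspsbl_hist'/'evspsbl_rcp85' (evaporation). 'hist_last20'/'rcp_last20'
# are assumed index vectors selecting the last 20 years of each run.

clim_mean <- function(x, idx) apply(x[, , idx], c(1, 2), mean)   # 20-year climatology

P_hist <- clim_mean(pr_hist, hist_last20)         # e.g. 1981-2000
P_rcp  <- clim_mean(pr_rcp85, rcp_last20)         # e.g. 2081-2100
E_hist <- clim_mean(evspsbl_hist, hist_last20)

dP <- P_rcp - P_hist                              # precipitation change, delta-P

# Divergence term obtained as the residual of the water budget (Eq. (1)):
div_qv_hist <- E_hist - P_hist                    # since -div(q_v) = P - E
# The energy-budget residual is computed analogously from the atmospheric
# radiative terms and the surface sensible heat flux (e.g. 'hfss'), omitted here.
```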
CODE AVAILABILITY
Any code used in the paper is available upon request from: guy.dagan@physics.ox.ac.uk.
|
v3-fos-license
|
2021-01-07T09:06:39.372Z
|
2021-01-06T00:00:00.000
|
234290062
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/1996-1073/14/2/267/pdf?version=1609926487",
"pdf_hash": "39e9dcc7478d48be1f79e78e1571d4c7ab131bdc",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44199",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"sha1": "5e0b915dc90b713f64e52d6a7b7722062632d5b3",
"year": 2021
}
|
pes2o/s2orc
|
Management and Economic Sustainability of the Slovak Industrial Companies with Medium Energy Intensity
Industry 4.0 and the related automation and digitization have a significant impact on competition between companies, which have to deal with a lack of financial resources to apply digital solutions in their businesses. In Slovakia, Industry 4.0 plays an important role, especially in the mechanical engineering industry (MEI). This paper aims to identify the groups of financial ratios that can be used to measure the financial performance of the companies operating in the Slovak MEI. From the whole MEI, we selected the 236 largest non-financial corporations, ranked according to the amount of revenues generated in 2017. Using factor analysis, from eleven traditional financial ratios we extracted four independent factors that measure the liquidity (equity to liabilities ratio, quick ratio, debt ratio, net working capital to assets ratio, current ratio), profitability (return on sales, return on investments), indebtedness (financial leverage, debt to equity ratio), and activity (assets turnover, current assets turnover) of the company. Our analysis is an essential prerequisite for developing a realistic financial plan for companies operating in the MEI, especially when considering investments in new technologies related to Industry 4.0.
Introduction
In the 21st century, it is especially necessary to point out the importance and usability of electrical energy in everyday life and in industry. The industrial sectors and industrial production are interconnected through their consumption of energy. The energy machinery segment of the mechanical engineering industry covers a wide range of machines and devices used in the production, transformation, and accumulation of different forms of energy, taking into account the environmental requirements involved in the production, construction, and operation of energy systems, machinery, and equipment. Industry 4.0 appeals even to companies with vast experience in implementing automation. Industry 4.0 represents the interconnection between IT technologies and the production process, flexible just-in-time deliveries, advanced quality management, and growing robotics, automation, and digitalization. Digitalization has become a megatrend in the mechanical engineering industry (MEI). Mechanical engineering companies apply energy fundamentals and mechanics to create a wide variety of machines, devices, or systems for energy transformation, biofuel production, materials treatment and processing, environmental regulation, or transportation [1]. In the coming years, the number of components manufactured by selected mechanical engineering companies will decrease due to increased electromobility, as electric vehicles will require a higher volume of electricity and a lower number of mechanical components. It follows that it is necessary to build a network of electric vehicle charging stations and that sufficient electricity must be available. The MEI contributes to a clean energy future because it develops next-generation marine, solar, and wind energy technologies, as well as electrical energy storage materials, devices, and systems. Moreover, it provides grid integration [2]. Energy efficiency improvements in the MEI therefore represent a crucial challenge in maintaining sustainability and achieving emissions reduction objectives [3][4][5][6][7]. The sustainability of the competitiveness of the industry and of individual business entities thus depends on the successful implementation of modern technologies and the development of the intelligent industry. However, the impact of introducing new technologies on corporate finance also needs to be considered.
Power quality indicators such as voltage range, voltage fluctuations and asymmetries, minimum short-circuit power, and higher harmonic voltages are a priority of the energy engineering and energy machinery segments of the MEI. In this area, from a financial point of view, we can work with efficiency indicators and apply multi-criteria evaluations. The analysis of financial efficiency and intensity indicators of mechanical engineering companies, as major consumers of energy and an industry with medium energy intensity, is therefore highly relevant.
The industry in Slovakia is dominated mainly by the automotive and mechanical engineering industries, which, together with the electrical engineering industry, are the main sources of growth in industrial production. The Slovak MEI is one of the principal pillars of the Slovak economy. Although it has recently been assumed that the focus of economic activity in developed countries has shifted from industrial production to the services sector, industry will remain the engine of productivity growth and innovation in the years to come. Innovations are essential for the success of the MEI [8]. It is possible to assume that mechanical engineering companies in Slovakia will be forced to follow current trends and invest in applied research, development, and high-tech services to increase productivity and added value. Therefore, economic performance and financing are substantial components of the involvement of companies in Industry 4.0. However, the introduction and application of modern technologies related to Industry 4.0 necessarily require a huge amount of financial resources. For many companies, low profitability and a lack of financial resources are an obstacle [9]. Therefore, it is essential to know the financial performance of the companies in the spotlight. The question is which financial indicators should be analyzed. The problem in setting specific financial indicators is the fact that each industry or company operates in a different business environment and country. It is demanding to come up with a universal model, so it is easiest to focus on a particular country's industry. This paper aims to identify the groups of financial ratios that can be used to measure the financial performance of the companies operating in the Slovak MEI.
Economic Sustainability
Since the emergence of COVID-19, it has become even more obvious that the economic sustainability of business entities is one of the main factors affecting the long-term economic growth of a company. The principle of economic sustainability is currently becoming one of the generally accepted principles of effective business development. Economic sustainability represents non-declining economic welfare [10]. It is explained as the allocation of savings and investment that provides the highest degree of prosperity for present and future generations [11]. At the micro and local levels, Bertelmus [10] (p. 121) defines the economic sustainability of capital maintenance as "produced and natural capital maintenance for sustaining the productivity of enterprises". Economic sustainability is affected by the investment choices that institutional investors make [12]. The economic aspect of sustainability emphasizes resource efficiency. This means that economic sustainability manages losses and surpluses to ensure maximum economic efficiency and focuses on trade to achieve specific results and keep the business strategy going [13]. It follows that companies should pay more attention to the sustainability of short-run returns and to economic performance as a whole [14]. The theory of economic sustainability is presented in [15][16][17][18][19].
Economic sustainability deals with financial performance, utilization of resources in an optimal way, and the profitable long-term functioning of the company [20]. Financial and economic analysis plays a significant role in the decision-making of corporate management. Every manager should be able to interpret financial and economic indicators to maintain the economic sustainability of a business. Economic sustainability requires fair, impartial, and fiscally sound decision-making while taking other aspects of sustainability into account [21].
The company's performance is evaluated from several aspects, which differ depending on the interest groups. In particular, shareholders are interested in increasing the value of the capital invested in the business, and they prefer the profit-generating aspect, return on equity, economic value added, and market value indicators. The ultimate goal of the company is not only to generate value for shareholders, but also to create economic, environmental, and social value [22]. Therefore, business models and corporate governance mechanisms should focus beyond the organization as an economic entity [23]. Specifically, sustainable management is oriented towards the economic, environmental, and social aspects of corporate governance aimed at increasing the company's competitiveness (see Figure 1).

Figure 1. The impact of performance on economic success in sustainable management. Source: our processing according to [24].
The performance analysis of the decision-making unit has an irreplaceable role in the production transformation process. Capital structure and economic performance are some of the main factors that could influence corporate performance. Economic performance can be understood as production capability at the micro, macro, or international level. Companies can maximize their performance and minimize their financial costs by maintaining an adequate capital structure [25]. Kocmanová and Dočekalová [26] state the following specific economic key performance sector-based indicators: turnover, sales, revenues, costs, added value, income from operations, and safe and good-quality products. However, what if such indicators are not available to determine economic performance? In order to evaluate the company's economic performance, it is necessary to choose the appropriate indicators that can be used to measure it.
Factor Analysis of Financial Ratios
The priority of industrial companies in a constantly changing environment is to identify valid methods and measures that allow adaptation to volatile situations. As stated in [27,28], a company can face financial problems if it does not adapt to market conditions. Only those companies that are able to respond in a timely manner to the changing environment and adopt technologies resulting from Industry 4.0 will be successful.
Financial management has been developing since the 2nd industrial revolution. At the time of the 4th industrial revolution, its importance is growing [28] because the implementation of strategies connected with Industry 4.0 requires significant funding [9,29]. A relative lack of financial resources can cause a considerable disadvantage and reduce the development possibilities of industrial companies [9].
Financial analysis of the industry helps companies to understand their position relative to other participants in the industry; it makes it possible to compare companies with their competitors, to see the differences, take advantage of the positives, reveal weaknesses, and create a plan for the favorable direction of the company. As stated in [30], evaluation of the company's economic performance should be performed simply and quickly.
There is no uniform system of financial measures for evaluating the economic condition of a company or industry; however, financial ratios are mainly used (e.g., ratios of liquidity, activity, profitability, and indebtedness) [31]. In financial analysis, it is possible to consider many financial ratios; however, in terms of financial management, time management, and quick decision-making, it is essential to choose the most crucial ones. Therefore, financial analysts aim to reduce many financial ratios to only a few of them while preserving the information contained in the primary dataset. In this case, factor analysis seems to be the most appropriate method. Factor analysis can serve as a basis for the early warning of emerging financial problems, as it concentrates on the main indicators that characterize the economic position of the company or industry. It makes it possible to identify, in particular, the weak or risky aspects of financial stability. The aim of factor analysis in financial analysis is to reduce the number of financial indicators that are correlated with each other and to obtain new independent variables that retain the information of the original variables.
Researchers have applied factor analysis to the financial ratios of several industries in different countries; e.g., the steel industry of the Slovak Republic [32], the agricultural industry of the Czech Republic [33], the cement industry of India [34], the construction industry [35] and the largest industrial enterprises of Turkey [36], the pharmaceutical industry [37] and the listed companies in China [38], the largest companies of the Croatian market [39], and IT companies of the Czech Republic [30]. However, each study used specific financial ratios appropriate for the specified business environment, economic conditions of the country, and available financial data (see Table 1).
As stated in [40], it is inappropriate to use universal models, because they cannot be repeated under other conditions. As we have already mentioned, the MEI is truly essential for the Slovak economy. The current industrial revolution has widespread effects on the economy and the behavior of mechanical engineering companies. One of the main reasons why Slovak companies invest little in the development of innovations is their weak financial condition, which is still affected by the crisis that occurred ten years ago [41]. It is, therefore, very important to specify the financial indicators that will determine the financial performance of companies in this sector. However, we are not aware of any existing research that studies the financial performance of the Slovak mechanical engineering companies using factor analysis, and therefore, in this paper, we fill this gap. Our research hypothesis H1 is formulated as follows:

Hypothesis 1. Financial ratios (indicators) of the non-financial companies of the Slovak MEI show common factors.

Table 1. Financial ratios used in related studies. Source: own study based on [30,[32][33][34][35][36][37][38][39]. Note: AT-assets turnover, CA-current assets, CB-cash and bank, CF-cash flow, CP-cash profit, CR-current ratio, EBIT-earnings before interest and taxes, EBITDA-earnings before interest, taxes, depreciation, and amortization, EBT-earnings before taxes, ER-equity ratio, QR-quick ratio, ROA-return on assets, ROCE-return on capital employed, ROE-return on equity, ROS-return on sales, TA-total assets, TAT-total assets turnover, WC-working capital.
The Slovak Mechanical Engineering Industry and Research Sample
The Slovak MEI is among the main pillars of the Slovak economy, with a stable history, and maintains a solid position among the Slovak industries. The mission of mechanical engineering is to improve the environment by manufacturing devices for the handling and processing of water, soil, air, and waste, as well as to facilitate the use of renewable energy sources [42], such as fuel cells, wind turbines, and solar energy [45].
In 2018, in Division 28 (Manufacture of machinery and equipment n.e.c.), the median return on assets (ROA) reached 3.88%, and the median return on equity (ROE) was almost 9.3%. The total debt was 57%. The ratio of earnings before interest, taxes, depreciation, and amortization (EBITDA) to sales was 7.24%. The assets turnover ratio was 1.16 (specifically, assets were turned over 1.16 times a year), inventory turnover was 8 days, and the current ratio (CR) was 1.61 [45]. In Division 29 (Manufacture of motor vehicles, trailers and semi-trailers), the accounts receivable collection period was 52.6 days, and inventory turnover was 13.28 days. The MEI generated €0.02 of earnings after taxes (EAT) for every €1 of total equity, and €1 of sales generated almost €0.03 of EBITDA. The median added value to sales ratio was 1.38%; the CR coefficient reached a value of 1.54; the median total debt was 58.04%; and the median equity to liabilities ratio was 0.75 [45]. In Division 30 (Manufacture of other transport equipment), the accounts receivable collection period was 132.64 days, and inventory turnover was 50 days. The MEI generated €0.02 of EAT for every €1 of total equity, and €1 of sales generated almost €0.06 of EBITDA. The median added value to sales ratio was 1.31%; the CR coefficient reached a value of 1.85; and the median total debt was 50% [45].
Until 2019, the Slovak MEI showed the most significant increase in sales. Unfortunately, in 2020, due to the crisis caused by COVID-19, sales for the first quarter fell by 18% compared to the previous period. The results of non-financial companies in the MEI for 2019 may partially signal how well these companies were prepared to cope with the unfavorable situation and how current developments will affect their future economic performance. Most mechanical engineering companies have long-term systemic solutions to crisis situations, especially in the area of costs. The companies emphasized compliance with established rules. There were concerns about a deterioration in payment discipline and an increase in uncollectible receivables. Due to the COVID-19 crisis, on the one hand, there is a decline in demand from abroad; on the other hand, companies are forced to respond by changing processes or procedures to be better prepared for unexpected adverse situations. They are looking for solutions that will help them to be more efficient and productive. In connection with COVID-19, companies in the MEI have begun to reform many of the processes they had previously performed either manually or not at all, using automation and information technology. It is estimated that there will be a growing shortage of skilled labor with technical capabilities, regardless of the sector.
The research sample involves a set of the 236 largest non-financial companies of the Slovak MEI. The ranking was compiled according to the amount of revenues generated in 2017, as listed in the FinStat database. This database obtains data from the Register of Financial Statements of the Slovak Republic, assesses the financial health of Slovak companies [46], and registers a total of 2644 companies operating in the MEI [47]. On the other hand, the Slovak Investment and Trade Development Agency (SARIO) lists only 872 active companies in the Slovak MEI [48]. For each of the decision-making units, we collected selected financial indicators from 2017 and calculated the eleven financial ratios given in Table 2.

The equity to liabilities ratio affects the financial stability of the company. It determines whether the company is in crisis or whether the company is threatened by a crisis. Abroad, this indicator is not widely used. However, in Slovakia, Act no. 513/1991 Coll. Commercial Code [49] establishes the minimal value of the equity to liabilities ratio for every year; for the year for which our analysis was made, it was 0.06. Companies should monitor their equity to liabilities ratio more often than once a year. Return on sales (ROS) is used to evaluate the operational efficiency of the firm. It measures how much profit is produced per euro of sales, in other words, how efficiently a company transforms sales into profits. ROS is sometimes called profit margin [50]. A decreasing ROS could indicate impending financial problems. Return on investment (ROI) relates the benefit an investor receives to the investment cost, i.e., the amount earned as a result of that investment [50]. The current ratio specifies the scope to which current liabilities (i.e., accounts payable, short-term notes, or accrued expenses) can be covered by current assets (e.g., inventory, accounts receivable, or cash). The quick ratio, in comparison to the current ratio, excludes inventory because inventory is often illiquid. In general, the higher the quick (or current) ratio, the safer the position the firm is in [51]. Assets turnover and total assets turnover are efficiency ratios and indicate the effectiveness of the firm's use of its assets (total assets) in generating sales [52]; they show how many euros of sales a firm generates per euro of asset investment [53]. The debt ratio can help investors to determine a firm's risk level [54]. It estimates the proportion of total assets financed by a company's creditors [53]. Financial leverage represents the amount of money the company has borrowed to finance the purchase of assets. In our paper, we use the equity multiplier, which is calculated by dividing total assets by equity. It measures the proportion of assets financed by a firm's equity [53]. In general, it is better to have low financial leverage. The purpose of the working capital to sales ratio is to determine the working capital needed in relation to projected sales [52]. A higher ratio highlights the working capital-intensive nature of the business [55]. Finally, the debt to equity ratio estimates the extent to which the firm is financed by its debt holders compared with its owners [51,56]. A very high debt to equity ratio indicates that a company has a large amount of debt, and such companies are usually much riskier than those with lower debt to equity ratios.

Table 3 presents descriptive statistics of the given variables for our research sample.
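For readers who want to reproduce this step, the sketch below shows how the eleven ratios could be computed in R from basic financial-statement items; it is illustrative only. Table 2 is not reproduced here, so the exact formulas (in particular for ROI) and the column names of the hypothetical data frame fs are assumptions.

```r
# Illustrative sketch (not the authors' code): the eleven financial ratios,
# computed from assumed financial-statement columns of the data frame 'fs'
# (one row per company).
compute_ratios <- function(fs) {
  with(fs, data.frame(
    equity_to_liabilities   = equity / liabilities,
    return_on_sales         = eat / sales,                  # ROS
    return_on_investment    = ebit / total_assets,          # ROI (one common definition)
    current_ratio           = current_assets / current_liabilities,
    quick_ratio             = (current_assets - inventory) / current_liabilities,
    assets_turnover         = sales / total_assets,
    current_assets_turnover = sales / current_assets,
    debt_ratio              = liabilities / total_assets,
    financial_leverage      = total_assets / equity,         # equity multiplier
    nwc_to_assets           = (current_assets - current_liabilities) / total_assets,
    debt_to_equity          = liabilities / equity
  ))
}
```

Calling compute_ratios(fs) would return one row of ratios per company, which is the input to the factor analysis described below.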
Statistical Analysis
In order to understand and identify how the variables are connected, we use factor analysis, which reduces a large number of variables to a smaller set of variables (or factors) [57]. To assess the suitability of the given data for factor analysis, we use the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy [58,59] and Bartlett's test of sphericity [60]. A KMO value higher than 0.50 and a significant Bartlett's test (p < 0.05) indicate that factor analysis is suitable.
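As an illustration of these two checks, the snippet below shows how they could be run in R with the psych package on the matrix of the eleven ratios (the authors performed their analysis in Stata, as noted below; ratios is a hypothetical data frame with one row per company).

```r
# Minimal sketch of the sampling-adequacy checks described above.
library(psych)

R_mat <- cor(ratios, use = "pairwise.complete.obs")   # correlation matrix of the 11 ratios
KMO(R_mat)                                # overall MSA should exceed 0.50 for factorability
cortest.bartlett(R_mat, n = nrow(ratios)) # a significant p-value (p < 0.05) supports factorability
```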
Since a factor structure for the Slovak MEI was not provided in previous studies, we use exploratory factor analysis with the principal component analysis extraction method. To find the best distribution of the factor loadings in terms of the meaning of the factors [61,62], we use varimax rotation and retain factors with an eigenvalue greater than 1. We also use the scree plot to determine the number of factors to isolate. Factor loadings characterize how each of the variables correlates with each of the factors. Hair et al. [63] consider factor loadings from 0.5 to 0.7 to be practically significant, and those higher than 0.7 as characteristic of a well-defined structure.
The data were processed using the Stata software provided by StataCorp (College Station, TX, USA).
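The authors carried out these steps in Stata; a rough R equivalent of the extraction and rotation described above, continuing from the previous snippet, could look like this (the cutoffs follow the text, and all object names are placeholders).

```r
# Minimal sketch of the extraction step: eigenvalues, scree plot, and a
# principal-component solution with varimax rotation (psych package).
ev <- eigen(R_mat)$values
scree(R_mat)                                   # scree plot used to judge the cut-off point
n_factors <- sum(ev > 1)                       # minimum-eigenvalue (Kaiser) criterion
fit <- principal(ratios, nfactors = n_factors, rotate = "varimax")
print(fit$loadings, cutoff = 0.5)              # loadings >= 0.5 treated as practically significant
```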
Results
In Section 4, we verify the established hypothesis that the financial ratios of the non-financial companies of the Slovak MEI show common factors. Before we move on to the factor analysis, we have to compute a correlation matrix that illustrates the individual correlation values of the chosen financial ratios. Table 4 shows that most financial indicators have some correlation with each other, ranging from r = −0.6055 for the debt ratio and the net working capital to assets ratio to r = 0.998 for the debt to equity ratio and financial leverage. Due to the relatively high correlations among the considered financial indicators, the data are appropriate for factor analysis. Although the KMO measure (KMO = 0.570) is close to the minimum value of 0.5, it verifies the sampling adequacy for the analysis and permits a preliminary investigation of the factors. Bartlett's test of sphericity (χ 2 (55) = 2850.250, p < 0.001) indicated that the correlation matrix is not an identity matrix, i.e., the variables are sufficiently intercorrelated for factor analysis.
Next, we select a principal component solution with the minimum eigenvalue criterion of 1.0 for factor extraction; Table 5 shows that the principal component analysis produced four factors meeting this criterion. The scree plot in Figure 2 also confirms this selection. In this figure, the eigenvalues are plotted against the factor number, with factor one being the first point on the x-axis. The resulting curve is used to judge the cut-off point by observing the angular changes in the slope [64]. Factor values have to be rotated in order to explain the solution set clearly [35,65,66]. In Table 6, the factor loadings after varimax rotation represent the degree to which each of the financial indicators correlates with each of the factors. Our four factors explain 81.92% of the variance (see Table 7), and with regard to financial theory, these factors can be characterized as follows:

1. The factor of liquidity, which includes the equity to liabilities ratio, quick ratio, debt ratio, net working capital to assets ratio, and current ratio, describes items connected with the cash management of the companies. The management of current assets and short-term financing are critical factors that affect the change in the state of net working capital, which is the generator of operating cash flow.

2. The factor of profitability is composed of return on sales and return on investments. It is an integral factor, which is linked to almost all other factors and reflects all business activities.

3. The factor of indebtedness is an important criterion for internal users, external entities, and potential investors. It is formed by financial leverage and the debt to equity ratio. These variables quantify the creditworthiness of companies and provide information on the structure of the company's financial resources.

4. The factor of activity is formed by assets turnover and current assets turnover. This factor reflects the efficient use and management of the company's assets; specifically, how effectively the company is using its assets to generate sales.
Discussion and Conclusions
Financial ratio analysis of individual companies from a given industry provides a means of obtaining an overview of the financial condition of that industry [35]. However, for each industry and each economy, we can specify different financial ratios that are fundamental for that area. The advantage of using traditional financial ratios in assessing the financial performance of a company is the relatively simple collection of data, which are part of the mandatory financial statements. This paper aimed to identify the groups of financial ratios that can be used to measure the financial performance of the companies operating in the Slovak MEI. Hypothesis 1 stated that the financial ratios (indicators) of the non-financial companies of the Slovak MEI show common factors. In other words, we supposed that we could extract common factors from the financial ratios of the non-financial companies operating in the Slovak MEI.
For the factor analysis, the Kaiser criterion (see [67]) suggests retaining those factors with eigenvalues equal to or higher than 1. Despite using limited but significant data, our analysis reveals four independent factors that allow the measurement of the liquidity, profitability, indebtedness, and activity of a company. These four factors account for 81.92% of the total variance. This means that we can confirm Hypothesis 1.
In the Slovak theory of financial analysis, our resulting factors are the ones used most often. All of them may influence the company's competitive position and its technological, production, and expansion decisions. As stated in [68], liquidity and profitability are vital to the existence and subsequent performance of a business. Liquidity ratios show the company's ability to meet its liabilities. Sufficient liquidity creates opportunities to pursue valuable investment opportunities [69], which are crucial in the context of Industry 4.0. Profitability is an indicator of a company's ability to create new resources or make a profit. A company's profitability lies at the heart of industrial companies' strategic aims and can be improved by increasing revenues or reducing costs [70]. Profitability ratios are very volatile because companies cannot monitor and manage the many indicators affecting them [71]. The third factor resulting from our analysis is the factor of activity. Activity ratios show how efficiently the company is utilizing its assets. Adequate utilization is a requirement of a stable financial situation. A financial analyst has to be extremely prudent in the interpretation of results, because very high values can identify difficulties in the long term, while very low values can identify a contemporary problem of not generating sufficient revenues "or of not taking a loss for assets that are obsolete" [71] (p. 34). Insufficient utilization means that the company has too many assets, which is associated with above-average costs: the assets need to be protected and maintained, a large part of them is covered by credit, and the high level of assets requires significant loans, which produce high interest costs. On the other hand, an insufficient amount of assets results in low production volumes, and the company loses the sales it could otherwise achieve. The factor of indebtedness is the last factor determined by our analysis. Indebtedness is an economic term that refers to the fact that a company uses foreign capital to finance its assets. The use of foreign capital affects the profitability of the company and the degree of business risk. Indebted companies may face the reluctance of external investors to fund new projects because the benefits largely accrue to existing creditors [72].
We can compare our results with several existing studies using factor analysis to group common financial ratios when assessing the financial situation of companies. The most frequently used financial ratios are from the group of profitability, because this factor was extracted in [30,32,33,[36][37][38][39]. In [32,35], several financial ratios were grouped into the liquidity factor and the activity factor. The indebtedness factor resulted only from the analysis of [32]. Although the names of the extracted factors differ among the existing studies, some financial ratios of our analysis were part of the factor analyses of those works. The most frequently used are assets turnover [32,[36][37][38], the current ratio [34,35,37,38], the quick ratio [34,35,38], the debt ratio [34][35][36], ROS [30,32], ROI [30,39], current assets turnover [37], the debt to equity ratio [34], and financial leverage [32]. Interestingly, in two cases [37,38], the opposite indicator to financial leverage was used, namely the equity ratio calculated as equity to assets. We are not aware of previous studies focused on industry in which the equity to liabilities ratio and the net working capital to assets ratio were used in the factor analysis. The total debt to total assets ratio (or debt ratio) was also used in a study concerning the detection of falsified financial statements in Greece [73], in research identifying the effect of capital structure on the performance of the Jordanian manufacturing sector [74], in determining the working capital requirements of the manufacturing firms listed on the Karachi Stock Exchange [75], and in evaluating the financial performance of Indian textile companies [76]. The net working capital to assets ratio was used in [73,76,77]. As we mentioned in Section 3.1, the equity to liabilities ratio is not used in evaluating the financial situation of foreign companies. However, this indicator is regulated by Act no. 513/1991 Coll. Commercial Code [49] of the Slovak Republic; therefore, it occurs mainly in Slovak studies, e.g., [78][79][80][81][82][83][84].
The exploration and evaluation of financial indicators help to assess overall business performance [85] as well as the economic sustainability of the company. It is important to note that economic sustainability can improve, but not necessarily guarantee, the overall sustainability of economic performance and growth [10].
It is important to objectively analyze the individual relationships between the development of the groups of indicators resulting from the factor analysis and the various stages of financial failure of companies. The application of factor analysis should contribute to multi-criteria concepts of business performance evaluation and management, as well as to efficiency in business practice. This paper has several limitations, because we do not consider all companies from the MEI and we analyze only one year. Therefore, it would be interesting to repeat the analysis with more companies or for a different year. Besides that, we could consider the regional segmentation of companies and other qualitative data. Moreover, in addition to factor analysis, we suggest using data envelopment analysis and the multidimensional scaling method, by which it is possible to assess the economic effectiveness of decision-making units and aspects of business performance. The lack of prior research studies on using factor analysis in the MEI is another limitation. Specifically, for the Slovak industries, there are no research studies that apply factor analysis using financial ratios; instead, analyses based on qualitative data about companies predominate.
Because the essence of the fourth industrial revolution is that mechanical manufacturing is based on digitization, which integrates smart technologies to optimize operations and production methods [86], industrial companies should invest in new technologies to improve their competitive position. The current level of credit indebtedness, and also the total indebtedness, of Slovak industrial enterprises still allows the use of bank loans as a form of financing for such investments. However, the problem may be that the cost of repaying loans or financial instruments will reduce the level of profit generation and profitability. Therefore, companies need to move to higher value-added production. On the one hand, this may reduce the level of competitive advantage in the short term, which in Slovakia is based mainly on lower labor costs. On the other hand, in the long run, the share of such newly created value in the company's revenues may increase, which will bring increased profitability of investments in Industry 4.0. Another option is to use funding from the European Union.
The MEI plays a substantial part in driving energy efficiency, supplies many services to industry (e.g., producing and delivering electrical power, and providing heating, ventilation, or air-conditioning), and creates demand for energy [3]. If companies operating in this sector want to be able to provide these services to the highest standard, they need to have a sufficient amount of financial resources. Our analysis is an important prerequisite for developing a realistic financial plan for companies operating in the MEI. This paper enriches quantitative methods, financial management, and several other areas at the micro and macro levels, which is significant because the analyzed industry, the second largest in terms of Slovak industrial production, contributes substantially to Slovak GDP.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
|
v3-fos-license
|
2018-04-03T03:04:48.724Z
|
2016-11-17T00:00:00.000
|
2137802
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0164649&type=printable",
"pdf_hash": "cbe25f7597c01aa3d16eabb2578708d9b4f57a1e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44201",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "cbe25f7597c01aa3d16eabb2578708d9b4f57a1e",
"year": 2016
}
|
pes2o/s2orc
|
Identification of an Efficient Gene Expression Panel for Glioblastoma Classification
We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu.
Introduction
Glioblastoma (GBM) is the most common and most fatal form of primary malignant brain tumor. The survival time with treatment is frequently under two years, with the median survival being 12.2 months without treatment [1]. GBMs are highly heterogeneous and show highly variable gene expression patterns. Several classification schemes have tried to capture this variability by using gene expression data in an attempt to identify more homogeneous sub-categories for prognosis and drug testing [1,2].
The most commonly used classification scheme was proposed by Verhaak et al. in 2010, and divided GBMs into Proneural, Classical, Neural, and Mesenchymal types based on gene expression measured with microarrays. These subcategories differed both in terms of median survival times, which were highest (13.1 months) in the Neural type and lowest (11.3 months) in the Proneural type [1], and in response to aggressive treatment (defined as requiring more than 3 courses of chemotherapy). In the original study, aggressive treatment was significantly more beneficial in the Classical and Mesenchymal subtypes, and least effective in the Proneural subtype [1]. The Verhaak et al. classification algorithm was developed by applying a centroid-based classifier, 'ClaNC' [3], to a microarray dataset of 200 GBM samples. Using 173 of the 200 samples (described as 'core' samples by Verhaak et al.) and a linear discriminant analysis (LDA) method of gene selection and variable reduction, ClaNC was used to build a four-subcategory classifier and assign a category to each of the 200 samples [1].
The Verhaak et al. classifier utilizes 210 genes per GBM category, resulting in the classifier being based on 840 total genes. Since testing hundreds of genes in order to classify GBM samples is impractical outside of large-scale microarray and RNA-sequencing experiments, we set out to identify a reduced gene set that would allow classifications to be made with a subset of genes while retaining classification accuracy.
To accomplish our goal of producing a method for selecting a significantly smaller subset of genes that recapitulates the Verhaak et al. GBM subclassifications, we have developed a method of variable reduction in random forest models designed to reduce the complexity of the classifier while maintaining accuracy. Our approach uses a novel method of random forest (RF) variable reduction based loosely on a genetic algorithm (GA) designed by Waller et al. [4]. This iterative GA framework rewards genes (or other variables) that appear in the best randomly selected subsets by allowing them to continue to the next generation of subsets. Using this approach, variables that do not perform as well in random pairings are eliminated. The final result of our utilization of this algorithm is a set of 48 genes (GBM48 panel) which is highly accurate in assigning Verhaak et al. categories in a test set of 803 GBM expression samples collected from publicly available datasets. Additionally, we have used the same algorithm to maximize accuracy on RNA-seq based data, creating a second 32-gene GBM RNA-seq panel. This 32-gene RNA-seq based panel greatly improves our ability to compare RNA-seq based classification to microarray based classification using the original 840-gene Verhaak et al. classifier. These findings provide a simpler subset of genes whose expression can be used for classification, as well as a general method whereby similar strategies may be employed in other systems to aid in reducing the complexity necessary to describe them.
The Verhaak et al. Classifier
The Cancer Genome Atlas (TCGA) training set used to build this model consists of 173 'core' samples [1]. Each of these samples had genome-wide expression patterns collected from runs on three platforms (Affymetrix HuEx array, Affymetrix U133A array and Agilent 244K array). In order to improve reproducibility and to select genes behaving consistently across multiple array platforms, the total number of probes used in the analysis was reduced to 1,740 through a series of filters described by Verhaak et al. [1]. Briefly, the first filter was consistency across at least two of the three array platforms, which was calculated by comparing the expression pattern across all three platforms. If a gene had a correlation of 0.7 or higher across two platforms, it was kept in the analysis. This filter resulted in 9,255 genes remaining from the original 11,861 unified gene expression patterns. The second filter required high variability across samples, i.e. only probes with a mean absolute deviation (MAD) greater than 0.5 across all patients were retained, which resulted in 1,903 genes remaining. The third filter removed genes where the individual MAD was significantly different from the averaged MAD, in order to remove genes with extremely variable standard deviation. These filters resulted in a final set of 1,740 probes combined across three microarray platforms. These 1,740 genes were then used to estimate the correct number of clusters/classifications. This was done by attempting multiple different cutoffs and selecting the number of clusters with the highest stability using consensus average linkage hierarchical clustering [5]. These experiments resulted in 4 subclasses being determined as the optimal cutoff.
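An illustrative R sketch of the first two filters is shown below; it is not the original Verhaak et al. code. The matrices huex, u133a, agilent (per-platform expression, identical gene order) and unified (the unified expression matrix) are assumptions.

```r
# Illustrative sketch of the cross-platform consistency and variability filters.
row_cor <- function(a, b) sapply(seq_len(nrow(a)), function(i) cor(a[i, ], b[i, ]))

# Filter 1: a gene is consistent if at least one pair of platforms correlates >= 0.7
# (i.e. it behaves consistently across at least two of the three platforms)
n_consistent_pairs <- (row_cor(huex, u133a)    >= 0.7) +
                      (row_cor(huex, agilent)  >= 0.7) +
                      (row_cor(u133a, agilent) >= 0.7)
consistent <- n_consistent_pairs >= 1

# Filter 2: retain only highly variable genes (MAD > 0.5 across patients)
variable <- apply(unified, 1, mad) > 0.5

keep <- consistent & variable   # the third (MAD-stability) filter is omitted here
```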
Using ClaNC, the total gene set to describe these 4 clusters/classifications was then reduced to 840 probes through linear discriminant analysis and cross-validation. The final set of 840 probes was then used to create the Verhaak et al. centroid-based classifier. These 840 genes constituted the starting pool of variables used for the algorithm presented here, aimed at the identification of an optimized smaller subset.
Datasets Used in this Study
The six datasets used for the process of building and validating our algorithm are listed in Table 1 along with their relative sizes, platforms and final model accuracy. Additionally, we used 122 RNA-seq samples from TCGA generated using the Illumina HiSeq2000 v2 platform. These were used to produce a second reduced 32-gene RNA-seq panel for better RNA-seq/microarray data agreement on Verhaak et al. classifications.
All 6 datasets used were obtained using Affymetrix microarrays, though different versions of microarray chips were used depending on the dataset. TCGA data was downloaded from the TCGA website [6], the Rembrandt data from the Rembrandt data download page [7] and the other datasets were downloaded from the Gene Expression Omnibus repository [8,9], including datasets from Sturm et al. [10] consisting of 46 samples, Schwartzentruber et al. [11] consisting of 27 samples and Grzmil et al. [12] consisting of 35 samples (Table 1). When more than one probe existed for a given gene in a given dataset, prior to normalization and batch effect adjustment the brightest probe was selected to represent each individual gene.
All datasets were combined and normalized using the R package limma [13], and batch effects were adjusted using ComBat [14]. ClaNC was used to create a centroid-based classifier [1], and a Verhaak et al. category (Mesenchymal, Proneural, Neural, or Classical) was assigned to each sample within the test sets.
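A minimal R sketch of the combining, normalization and batch-adjustment steps is given below. It assumes a list expr_list of gene-by-sample expression matrices with a common gene order and a batch vector labelling each sample's dataset; the specific normalizeBetweenArrays method shown is an assumption, since the exact options are not stated in the text.

```r
# Minimal sketch of dataset combination, normalization (limma) and batch
# adjustment (ComBat from the sva package).
library(limma)
library(sva)

expr <- do.call(cbind, expr_list)                       # combine the six datasets
expr <- normalizeBetweenArrays(expr, method = "quantile")  # one reasonable normalization choice
expr <- ComBat(dat = expr, batch = batch)               # remove dataset-specific batch effects
```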
A subset of the TCGA data consisting of 122 samples, for which both RNA-seq and microarray data were available, was used to develop our RNA-seq panel. This subset was used to find a small group of genes with high information content and correlation with microarray data in order to improve RNA-seq to microarray classifications.
Optimal Gene Cutoff Selection
The first step in our analysis was to estimate the optimal size with which to produce a classifier. To do this, we produced 1000 random forest models from randomly selected genes at each subset size between 2 and 60 genes, for both the reduced GBM48 classification panel and the reduced RNA-seq panel, and estimated the average random model accuracy on all 803 samples. The resulting curve was fitted using local polynomial regression, and additional probes were included only if they improved the average model by at least 0.001% (Fig 1) [15]. This produced an optimal cutoff of 48 genes for our reduced Verhaak et al. panel and 32 genes for our improved RNA-seq panel.
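A sketch of this cutoff-selection idea is shown below. The data objects ('gene_pool', 'train_x', 'train_y', 'test_x', 'test_y', with the labels as factors) are assumptions, the replicate count is reduced for readability, and the exact form of the 0.001% threshold is our interpretation.

```r
library(randomForest)

sizes <- 2:60
avg_acc <- sapply(sizes, function(k) {
  mean(replicate(100, {                       # the study used 1000 random models per size
    genes <- sample(gene_pool, k)
    rf <- randomForest(train_x[, genes], train_y, ntree = 5000)
    mean(predict(rf, test_x[, genes]) == test_y)
  }))
})

# Smooth with local polynomial regression and keep adding genes only while the
# marginal gain in smoothed average accuracy exceeds the threshold.
smoothed <- predict(loess(avg_acc ~ sizes), sizes)
gain <- diff(smoothed)
cutoff <- sizes[max(which(gain > 1e-5)) + 1]
```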
Feature Selection
Our initial expression set comprised the Affymetrix expression levels for the 840 Verhaak et al. probes in 171 of the 173 'core' samples (2 samples from the original training set were not available for download from the TCGA website). We then included only genes existing with the same gene symbol across all six training and test sets, which resulted in a total starting pool of 753 probes. We used five datasets to build the classifier (datasets #1-5, Table 1) and then used the Rembrandt [7] test set (dataset #6) for final verification. Each subset of 48 genes was trained on the TCGA training dataset (dataset #1) and evaluated for fitness on datasets #2-5 but not on the Rembrandt dataset (dataset #6), in order to avoid overfitting. This process helped to select gene subsets that worked on more than one Affymetrix chip type and on all datasets used in the subset fitness evaluation process. The final model was validated on the independent Rembrandt dataset (#6) that was not used to build the classifier in the training process. The Rembrandt dataset was selected for this purpose because it was the largest test dataset from a source other than the original classifier.
Random Forest
The "Random Forest" algorithm was named as such because it consists of hundreds or thousands of decision trees. The consensus of the classifications predicted by these decision trees represent the classification predicted by the overall model. For our random forest models, we used the "randomForest" library [16], an R implementation based on the original design by Leo Breiman [17]. All random forest models were built identically with genes selected from varSelRF, RFE and GARF. We used 5000 trees per model with a default number of variables randomly sampled at each node/split of the decision trees.
Genetic Algorithm/Random Forest (GARF) Approach
Genetic algorithms (GAs) are optimization procedures that mimic natural selection. The genetic algorithm used here is inspired by a small-molecule variable optimization algorithm based on linear regression described in Waller et al. [4]. Our genetic algorithm starts with a pool of variables from which subsets of randomly selected variables ('offspring' models) are created. Working iteratively, our GARF approach gradually eliminates less fit variables by keeping only the variables that make up the best models, leading to fitter generations of offspring models with each subsequent generation (Fig 2).
Our algorithm begins by drawing offspring subsets with the base R function "sample" [18]; random forest models are then built on these subsets and evaluated on datasets #1-5 for fitness (described in the 'GARF Offspring Fitness Evaluation' section). The genes in the best subsets are allowed to move on to the subsequent generation. The variables themselves are treated as a pool, and each subsequent generation of 48-gene subsets is randomly selected from the current pool.
The first key difference when compared to the Waller et al. approach is that instead of removing the least fit variables, we keep the variables in subsets that produce the best models. This difference in methodologies only removes variables which do not appear in the best models, rather than punishing variables that are potentially grouped with other poorly performing variables. This eliminates the need for the 'taboo' search found in the original algorithm, which confirms at every elimination step that genes in the least fit models are not present in any highly fit models.
Furthermore, in the case of our algorithm, the cutoff is dynamically selected based on the number of variables in the pool. This cutoff is set to keep the number of models needed to allow all genes to remain in the pool; thus, theoretically, if all genes perform the same, they should all be kept in the next iteration. This ensures that only genes which appear in more than a single top model will move on to the next generation. The Waller et al. algorithm allows a static kill factor to be used, with a suggested value of 5% of the worst models excluding variables found in top models. In contrast, our approach favors keeping variables which have combined predictive value together in the pool, allowing greater opportunities in subsequent generations for them to be placed together again with other groups with higher predictive value. This permits optimization to occur as the result of genes which together describe the system well, rather than removing individual genes which did not describe the system well or which may simply not have a strong effect on the total value of the subset of variables in a given model. The third difference is that Waller et al. utilize leave-one-out cross-validation to establish fitness. Grouping together multiple cohorts and building the model on samples from multiple cohorts, and then utilizing cross-validation to validate the model increases the possibility of over-training. Given that reproducibility is a challenge both between platforms and labs, our algorithm is designed to train on a single cohort, but then only select genes with reproducible effects in alternative cohorts. Building a model on a single cohort and being able to replicate it independently on additional cohorts improves the likelihood that it will be reproducible on additional datasets using a similar platform.
GARF Offspring Fitness Evaluation
To test the fitness of each subset, a random forest model is built using the genes that make up that subset. The fitness of each subset is then evaluated by how many identical classifications the random forest model built on it produces when compared to the Verhaak et al. ClaNC classifications obtained using all genes. The overall fitness is the average accuracy produced by all subset-derived random forest models built in a generation of subsets. The fitness evaluation process excludes dataset #6 which is used for final model validation after the completion of the algorithm in order to evaluate if the model has predictive capabilities outside of the datasets it was trained on.
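A hedged sketch of this fitness calculation is shown below; the helper name and data objects are ours, not from the published code.

```r
library(randomForest)

# Agreement between the subset-based random forest predictions and the ClaNC
# labels obtained with all genes; 'eval_x'/'clanc_labels' exclude dataset #6.
subset_fitness <- function(genes, train_x, train_y, eval_x, clanc_labels) {
  rf <- randomForest(train_x[, genes], train_y, ntree = 5000)
  mean(predict(rf, eval_x[, genes]) == clanc_labels)
}
```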
GARF Runs
The first generation of our genetic algorithm constructs a number of subsets equal to the total number of genes in the starting gene pool. In the case of this system, we start with 753 genes, so 753 subsets of 48 genes are created in the first generation of models. The number of subsets created, and of random forest models built on those subsets, in each subsequent generation is equal to the number of genes remaining in the pool. The number of models constructed therefore decreases as the pool of variables shrinks.
At each iteration, average model accuracy is evaluated by comparing the Verhaak et al. model built with ClaNC and the subset-based random forest model predictions against the 575 samples in datasets #2-5 (Table 1). If the classification of the random forest model is identical to the classification predicted by the Verhaak et al. model, the prediction is considered to be correct.
At each successful iteration at which average model accuracy improves, the unique genes from the top 2.08% of models from that iteration move on to form the starting pool for the next generation. The 2.08% cutoff is the minimum fraction of models needed to ensure that 100% of the genes have the possibility of surviving to the next generation of offspring models; the fraction kept is given by (100% / number of genes in a model). For example, if there were 400 genes in the pool, 400 subsets for building models would be made, 9 of which (400/48 ≈ 8.33, rounded up) would move on to the next generation. These 9 models would contain 432 genes if there were no overlap, making this the minimum number kept to allow all 400 genes to potentially move on to the next generation.
This iterative process ends when the next generation of models fails to be superior to the previous generation in average model accuracy or when the target number of genes is reached in the pool of variables. If the next gene pool fails to improve upon the last, the genetic algorithm stops and the best-scoring models from the best-performing variable pool are selected for follow-up with the holdout test set (dataset #6).
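Putting these pieces together, the generation loop can be sketched as follows. This relies on the subset_fitness() helper sketched above and is illustrative under our naming assumptions, not the published implementation.

```r
run_garf <- function(pool, k = 48, target = k, ...) {
  best_avg <- -Inf
  while (length(pool) > target) {
    n <- length(pool)
    # one offspring subset (and one random forest model) per gene in the pool
    subsets <- replicate(n, sample(pool, k), simplify = FALSE)
    fitness <- vapply(subsets, subset_fitness, numeric(1), ...)
    if (mean(fitness) <= best_avg) break      # stop when average accuracy no longer improves
    best_avg <- mean(fitness)
    n_keep <- ceiling(n / k)                  # ~2.08% of models when k = 48
    best <- order(fitness, decreasing = TRUE)[seq_len(n_keep)]
    pool <- unique(unlist(subsets[best]))     # unique genes from the top models
  }
  pool
}

# Example call (all arguments are assumed objects):
# run_garf(starting_genes, train_x = train_x, train_y = train_y,
#          eval_x = eval_x, clanc_labels = clanc_labels)
```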
The entire process, starting from the initial pool of 753 genes, was run 10 times, and the model reported in the Results section is the best of the 10 runs, ranked by comparison across all samples included in the test set.
VarSelRF
Variable selection using random forests (varSelRF) is a variable reduction package which utilizes random forest models [19]. We use it here as the first of two comparisons to our own variable reduction method. One thousand runs of varSelRF were performed using 5000 initial trees, 2000 trees in each iteration, and a variable drop fraction of 0.2 at each iteration. The average number of genes per run was 105.88; the maximum number of genes in a varSelRF output was 527 and the minimum was 15, with a standard deviation of 87.06 variables. Accuracy across the different datasets is reported in S1 Table.

RFE

Recursive feature elimination (RFE) is a variable reduction package which is part of the R 'caret' package [20]. We use it here as the second of two comparisons to our own variable reduction method. As a comparison, one thousand runs of RFE were carried out at a cutoff of 48 variables. The output, though comparable to varSelRF, had a much lower standard deviation and was considerably more consistent across all 1000 runs. Accuracy across the different datasets is reported in S2 Table.
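For reference, the two comparison runs might look roughly like this in R. This is a hedged sketch with assumed data objects ('train_x' as an expression matrix, 'train_y' as a factor of subtype labels); the parameter values follow the settings stated above.

```r
library(varSelRF)
library(caret)

# varSelRF: 5000 initial trees, 2000 trees per iteration, drop 20% of variables each step
vs <- varSelRF(train_x, train_y,
               ntree = 5000, ntreeIterat = 2000, vars.drop.frac = 0.2)
vs$selected.vars                          # genes suggested by varSelRF

# RFE with 10-fold cross-validation, requesting a 48-gene subset
ctrl <- rfeControl(functions = rfFuncs, method = "cv", number = 10)
rfe_fit <- rfe(train_x, train_y, sizes = 48, rfeControl = ctrl)
head(predictors(rfe_fit), 48)             # top-ranked genes from RFE
```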
Results
The number of possible groups of 48 genes from the 753 genes that made up our starting pool is greater than 2.13 x 10^76 [21]. As a result, testing every possible combination would take more than 5 x 10^68 years using a single thread on a core i5 processor. Therefore, a variable reduction technique was needed to reduce search space while improving the accuracy of the gene selection for these models.
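This count is simply the number of 48-element subsets of the 753-gene pool:

\[ \binom{753}{48} = \frac{753!}{48!\,(753-48)!} \approx 2.13 \times 10^{76} \]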
We first used varSelRF, a commonly used algorithm for gene selection in random forest models [19]. Unlike the method reported here, varSelRF cannot be forced to run at a fixed gene subset cutoff, and the ability to select an optimal number of variables has been reported as a benefit of varSelRF [22]. We ran varSelRF 1000 times and tested the genes it selected in RF models on all 803 samples, then tested the accuracy on dataset #6 consisting of 228 samples and on datasets #1-5 consisting of 575 samples used as training sets. The closest comparison to our method were runs whose suggested optimum fell between 43 and 62 genes, which accounted for 111 of the 1000 varSelRF runs. The average accuracy of such models was 83.27%; the best varSelRF model in that range was 85.68% accurate (62 genes) and the worst 81.82% (43 genes). By comparison, our best random model at 48 genes had 87.65% accuracy, about 2% better than the best varSelRF runs in the same range. Average models with 43 randomly selected genes were competitive with varSelRF at 82.13% (compared to 81.82% for models selected by varSelRF with 43 genes). The best model from the 1000 varSelRF runs was 88.54% accurate, and required 258 genes.
Our second comparative analysis was carried out using RFE from the caret package. We ran RFE 1000 times using 10-fold cross-validation. Using the top 48 genes from each run, we then built a random forest model on our training set using the selected genes, and evaluated them against all test sets. On average, the RFE output across all test and training sets (803 samples) resulted in an accuracy of 84.38%. The best suggested output from RFE resulted in an accuracy of 87.55%, making it outperform varSelRF in the general range of our optimized model. The worst model was 81.57% (S2 Table).
We also produced 1000 random forest models built on 48 randomly selected genes. These 'random' models produced an overall accuracy of 82.04%, and the best random model had 86.55% accuracy. RFE outperformed the random models by more than two percent on average, and by one percent for the best model output by RFE (87.55% vs 86.55%). The best randomly generated model outperformed the best varSelRF model in our range of interest: 85.68% with 62 genes for varSelRF vs 86.55% for the best random model. On average, varSelRF was slightly better than the random models in our range of interest, at 83.27% for suggested gene subsets between 43 and 62 genes (compared to 82.04% at random). Our GARF approach significantly outperformed all three.
After our cutoff of 48 genes was selected to be the optimum number for our purposes (see Optimal Gene Cutoff Selection), we then created random offspring subsets of 48 genes from the variable pool. These subsets were used to create random forest models to predict the Verhaak et al. classifications. These classifications were then compared with the original classification. Models that performed poorly had their genes eliminated from the overall pool of variables used to make the next generation. With every generation, variables were iteratively removed from our pool of variables based on fitness, eventually resulting in a vastly improved subset over random variable selection (Fig 2). Our own models based on our GARF algorithm significantly outperformed VarSelRF on average by over seven percent in our range of interest, and on average by over six percent when VarSelRF was able to select its own optimal cutoff. Our GARF approach out-performed the best model produced by RFE by more than three percent.
Survival Analysis
In order to explore survival by subtype in GBM and to examine the similarity in survival of the two alternative models (the panel based on the GBM48 method and the original classifier), Kaplan-Meier survival curves were generated for the 537 samples with available survival data among the 803 samples used in this study (Fig 3). Our new reduced classification scheme based on the GBM48 panel and the classifications based on the original 840-gene classifier both support the idea that the Proneural subtype is associated with a better outcome. Our classifier showed a similar separation of survival outcomes between subgroups to that of the original classifier, with a p-value of 1.76 x 10^-5 compared to 1.34 x 10^-5 for the original classification technique. This suggests that our simpler classifier has a clinical significance similar to that of the original classification technique with regard to survival.
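A hedged sketch of how such curves and associated p-values can be produced with the survival package is shown below; the data frame and column names are assumptions, and the log-rank test is one common choice for comparing subgroups.

```r
library(survival)

# 'clin' is an assumed data frame with columns: time (days), status (1 = death),
# and subtype (the predicted Verhaak et al. class for each sample).
fit <- survfit(Surv(time, status) ~ subtype, data = clin)
plot(fit, col = 1:4, xlab = "Days", ylab = "Survival probability")

survdiff(Surv(time, status) ~ subtype, data = clin)   # log-rank test across the four subtypes
```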
Genes Selected by Our Model
Our best model consists of the genes listed in Table 2, which also shows the average normalized and batch effect-adjusted expression of each gene in each classification along with its standard deviation (SD). The final accuracy of this model across all 803 samples was 90.91%; the dataset-by-dataset breakdown is listed in Table 1. Overall, the GBM48 panel was 92.65%, 81.51%, 89.05% and 96.71% accurate in predicting the Proneural, Neural, Classical and Mesenchymal subtypes, respectively.
In order to compare the similarity of our GBM48 panel to the 840 Verhaak et al. genes, we created a heatmap and clustering of our simplified panel and compared it to a corresponding heatmap including all Verhaak et al. genes (Fig 4). The sample separation into the four Verhaak et al. subtypes was very similar, supporting the use of GBM48 to achieve Verhaak et al. classification.
Similarly, multidimensional scaling (MDS) was used to cluster samples based on either the 840 Verhaak et al. genes used for the original classifications or the GBM48 panel (Fig 5). Though there appears to be a clear separation into Proneural, Mesenchymal, and Classical subclasses, it is worth noting that the Neural subclass tended to be similar to all three other subclasses in the Verhaak et al. classifications. In addition, healthy control samples in the Verhaak et al. study were classified as Neural, suggesting that this subtype is representative of the lack of extreme variation characterizing the other subtypes.
The GSEA analysis showed a statistically significant association with Alzheimer's disease-associated genes: 14 of the GBM48 panel's genes were associated with upregulation (FDR-adjusted p-value of 3.92 x 10^-6) and 11 with downregulation (FDR-adjusted p-value of 7.76 x 10^-5) in Alzheimer's disease. Additionally, two promoter motifs corresponding to SP1 and POU2F1 were found to be enriched in the GBM48 set. The first promoter region was linked with 15 of the GBM48 panel's genes, which contained the GGGCGGR motif. This motif has a statistically significant relationship with SP1 (FDR-adjusted p-value of 2.55 x 10^-4), a zinc finger transcription factor associated with cellular processes including cell growth, differentiation and apoptosis [29], which has been shown to be upregulated in gliomas with poor clinical outcome [30]. The second promoter region was linked with 6 of the GBM48 panel's genes, which contained the CWNAWTKWSATRYN motif for the POU2F1 transcription factor (also known as Oct-1; FDR-adjusted p-value of 4.73 x 10^-4), which has been shown to be differentially expressed in human glioblastoma cells [31,32]. G:profiler reported a number of statistically significant gene ontology associations with this dataset, largely related to anatomical structural differentiation of various nervous tissues, chemotaxis, axonal guidance and protein kinase binding (S1 Fig). For protein kinase binding across the entire GBM48 panel, 9 of the genes were associated, with an FDR-adjusted p-value of 1.82 x 10^-2. Additionally, two regulatory motifs were identified. One, for transcription factor AP-2α (GSCCSCRGGCNRNRNN), was found in 20 of the GBM48 panel's genes; AP-2α has been shown to be downregulated in gliomas and is believed to be negatively associated with the grade of human glioma [33,34]. The second, for EGR-1 (GCGGGGGCGG), was found in 26 of the GBM48 panel's genes; EGR-1 is a zinc finger protein that has been shown to be suppressed in human gliomas and human glioblastoma cell lines [35,36]. Twelve of the GBM48 panel's genes were found to be associated with each other by way of experimental evidence according to String-db (Fig 6). Amongst these 12 genes, 7 were associated with protein kinase binding in G:profiler (p-value 1.77 x 10^-5) and 4 were associated with kinase binding and transferase activity in GSEA (p-value 2.34 x 10^-4). Also according to GSEA, 5 of the 12 genes were associated with the Classical tumor subtype and 4 with the Proneural subtype. The genes involved in this network are shown in Fig 6 and are color-coded to indicate which of the subtypes (Classical or Proneural) they were associated with, as well as which database described them as statistically significantly related to protein kinase binding (G:profiler or GSEA).
Table 2. Gene symbols for the 48 genes in our top model; the average normalized and batch effect-adjusted expression level of each gene is shown for each classification, with the standard deviation in the adjacent column. doi:10.1371/journal.pone.0164649.t002
RNA-seq Comparison and Panel Optimization
In order to test the consistency between microarray-and RNA-seq-based expression data, we tested our method on 122 samples for which both were available. Importantly, the traditional Verhaak et al. classification scheme using 793 genes which were present in both RNA-seq and microarray data produced the same classification in 69.67% (85/122) of the samples.
Following this analysis, our GARF method was run using the core TCGA samples as the training set and the RNA-seq data as the test set. After normalization and correction of batch effects, we attempted to find the ideal gene signatures for classifying these RNA-seq samples in the same way as these samples had been classified using microarray data. Our results produced a 32-gene RNA-seq classifier which was 86.07% accurate and the most consistent with the microarray-based classifications. The RNA-seq classifier and the GBM48 panel did not share any genes. This is likely because the genes that are the most informative and consistent across both microarray and RNA-seq platforms are not necessarily the most consistent and informative genes when looking at microarray data exclusively.
Web Server for Classification
In order to facilitate the use of this algorithm, we built a web server at http://simplegbm.semel.ucla.edu/. This server supports microarray-based sample classification (one expression value per gene, submitted as a comma-separated file) and can produce outputs using the original Verhaak et al. classification scheme, as well as the GBM48 and RNA-seq panels. The server automatically handles normalization and batch effect adjustment using ComBat [14]. In the case of RNA-seq data, rank normalization is applied, which is better suited for cross-platform comparison with microarray data [37].
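As an illustration of a simple per-sample rank normalization of the kind described (the matrix name is an assumption):

```r
# 'rnaseq_expr' is an assumed gene-by-sample matrix of RNA-seq expression values.
# Within each sample, values are replaced by their rank scaled to (0, 1], which
# places them on a scale more comparable to rank-transformed microarray intensities.
rank_norm <- apply(rnaseq_expr, 2, function(x) rank(x) / length(x))
```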
Discussion
The goal of this project was to create a method as close in accuracy to the original classifier as possible, while using a significantly smaller number of variables. Centroid-based classifiers like ClaNC [3] and Prediction Analysis for Microarrays (PAM) [38] are excellent and intuitive ways of classifying groups; however, they rely upon gene-by-gene evaluations, i.e. each gene's fitness is evaluated on how well an individual gene separates out the different classifications. Gene-by-gene evaluation techniques tend to keep genes that have redundant information as they do not take into account multi-gene relationships or high levels of correlations between genes. By contrast, random forest models rely on decision trees, which allow multi-gene relationships to be accounted for in predictions. The multi-gene relationships permit fewer genes to better classify samples, and are a major benefit of utilizing random forest models in classification problems. Developing models on multi-gene relationships also has the benefit of potentially uncovering complex relationships between variables that would not be discoverable by evaluating each variable's fitness independently.
Several popular methods for selecting a smaller subset of genes exist, including iterativeBMA [39], varSelRF [19], RFE [20] and R-SVM [40]. Of these, only varSelRF and RFE are designed to work with random forest models. The most common application of RFE is with support vector machines, as in the R-SVM package [40]. VarSelRF has been reported to require fewer genes and to perform as well as other methods such as support vector machines (SVM), Diagonal Linear Discriminant Analysis (DLDA) and k-nearest neighbors (KNN) models [19,22]. Our method outperforms varSelRF and RFE for this application.
Earlier attempts to reduce classifiers of disease states using random forest models have largely relied on the varSelRF package in R [19]. One study of particular relevance which utilized varSelRF for variable selection with GBMs also attempted to re-evaluate the original classification technique using gene-isoform-based descriptors and random forest techniques [41]. However, the study produced an alternative classifier with low similarity to the original classifier, predicting the original classifications with only 81% accuracy. By comparison, our average random model using 121 randomly selected gene signatures was 85.60% accurate (1000 random models on our cohort of 803 samples). These random gene selections significantly outperformed the genes selected by this study. This is similar to our findings that the VarSelRF technique selects genes with similar accuracy to random models within our selected GBM datasets (84.51% on average for our 1000 test runs on our cohort of 803 samples). RFE fares only slightly better, presenting the need for an alternative strategy for gene subset selection.
A significant benefit of our hybrid method is that it evaluates subsets of probes together, so that probes lacking useful multi-gene relationships are slowly and selectively removed. While many other algorithms weight genes by LDA and individual performance, individual performance in our algorithm is ignored entirely in favor of genes that work best in groups of a specified size. The advantage of specifying a set number of genes is that it allows optimal groupings of genes to be selected to function within the confines of different diagnostic technologies that may require a limited number of genes in order to be practical.
Our GBM48 panel approximates the accuracy of the Verhaak et al. classifier while requiring expression values for only about 6% of the genes required by the original classifier. The Verhaak et al. classification itself was not perfect, as there was a distinction between 'core' samples and other samples that did not fit the classification assigned by clustering [1]. The core samples were selected through the use of silhouettes [42]; negative silhouettes were given to 27 of the 200 samples, indicating that they did not fit their assigned classification. This can be interpreted as an accuracy rate of 86.5% for the original classification scheme, with approximately 13.5% of samples not fitting it. Assuming those remaining samples were effectively assigned to classifications at random, roughly a quarter of them (3.38% of all samples) would match by chance, so the highest accuracy that can be achieved is 89.88% if the TCGA tumor training set used to build the Verhaak et al. model is representative of all GBMs. In our case, we considered our final result of 90.91% of samples being accurately assigned to be in the range expected of an excellent model. Additionally, the method design and the use of data from multiple platforms, batches and laboratories ensure that only genes with the highest level of consistency are used in our final models.
In addition to the similarities in accuracy between our two models, we have also shown in Fig 3 that our classifier produces survival curves very similar to those of the original classification technique. This is presented as a benchmark for comparing the clinical significance of our GBM48 panel with that of the Verhaak et al. classifier. We would also like to point out that the Verhaak et al. classification technique is not the best indicator of survival in GBM patients, despite the favorable outcomes for patients diagnosed with the Proneural subtype. In fact, one of the best indicators of a favorable outcome for patients undergoing treatment with temozolomide chemotherapy has been shown to be methylation of the MGMT promoter [43]. Additionally, at least one alternative classification strategy has been proposed involving epigenetics. This integrative approach utilizes all available epigenetic, copy number variation, microarray expression and genetic variation data, instead of microarray data exclusively, as a classification technique for GBMs, which shows the potential for improvement upon existing classification techniques [10].
The GARF framework presented here can be used for any project that requires the optimization of a model using a specific number of genes to work with a particular infrastructure, and likely would work with any other dataset with quantitative descriptors where variable reduction would be advantageous at the cost of a small amount of accuracy.
|
v3-fos-license
|
2022-05-19T15:09:52.825Z
|
2022-05-17T00:00:00.000
|
248870737
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/2331186X.2022.2074343?needAccess=true",
"pdf_hash": "a7bff30644d2767d83be5d79c1be9f5372ca0230",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44202",
"s2fieldsofstudy": [
"Education"
],
"sha1": "6d7df27052436dbbcc89d399b1c2c9735845c60d",
"year": 2022
}
|
pes2o/s2orc
|
The practice of learning post-1994 instructional reforms in Ethiopia: Looking at a case primary school through the lens of organizational learning theory
Abstract The purpose of this study was to examine school-level learning of post-1994 instructional reforms through the lens of organizational learning theory (OLT) in a sample primary school. Data were collected from three teachers, a school principal and the head of the Woreda Education Office (WEO) using interviews, and complemented by a review of documents. The collected data were analyzed, narrated and interpreted qualitatively. The results showed that (1) the school displays only limited constructs of organizational learning (OL) and characteristics of a learning organization (LO); (2) team learning appeared to be the liveliest agency of learning, followed by school-wide learning, while the teacher as an agent of learning was still indiscernible; (3) self-initiated instructional reforms appear to have been overlooked both in policy and in practice, while reforms prescribed by the MoE were the most emphasized contents of learning; (4) the influence of team learning and school-wide learning on reforming instruction seems insignificant, as audited from teachers' plans; rather, it resulted in a heap of reports that would be discarded after periodic performance appraisal; (5) the school's subtle effort to become a learning school was hindered by an overdose of prescribed reforms, a lack of instructional leadership skills and teachers' resistance resulting from pedagogical deskilling, subject-matter incompetence and reluctance to change. On the basis of these findings, conclusions and implications are drawn.
ABOUT THE AUTHOR

Animaw Tadesse (Researcher) completed his first degree in education and his second degree in educational measurement and evaluation. He has been working as a teacher educator for more than a decade, and since 2016 he has been working as a policy researcher. His research areas are teacher education, educational assessment and evaluation, educational reforms, and instruction. Currently, he is a PhD candidate in curriculum studies at Addis Ababa University.

Ambissa Kenea (PhD) is Associate Professor of Curriculum Studies and Multicultural Education at Addis Ababa University. His research interests include education and diversity (multiculturalism), curriculum development, teachers' professional development, and education for sociocultural minorities. He has published dozens of scholarly articles in local and international peer-reviewed journals.
PUBLIC INTEREST STATEMENT
Since the inauguration of the 1994 Education and Training Policy, schools in Ethiopia, as elsewhere in the world, have been experiencing multiple instructional reforms. These reforms have been incorporated into initial and in-service teacher training programs, especially after the introduction of the Teacher Education System Overhaul (TESO) in 2003. Despite those efforts, empirical local studies as well as government reports reveal that failure stories overwhelm success stories. Using organizational learning theory (OLT) as a theoretical framework, this study therefore explores how a case public primary school has been learning the reforms in its own context. The findings could prompt other researchers to undertake nationwide studies using the theoretical framework as a new lens. They may also alert education policymakers in the country, who often prioritize top-down policy prescription over blended and/or bottom-up reforms.
Introduction
Nowadays, technology and pervasive global competitiveness are making it difficult for organizations to succeed as easily as they did decades ago. Consequently, many companies around the world have experienced failure, since market competition for survival is becoming very difficult (Ghazzawi & Cook, 2015; Probst & Raisch, 2005). This change casts doubt on the value of the technical rationalist approach to organizational performance (Austin & Harkins, 2008; Fullan, 2007; Hadad, 2017).
Similar to other organizations, school systems fall short of realizing their mission of preparing the young generation for such an uncertain world (Darling-Hammond, 2012; Stoll & Kools, 2017). The inability of the education sector to innovate itself, the limited professional ability of teachers in preparing the generation and, most importantly, the failure of successive educational reforms to improve classroom practices are typical school realities that represent the challenge (Stoll & Kools, 2017).
Thus, continuous improvement in performance is considered mandatory for organizations' survival (Maletic et al., 2012; Probst & Raisch, 2005). Despite the multiplicity of solutions suggested in the literature, prioritizing knowledge and information management has become indispensable for staying competitive in the global market (Hadad, 2017). Consequently, the very source of competitive advantage in the twenty-first-century economy is thought to be "creating, acquiring, and developing knowledge within an organization" (Hadad, 2017: 206). Hence, learning is thought to be a prerequisite for organizations, including schools, to survive and maximize organizational performance in an environment of rapid change (Murray, 2002; Senge et al., 2012).
Organizational learning (OL) is considered an invaluable asset for organizations struggling with change and growth (Murray, 2002). Consequently, it has become a topic of intense discussion since the publication of "The Fifth Discipline", a seminal work authored by Peter Senge (1990). Despite the disparity in singling out a definition for a LO (Wai-Yin Lo, 2004), LOs, according to this author, are: organizations where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together (p. 3). The notion of organizations as learning entities is also debatable. Some argue that organizations learn as human beings do, so that they can search for, store, retrieve and interpret information (Walsh & Ungson, 1991; Huber, 1991). Others support the metaphoric perspective that instead considers individuals as learning agents who individually and collectively represent OL (Argyris & Schon, 1978; Wellman, 2009). The metaphoric discourse focuses on the characteristics of a LO instead of the constructs, but both discourses are prominent in explaining the concept of OL. This paper examines the case school as a LO using knowledge from both discourses of OL.
OLT is one of the major theories often employed to understand this inbuilt learning in organizations (Argyris & Schon, 1978; Huber, 1991; Senge et al., 2012). The theory views OL as multi-level learning that takes place at the individual, team and organizational level (Senge et al., 2012; Giesecke & McNeil, 2004). According to OLT, organizations may know less than their members; such conditions prevail when organizations fail to learn what every member of the organization knows (Argyris & Schon, 1978) or when members lack the knowledge and skills required to share with or learn from others (Wellman, 2009). Consequently, "the collective organization benefits or suffers from individual capabilities" (Wellman, 2009: 37) and from its learning culture (Argyris & Schon, 1978). OLT also understands learning in organizations in terms of change in cognition, in behavior/action, and in both thinking and action (Qin & Li, 2018; Park, 2006; Scott, 2011; Senge et al., 2012). The thinking perspective of OLT operationalizes OL in terms of an organization's capacity to search for, store, retrieve and interpret knowledge (Huber, 1991), while the action perspective understands learning in organizations in terms of change in the behavior of the learning entities (Watkins & Marsick, 1996). Huber (1991) synthesized four constructs linked to the thinking perspective, namely knowledge acquisition, information distribution, information interpretation and organizational memory. According to this author, the constructs refer to the processes by which knowledge is obtained (knowledge acquisition), information from different sources is shared and thereby leads to new information or understanding (information distribution), distributed information is given one or more commonly understood interpretations (information interpretation) and knowledge is stored for future use (organizational memory). Kools and Stoll (2016), advocates of the metaphoric perspective, identified seven characteristics which a LO must exhibit: developing and sharing a vision centred on the learning of all students; creating and supporting continuous learning opportunities for all staff; promoting team learning and collaboration among staff; establishing a culture of inquiry, innovation and exploration; establishing embedded systems for collecting and exchanging knowledge and learning; learning with and from the external environment and larger learning system; and modeling and growing learning leadership (p. 3). Since this is the model of OL most often adapted to study schools (e.g., Qin & Li, 2018; Collinson et al., 2006; Panagiotopoulos et al., 2018; Park, 2006; Senge et al., 2012; Torokoff & Mets, 2008), this paper takes the position of the blended perspective, in which OL is understood as change in both thinking and action. Moreover, this perspective is chosen because good teaching requires of teachers a fundamental shift in both thinking and action (Senge et al., 2012).
School as learning organization
Any entity that is established to accomplish predefined goals, has a group of individuals who act for the collective, and has clear boundaries with other organizations can be called an organization (Argyris & Schon, 1978), and this applies to schools (Park, 2006; Senge et al., 2012). With regard to learning, it is assumed that all organizations have the ability to transform into a LO (Can, 2010; Knapp, 2008; Senge et al., 2012). Consequently, it has become necessary to rethink any organization, including schools, as a LO (Kools & Stoll, 2016) so as to gain a competitive advantage over others. Thus, OL in a school setting is the change among school-level actors in how they think and act in attaining their mandated mission 1 (Hora & Hunter, 2013). Therefore, as in other organizations, OL in a school setting can operate at the individual, team and school level. Individual learning is thought to be the building block of both team and school-level learning (Fauske et al., 2005). Thus, learning by each teacher is the foundation for the learning of a school at the team and organizational level. In line with this, Beauregard et al. (2015: 719) argued that the process of OL "does not occur without strong legitimacy basis provided by individual learning agents; [it] is produced and reproduced by individual learning agent." However, learning by the individual teacher does not necessarily mean learning by the school, because schools may not learn knowledge that every teacher acquires (Senge et al., 2012).
The second learning entity is the team, which is the most powerful agency for improving OL (Law & Chuah, 2015; Lewis et al., 2016). Teaming up creates a collaborative culture and collective responsibility within a professional learning community. Its effectiveness over individual learning is due to the collective intelligence of a team, which exceeds the sum total of the intelligence of its members (Law & Chuah, 2015). However, OL does not happen in the mere gathering of individuals into groups (Argyris & Schon, 1978); rather, empowering members is a necessity for learning to happen within the group (Law & Chuah, 2015). In the school context, teacher subject groups, staff development teams, site teams, peer networks, and team teaching can function as bases of team learning (Kools & Stoll, 2016). In schools where team learning is functional, "teachers engage in structured processes that include joint lesson planning and in observing and commenting on colleagues' classes" (Collinson et al., 2006, p. 41). However, team activity is not equivalent to team learning; rather, the latter is a principled practice designed to bring about the collective learning of minds acting together (Senge, 1990, as cited in Kools & Stoll, 2016). With respect to school-wide learning, Pedler et al. (1989) noted that OL does not occur through the mere training of individuals but takes place as a result of learning at the organizational level. In the same vein, Beauregard et al. (2015) indicated that organizations learn through the medium of their subunits, and that these units are individuals and teams functioning in a multilevel system. By implication, schools also learn through a multilevel system that operates in networks of individuals and teams of teachers sharing common goals. Thus, teachers and school leaders as individual units and/or teams of teachers/leaders are agents of school-level learning, but it is obvious that "individual members can come and go in an organization, but the organization can preserve knowledge, behaviors, norms, and values over time" (Collinson et al., 2006, p. 109).
However, a multitude of variables can inhibit or facilitate the creation of a LO. The quality of leadership, organizational culture, teacher behavior, and the reform approach employed by the government are typical contextual variables with which the success or failure of both building learning schools and implementing instructional reforms is connected (Hallinger & Lee, 2013; Schlechty, 2009; Schleicher, 2016).
Instructional reform
Educational reforms are common at the system, school and/or classroom level (Fullan, 2015). Reforms which aspire to bring improvement in student learning are often labelled instructional. Teachers are now indisputably considered the most important agents of instructional reform implementation (Yun, 2007), because they play multiple roles as curriculum developers, action researchers, team leaders and staff development facilitators (Campbell et al., 2003, as cited in Yun, 2007). Similarly, Kools and Stoll (2016) argued that any reform in education will take hold in the classroom only if it successfully alters the behaviors and beliefs of teachers. The rationale for this claim is noted in a World Bank (2018: 16) report: "System-level commitment to learning without school-level innovation is unlikely to amount to more than aspirational rhetoric." In well-developed nations, especially in Europe and the USA, countless reforms have been tried in teaching and learning per se. Some of these reforms are subject specific, such as reforms in teaching science and mathematics (Cooper, 2013), and others are linked to the application of instructional technology, pedagogical interventions or the use of instructional materials. Similarly, Hallinger and Lee (2013) discussed the instructional reforms tried so far in Thailand: student-centered learning, cooperative learning, brain-based learning, localized curriculum content, and use of technology. Such reforms have been common in Africa since the 1990s (Altinyelken, 2010; Chisholm & Leyendecker, 2008; Eger, 2016). According to Schlechty (2009: 32), all these reforms aspire to improve instruction, and the central idea behind them is making an immediate shift from heavy reliance on lectures to discovery learning and inquiry. Kools and Stoll (2016) attributed the necessity of instructional reforms to the inherent weaknesses entrenched in traditional models of schooling, where "the single teacher, the classroom segmented from other classrooms each with their own teacher, and traditional approaches to teaching and classroom organization - are inadequate for delivering these 21st century learning agendas" (p. 12). Schools as learning organizations (SLOs) should therefore engage in a process of learning the new roles that such a quest for improvement brings. This paper aims to interrogate how such learning roles are being tried out in a school context in Ethiopia.
Overview of instructional reform in Ethiopia
Educational reforms in Ethiopia usually follow a regime change; accordingly, in 1994 a new Education and Training Policy (ETP) was endorsed, three years after the overthrow of the military regime. The policy and its directives contain structural, curricular, and pedagogical reforms. With regard to curriculum and pedagogical reforms, the policy gave particular emphasis to producing citizens who are problem solvers and critical and creative thinkers (MoE, 1994).
To realize this goal, the Education Sector Development Programs (ESDPs) prioritized curricular reforms which promote change in teachers' classroom practices. In light of this, ESDP-II (MoE, 2003) called for a change in teaching methods and in the system of assessment so as to create the citizens envisioned in the policy. A quality improvement package which, among other things, aimed to install these reforms was introduced into the country's education system (MOE, 2008). ESDP-IV (MoE, 2010) also included similar elements under curriculum reform, aspiring to improve teachers' instructional practice at the classroom level. Moreover, a Continuous Professional Development (CPD) program (MoE, 2009) was introduced to improve the quality of instruction.
However, the reforms in pedagogy were solidified after the introduction of the Teacher Education System Overhaul (TESO; MoE, 2003). Since then, ideas such as the teacher as reflective practitioner, the teacher as action researcher, cooperative learning, and continuous assessment, all aimed at improving instruction, have become part of teacher training and development programs. Moreover, TESO reform ideas were translated into strategies and initiatives, and initiatives such as active learning (AL), continuous assessment (CA), and action research (ACR) are collectively labelled in this paper post-1994 instructional reforms. Failure stories, however, exceed success stories, as documented by the government, funding agencies, and individual researchers.
Statement of the problem
Recent findings reveal that the instructional reforms introduced so far in Ethiopian primary schools have seldom improved schools' and students' achievements. One such finding comes from the scores obtained by primary school students in learning assessments (MOE, 2013), which fall far behind the expectations of the reform efforts. The Early Grade Reading Assessment (EGRA) scores (RTI, 2010, 2014), for instance, show that children in first-cycle primary schools were not developing the basic language skills required to learn effectively in later years: only 5% were able to perform to the standard in reading fluency (2010) and more than half were nonreaders 2 (2014). Similarly, scores from the Early Grade Mathematics Assessment (EGMA; Asfaw, 2015) and successive National Learning Assessments (NLA), according to the analysis of the Japan International Cooperation Agency (JICA, 2019), were below the 50% pass mark set in the education and training policy (TGE, 1994). With regard to the process, local studies conducted on the status of implementing post-1994 instructional reforms in primary schools show that the reforms seldom transformed teachers' classroom practices (Hindeya & Mulu, 2013; Melese et al., 2009; Mulu, 2004; Serbessa, 2006; Tuji, 2006). This has left instruction lingering in its pre-1994 way of doing things. Taken together, the failure of post-1994 instructional reforms and the resulting quality crisis seem beyond speculation.
There are ample findings from OL research which claim that any organization is only as good as its learning ability (Law & Chuah, 2015; Wellman, 2009). That is, the more an organization is a LO, the more competitive and sustainable it will be in the dynamic global competition. Schools are no exception. One can infer that the effectiveness of school reforms in general, and of instructional reforms in particular, depends heavily on the potency of the school in making OL an aspect of its culture. However, how the case school is built into a LO and how contextual factors facilitate or inhibit the learning of instructional reforms by the school require investigation. Thus, this study was planned to examine the case school's learning of instructional reforms via the lens of OLT. In so doing, it tried to find answers to the following questions: (1) How is knowledge acquired, distributed, interpreted and stored for future use at the school?
(2) How prevalent are the characteristics of a learning organization in the school?
(3) What is the agency and content of learning in the case school?
(4) How does this "learning process" influences instructional reforms at the school?
(5) What conditions stimulate or inhibit organizational learning in the school?
Methods and materials
The objective of the study is to examine school-level learning of instructional reforms using the lens of OLT in a case primary school. To this end, a qualitative research method with a case study design was employed, because the case study is well suited to studying issues in education (Stake, 1995) and is especially useful for understanding pedagogic issues (Mills et al., 2010).
Selection of a case is considered critical in case study research (Yin, 2018; Mills et al., 2010), and the relevance of the case(s) to the research objective is the most important criterion for selection. Hence, the case school, located in Bure Town (410 km north-west of Addis Ababa, the capital of Ethiopia), was purposively selected from the 12 public primary schools in the town's administration. The school was selected for two reasons. First, of the twelve primary schools functioning in the town, it was awarded the title of best performing school in the 2018 annual performance evaluation; hence, we were interested in studying how OL is built in a 'best school', based on the principle of appreciative inquiry. 3 Second, the case school is one of the oldest schools, having functioned for over 40 years; consequently, it has experienced many of the post-1994 instructional reforms from the very beginning. The data sources were the head of the Woreda Education Office (WEO), the principal of the school, and three veteran teachers working in the school (for more than 15 years). Data were collected using interviews. Moreover, government and school documents relevant to the study were reviewed.
The extent to which OL constructs were exercised in the school, the degree to which the characteristics of a learning school exist, the contents and agency of learning, the way OL influences instructional practice vis-à-vis the reforms, and the contextual variables which support or inhibit the SLO while learning these instructional reforms were the focal contents of the data gathering instruments.
Procedurally, the interview guides were developed based on a review of relevant literature and observation of existing experiences. One of the researchers facilitated the interviews and took detailed field notes. The data so collected were analyzed thematically. Themes drawn from the basic research questions of the study, and those that emerged through reading and re-reading of the field notes, were used as pillars for the data analysis.
Results
This section is devoted to presentation of the results of the study based on five themes drawn from the basic research questions.
Knowledge acquisition, distribution, interpretation and storage
Knowledge acquisition, distribution, interpretation and storage constitute important constructs of OL (Huber, 1991). The data sources for the present study were asked how well the school engaged in these processes. Alongside interviews, school documents were reviewed to complement the data obtained.
To start with knowledge acquisition: experience sharing with cluster schools, 4 peer/expert supervision and, to some extent, mentoring were the strategies through which teachers in the school acquired knowledge likely to improve their classroom practice. Concerning experience sharing, the school made periodic visits to other schools; the school principal noted, "in 2018, the school has made experience sharing with two schools in the cluster." Similarly, both the principal and the head of the WEO reported periodic professional visits by cluster supervisors and woreda education experts. Supervision and mentoring were other strategies employed to acquire knowledge. According to the principal, "in-school supervision" and "mentoring activities of senior teachers to their mentees, especially in language and the sciences streams" were used to acquire knowledge about teaching and learning in light of the post-1994 instructional reforms. This opinion was supported by the interviewed teachers, who confirmed that the feedback given in post-supervision sessions has been useful for improving their classroom practice. However, according to the principal and the head of the WEO, teachers' efforts to further analyze and share this feedback, and to transform it into instructional knowledge, were limited.
Even though not deep-rooted, learning from the school's own experience and learning from the experiences of others emerge as the prevailing strategies of knowledge acquisition in the case school. However, the process rarely amounts to real organizational experimentation, as it lacks rigor regarding the cause-effect relationship between the school's actions and outcomes. Moreover, evidence of acquiring knowledge using other strategies 5 (see Huber, 1991) seems limited. In particular, one of the teacher participants strongly argued that inherited knowledge is almost negligible due to staff turnover and poor documentation. She said: Teachers often transfer from school to school; but there is no mechanism in the school designed to tap their knowledge. In the school, I have been working since the last nine years; if I transfer to another school, there is no strategy set by the school through which I can share this knowledge and this is true for school principals and supervisors.
Concerning knowledge distribution, the participant teachers and the school principal confirmed that there were networks of teacher groupings organized in the form of committees, unions (e.g., the language teachers' union, the science teachers' union) and learning teams (e.g., the CPD committee and the "one to five organization", which now seems to have been phased out). Nevertheless, the findings show that knowledge which helps to improve instruction is rarely derived from this networking, as the school and its teachers focus on readymade focus areas cascaded in the form of periodic checklists/templates. This makes the instructional knowledge shared among teachers almost identical. Thus, the various teacher team-ups did not seem to be effectively networked, and this has limited both knowledge creation via interpretation and knowledge sharing between and within teacher groupings.
Regarding the process through which teachers give meaning to the distributed information, uniform interpretation appears to be preferred over variation. The school principal asserted, "in the instructional issues on which our teachers/team of teachers discuss, we encourage them to rich on similar understanding and way of doing classroom routines". The head of the WEO also asserted that a similar understanding is sought from all teachers, and that this was checked through periodic supervision by the WEO and the cluster supervisor. This implies that, irrespective of the subject they teach, the experience they have, and the grade level they teach, teachers were required to reach a common interpretation. One possible reason for this might be the strong tradition of prescribing curricula and the means of implementing them, which limits teachers' scope to try different ways of carrying out their professional duties.
With respect to the fourth construct, knowledge storage, the head of the WEO replied that knowledge already learnt in the school is stored in the form of minutes, piles of paper, and sometimes soft copies. However, the school principal was unable to locate most of this so-called "knowledge" in the storage sites. Here, the question was rephrased and put to him: "the school has about 40 years since its establishment (note 6) and in those years, it might have acquired a wealth of information from which teachers could learn many things; then where do you think this information is stored?" He smiled and replied, "let alone information of the last forty years, I could not find the same in the last ten years; it has gone with teachers and principals who transferred to other schools or who retired." From a knowledge management perspective, however, what he called knowledge is not really knowledge, as it remains largely unused and is effectively lost. The knowledge management chain, according to knowledge management theory, has four hierarchies, namely data, information, knowledge, and wisdom (Bellinger et al., 2004). While information is data that has been given meaning through an understanding of relations, knowledge is transformed information about which individuals have understood the patterns of those relations (Wang, 2007). Thus, what is stored in the school's storage sites seldom qualifies as either information or knowledge; rather, it appears to be unprocessed data, which moved with the teachers and school principals who transferred to other schools.
Teachers held the same view; one of the participants asserted that the archives, databases and filing systems in which knowledge is documented for reuse in improving instructional practice are limited. A probing question was then asked: "If you have questions on how to do your teaching, from where/whom do you get that question answered?" She replied that she asks the department head. This shows that the knowledge obtained from experience sharing in and out of the school is tacit knowledge stored in the minds of a few individuals.
Characteristics of schools as learning organization
Kools and Stoll (2016), based on a synthesis of the literature, identified seven basic characteristics of a learning school: developing and sharing a vision centered on the learning of all students; creating and supporting continuous learning opportunities for all staff; promoting team learning and collaboration among all staff; establishing a culture of inquiry, innovation and exploration; embedding systems for collecting and exchanging knowledge and learning; learning with and from the external environment and the larger learning system; and modeling and growing learning leadership. The major findings from the interviews and document review are summarized below under each characteristic.
With regard to developing and sharing a vision, all the participants replied that the school has a vision, which is incorporated in various school documents (e.g., in the plans of the teachers' one to five grouping, the students' one to five grouping, the school's strategic plan, etc.) and displayed in the school compound. However, the school principal and teachers argued that it is nominal and not effectively communicated to stakeholders. Moreover, both the school principal and the participant teachers were unable to remember this vision. This further suggests that the knowledge acquisition, distribution, interpretation and storage processes that constitute OL were rarely practiced in the school.
Concerning creating and supporting continuous learning, all the participants stated that teachers in the school had various opportunities for learning, although these were intermittent and largely replicated instructional methods imposed by the MoE. One of these opportunities was periodic training from a nearby College of Teacher Education, from Woreda education experts, and from trained and/or experienced teachers in the school. The usual CPD, which gives teachers apparent freedom in topic selection, was another opportunity for continuous learning noted by participants.
In relation to promoting team learning and collaboration, the checklists used in the school and the interview data confirmed the existence of different teacher teams, such as the science teachers' union, the mathematics teachers' union, and the language teachers' union, in addition to the usual teachers' one to five grouping, departmentalization, and development armies. Interviewees, however, were not satisfied with the contribution of this team learning to shifting teachers' classroom practices in the direction to which the post-1994 reforms aspired. One of the teachers replied, "the team activities I have engaged in with other teachers do have more paper value than truly improving my classroom practice." The school principal and the head of the WEO held the same view; they regarded the teams as a means of communicating the various checklists imposed from above.
With respect to establishing a culture of inquiry, innovation and exploration, the head of the WEO and the reviewed school documents (such as action research reports, teachers' daily lesson plans, and checklists developed by the school and the WEO) show that, except for minimal efforts by the school to encourage teachers to do action research (note 7) and to support instruction through technology, the room for inquiry, innovation, and exploration seems closed due to an organizational rigidity that gives no opportunity for school-based reforms. The case in point raised by the principal and shared by the head of the WEO was the implementation of continuous assessment (CA). As the principal stated, he planned to reform the school's summative continuous assessment (SCA) practice and requested approval from the WEO. Surprisingly, the WEO, in turn, requested the Zone Education Department (ZED) and then the Regional Education Bureau (REB). The principal was asked, "what if teachers requested you to do CA differently?" He replied, "I refer to the WEO." Analysis of teachers' lesson plans also supports the absence of a focus on reform elements in the instructional process.
With regard to embedding systems for collecting and exchanging knowledge, teachers and the school principal confirmed that the school has well-established structures extending from the smallest one to five group up to the development army. However, the collection and exchange of knowledge within and between these sub-structures were reported to be so bureaucratic that they did not prioritize knowledge-based and meaningful professional engagement, but rather adherence to the school's demands.
Concerning learning with and from the external environment, participant teachers and the school principal reported that, except for periodic visits to cluster schools, the school is a closed system which rarely learns from the external environment. For instance, no collaborations were made or partnership agreements signed with anyone else, including parents, for instructional improvement per se.
The last characteristic, modeling and growing learning leadership, was also found to be limited, since the school principal was busy preparing checklists for periodic teacher appraisal, writing project proposals for funding agencies, and reporting to the educational authorities above him. This indicates that, compared with administrative functions, the instructional functions that are vital to improving teaching and learning seem to be overlooked by the school principal.
Forms and contents of learning in the school
OL is undertaken by multiple agencies, and its contents are highly diversified depending on the vision and mission of the organization.
With respect to the contents, AL, CA and ACR were the common focus areas of learning noted by all the participants. In addition, they mentioned subject-specific contents such as language, civic and ethical education, and the science kit. Topical issues like substance abuse and HIV/AIDS were additional focus areas in which teachers engaged in periodic school-wide learning. Regarding the agency of learning, these contents, especially AL and CA, seem to be addressed mainly in team learning. This is because multiple team learning groups, such as the one to five groups, subject-specific unions, development armies, supervision teams, etc., were established in the school. CPD, in which AL, CA and ACR are priorities, was conducted by teacher teams, though it is part of a school-wide initiative. There were also occasions when topical issues and common contents were discussed in school-wide sessions, but only rarely. Nevertheless, participant teachers and the principal made clear that learning at the individual teacher level might be either tacit or non-existent. The principal and the head of the WEO supported this claim by citing the underachievement of teachers in the licensing test administered in 2017. This implies that the contents of learning were pedagogical issues already prescribed by the MoE through a series of strategic documents containing multiple programs and initiatives. Although the school appears to adhere to team learning as the agency of school-wide learning, the contents did not win the commitment of team members, as the participant teachers clearly showed their boredom with those contents.
Teachers' learning and its influences on instructional reform
Teachers in the school acquired knowledge of the statutory instructional reforms from experience sharing within the school and with other schools, using peer supervision and experience sharing respectively, although not adequately. The school also obtained training from sources within and outside the school. In addition, there were teacher groupings through which teachers could acquire knowledge relevant to their classroom practice (note 8). A review of a sample of checklists, however, shows that instructional reforms were rarely in focus. For instance, of the ten checklist criteria sent from the WEO, none focused on instructional reforms related to CA, AL or ACR. Of the fourteen criteria the school dispatched to teachers, only one required teachers to support instruction with technology and to do action research.
More importantly, analysis of a teacher's daily lesson plans reveals limited evidence of the incorporation of post-1994 instructional reforms. Instead, they follow "traditional" lesson planning, in which the daily lessons for a given week appear on a single sheet of A4 paper following traditional didactic elements. A specimen from the month of April is portrayed below.
Moreover, analysis of the teachers' yearly one to five plan shows that issues of instructional reform, except score analyses, were overlooked; of the fourteen activities planned for the year, none dealt specifically with elements of instruction. The plan focuses on other issues, such as increasing students' engagement in one to five and development army meetings, minimizing the number of dropouts, decreasing the number of underachievers via tutorials, organizing meetings with parents, and similar matters. Thus, it is possible to conclude that the limited knowledge obtained through these approaches to knowledge acquisition was rarely taken to the classroom.
The national school standard document (MoE, 2014), however, does incorporate process elements that require teachers to practice instructional reforms, albeit within the scope of already identified reform elements. This document is a guideline for ranking schools from 1 to 4, where 1 indicates below standard and 4 highly standardized. The document has 26 standards, of which 14 deal with process and account for 35% of the total evaluation. These 14 standards comprise 60 indicators carrying varying weights. Of these, the criteria that require teachers to practice the post-1994 instructional reforms appear to account for less than 3% of the total 100%. Similarly, school leaders were given some space to facilitate teachers' instructional reforms and to monitor their implementation. Nevertheless, it appears difficult for teachers to commit themselves to a weight of less than 3%, since 97% of the criteria did not require them to practice these reforms.
The context: facilitating and inhibiting variables
OL is influenced by conditions that prevail inside and outside the school environment. According to the school principal, teachers, the reformists, and school leaders are foremost among these conditions. With regard to teachers, they lacked the subject-matter and pedagogic competence to implement the government reforms. There is also reluctance on the part of teachers to implement the reforms; the principal attributed this lack of commitment to teacher dissatisfaction. In his own words, "teachers are professionally reluctant to learn and practice the instructional reforms because of the fact that they are dissatisfied with the incentives government provides." With regard to school leaders, the school principal and the head of the WEO asserted that they lacked instructional leadership skills. Though the school principal was trained to identify and fill teachers' skill and knowledge gaps, the head revealed that he was overburdened by administrative functions.
Second, the head of the WEO affirmed that schools were busy hosting a heap of reforms such as BPR, BSC, Kaizen, one to five and its families, school improvement, CA, AL, ACR, performance appraisal and many more. These reforms, he noted, are introduced so rapidly that nobody seems accountable for their failures or successes. Some of these reforms, the school principal, the head of the WEO, and the participant teachers claimed, are contradictory: AL seems to contradict content coverage, and CA the regional examination. Regarding AL and CA, one of the teachers responded: We are expected to deliver teaching using AL methods and this demands us to be flexible in content coverage. Because, engaging students in the teaching learning process needs more time than the traditional lecture method. Contradictorily; during classroom supervision, the principal, cluster supervisor, and experts from WEO strictly check the alignment of the daily lesson plan with the annual lesson plan. So does CA with what is focused in regional examination.
Moreover, the resources critical to effectively implementing the post-1994 instructional reforms were lacking. Imposed reforms, a lack of continuous professional training and a shortage of inspired teachers were also major challenges to OL and instructional reform implementation noted by participants. Teacher and principal turnover and the absence of a functional knowledge/skill-sharing strategy were further challenges to school-level learning.
With respect to enablers of OL, participants mentioned limited opportunities. Teachers, for instance, observed that staff socialization was high, which implicitly means an enabling school context for team learning. They also added that the school is full of veteran teachers who have decades of teaching experience. Though these teachers had become dissatisfied with the amount and rapidity of top-down reforms, both the WEO head and the school principal considered them guiding elements for instructional improvement.
Thus, the school context appears to play a dual but opposing role for OL. While the entrenched staff socialization and the staffing of the school with veteran teachers helped create an enabling context, teachers' reluctance and lack of subject-matter and pedagogic competence, the absence of instructional leadership, and the deep-rooted culture of prescription from the government played the counter-offensive. In addition, an acute shortage of resources limited the culture of OL and the learning of instructional reforms in the school.
Discussion
The findings uncovered some constructs of OL that prevailed in the school. The school tried to acquire knowledge using some, but limited, strategies known in OL. For instance, it shared experiences among staff members and with the surrounding cluster schools. Moreover, periodic in-school supervision and, to some extent, mentoring appeared to be sources of knowledge acquisition in the case school. Nevertheless, it lacked Huber's (1991) basic sources of knowledge acquisition that are common in a LO, namely inherited knowledge, knowledge from newly employed staff members, knowledge from experimentation, and knowledge that comes from scanning the bigger environment. Even the prevailing knowledge acquisition constructs lacked consistency and continuity, as the school implemented them intermittently, probably to satisfy the requirements of the WEO at times of external supervision.
With respect to distribution, the findings showed that the school established multiple teacher teams working in professional groupings, committees and subject-specific groupings. Such groupings facilitate information distribution in a LO (Beauregard et al., 2015; Smith & Lyles, 2011; Wellman, 2009). Moreover, teachers in the school periodically practice in-built supervision, which is one possible outlet for distributing the acquired knowledge. However, the commitment of the teams, their freedom to interpret information on teaching and learning, and the amount of knowledge created through the process all seem too limited.
Knowledge acquisition is bounded within the government reforms, especially AL, CA and ACR. Hence, experience sharing within and outside the school compound focuses on how best other teachers or schools have implemented these reforms. Thus, teachers are required to converge on a similar understanding of the reforms per se. The weak evidence obtained regarding the school's culture of inquiry, innovation and exploration may be the best epitome of this syndrome. Huber (1991) and Daft and Weick (1984) noted that divergent interpretation of acquired knowledge promotes the creation of new organizational knowledge through the unlearning or discarding of obsolete knowledge. From this point of view, the school worked to maintain the knowledge staff acquire (if any), and the powerful and legitimate knowledge appears to be knowledge from the government. A review of the MoE's reform documents dispatched in post-1994 Ethiopia shows that the government determines not only the why (purpose) and what (content) of the reforms but also the how (implementation with a yardstick), which is the unique feature of a fidelity curriculum (Jeasik, 1998). This may cripple the creativity and multiplication of organizational knowledge, since the intent of learning is to disorganize and increase variety, not to organize and reduce variety (Watkins & Marsick, 1996; as cited in Kools & Stoll, 2016).
Storing the knowledge created was found to be very limited. The study revealed that data which could have been transformed into other knowledge management hierarchies is stored in the form of minutes and piles of paper; but these data are discarded once teachers have got a tick in the monthly checklist. That means the data produced from supervision reports, mentoring, experience sharing, and teachers' CPD portfolios were not used to enhance further learning but to satisfy the reporting requirements of the school and of the education authorities in the hierarchy. At the time of the interview with the principal, an attempt was made to check whether the minutes of the last ten years were available; however, he was only able to locate scrambled minutes from the 2018 academic year. The information which escaped discarding was also rarely shared among teachers to improve instruction, since individual teachers and teacher teams lacked the freedom to interpret the information and transform it into meaningful knowledge. This implies that the data produced from the aforementioned sources were at best stored in the minds of individuals, which is the most volatile storage site (Wellman, 2009; Nelson & Winter, 1982, in Huber, 1991; Gioia & Poole, 1984). The common and relatively permanent storage sites found in LOs, such as computers (Argyris & Schon, 1978), were lacking in the school or not properly handled. That means future endeavors to create a learning school are hampered by the inadequate storage and application of knowledge accumulated over generations.
The study also revealed that the school has few of the characteristics of a learning school. It has established a variety of teams, which are building blocks for creating a LO (Senge et al., 2012). Moreover, using these teams, it worked to promote collaboration among teams and individual teachers and, though intermittently, tried to create learning opportunities for teachers. To this end, the team appears to be the most visible agency of learning, followed by school-wide learning. But the teams did not seem to function in the true sense of a learning team, and the means and magnitude of individual teacher learning look fuzzy. With respect to the vision, it seems that the staff failed to share it and commit themselves to its accomplishment, for no one was able to remember the vision even partially. The learning opportunities created for teachers also seem only partially adequate. However, the other characteristics, especially a culture of inquiry, innovation and exploration and a learning leadership, seem to be absent in the school, while the remaining characteristics were observed only partially. The culture of inquiry, innovation and exploration was found to be insignificant, partly because of the limited freedom to try self-initiated reforms (Zachry & Schneider, 2008).
The study also shows that the contents of government-initiated instructional reforms (AL, CA and, to some degree, ACR) were the focus of team and school-wide learning. CPD is one method of school-wide learning in which teachers work on prioritized interests; by default, the government instructional reforms received the utmost priority. Teachers' and students' one to five groupings were found to be the routines through which both teachers and students aspire to accomplish goals. The school's tradition of score analysis is also one step ahead in creating a learning school, which is the bedrock for instructional reform (Zachry & Schneider, 2008). Scrutinizing these elements, one can partly find in the school studied the characteristics and constructs unique to a LO suggested by different OL scholars (e.g., Huber, 1991; Kools & Stoll, 2016; Schlechty, 2009; Wellman, 2009). However, from the analysis of teachers' daily lesson plans and other documents, it appears that exposure to reform contents in team and school-wide learning did not bring any influence on reforming instruction. The lesson plans simply contain the traditional didactic elements which, according to Kools and Stoll (2016, p. 12), are criticized as not "adequate for delivering the 21st century learning agendas." Storing already acquired knowledge and using that knowledge to initiate further reforms were found to be the weakest areas in the school, and this limited the utility of the acquired knowledge to reporting purposes only. Moreover, neither the weights given in the school standard document (MoE, 2014) nor the findings from the school-based score analysis helped them to introduce self-initiated instructional reforms. The statement "the team activities I have engaged with other teachers do have more paper value than truly improving my classroom practice" conveys the full message of the failure of school-level learning in practicing instructional reforms. This inhibits the attainment of the goal of the instructional reforms introduced by the government, which, according to Gallagher (2000), is to improve the learning of all students. Of course, this is not surprising, as similar studies conducted in school settings have reported the failure of schools to become LOs (Can, 2010; Coppieters, 2005; Schlechty, 2009; Senge et al., 2012).
The concept of the LO works within a context (Law & Chuah, 2015), and schools are no exception. Thus, in this study, the enablers and inhibitors were found to relate to the school leadership, the teachers, and the reform approach. With respect to leadership, the principal engaged more in managerial issues and less in instruction; yet research establishes that leadership is a prerequisite for creating a learning school (Collinson et al., 2006; Hallinger & Lee, 2013; Zachry & Schneider, 2008; Leithwood, Leonard, & Sharratt, 1998). For instance, reforms in Thailand failed due to school leaders' inability to act as instructional leaders (Hallinger & Lee, 2013).
Teachers are probably the most influential actors in determining OL and instructional reform (Collinson et al., 2006; Schlechty, 2009). In this study, though they engaged in various groups, it appears that they lacked the prerequisite knowledge and skills which continuous learning and reform implementation require. Yet these competencies are critical for developing schools into LOs (Bowen et al., 2007).
The other contextual factor is the reform itself. The study shows that the reforms were impositions from the MoE (Jimma & Tarekegn, 2016; Tefera, 1996). Though not entirely closed, the space provided for school-level reforms seems limited. That is, the MoE appears to determine not only the why and what of the curriculum but also the how of teaching and assessment. By so doing, it seems that the government is teaching in the classroom. This might be associated with the reform tradition which has dominated the history of modern education in the country. Policy change in education is usually undertaken following regime change (Negash, 1996), and hence the reforms which follow policy changes were meant to indoctrinate a regime's ideology.
The supremacy of the reformers over the voiceless weakens the motivation of the latter to create a learning school, as well as their inspiration to initiate instructional reforms (Schleicher, 2016; Zachry & Schneider, 2008). Scholars have suggested either a fair balance between top-down and bottom-up reform approaches (Fullan, 2007; Jimma & Tarekegn, 2016; Schleicher, 2016) or a bottom-up reform approach (Zachry & Schneider, 2008) in place of the traditional top-down approach. Moreover, the reforms were not only swift but also bounteous, and some of them seem hard to reconcile with each other and not relevant to teachers' work. Notably, research affirms that OL assists teachers' reform implementation when the reform content is related to teachers' daily work (Teare et al., 2002, as cited in Law & Chuah, 2015) and when the reform becomes less demanding (Schlechty, 2009).
In addition, the output measures emphasized in the school standard document (MoE, 2014), such as completion, attrition and repetition rates, the gender parity index, and a pass mark in classroom assessment, are weak requirements for making the reforms effective. Research shows that reforms are more promising in output-driven schools (Bowen et al., 2007), which are closely tied to outputs related to teaching and learning, than in process- (activity-) or input-driven ones.
Conclusions
From the foregoing data presentation and discussion, it seems sensible to draw the following conclusions. Of the constructs LOs should display, the school attempted only a few: acquiring knowledge through experience sharing and distributing it via multiple teacher groupings. Moreover, the school displayed few of the characteristics of a learning school. Thus, through its various teacher groupings, the school appears to have the potential to grow into a LO, though it lacked the other sources of knowledge acquisition suggested in the OLT literature. Most importantly, the school made little use of strategies such as learning from inherited knowledge, vicarious learning and grafting. In addition, freedom of interpretation and a culture of storing knowledge for further use were lacking. The school also lacked a culture of innovation, exploration, experimentation and inquiry, all of which are important characteristics of a learning school.
OLT affirms that OL takes place via multiple agencies: individual, team and school-wide. In the school, only team learning emerged as an active agent of learning, followed by school-wide learning, while individual learning remained tacit. Yet every element of instructional reform is implemented at the classroom level; thus, unless teachers individually grow into agents of learning, OL may not improve classroom practice. In addition, periodic exposure to OL can at most support reporting rather than instruction, since genuine OL is rarely sporadic. The influence of team learning and school-wide learning on reforming instruction also seems insignificant, as learnt from teachers' plans; rather, it resulted in a heap of reports that would be discarded after periodic performance appraisal. Thus, it is sound to conclude that the subtle attempts at OL were not directed toward attaining the school's vision, but toward fitting the requirements of authorities inside and outside the school.
In the last two decades, the MoE has introduced several reforms that prescribe to teachers and schools how to teach students. This makes all the contents of school learning focus on these imposed reform ideas at the expense of school-initiated reforms. Hence, it seems that self-initiated instructional reforms, which are the very purpose of OL, have been overlooked both in policy and in practice, while reforms imposed by the MoE were the most emphasized contents of learning in all learning endeavors. One could say this situation gives the government an unconditional license to place itself in the classroom, while in fact it is not legally mandated to do so.
The school's subtle effort to be a learning school was severely challenged by an overdose of imposed reforms, a lack of instructional leadership skills, and an acute shortage of resources. Teachers' resistance resulted in a lack of commitment to practicing the reforms. Thus, it can be concluded that the school context of reform demands, supposedly intended to enhance learning, has been inhibiting rather than enabling such learning.
Implications
The school has already established the structural arrangements needed to create a learning school; the various teacher team-ups and school clustering are among these arrangements. Experience sharing within and outside the school compound on the post-1994 instructional reforms also shows that the roots of a learning school are already germinating. However, transforming both these structures and the engagements within them into functional mechanisms for change needs serious attention. Introducing school-level initiatives that help individual teachers to publicly share and apply tacit knowledge in team discussions may facilitate learning. Lesson study, learning communities or any bespoke learning initiatives created by the school could be among these strategies.
Learning organizations rarely develop without a strong sense of inquiry for knowledge, and in the school there are some signs of this. Nevertheless, storing and interpreting the knowledge seem hindered by poor habits whereby the knowledge is archived and rarely, if ever, referred to again. This calls for urgent rethinking of both the top-down tradition of introducing instructional reforms and the knowledge management strategy used by the school.
In addition, transforming classroom practice from tradition to the instruction prescribed by the post-1994 reforms needs coherence among the reform elements; the findings show some contradictions between, for instance, AL and content coverage, CA and the memory-oriented regional examination, and ACR and uniformity in executing the reforms. The acute shortage of resources also hampered the school's effort to learn the reforms. Developing a learning school requires these problems to be taken seriously by the school leaders and by the educational authorities at the top of the hierarchy.
Finally, further research needs to be conducted at the national level to understand how schools as organizations in general could be developed into learning schools.
Notes
5. These strategies are (1) inheritance (congenital learning), (2) grafting onto itself, and (3) noticing or searching for information about the organization's environment and performance; for further details, please refer to Huber (1991).
6. The school was established in 1979; since 1994 it has functioned as a full-cycle primary school hosting children from Grade One through Grade Eight.
7. In the 2018 academic year, only seven teachers out of 69 conducted action research.
8. Data on classroom practice of the instructional reforms were limited since no classroom observation was conducted in this study.
Disclosure statement
No potential conflict of interest was reported by the author(s).
|
v3-fos-license
|
2018-12-10T22:20:08.244Z
|
2017-12-01T00:00:00.000
|
55445960
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://jurnaljam.ub.ac.id/index.php/jam/article/download/1184/955",
"pdf_hash": "8fa736c479009052bc7c46e21b7de290c69277af",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44204",
"s2fieldsofstudy": [
"Business"
],
"sha1": "8fa736c479009052bc7c46e21b7de290c69277af",
"year": 2017
}
|
pes2o/s2orc
|
DOMINANT EFFECT BETWEEN COMPENSATION, LEADERSHIP AND ORGANIZATIONAL CULTURE ON EMPLOYEE PERFORMANCE IMPROVEMENT
The performance of employees is closely related to compensation (total cash), organizational culture, and leadership. This research aims to identify the effect of the independent variables compensation (X1), leadership (X2), and organizational culture (X3) on the dependent variable, employee performance (Y). The results of a multiple regression analysis identified that organizational culture most dominantly influenced employee performance in the Integrated Office of Malang City.
Previous studies have shown that compensation, leadership, and organizational culture each influence employee performance. It is therefore necessary to identify which of these variables is the most influential on employee performance in an organization. This research was conducted with the aim of identifying the most dominant influence among compensation, leadership and organizational culture on employee performance at the Integrated Office of Malang.
Good performance in providing services at the Integrated Office of Malang is very necessary in order to increase the Regional Income and Expenditure Budget (APBD) of Malang City. In 2012, the Regional Income and Expenditure Budget of Malang reached 1.261 trillion rupiah; in 2013 it reached 1.402 trillion; and in 2014 it reached 1.582 trillion (Sofi'i, 2016). This increase makes Malang the city with the second highest Regional Income and Expenditure Budget in East Java (Sofi'i, 2016). The increase in the Regional Income and Expenditure Budget certainly cannot be separated from the performance of the Government of Malang in delivering good public services. The city government has a total of 16,387 employees according to the Central Bureau of Statistics of East Java (2016).
Employees can deliver good public services at the Integrated Office of Malang City if the government takes into account several factors, such as compensation, organizational culture, and leadership. The problems that arise in the Integrated Office show how the Government of Malang City has often lost revenue due to the poor performance of its employees, for example in issuing relocation letters for the public (Cahayani, 2013). This results in a loss of public trust, which in turn leads to a decrease in the Regional Income and Expenditure Budget itself.
Compensation that is not proportional to the workload, an organizational culture that is less supportive of the implementation of work, and leadership that is less assertive all affect employee performance. The effect of compensation on employee performance is strongly supported by Hertati (2009); the results of her research show that compensation has a significant effect on employee performance at PT VICO. Furthermore, her results show that compensation is positively correlated with the performance of PT VICO employees, although other variables also play a role.
In addition to compensation, leadership within the organization is also crucial to improving employee performance. The leadership style within an organization at the sub-district level is crucial in determining whether employee performance in the agency increases or not (Tatulus, Mandey & Rares, 2015). In this case, the leadership style most closely related to employee performance is the charismatic leadership style. This is in accordance with the results of the research conducted by Tatulus et al. (2015) in the Office of Tagulandang Sub-district, Sitaro Regency, showing that leadership that nurtures employees like a "father" with "his son" is the most appropriate leadership to apply.
Furthermore, how does organizational culture affect the performance of employees within an organization? According to a study conducted by Wirda & Azra (2007), organizational culture affects employee performance. Their research, conducted among employees at Padang State Polytechnic, Campus Unand Limau Manis-Padang, shows that organizational culture (represented by attention to detail, result orientation, people orientation, group orientation, aggressiveness, and stability) has a causal effect on employee performance (represented by quality, quantity, timeliness, need for supervision, and interpersonal effect) (Wirda & Azra, 2007).
LITERATURE REVIEW AND HYPOTHESIS DEVELOPMENT
Employee Performance
Employee performance can be defined as the result of two aspects, namely the ability/skill and (natural) expertise owned by employees, and the motivation of employees to perform better (Osman-Gani et al., 2013). Furthermore, Osman-Gani et al. (2013) suggest that employee performance is strongly related to belief (religion) and spirituality.
Employee performance can be ensured through multi-objective capacities such as human, technology, organization, and the institutional level (Ahmad et al., 2015). Performance begins with top management, but the result is obtained from the lowest position, namely the employee (Ahmad et al., 2015). Good organizational performance also illustrates the level of satisfaction of the employees themselves (Ahmad et al., 2015). Ahmad et al. (2015) also add that employee performance is strongly influenced by supervisors, organizational support for career development, and the development of the human resource capacity owned by companies or organizations.
Employee performance indicates the financial or non-financial outcome of employees, which is directly related to organizational performance and organizational success (Anitha, 2014). Anitha (2014) also suggests that employee engagement is very influential in shaping the performance of the employees themselves. Employee engagement strongly influences the indicators of employee performance, namely performance at work, task performance, organizational citizenship behavior, productivity, maximum effort, affective commitment, commitment to sustainability, the work climate, and customer service (Anitha, 2014).
Compensation
Compensation is a form of reward given by a company for the good performance shown by employees, in the hope that employees will further improve their performance (Kuster & Canales, 2011). Furthermore, Kuster & Canales (2011) describe how the compensation system can affect employee performance, differentiating compensation into three types: salary, commission and incentives.
Compensation is also a motivational tool given by the company to employees in order to increase employee innovation; it is provided in the form of stock compensation (portfolio, stock and options) and incentives (Sheikh, 2012). Sheikh (2012) defines the compensation relevant to the improvement of employee innovation as the number of shares provided and the additional cash earned from the performance delivered (Sheikh, 2012).
Leadership
Leadership is the act of influencing others by using a name-based approach, the influence of charisma, high individual consideration, motivating employees through inspiration, and stimulating them intellectually with the aim of enhancing their creativity (Cheung & Wong, 2011). Leadership, especially charismatic leadership, can be exercised well and effectively if it is mediated by support for work and better relationships (Cheung & Wong, 2011). Furthermore, support for work and support in relationships build followers' trust and loyalty (Cheung & Wong, 2011).
Leadership is a form of influence on followers or others in which the behavior of the leader transforms and inspires followers to work toward expectations and to align their personal interests with the good of the organization (Guay, 2013). Furthermore, Guay (2013) asserts that the early categories of transformational leadership are personal character, values, the organizational environment, and the external environment. To that end, transformational leadership is required to focus more on person-organization fit, that is, a character of people that fits the goals of the organization (Guay, 2013). This means that the objectives and interests of a person are in accordance with the organizational goals.
Organizational Culture
Organizational culture is a set of values within the organization that includes five dimensions, namely future orientation, power distribution, uncertainty avoidance, gender equality, and humanitarian orientation (Gupta, 2011). Gupta (2011) explains the importance of the conformity of social practices, social values, organizational practices, and organizational values in forming a superior organizational culture.
Organizational culture is also understood as the values contained within the organization, including the values of entrepreneurship (independence) and legitimacy, that can improve overall organizational performance (Lindkvist & Hjorth, 2015). Research conducted in studios and art museums in Sweden showed that an emphasis on entrepreneurship and legitimacy enables an organization to run well (Lindkvist & Hjorth, 2015).
Organizational culture can also be interpreted as the values disseminated within the organization and referred to as the work philosophy of employees (Moeljono Djokosantoso, in Soedjono, 2005). Furthermore, organizational culture is defined as the values that guide human resources in dealing with external problems and in adjusting the integration of the company, so that each member of the organization has to understand the existing values and how they should act and behave (Susanto, in Soedjono, 2005). Organizational culture also serves to distinguish one organization from others (Robbins, in Soedjono, 2005).
Research Hypothesis
Based on the conceptual framework above, the hypotheses developed in this research are as follows:
H0: Compensation, leadership, and organizational culture have no significant effect on employee performance.
H1: Compensation, leadership, and organizational culture have a significant effect on employee performance.
From the hypotheses that have been formulated, the definition of each variable is as follows. Compensation (X1) is a form of reward for employee performance, given in the hope of improving employee innovation and motivation from one period to the next. In the public sector, compensation is divided into salary, honorarium, family allowance (husband/wife and child), job allowance and performance allowance. Leadership (X2) is a form of influence on others through name, charisma, high individual consideration, and motivation through inspirational stories and intellectual stimulation, so as to encourage the creativity of others and align the interests of employees with the interests of the organization.
Organizational culture (X3) comprises the values contained and disseminated within the organization, referred to as the work philosophy of employees, agreed upon and shared, and meeting the elements of future orientation, authority distribution, uncertainty avoidance, gender equality, humanitarian orientation, social practices, social values, organizational practices and organizational values. Employee performance (Y) is the result of the skill and expertise of an employee; it is indicated by the job, task performance, productivity, maximum effort, affective commitment, commitment to sustainability, and the work climate.
RESEARCH METHOD
This research is quantitative. It was conducted at the Integrated Office of Malang City, located on Mayjend Sungkono Street, Arjowinangun Village, Kedungkandang Sub-District, Malang, East Java. The research data were collected through questionnaires distributed purposively to 172 employees or government apparatuses of Malang City at the Department of Capital Investment and One-Door Integrated Service (DPMPTSP), the Department of Population and Civil Registration, the Department of Communication and Informatics, the Department of Industry, the Department of Labor, and the Regional Tax Service Agency (BP2D) in the Integrated Office of Malang City.
The research instrument used for primary data collection was a questionnaire. The questionnaire is divided into two main parts, namely the personal data of respondents and the influences on performance. The personal data of the respondents include name, age, position, work unit, education and marital status. The part on influences on performance consists of four components: compensation, leadership, organizational culture, and employee performance. This section aims to identify the real effects of compensation, leadership, and organizational culture on the performance of Malang City Government employees in the Integrated Office.
For compensation, respondents are given five statements regarding the salary, honorarium, and benefits gained when working in the organization. For leadership, respondents are given seven statements regarding the leadership of superiors in the organization in general. The next component is organizational culture; the statements presented on organizational culture are closely related to cultural values in terms of achieving targets, long-term goals and organizational goals. The last component in the questionnaire is employee performance, which consists of seven statements about job activity and work completion. To respond to the four components of these variables, respondents simply mark one of the alternative answers provided, namely: (1) strongly disagree; (2) disagree; (3) do not know; (4) agree; (5) strongly agree.
After collecting the data, the researchers conducted validity and reliability tests, classic assumption tests (autocorrelation test, multicollinearity test, data normality test, and heteroscedasticity test), and multiple regression analysis using SPSS software version 22. The validity and reliability tests on the questionnaire showed valid and reliable results. Meanwhile, the classic assumption test results show that there is no multicollinearity in the data, no autocorrelation, and no heteroscedasticity, and that the collected data are normally distributed.
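The regression step described above was carried out in SPSS; purely as an illustration, the sketch below shows how an equivalent multiple regression of employee performance on the three predictors could be run in Python with statsmodels. The file name and column names are hypothetical placeholders, and each column is assumed to hold a respondent's composite Likert score for the corresponding construct.

```python
# Minimal sketch of the multiple regression described in the methods section.
# Assumptions: a CSV with one row per respondent and composite scores in the
# columns "compensation", "leadership", "org_culture", "performance".
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("questionnaire_scores.csv")             # hypothetical file
X = df[["compensation", "leadership", "org_culture"]]    # X1, X2, X3
y = df["performance"]                                     # Y

X = sm.add_constant(X)        # add the intercept term
model = sm.OLS(y, X).fit()    # ordinary least squares estimation

print(model.rsquared)   # model feasibility, comparable to the R^2 in Table 1
print(model.tvalues)    # t statistics per predictor, comparable to Table 2
print(model.pvalues)    # p-values per predictor
print(model.summary())  # full regression summary
```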
RESULTS AND DISCUSSIONS
The data obtained from the 172 employees of the Government of Malang City in the Integrated Office showed that 68 respondents (39.77%) work in the Regional Tax Service Agency (BP2D), with the remaining respondents spread across the other work units such as DPMPTSP. This means that most respondents are from the BP2D work unit. The respondents include heads of departments, heads of divisions, heads of fields, heads of sections, extension agents, archivists, treasurers, evaluators, web managers, institutional staff, and staff in the fields and sections of each work unit.
The respondents in this research are employees of Malang City's Government. They are of productive working age and most are married. Based on age, the respondents can be grouped into four age groups, namely 25-35 years old, 36-45 years old, 46-55 years old, and 56 years old and above. From the data obtained, 28 respondents are aged 25-35 years old, 52 respondents are aged 36-45 years old, 81 respondents are aged 46-55 years old, and the remaining 11 respondents are aged 56 years old and above.
Because the respondents are of productive age, they have various lengths of work period. A total of 63 respondents have worked in their unit for 10 years or less; 57 respondents have a work period of 11 to 20 years; 44 respondents have worked in their unit for 21 to 30 years; and the remaining 8 respondents have worked in their unit for more than 30 years. The varying lengths of work period are also because some respondents were newly appointed as staff or employees, and many of them have recently been transferred to work units in the Integrated Office of Malang.
Based on their latest education, the respondents can be grouped into graduates of Senior High School or equivalent, Diploma I, Diploma II, Diploma III, Diploma IV, Bachelor, Master, and Doctor. From the data obtained, 40.12% of the respondents are graduates of Senior High School or equivalent; 1.74% are graduates of Diploma III; 0.58% are graduates of Diploma IV; 39.53% hold a bachelor's degree; and the remaining 18.02% hold a master's degree.
Feasibility Test of Multiple Regression Model
Table 1 shows the results of the model feasibility test using the regression tool, where the R square (R²) is 0.224. This result shows that the independent variables, namely compensation (X1), leadership (X2) and organizational culture (X3), explain about 22% of the variation in the dependent variable, employee performance (Y). Meanwhile, the remaining 78% is influenced by other variables outside those studied in this research.
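As a quick arithmetic check of the interpretation above (the 22% and 78% figures are rounded), the coefficient of determination can be read as the share of variance in employee performance accounted for by the three predictors:

```latex
R^2 = 0.224 \;\Rightarrow\; \text{explained variance} = 22.4\% \approx 22\%, \qquad
1 - R^2 = 0.776 \;\Rightarrow\; \text{unexplained variance} = 77.6\% \approx 78\%
```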
From the beginning, this study used a one-tailed test, so the significance values need to be multiplied by two. For example, the significance of compensation on the performance of employees in the Integrated Office is 0.004 × 2 = 0.008. Nevertheless, the results of the one-tailed t-test in Table 2 show that each independent variable has a positive effect on the dependent variable (employee performance). The independent variables compensation and organizational culture even significantly affect the dependent variable, unlike the leadership variable (t = 1.552 and sig. = 0.122). Moreover, among compensation, leadership, and organizational culture in Table 2, organizational culture is seen to have the most positive (t = 5.061), significant (sig. = 0.000), and dominant effect on employee performance.
From the results of the multiple regression analysis (t-test) shown in Table 2, it can thus be said that H1 is accepted and H0 is rejected; that is, compensation, leadership, and organizational culture jointly have a significant effect on employee performance. Furthermore, from the same results in Table 2, we can also identify that compensation has a significant effect on employee performance, while leadership has a positive but not significant effect. Meanwhile, organizational culture has the most dominant and significant effect on employee performance.
This study identifies that compensation has a positive and significant effect on employee performance. A positive and significant influence of compensation on employee performance was also found in the research conducted by Dessler (2010), who refers to compensation as one of the tools for motivating employees to achieve maximum performance. Furthermore, in this research, leadership has a positive but not significant effect on employee performance. This is contrary to the research conducted by Tatulus et al. (2015), which states that leadership can significantly improve employee performance in Tagulandang Sub-district, Sitaro Regency. However, the research by Apriliani (2015) supports the finding of no significant influence of leadership on employee performance.
The final result of this study identifies that organizational culture has the most positive and significant influence on employee performance. The positive influence of organizational culture on employee performance is consistent with the research conducted by Gupta (2011), which explains that a good work culture leads to good overall organizational performance, including employee performance. The research conducted by Wirda and Azra (2007), discussing the role of organizational culture in improving organizational performance at Padang State Polytechnic, also supports this result.
The results of this study have implications for improving the performance of Malang City Government employees in the Integrated Office. The Government of Malang City should maintain compensation in the form of salary, honorarium, and allowances for its employees, and increase the amount or refine the details of compensation if necessary, because compensation significantly affects employee performance over time. Similarly, since organizational culture has the most dominant influence on employee performance in the Integrated Office, Malang City's Government needs to direct and instill, more intensively, the social values of the work culture, detailed job descriptions, the short-term and long-term goals of the organization, and the targets to be achieved by the Government of Malang. By doing this, employees can perform their duties and their function as public servants better, in accordance with the standards and procedures set by the Government of Malang.
CONCLUSIONS
From the results of this research, it can be concluded that compensation has a positive and significant impact on employee performance, while leadership has a positive but not significant effect. Organizational culture, rather than these two variables, has the most dominant positive and significant influence on improving the performance of Malang City Government employees in the Integrated Office.
|
v3-fos-license
|
2021-06-06T14:00:46.772Z
|
2021-04-28T00:00:00.000
|
235570595
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ejournal.undip.ac.id/index.php/ilmulingkungan/article/download/25817/pdf",
"pdf_hash": "c2cdef228655eb2d8cf4bc4ead304e1af96faa1a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44205",
"s2fieldsofstudy": [
"Business"
],
"sha1": "c2cdef228655eb2d8cf4bc4ead304e1af96faa1a",
"year": 2021
}
|
pes2o/s2orc
|
Community Development in Kemijen Village, East Semarang: A Corporate Social Responsibility in Practice
Companies, either state-owned or private, which operate in the field of and/or in relation to natural resources must implement Corporate Social Responsibility (CSR). One such company is PT Indonesia Power (PT. IP) UBP-Tambaklorok, located near Tanjung Mas Harbor, Tambaklorok Village, North Semarang District, Semarang City. CSR implementation should benefit both sides: the company, through its image, and the local community, through the implemented programs. It is therefore important to analyse the perception of the local community, as well as their opinions about the company's CSR activities in their village, in this case the people of Kemijen village, which is located adjacent to PT. IP's area. This research is descriptive qualitative, conducted in 2019, describing the phenomenon of CSR implementation by PT. IP and the perception of the local community of Kemijen village towards the CSR activities implemented by PT. IP. The informants were selected using a purposive sampling technique, covering both formal and informal leaders, local people, and a community development officer of PT. IP. Primary data were gathered using in-depth interviews and observation; secondary data consist of documents. Primary and secondary data were then coded and analysed interactively. PT. IP has formulated and implemented the Company's strategies in its CSR Roadmap 2015-2019, which is the grand strategy and a milestone of CSR implementation, integrating the CSR strategy into the Company's strategy in the sectors of education, health, economy, and infrastructure. According to the perception of the Kemijen villagers, the implementation of PT. IP's CSR has brought both benefits and shortcomings. They expect more programs to be implemented in order to allow them greater opportunities for poverty alleviation.
Introduction
People, profit and planet, the elements of the triple bottom line, should be pursued in a balanced and concurrent manner to achieve sustainable development. Instead of synergizing, these elements often conflict with each other due to competing interests (Hadi, 2020). In most cases, the planet is the weakest element, with the least support compared to the other two (Hammer & Pivo, 2017). This is because the planet is still not considered a basic need that must be met. Yet human beings depend fundamentally on the environment; it is humans who depend on the environment rather than the opposite. In fulfilling their needs, humans interact with one another through exchange, which is where economic activities take place: producers take resources from the environment and then provide goods and services to consumers as the users.
In their operational activities, companies, especially those whose production is related to natural resources, usually create unavoidable social and environmental consequences. These companies must therefore allocate part of their revenues to social and environmental activities, especially for the affected communities near the companies' operational locations. This is compulsory in Indonesia under Law Number 40 Year 2007 on Limited Liability Companies, Chapter V Article 74 Paragraph 1 of which states that "companies operating in the field of and/or related to natural resources must implement Corporate Social Responsibility (CSR)". CSR is thus considered a way to synergize the triple bottom line pillars (Hadi, 2020).
All corporations, whether state-owned or private, must commit to community development. One of those that has been performing Corporate Social Responsibility (CSR) is PT Indonesia Power (PT. IP).
As stated in research by Ma'rif et al. (2013), manufacturing companies tend to implement their CSR in the areas around their facilities. The CSR activities of PT. IP Tambaklorok Semarang are carried out in the villages of Kemijen, Bandarharjo, Tambaklorok and Tanjung Emas, as well as in Gunungpati, which is quite far from the company's location in North Semarang but still within the city of Semarang. There are several articles on CSR concerning mangroves in Tambaklorok Village, milkfish (bandeng) in Tanjung Emas, and fish smoking in Bandarharjo, but very few on CSR in Kemijen Village. There are also articles on the CSR of PT. Indonesia Power in other cities; however, only a few focus on the CSR of Indonesia Power in Semarang, such as the creative natural batik CSR reflecting Indonesia Power's commitment to the environmental and cultural development of people in Gunungpati Village, western Semarang (Martuti et al., 2017). Therefore, this study assesses the community development carried out in Kemijen Village, Semarang Municipality, by PT Indonesia Power (PT. IP) UBP Semarang in particular, based on the perception of the local community in Kemijen Village towards PT Indonesia Power, as well as their opinions about the company's CSR activities in this village.
Method
This research is descriptive qualitative, describing the phenomenon of CSR implementation by PT. Indonesia Power (IP) UBP-Tambaklorok, so named because it is located at Tambaklorok, Semarang City. The research was conducted in 2019.
The CSR is aimed at community development for the local people living around the company, in this research the people of Kelurahan Kemijen (Kemijen Village). The locus of this research is therefore Kelurahan Kemijen, and the focus is the phenomenon observed among the local people of Kemijen, namely the implementation of CSR by PT. IP Semarang and the perception of the local community of Kemijen Village towards the CSR activities implemented by PT. IP. The informants were selected using a purposive sampling technique, covering a prominent formal leader (the head of the village), an informal leader (the wife of the head of the village, as head of the Women's Association), local adults in general, and a community development officer of PT. IP. They were chosen because they were considered to have the capacity to provide the needed data and information. Data were gathered through in-depth interviews with the informants and informal interviews on relevant topics with several members of the local community, in addition to observation of the existing social and physical conditions in the Kemijen Village area. Documents and analysis of secondary data were also utilized.
Triangulation was also applied to validate the data by using other sources (material, people) to compare the collected data; information from one source was checked against information from other sources. Primary data (interview results and observation) and secondary data (from the village) were then coded and analyzed interactively using the technique of Miles et al. (2014), followed by qualitative analysis.
Results and Discussion
Apart from supporting the implementation of sustainable development, several studies have shown that CSR also benefits companies by building employee trust and enhancing the company's reputation (Yadav et al., 2018). According to Wibisono (2007), CSR activities cover five pillars: (a) capacity building of manpower both inside the company and in the surrounding community; (b) enhancing the economic condition of the local community; (c) maintaining harmonious relations between the company and its social environment, since inharmonious relations could create conflict; (d) developing good corporate governance; and (e) physical, social and cultural sustainability. In this research, the phrase "…and in the surrounding community" is important to note, since it is one of the main reasons for Indonesia Power's CSR in its surrounding villages. CSR is applied through three strategies: (1) Community Relations, carried out through activities that develop understanding by communicating and providing information to stakeholders; here, CSR programs are directed towards short-term and incidental charity activities.
(2) Community Service, focused on company services that meet societal needs; the company acts as a facilitator, while the community empowers itself.
(3) Community Empowering, a CSR strategy that gives the community broader access to support its empowerment, so that it becomes the company's partner (Budimanta et al., 2008; Hadi, 2020).
Community service and community empowerment are closely related to the company's community relations with its surrounding communities.
Hadi (2020) stated that community empowerment, or community development, is the highest level and the most complicated of the CSR strategies because it requires accompaniment, planning, and good governance. CSR becomes the spearhead of community development towards mutual prosperity among business, society, and the environment. The affected community must be encouraged to take an active role in this interaction; in principle, the community should be a beneficiary of any business activity. Community involvement in development has become an integral part of a democratic political system, and development must also accord with environmental sustainability.
One of the corporations applying Corporate Social Responsibility (CSR) is PT Indonesia Power (PT. IP), an energy company that provides electricity in Java and Bali and has several offices as operational locations on both islands. Research on the CSR of PT. IP includes a study of several companies in Semarang, Central Java, including PT. IP (Ma'rif et al., 2013), studies of PT. IP in Kamojang (Muhammad et al., 2019) and in Suralaya, Banten (Efendi et al., 2020; Nugroho & Purnaweni, 2020), and of PT. IP UPJ PLTU Jeranjang, West Lombok (Supija et al., 2018). There are also articles on the CSR of PT. IP Tambaklorok Semarang (Hendro S & Naryoso, 2018; S. & Naryoso, 2020). These papers mostly discuss PT. IP's CSR programs, satisfaction indices, and community responses to the CSR programs.
PT Indonesia Power Tambaklorok is located at Tanjung Mas Harbor, Semarang, in Tambaklorok Village, North Semarang District. The other villages near the company are Tanjung Mas, Kemijen, and Bandarharjo. This study focuses on how the company performs its community development in Kemijen Village only, as this location is one of the core CSR territories of the Semarang branch of PT Indonesia Power (PT. IP).
Regulations require every enterprise or corporation to carry out activities aimed at developing its nearby environment. Corporate Social Responsibility (CSR) is one example, promoted by PT Indonesia Power UBP-Tambaklorok Semarang across the nearby area, i.e. the area where its stakeholders live and feel the direct impacts of the corporation's daily activities.
PT Indonesia Power has performed several CSR activities concerning the affected environment in the villages of Bandarharjo, Kemijen and Tanjung Mas, which have traditionally received the immediate impacts of the business activities of PT Indonesia Power UBP-Tambak Lorok. For the reasons stated in the Introduction, Kemijen Village was chosen as the location of this research.
CSR Implementation in Kemijen Village
The business run by Indonesia Power is very important to society. Electricity is a primary energy source needed by human beings in everyday life, from lighting, powering appliances, heating, transportation and communication to entertainment (television, film, mobile phones); it is hard to imagine life without it. The position of Indonesia Power as an electricity provider is therefore very important. As an energy company that directly affects the life of society, Indonesia Power continues to seek to provide products and services that meet customer expectations and are environmentally friendly. This is carried out by synergizing its business activities in environmental management, employment, occupational health and safety management, responsibility to customers, and the social development of the community, to support improvements in quality of life and a better environment. Environmental and natural sustainability guarantee smooth business processes and ensure the availability of raw material supplies sourced from nature.
In order to achieve business continuity, Indonesia Power has formulated and implemented the Company's strategies in its CSR Roadmap 2015-2019, the grand strategy and milestone of CSR implementation that integrates the CSR strategy into the Company's strategy. In addition, Indonesia Power remains committed to implementing sustainability programs through CSR activities and a series of policies aimed at realizing sustainable development.
The most crucial economic issue in Kemijen Village, according to male local leaders, is the lack of skilled employment: most residents make their living as laborers, which hampers the self-development of the locality. Kemijen Village is, furthermore, still deficient in educational quality, resulting in poor responsiveness to up-to-date knowledge.
The main environmental issue is the lack of trees in the village; the area needs more green strips to support residents' quality of life. To worsen the situation, health quality also tends to be inadequate, as most of the local community cannot afford necessary medicines or medical check-ups. Waste problems also contribute to the poor conditions across Kemijen, where defecation facilities are absent.
Therefore, female local community activists and leaders noted that they want improvements in health services, in particular the Integrated Service Post (Pos Pelayanan Terpadu, or Posyandu). Posyandu is very important for securing adequate nutrition for infants and the elderly. Posyandu activities are run monthly, with local volunteers managing the events; however, they need supplies such as healthy snacks or food for the elderly and the infants. The wife of the head of Kemijen Village, who is also head of the local women's group, expected that a Posyandu could be made available in each RW group of the village. Regarding infrastructure and the economy, the villagers identified the following priorities:
1. Waste management is a disturbing problem and needs immediate action towards a healthier environment.
2. Machine-based waste management is ineffective when the electricity fails, in particular during floods; the village therefore needs an integrated waste channel to prevent waste from polluting the environment.
3. Flood prevention remains a serious problem because some of the dwellers are in dispute with PT KAI over settlement rights; according to the village chief, this involves both economic and infrastructure aspects.
4. In terms of infrastructure, the village is planning to build an entry gate (gapura) to beautify its appearance, and the local people have been making efforts to create a clean and creative neighborhood. Economically, the villagers are attempting to build entrepreneurship in small or large groups, coordinated by experienced individuals and aided by external funding.
5. The financial aspect remains the major obstacle to long-lasting entrepreneurship among the villagers.
(Source: data taken from interviews, mass media, and public data.)
Kemijen villagers have been familiar with PT Indonesia Power since childhood. They therefore expect the company to give specific attention to local villagers by prioritizing them in job recruitment. On the other hand, most of the villagers do not meet the job specifications, which are dominated by electrical engineering skills, and PT Indonesia Power is very selective in recruiting applicants because of its highly technical systems. With 13.97% of its households categorized as poor families, Kemijen villagers are mostly people with little education who do not fit the company's requirements. Still, some villagers perceive that the company does not give priority to local people. Table 1 summarizes the priorities the villagers expect from PT Indonesia Power's CSR and the CSR programs carried out in response.
Table 1 shows that PT. IP has carried out some CSR programs in response to the local community's priority problems. However, some priority issues have not yet been resolved through the CSR program. In the economic sector, only two training programs were provided for local residents to improve their economy (a waste bank and milkfish processing). This is not sufficient, since people now face information-technology challenges: IT skills are in high demand if the local community is to sell its products more widely through online channels.
In the education sector, the CSR program provided a scholarship for one university student. The CSR reporting website states that scholarships are available for two students, but only one student met the requirements. This shows that education quality in Kemijen Village is poor, yet residents cannot afford tutoring fees, so it is important to provide free group study for Kemijen students. Furthermore, the school building needs to be renovated, as expected by the local community.
In the health sector, PT. IP has responded to the local community's problems very well, providing a supplementary program for 50 elderly people in 2019 in collaboration with the Health Office of Semarang Municipality.
In the infrastructure sector, the CSR programs consisted of building a Waste Bank facility and providing water pumps to draw off floodwater. This really helps the local community, since floods and waste are its main problems. However, the local community expects more than curative measures such as water pumps; they expect a preventive strategy against flooding, because floods happen often.
Local Community Perception of CSR Programs
In general, the local people of Kemijen Village perceived PT Indonesia Power as having goodwill, but felt the implementation of its community programs should have been better. This inadequacy has prompted the community to demand more from the company to help them out of dependency. PT Indonesia Power is expected to make its best effort to help grow and improve the quality of life of Kemijen villagers.
PT Indonesia Power, furthermore, has also contributed by providing books and scholarships to local children and students, but the local community has not been proactively involved in any decision-making process. Table 2 shows local community opinions on the CSR programs of PT Indonesia Power.
Table 2 shows that the main problems in CSR implementation in Kemijen are: a) inequality in contributions and grants, and b) inappropriate grants. For this reason, PT. IP needs to involve local residents from the planning stage of the CSR program onward to identify residents' priority issues. In addition, an evaluation that also involves residents is necessary, so that the next CSR program will be more effective, equitable and on target.
Summary
According to the perception of the Kemijen villagers, there are many sectors of need, in terms of the economy, education, and health, that should be addressed through the CSR programs. In the education sector, the local villagers need a bigger allocation for scholarships.
In the health sector, PT. IP has responded to the local community's problems very well, providing a supplementary program for 50 elderly people in 2019 in collaboration with the Semarang Health Office.
In the infrastructure sector, PT. IP's CSR programs consisted of constructing a Waste Bank building and providing water pumps to manage floodwater, which has helped the local community, since floods and waste are the main problems of the Kemijen community.
In the economic sector, the waste bank and milkfish processing training provided by PT. IP were not sufficient.
Recommendation
PT. IP should conduct better educational programs in Kemijen Village, such as facilitating free group study for Kemijen students and providing larger scholarship funds. The company should also renovate the school building, as expected by the local community.
In the health sector, the local villagers need better support for their Posyandu programs.
In terms of infrastructure development, the local community expects more than curative measures like water pumps; they expect a preventive strategy against flooding, since floods are a latent threat in the area.
The CSR grants should also be better distributed, in order to give the local villagers more opportunities to lift themselves out of poverty.
|
v3-fos-license
|
2019-03-21T14:08:23.016Z
|
2019-03-20T00:00:00.000
|
84185825
|
{
"extfieldsofstudy": [
"Medicine",
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-019-40065-z.pdf",
"pdf_hash": "72dba2a1f192b978c5be12ee07ec91021efeffbc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44206",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "72dba2a1f192b978c5be12ee07ec91021efeffbc",
"year": 2019
}
|
pes2o/s2orc
|
Estimating Natural Mortality of Atlantic Bluefin Tuna Using Acoustic Telemetry
Atlantic bluefin tuna (Thunnus thynnus) are highly migratory fish with a contemporary range spanning the North Atlantic Ocean. Bluefin tuna populations have undergone severe decline and the status of the fish within each population remains uncertain. Improved biological knowledge, particularly of natural mortality and rates of mixing of the western (GOM) and eastern (Mediterranean) populations, is key to resolving the current status of the Atlantic bluefin tuna. We evaluated the potential for acoustic tags to yield empirical estimates of mortality and migration rates for long-lived, highly migratory species such as Atlantic bluefin tuna. Bluefin tuna tagged in the Gulf of St. Lawrence (GSL) foraging ground (2009–2016) exhibited high detection rates post release, with 91% crossing receiver lines one year post tagging, 61% detected after year two at large, with detections up to ~1700 days post deployment. Acoustic detections per individual fish ranged from 3 to 4759 receptions. A spatially-structured Bayesian mark recapture model was applied to the acoustic detection data for Atlantic bluefin tuna electronically tagged in the GSL to estimate the rate of instantaneous annual natural mortality. We report a median estimate of 0.10 yr−1 for this experiment. Our results demonstrate that acoustic tags can provide vital fisheries independent estimates for life history parameters critical for improving stock assessment models.
A key assessment input is natural mortality (M), which has traditionally been difficult to estimate 22. Natural mortality has been estimated for Atlantic bluefin tuna using electronic tagging data 23, although values of age-specific M for western Atlantic bluefin tuna are still considered uncertain by ICCAT. In 2017, the western assessment used an age-varying rate derived from the Lorenzen method 23 scaled to M = 0.10 at ages 14-16+, while the eastern assessment used a Lorenzen curve scaled to M = 0.10 at ages 20+ 24. A sensitivity analysis found that a lower rate of terminal M for the western stock is associated with lower estimates of recruitment and spawning stock biomass (SSB).
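To make the scaling concrete, the short Python sketch below builds an age-varying natural mortality curve under a Lorenzen-style assumption that M is inversely proportional to length-at-age from a von Bertalanffy growth curve, and rescales it so that M = 0.10 yr−1 over the terminal ages; the growth parameters and age ranges are illustrative placeholders, not the values used in the ICCAT assessments.

import numpy as np

# Illustrative (placeholder) von Bertalanffy growth parameters; NOT the assessment's values.
L_INF, K, T0 = 320.0, 0.09, -1.0   # asymptotic length (cm), growth rate (1/yr), age at length zero (yr)

def length_at_age(age):
    """Von Bertalanffy length-at-age in cm."""
    return L_INF * (1.0 - np.exp(-K * (np.asarray(age, dtype=float) - T0)))

def lorenzen_m(ages, m_terminal=0.10, terminal_ages=(14, 15, 16)):
    """Age-varying M assuming M proportional to 1/length (a Lorenzen-style shape),
    rescaled so that the mean M over terminal_ages equals m_terminal."""
    shape = 1.0 / length_at_age(ages)
    scale = m_terminal / (1.0 / length_at_age(terminal_ages)).mean()
    return scale * shape

ages = np.arange(1, 21)
for a, m in zip(ages, lorenzen_m(ages)):
    print(f"age {a:2d}: M = {m:.3f} per yr")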
The degree of mixing between eastern and western stocks is a further key uncertainty in the Atlantic bluefin tuna assessment. ICCAT currently uses separate assessment models for eastern and western Atlantic bluefin tuna (i.e. population mixing is not accounted for). Estimates of population-specific depletion can be biased when catch removals are not attributed to the correct stock of origin 18-20. An analysis using simulated data has indicated that accounting for mixing is particularly important for the western stock to obtain unbiased estimates of absolute stock size 25. Information about the movement and relative abundance of different populations in time and space is key to correctly quantifying the contributions of those populations to fishery catches in mixing models. Electronic tagging of Atlantic bluefin tuna has emerged as a powerful tool for learning about many aspects of the biology and ecology of bluefin tunas 26-35. Data obtained from electronic tags have the potential to improve estimates of key model parameters, such as rates of fishing and natural mortality, and migration and mixing. Conventional and electronic tagging have revealed the details of large-scale migrations of juvenile, adolescent and mature bluefin tuna, the understanding of which is central to the proper management of this species 3-7,17-20,26-33. Information from electronic tagging and biological markers that provide the origin of the tagged fish allows estimation of population-specific movement patterns. Tagging, otoliths and genetics indicate that the amount of trans-Atlantic crossing varies depending upon population of origin, year examined, age of catch, and possibly sex. Tagging studies have indicated higher rates of trans-oceanic movements of eastern-origin fish to the western Atlantic than vice versa, likely due to the much larger size of the eastern population 5 and the more limited distribution of the GOM population. Together with genetic markers, results from electronic tagging can also be informative about catch composition, demonstrating for example that western Atlantic bluefin tuna fisheries target mixed populations along the eastern seaboard of North America 5,18.
Using this new knowledge from electronic tagging data to build biologically plausible models is vital to minimizing bias in assessments of stock status. Despite the rapid advances in our understanding of Atlantic bluefin tuna biology, key questions remain about population mixing, productivity, recruitment dynamics, maturity schedules, abundance trends and the number and stock origin of fish harvested by western Atlantic fisheries. To date, almost all electronic tagging of Atlantic bluefin tuna has been focused on deployments of archival and pop-up satellite archival tags. Acoustic tags have the potential to provide valuable information about the biology of Atlantic bluefin tuna, given their high detection rate, their longevity, and the independence of detections from fishing activity.
In this paper, we examine the use of acoustic tags in combination with Ocean Tracking Network (OTN)-deployed acoustic receiver lines across the entrances of the Gulf of St. Lawrence to: a) examine the timing of arrivals and departures of Atlantic bluefin tuna foraging in the GSL, b) determine annual fidelity to the foraging grounds to estimate how many fish return to the GSL, c) estimate survivorship using a multistate Bayesian mark-recapture model, and d) test whether fish acoustically tagged and released in North Carolina waters recruit into the GSL. The GSL may serve as a unique location for long-term monitoring of the Atlantic bluefin tuna fishery, due to the extraordinary investments Canada has made in the OTN infrastructure in this region. Strategic underwater receiver lines are now in place in many locations in Canadian coastal waters, and opportunistic investments have placed additional receivers along the US coastline from Maine to Florida. Conventional tags placed simultaneously on fish tagged with the acoustic tags provide an additional set of long-term marks necessary to generate estimates of natural and fishing mortality, similar to previous studies conducted on Atlantic and Pacific bluefin tuna (Thunnus orientalis) using archival and pop-up satellite tags 23,35.
Methods
Tagging details are summarized in Table 1. Fish were caught on commercial Atlantic bluefin tuna fishing vessels, permitted to conduct scientific tagging, in the fall months off Port Hood on Cape Breton Island, Nova Scotia. The fish were all caught on rod and reel with live or freshly caught dead Atlantic mackerel (Scomber scombrus) or Atlantic herring (Clupea harengus) bait. During the tagging campaigns, one vessel was designated the tagging boat while multiple fishing vessels caught bluefin tuna on rod and reel. The fish were "transferred" to the designated tagging boat, the F/V Bay Queen IV, which had a large deck and transom. All bluefin tuna were brought on board the vessels using methodologies described previously 3,5,17. In addition to these Canadian deployments, four fish were caught on trolling lures in North Carolina waters in March 2013 using a sport fishing vessel, tagged and released 4.
Once a bluefin tuna was caught by rod and reel, the fish was reeled in and leadered to the open transom door. By placing a titanium or stainless steel lip hook carefully behind the lower jawbone, we were able to pull the fish through the transom door and onto a wet vinyl mat. A saltwater hose was inserted in the mouth to oxygenate the tuna's gills while on deck, and a soft cloth soaked in a fish protectant solution (PolyAqua®) was placed over the eyes to keep the fish calm 3,33. The curved fork length (CFL) of the fish was measured to the nearest mm with a flexible tape measure; fish were also sampled for fin clips for genetics, tagged and released. When possible, pictures of the electronic tag positions were obtained upon release (Fig. 1). For this experiment all Vemco acoustic tags were packaged in a plastic "shark case" with 5 mm holes drilled in at both ends of the tag (Fig. 1). The holes were enlarged for the attachment leaders by hand boring with a file or dremel tool, and carefully smoothed to prevent any interaction of the edges of the hole with the materials used to construct the leaders. The tags were secured to the fish externally using a two-point attachment technique, with a custom titanium dart on each end of the acoustic tag. Tags were inserted into the dorsal musculature of the fish at depths of 15.2 to 17.8 cm depending upon the size of the bluefin tuna. The materials in the leader consisted of a single layer of 180 kg monofilament (Moi Moi Hard), a cover layer of aramid braided cord that provided increased abrasion resistance over the monofilament, and up to two layers of heat shrink wrap. Pop-up satellite archival tags (Wildlife Computers MK-10 and mini-PATs) were attached to a subset of the acoustically tagged tuna, and tracks from these satellite tags were reported on previously 17,33. Information from these pop-up satellite archival tags is not used in the present model and analysis.
Acoustic receiver lines using VR4 UM receivers were deployed and maintained by OTN. They were initially placed across a portion of the Cabot Strait and across the Scotian Shelf off Halifax, Canada in the summer of 2007 (Fig. 2). The receiver array used to enclose the GSL was partially installed when the project was initiated. This line was completed in late 2008 and spanned the entire Cabot Strait and the Strait of Belle Isle, which together provide an electronic "gate" that the Atlantic bluefin tuna must cross prior to reaching the GSL foraging ground. The completion of the OTN lines enabled us to record long-term movements of bluefin tuna acoustically tagged on their GSL foraging grounds. In addition, the previously deployed Halifax Line, completed in 2007, provided a line of complete coverage across the Scotian Shelf (Fig. 2). Additional opportunistic deployments of receivers along the eastern seaboard of North America from Newfoundland to the Gulf of Mexico, the Bahamas and in the Strait of Gibraltar provided further detections (Fig. 3a).
Results
91% of the acoustically tagged fish were subsequently detected by a receiver post deployment (Table 1). From these acoustic tag deployments, 31,822 acoustic detections were acquired by receivers located along the eastern seaboard of North America from Newfoundland to the Florida Keys, the Bahamas, and in the Strait of Gibraltar (Figs 1-3). We used 101 acoustic tags for development of a bluefin tuna mortality model, and the mean curved fork length for tagged fish in the model was 250 cm (5th percentile 193 cm, 95th percentile 294 cm).
The original deployment years (2009-2013) were designed to test whether the Vemco acoustic tags (V16-4h, 6 L) were detectable from bluefin of the size class tagged, and we scheduled these tags to transmit coded acoustic pulses for a period of ~2.5 years, with a predicted maximum of 858 days. The tags were designed with a kill switch at 865 days per manufacturer specifications; however, some variation occurs due to battery life and temperature. Up to one year post release, 91% of the acoustic tags were detected at the OTN lines in Canadian maritime shelf waters (Fig. 4). By year two, 61% of the fish carrying acoustic tags were detected across the OTN lines. As many as 34% of the tags were still detected in their third year post release, indicating that the battery life extended beyond the manufacturer specifications (Fig. 4). Two bluefin tuna tags were detected for four years post release from this first release of Vemco-tagged bluefin tuna, and a single fish had five years of detections, also indicating that the tag attachments worked.
Results from pop-up satellite archival tagging 33 and the recapture history of tagged bluefin tuna in the Mediterranean Sea are summarized in Table 2 and Fig. 3b. Based on acoustic detections, bluefin tuna entered the GSL by crossing the Cabot Strait Line during the summer months from 4 June to 22 October (mean date 10 July) (Fig. 5). Bluefin exited the GSL by crossing the Cabot Strait Line from 2 July to 19 November (mean date 12 October), after spending 7 to 166 days (mean GSL residency 94 days) on the GSL foraging grounds. Bluefin tuna usually crossed the Halifax Line on the Scotian Shelf (located ~400 km southwest of the Cabot Strait Line) before crossing the Cabot Strait Line in the early summer and after it in the fall. Bluefin tuna crossed the Strait of Belle Isle Line, located to the north of Newfoundland, from 7 July to 23 September (mean date 6 August), including one fish that crossed the Strait of Belle Isle Line in four consecutive years. It appears these fish were exiting the GSL via this route, as most had been detected earlier entering the GSL via the Cabot Strait. Entry and exit dates, and residency days, were calculated from detections after the deployment year.
Transit durations between the crossing of the Halifax Line on the Scotian Shelf on the northern journey and the Cabot Strait Line ranged from 2.85 to 77 days (mean duration 14.90 days). The shortest distance that a fish could swim between the two lines is approximately 460 km, suggesting a minimum sustained speed of approximately 6.73 km/hour for the bluefin tuna with the shortest duration between subsequent recordings. Transit durations between the Cabot Strait and Halifax Lines were longer, ranging from 3.57 to 127 days (mean duration 37.53 days). Inshore receivers on both the Cabot and Halifax lines received significantly more hits than offshore receivers (Fig. 6), indicating that the fish move along the coastal shelf waters at relatively shallow depths.
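As a quick arithmetic check of the minimum-speed figure quoted above, the speed is simply the shortest swimming distance divided by the shortest observed transit time (values taken from the text):

# Minimum sustained speed for the fastest Halifax Line to Cabot Strait Line transit reported above.
distance_km = 460.0      # approximate shortest swimming distance between the two lines
duration_days = 2.85     # shortest observed transit duration
speed_kmh = distance_km / (duration_days * 24.0)
print(f"{speed_kmh:.2f} km/h")   # ~6.73 km/h, matching the value given in the text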
Additional detections of tagged Atlantic bluefin in Canadian waters were obtained from Vemco Mobile Transceivers (VMTs) attached to free-swimming grey seals located in the southern GSL and on the Scotian Shelf. An interesting finding of the current study was the large number of detections that we observed on the Canso Causeway receiver. The Strait of Canso, linking the GSL to the Atlantic Ocean, was the historic migration route of these fish, and has been blocked by the Canso Causeway since late 1952. The longevity of giant bluefin would suggest that current year classes of GSL fish are only a few generations removed from the last bluefin tuna that might have used this passage as the primary migration route for entering and exiting the southern GSL prior to 1952. There are anecdotal reports of large numbers of bluefin seen in close proximity to the causeway in the years immediately following its construction. The receiver in the vicinity of the Canso Causeway obtained over 4000 detections. To exit the GSL, bluefin must now swim around Cape Breton Island to reach the Atlantic side of the Strait of Canso, a detour of >450 km or longer, or go north through the Strait of Belle Isle.
Tagged Atlantic bluefin tuna were also detected by individual moored Vemco receivers. Some of the four North Carolina acoustic tags were subsequently detected by acoustic receivers (Table 1) on the Halifax Line and off Sable Island (Fig. 7). One of these fish was detected off Cape Cod in June 2013.
Bayesian mark-recapture model. Using a spatially-structured state-space model, we obtained a posterior median estimate of the instantaneous annual natural mortality rate in Atlantic bluefin tuna of 0.10 yr−1 (standard deviation of log x, SD 0.34) (Fig. 8). The acoustic tagging data were also informative about rates of seasonal movement into and out of the Gulf of St. Lawrence, updating the prior distribution in most months (Fig. 9). The estimated rate of movement into the GSL was highest during June and September (Fig. 9b), while the high estimated rates of departure from the GSL in October and November (Fig. 9a) are consistent with observations among receivers at the Cabot, Canso and Belle Isle Straits in those months.
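The spatially structured multistate model itself is specified in the paper's supplement; as a greatly simplified illustration of the underlying idea only, the sketch below evaluates a single-state Cormack-Jolly-Seber-style likelihood on a grid, with constant annual survival phi and detection probability p, and converts the survival posterior to an instantaneous mortality rate via M = -ln(phi). The detection histories are toy data, and the sketch ignores spatial structure, fishing mortality, tag failure and tag shedding, all of which the authors' model handles explicitly.

import numpy as np

# Toy annual detection histories (1 = detected at least once in that year after release),
# one row per fish. Illustrative values only; NOT the study's data.
histories = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
])

def cjs_loglik(phi, p, hist):
    """Cormack-Jolly-Seber log-likelihood with constant annual survival phi and
    detection probability p; each fish is released at occasion 0 and can be
    detected on the T following annual occasions."""
    T = hist.shape[1]
    # chi[t] = Pr(never detected after occasion t | alive at occasion t); chi[T] = 1.
    chi = np.ones(T + 1)
    for t in range(T - 1, -1, -1):
        chi[t] = (1.0 - phi) + phi * (1.0 - p) * chi[t + 1]
    ll = 0.0
    for h in hist:
        seen = np.flatnonzero(h)
        last = seen[-1] + 1 if seen.size else 0      # last occasion (1-based) with a detection
        n_det = int(h[:last].sum())
        ll += last * np.log(phi)                     # survived every interval up to the last detection
        ll += n_det * np.log(p) + (last - n_det) * np.log(1.0 - p)
        ll += np.log(chi[last])                      # never seen again afterwards
    return ll

# Grid posterior under flat priors on phi and p.
phis = np.linspace(0.50, 0.999, 200)
ps = np.linspace(0.05, 0.999, 200)
logpost = np.array([[cjs_loglik(phi, p, histories) for p in ps] for phi in phis])
post = np.exp(logpost - logpost.max())
post /= post.sum()

phi_mean = float(np.sum(phis * post.sum(axis=1)))
print(f"posterior mean annual survival phi ~ {phi_mean:.3f}")
print(f"implied natural mortality M = -ln(phi) ~ {-np.log(phi_mean):.3f} per yr")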
Estimated detection probabilities at acoustic receivers were much higher in the GSL box than outside (Fig. 10), reflecting a higher density of receivers in this area, and the fact that tagged Atlantic bluefin tuna must cross receiver lines to enter and exit the GSL. Acoustic detection probabilities were estimated to have increased during the first years of the study in both areas, probably reflecting recruitment of receivers in the OTN and other projects over the study's duration. Acoustic detection probabilities in the GSL were estimated to have decreased in the final 2 years of the study (Fig. 10a), possibly reflecting attrition and re-deployments of receivers to new areas, or lags in the acquisition of receiver data annually. See supplementary material for additional model results.
Discussion
Electronic tagging of long-lived highly migratory fishes with coded acoustic tags permits conducting long-term studies that can provide valuable information about rates of mortality and migration. For Atlantic bluefin tuna, this technology can potentially provide monitoring capacity and address significant questions such as: a) the timing of arrival and departures of Atlantic bluefin tuna foraging in Canadian waters, b) the natural mortality rate of mature fish based on Bayesian modelling approaches 17,23,35 . Acoustic tagging data can inform current population models on the status and assessment of the Atlantic bluefin tuna populations. The original battery life of the acoustic tags used in this study was programmed to be ~2.5 years. More recent tags have projected battery lives of 5-10 years. The tags showed significant reliability when placed externally, with double titanium dart attachments, indicating the technology is capable of showing fidelity to a specific geographic area. We anticipate that with the long periods of occupation evident in the GSL waters (Fig. 7b) it may be possible to routinely obtain 5 year acoustic records for Atlantic bluefin tuna. This can be utilized for long-term monitoring of the assemblage of fish in these waters and could be used to assess recruitment of juvenile fish utilizing Carolina waters into the GSL.
Atlantic bluefin tuna have a complex population structure and there remain significant questions concerning the status, the structure and dynamics of Atlantic bluefin tuna populations, especially in the North Atlantic where mixing is known to occur on foraging grounds. The availability of a network of receivers covering the Cabot Strait provided the initial opportunity to test the role of acoustic tags in improving fisheries management of these valuable fish. Development of methods to provide empirical estimates of natural mortality is of high priority for bluefin tuna stocks, since all else being equal, using a lower rate of natural mortality in the stock assessment can often lead to lower estimates of the ratio of current to unfished stock size (i.e. greater depletion), and more conservative projections of future stock development. Survival estimates from the multistate mark-recapture model for Atlantic bluefin tuna suggest a low rate of mortality from natural causes, consistent with the fact that most individuals in this study had a curved fork length ≥240 cm at tagging, corresponding to an age of ~14 years or more 39,40. For comparison, ICCAT uses a natural mortality rate of 0.10 yr−1 for eastern Atlantic bluefin aged 20 years and older, and for western Atlantic bluefin tuna aged 14 and over 23. Values used in the stock assessment are thus consistent with the natural mortality estimates obtained in this study using acoustic tag recapture histories. Acoustic tagging methods appear to have good potential to improve estimates of natural mortality in the stock assessment, where conventional tagging data have so far proven insufficient to distinguish between alternative hypotheses about natural mortality 23.
The multistate mark-recapture model we applied provides a robust and flexible framework for estimating rates of survival and seasonal movement in long-lived migratory fish species. Disentangling non-detection, fishing vs. natural mortality and tags reaching the end of their programmed transmission life presents a challenge with acoustic tag data sets, particularly for long-lived species where relatively long recapture histories are needed to accurately estimate survivorship. Using Bayesian approaches can help to alleviate this problem by allowing incorporation of prior knowledge from other studies or sources. For example, in this study, prior information from earlier published studies was utilised for rates of natural and tagging-related mortality, while an empirical prior was developed for acoustic detection rates in the Gulf of St. Lawrence (see Supplementary Material for details). As noted above, the tags deployed from 2009-2013 had a programmed transmission life of approximately 2.5 years, which is likely not long enough to discriminate over a range of low values of natural mortality with a high degree of precision. Despite the use of prior knowledge, there is likely some conflation of natural mortality, tagging related mortality, tag loss, and non-functioning tags in model parameter estimates. Adding a further tag type to the model for which information about the reporting rate is available (e.g. tags with a large monetary reward such as the pop up satellite archival tag or surgically implanted archival tags) could help to inform estimates of acoustic tag loss and tag transmission time. The precision of the natural mortality rate estimate is also expected to improve once detection histories from tags with longer programmed transmission times (5-10 years) start to accrue.
A potential limitation of the model applied in this study is the coarse spatial resolution. Permanent (i.e. over the duration of the study) emigration out of regions of high detection probability, for example return of Mediterranean-origin fish to the eastern Atlantic, may affect estimates of other model parameters. This phenomenon could potentially lead to estimates of natural mortality and tag shedding rates that are biased high, although its effect is not expected to be significant given the low frequency of observations of satellite-tagged Atlantic bluefin tuna that ended in the eastern Atlantic or Mediterranean (2 out of 48 over the duration of the study). Future work will extend the model to a higher spatial resolution. This could be implemented by splitting the outside-GSL box into e.g. 3 or 4 areas, allowing more detailed patterns of movement to be estimated. Improving prior information or adding auxiliary data on detection probabilities and rates of fishing mortality is of high priority: extension of the multistate mark-recapture model to both acoustic and satellite tag detection histories is ongoing. This is expected to improve estimates of area-specific detection probabilities and acoustic tag transmission times for acoustic tags with short detection histories. Both have potential to improve the accuracy and precision of natural mortality estimates. Given additional data on the genetic origin of tagged Atlantic bluefin tuna from fin clips (i.e. Gulf of Mexico vs. Mediterranean spawners), accounting for stock-of-origin would be straightforward within the model framework presented, whereby movement and other parameters can be estimated separately for each origin. While the results above apply to a limited number of year classes (e.g. corresponding roughly to the terminal age group in ICCAT's western bluefin tuna assessment), there has been a trend towards smaller lengths at tagging in recent years, so that development to an age-structured model could also be of interest in future. By increasing acoustic tagging effort in North Carolina, it might also be possible to determine when a fish recruits into the GSL foraging ground from this lower latitude foraging area.
Testing acoustic tagging on the GSL foraging grounds was critical as this sea is a semi-enclosed region and the OTN has strategically placed two fully closed receiver lines at Cabot Strait, and Belle Isle. This placement of receivers ensures capture of the tuna's electronic signals when they leave the region and return. An additional line on the Scotian Shelf (Halifax Line), across the continental shelf provides valuable information in concert with the Cabot Strait line on arrival and departure. Together these receiver lines permit continuation of a long-term study both on resident and new arrivals. The GSL may serve as the best long term site for monitoring western Atlantic bluefin, due to the investment Canada has made in placing strategic underwater receiver lines here and the diligent effort they have in maintaining these lines and downloading the data. Our study has demonstrated a high detection probability within the GSL, which supports estimation of detection probabilities in other areas with lower receiver densities.
Importantly, the use of external acoustic tags was made possible only by tagging the fish on deck and carefully anchoring the tag in two places. From recapture results, we know that we have succeeded in constructing a 5-year attachment tether that keeps tags on the fish reliably. Given that the V16 tags have met the manufacturer's 2.5-year specifications for tag transmission, we predict that 5- and 10-year data detection times will be possible with the current deployment techniques (2016-present) and receiver arrays, yielding improvements in the precision of survival estimates. New models incorporating valuable information from double-tag experiment (satellite and acoustic tag) data sets, as well as genetic identification of the population origin of the fish from fin clips, should improve our capacity to model the survivorship of bluefin tuna by population, providing important information on their annual foraging patterns, and potentially enabling an assessment of the efficacy of increased protections on the spawning grounds in the Gulf of Mexico.
Data Availability
Telemetry data will be made available via our public Tagging of Pelagic Predators website (https://oceanview.pfeg.noaa.gov/topp/map) upon publication, or by request to the corresponding author. All model data are provided in the supplement.
|
v3-fos-license
|
2019-03-21T13:03:56.278Z
|
2013-08-20T00:00:00.000
|
30327099
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36518",
"pdf_hash": "512f3a9ff08f2e1bf12acf71f525212f5c3aea15",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44210",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "512f3a9ff08f2e1bf12acf71f525212f5c3aea15",
"year": 2013
}
|
pes2o/s2orc
|
Influence of Long Chain Free Fatty Acids on the Thermal Resistance Reduction of Bacterial Spores
Aims: The objective of this study was to investigate the effect of free fatty acid chain length and number of unsaturated bonds on the heat resistance of bacterial spores in heating and recovery media. Methods and results: For six species, bacterial spore heat resistances were estimated at different free fatty acid concentrations added to the heating media or to the recovery media. The addition of free fatty acids to the heating media has only a slight influence on the heat resistance of bacterial spores, whatever the species or type of acid studied. On the contrary, the addition of free fatty acids to the recovery medium after the heat treatment greatly reduces the ability of bacterial spores to recover and form colonies. This effect varies depending on the chain length and number of unsaturated bonds of the fatty acid, and on the bacterial strain studied. Conclusion: The presence of free fatty acids in the recovery media is an additional stress which decreases the capability of injured spores to germinate and grow thereafter. Significance and impact of this study: The impact of free fatty acids presented in this study can be taken into account to reduce the thermal intensity of food sterilization, in relation to their availability in the food matrix.
Introduction
The reduction of the heat treatment intensity of food is a major challenge in the food industry. The main reasons are the preservation of nutritional and sensorial qualities and the saving of energy in the context of sustainable development. Based on the hurdle concept developed by Leistner [1], other stress factors can be considered and associated with the heat treatment, such as the presence of antimicrobial molecules in the heat-treated food. One way to diminish heat treatment is to reduce bacterial heat resistance by combining thermal stress with other stresses, such as a reduction of water activity or pH in the heating and recovery media. Some of these combined effects are known and have been modelled [2-4]. Regarding pH, this effect is taken into account and used by some food canning companies to optimize their sterilization heating times.
Free fatty acids are non-toxic antimicrobial molecules and are naturally present in some foods such as fatty liver, fish or animal fat, and natural olive oil. The bactericidal effects of free fatty acids have been extensively studied by Neiman [5] and Kabara [6], who determined the minimum inhibitory concentrations (MIC) of saturated and unsaturated free fatty acids for different bacterial species. The MIC values of free fatty acids vary depending on the nature of the acid; they are not correlated with the length of the chain but are strongly influenced by the number of unsaturated bonds in the carbon chain. In some foods, their presence in high concentrations in natural fats or oils limits or inhibits bacterial growth. Numerous studies have characterized the inhibitory effect of free fatty acids on bacteria in food. These studies have involved vegetative bacteria such as Salmonella sp. [7], Staphylococcus aureus [8], Listeria monocytogenes [9] and Escherichia coli [10,11], or bacterial spores such as Bacillus cereus [8,12], Bacillus subtilis [13], Clostridium botulinum [12,14] and Clostridium perfringens [15].
Foster and Wyne [16] showed inhibition of Clostridium botulinum germination in the presence of oleic acid. For Bacillus subtilis this inhibition differs depending on the nature of the free fatty acids and their concentration [13]. Yasuda et al. [17] showed that, at an equal molarity of 0.2 mmol·l−1, the percentage of inhibition was twice as high for the unsaturated oleic, linoleic and linolenic acids as for the saturated palmitic and stearic acids. These authors explain this inhibitory effect by an interaction between the free fatty acids and the hydrophobicity of the spore surface, which blocks bacterial spore germination mechanisms and therefore bacterial growth [13,17,18].
The antimicrobial activities of free fatty acids were clearly presented in a review by Desbois and Smith [19]. Different mechanisms may account for these inhibitory effects on vegetative bacteria. It appears that these molecules primarily affect the cell membranes, where they are adsorbed [20,21]. The insertion of free fatty acids in the membrane disrupts the electron transport chain, inhibits reactions of oxidative phosphorylation, inhibits enzyme activities and the entry of nutrients into the cell, and can cause cell lysis.
The effect of free fatty acids on the germination or growth of bacterial spores has been studied. However, few studies have quantified the inhibitory effect of free fatty acids associated with a food heat treatment, and studies describing the effect of free fatty acids on the germination and growth of bacterial spores after heat treatment are scarce. For various long-chain (C16-C18) free fatty acids with different unsaturation levels, Tremoulet et al. [22] observed that increasing their concentrations in the heating medium decreased the decimal reduction time of spores of Geobacillus stearothermophilus. Ababouch et al. [23] reported a decrease in the apparent heat resistance of Bacillus cereus spores when free fatty acids were added to the incubation medium. Mvou Lekogo et al. [24] described the influence of the presence of free fatty acids in both heating and recovery media on the heat resistance of spores of Bacillus cereus and Clostridium sporogenes, and a Bigelow-like equation was developed to quantify the effect of free fatty acids in these two media on the D values. That study was limited to four free fatty acids (palmitic, palmitoleic, stearic and oleic); these free fatty acids reduced the D value more efficiently when added to the recovery media than when added to the heating media.
The aim of the present study is to assess the impact of the length and degree of unsaturation of the carbon chains of free fatty acids added to heating or recovery media on the heat resistance of spores of different bacterial species.
Microorganisms and Spore Production
Aero-anaerobic and anaerobic species of spore-forming bacteria were studied: Bacillus cereus NTCC1145, Geobacillus stearothermophilus CIP-23T, Bacillus licheniformis Bac37 isolated from a dairy product, and Bacillus pumilus E71 isolated from vegetables. Two strains of Clostridium sporogenes were also studied: strain Pasteur 79.3 and strain Ad81, isolated from a dairy product. These species are commonly present as spoilage or pathogenic spore-formers in food products. Spores of the aero-anaerobic species were obtained as follows: cells were precultivated at 37˚C for 24 h in Brain Heart Infusion (Difco). The preculture was used to inoculate nutrient agar plates (Biokar Diagnostics BK021) with MnSO4 40 mg/l and CaCl2 100 mg/l added to the surface. The plates were incubated at 37˚C for 5 days. The spores were then collected by scraping the surface of the agar, suspended in sterile distilled water, and washed three times by centrifugation (10,000 g for 15 min) (Bioblock Scientific, model Sigma 3K30). The final suspension (about 10^10 spores/ml) was distributed in sterile Eppendorf microtubes and kept at 4˚C. Clostridium sporogenes spores were produced by the method described by Goldoni et al. [25].
Source and Preparation of Fatty Acids
The free fatty acids used in this study had a carbon chain length ranging between 14 and 20 carbon atoms and a degree of unsaturation (number of double bonds) between 0 and 3. Cis unsaturation was chosen because it is the form found in food. The free fatty acids used were: myristic acid (Fluka), palmitic acid (Alfa Aesar), stearic acid (Fluka), oleic acid (Alfa Aesar), linoleic acid (Acros Organics), linolenic acid (Acros Organics) and arachidonic acid (Fluka). Emulsions with different free fatty acid concentrations were obtained by mixing (Polytron® PT-MR 2100, Kinematica AG, Switzerland) and microsonication (Branson Sonifier 250, Branson Ultrasonics, USA) in distilled water with 0.1% Tween™ 80 as a dispersant (Alfa Aesar, Strasbourg, France). The absence of an influence of Tween™ 80 on bacterial heat resistance had previously been verified (data not shown).
These solutions were added to nutrient broth (10 g tryptone, 5 g meat extract, 5 g sodium chloride per litre of water) (Biokar Diagnostic) for the heating media, or to nutrient broth with bacteriological agar (15 g·l−1) (Biokar Diagnostic) for the recovery media. The concentration of free fatty acids added to the heating media was 0.8 mmol·l−1. In the recovery media, different concentrations were added, ranging from 0 to 0.8 mmol·l−1. After sterilization by autoclaving at 110˚C for 45 minutes, as described by Marounek et al. [10], the pH was adjusted to 7.
Heat Treatments of Spores
Firstly, 30 μl of spore suspension was diluted in 3 ml of the adjusted heating medium. 200 μl (Vitrex) capillary tubes were filled with 100 μl of sample, sealed, and subjected to a thermal treatment in a thermostated glycerol bath for different heating times. The heat treatment was stopped by cooling the capillary tubes in a water/ice bath. They were then broken at both ends and their contents poured into a tube containing 9 ml of sterile tryptone salt broth (Biokar Diagnostics), rinsing with 1 ml of tryptone salt broth. The viable spores were counted by duplicate plating in recovery media and incubated at 37˚C for the Bacillus cereus, Bacillus pumilus and Bacillus licheniformis strains, at 37˚C anaerobically for Clostridium sporogenes, and at 55˚C for Geobacillus stearothermophilus.
Data Analysis
For each condition, classical D values and their associated confidence intervals were fitted using the "nlinfit" and "nlparci" modules of Matlab 6.1 (The MathWorks). One concentration level was studied to evaluate the influence of free fatty acids in the heating media. Concerning the influence of free fatty acids in the recovery media, for each fatty acid studied, D values were estimated for different concentrations, ranging from 0 mmol·l−1 to 0.8 mmol·l−1, added to the recovery media.
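For readers without Matlab, a rough Python equivalent of this D-value fit can be sketched with scipy.optimize.curve_fit, assuming the classical log-linear survivor model log10 N(t) = log10 N0 - t/D; the survivor counts below are invented for illustration and are not the study's data.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative heating times (min) and surviving counts (CFU/ml); made-up data.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
N = np.array([1.2e7, 3.5e6, 9.0e5, 2.4e5, 7.0e4, 1.8e4])

def log_survivors(t, logN0, D):
    """First-order survivor model: log10 N(t) = log10 N0 - t / D."""
    return logN0 - t / D

popt, pcov = curve_fit(log_survivors, t, np.log10(N), p0=[7.0, 3.0])
logN0_hat, D_hat = popt
D_se = np.sqrt(np.diag(pcov))[1]
print(f"D = {D_hat:.2f} min (approx. 95% CI +/- {1.96 * D_se:.2f} min)")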
The influence of the free fatty acid concentration in recovery media on the D values can be modelled by a simple Bigelow-like model (Equation (1)) [23]:

log10 D = log10 D* − [FFA] / z′FFA   (1)

In this model, the z′FFA value represents the increase in free fatty acid concentration in the recovery media which leads to a 10-fold reduction in the D value (D* corresponds to the D value without free fatty acids in the recovery media). This parameter quantifies the influence of the fatty acid concentration in the recovery media on the heat resistance of the bacterial spores. The sensitivity parameters z′FFA of the model and their confidence intervals were fitted to the experimental values using the modules "nlinfit" and "nlparci" of Matlab 6.1 (The MathWorks, Meudon, France). This model and its associated parameter values were established for FFA concentrations in recovery media ranging from 0 mmol·l⁻¹ to 0.8 mmol·l⁻¹.
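The fitting step described above can be illustrated with a short sketch. The following is a minimal Python example, with hypothetical concentration and D-value arrays standing in for the experimental data; it mirrors the nonlinear least-squares fit performed with Matlab's nlinfit/nlparci by using scipy.optimize.curve_fit and deriving approximate 95% confidence intervals from the parameter covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t

# Hypothetical data: FFA concentration in the recovery medium (mmol/l)
# and the corresponding apparent D values (minutes). Placeholders only.
conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
d_values = np.array([10.0, 6.3, 4.0, 2.5, 1.6])

def bigelow_like(c, log10_d_star, z_ffa):
    """log10(D) = log10(D*) - c / z'FFA (Bigelow-like model)."""
    return log10_d_star - c / z_ffa

# Fit on the log10-transformed D values.
popt, pcov = curve_fit(bigelow_like, conc, np.log10(d_values), p0=[1.0, 1.0])
log10_d_star, z_ffa = popt

# Approximate 95% confidence intervals (analogous to nlparci).
dof = len(conc) - len(popt)
tval = t.ppf(0.975, dof)
ci = tval * np.sqrt(np.diag(pcov))

print(f"D* = {10**log10_d_star:.2f} min, z'FFA = {z_ffa:.2f} mmol/l")
print(f"95% CI on z'FFA: +/- {ci[1]:.2f} mmol/l")
```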
Results
Oleic acid is a major free fatty acid present in oils and fats. The impact of this acid added to the heating and recovery media on the heat resistance (D values) of different species of bacterial spores was studied and is presented in Tables 1 and 2. It can be seen that the presence of 0.8 mmol·l⁻¹ of oleic acid in the treatment medium slightly reduces the heat resistance of bacterial spores. The addition of 0.8 mmol·l⁻¹ of oleic acid to heating media reduced the heat resistance of the Clostridium sporogenes strain Pasteur 79.3 to 0.88-fold of its initial value, while it decreased the heat resistance of the Geobacillus stearothermophilus strain CIP-23T studied to 0.55-fold.
The presence of oleic acid in the recovery medium clearly affects the apparent heat resistance of bacterial spores (Figure 1 and Table 2). This influence varies according to the strain studied, whereby a lower value of the z′FFA parameter indicates a higher sensitivity. Strains can be very sensitive, such as the strain of Clostridium sporogenes with a z′oleic value of 0.31 mmol·l⁻¹. Other strains are less sensitive, such as Bacillus licheniformis with a value of 3 mmol·l⁻¹.
The nature of the free fatty acid also modifies its effect on the heat resistance of bacterial spores. The influence of the length of the carbon chain of saturated free fatty acids on the heat resistance of the Bacillus cereus ATCC 11145 strain was studied. Tables 3 and 4 present bacterial heat resistance and sensitivity according to the free fatty acid concentration for four fatty acids of different carbon chain lengths (C14:0, C16:0, C18:0, C20:0). Regarding the influence of the chain length of free fatty acids added to heating media on the heat resistance of Bacillus cereus, the addition of 0.8 mmol·l⁻¹ of myristic acid (C14:0) reduces the D value by 2.35-fold. Concerning the influence of saturated free fatty acids added to recovery media, the z′FFA values indicate that the thermal resistance of Bacillus cereus is more greatly reduced by the addition of myristic acid (C14:0) or stearic acid (C18:0) than by palmitic acid (C16:0) or arachidonic acid (C20:0) in the recovery media. These results show that no correlation appears between the D or z′FFA values and the length of the carbon chain of the saturated free fatty acid.

Table 1. D values (minutes) determined for bacterial spores of different species for 0.0 mmol·l⁻¹ and 0.8 mmol·l⁻¹ of oleic acid added to heating media.
The influence of the degree of unsaturation of the carbon chains of free fatty acids added to heating and recovery media on the heat resistance of bacterial spores was investigated for free fatty acids with a length of 18 carbons, stearic (C18:0), oleic (C18:1), linoleic (C18:2) and linolenic (C18:3) acid, on 3 different bacterial species: Bacillus cereus, Bacillus pumilus and Clostridium sporogenes (Tables 5 and 6). The addition of C18 free fatty acid to the heat treatment media slightly reduced the heat resistance of bacterial spores, whatever their degree of unsaturation. On the other hand, the addition of unsaturated free fatty acids to the recovery medium reduced the heat resistance of bacterial spores to a greater extent than saturated acids (Figure 2).
For the strains Bacillus cereus and Bacillus pumilus, linolenic acid (C18:3) added to the recovery media reduced the apparent heat resistance of spores nine-fold and seven-fold respectively, compared to the addition of stearic acid (C18:0) (Table 6). For Clostridium sporogenes, the addition of unsaturated free fatty acids greatly reduced the z′FFA values to the same extent, regardless of the degree of unsaturation. The z′FFA value decreased from 2.36 mmol·l⁻¹ for stearic acid to 0.36 mmol·l⁻¹ for oleic acid. For the 3 species studied, the low z′FFA parameter values of unsaturated free fatty acids characterized their high impact.
Discussion
The presence of free fatty acids in the heating media at a concentration of 0.8 mmol·l⁻¹ globally reduces bacterial spore heat resistance. This result is in agreement with the work of Tremoulet et al. [22], who observed a 75 percent reduction of the D value by adding 0.6 mmol·l⁻¹ of oleic acid to the heating media. However, adding free fatty acids to the recovery media after heat treatment greatly reduces the apparent heat resistance of bacterial spores. It can be noted that these concentrations in media are lower than those found in some foods [26].
It is well recognized that the influence of unfavorable environmental factors during recovery strongly affects the apparent heat resistance of microorganisms. When environmental recovery factors deviate from optimal growth conditions, the apparent heat resistance of microorganisms decreases. The bactericidal effects of free fatty acids are well known [6,19,27]. Concentrations of free fatty acid below the critical micelle concentrations have a stressful effect on vegetative cells and bacterial spores. When added to recovery media, this stressful effect reduces the apparent heat resistance of bacterial spores [23,24].
As no other sensitivity parameter values (z′FFA) are available in the literature, it may be interesting to compare the z′FFA sensitivity data to the minimum inhibitory concentrations (MIC) presented in Table 7. Concerning bacterial spores, MIC values quantify the inhibitory effect of free fatty acids both on spore germination and on vegetative bacterial growth. Few data exist indicating the MIC of free fatty acids; however, these few data show variability between the species studied. For Clostridium perfringens, MIC values for oleic acid range from 2.4 mmol·l⁻¹ to 10.2 mmol·l⁻¹ [15,28]. The MIC value is 0.75 mmol·l⁻¹ for Geobacillus stearothermophilus [22] and 0.05 mmol·l⁻¹ for Bacillus megaterium [29]. Similar observations can be made for the inhibitory effect of fatty acids on unheated bacterial spores, characterized by the MIC values, and on heated bacterial spores, quantified by the sensitivity parameter z′FFA. A link can be noted between the MIC values and the z′FFA sensitivity parameter values. For the strain of Bacillus cereus studied, our results show that no relationship appears between the sensitivity parameter values associated with the acids studied and the length of their carbon chain. A similar observation can be made concerning the evolution of MIC values according to chain length between 12 and 18 carbons (Table 7). It should be noted that for palmitic and stearic acids, MIC values appear to be higher than 10 mmol·l⁻¹, except for Bacillus megaterium [29].
Our results show that, for heat-treated Bacillus cereus, Bacillus pumilus and Clostridium sporogenes spores, the higher the number of unsaturated bonds in the carbon chain, the more sensitive the spores are to the free fatty acid: linoleic acid > oleic acid > stearic acid. These observations may be related to the evolution of MIC values for these acids applied to different bacterial species (Table 7). It is clear that bacteria or bacterial spores affected by heat treatments are very sensitive to an additional stress such as free fatty acids in recovery media. During heat treatment, the proportion of activated or injured spores increases with heating time. The antimicrobial effects of free fatty acids on heat-injured spores, according to their characteristics, chain length or number of double bonds, are similar to but amplified compared with their effects on uninjured spores. The presence of free fatty acids in the recovery media after heat treatment is a second stress which decreases the capability of injured spores to germinate and grow thereafter.
Under laboratory conditions, in media in which free fatty acids are dispersed using Tween™ 80, high-shear mixing and ultrasound, a low level of free fatty acid concentration has an important bactericidal effect on heat-stressed spores. The natural occurrence of free fatty acids in some foods and oils is higher than the concentrations studied. The impact of free fatty acids could be taken into account to reduce the thermal intensity of food sterilization if these effects can be verified in heat-treated foods.
Table 3. D100°C values (minutes) determined for B. cereus ATCC 11145 for 0.0 mmol·l⁻¹ and 0.8 mmol·l⁻¹ of free fatty acids with different chain lengths added to heating media.
* D values ± CI 95%.
Table 4. D100°C values (minutes) determined for B. cereus ATCC 11145 for different molarities of free fatty acids with different chain lengths added to recovery media.
ND: Not Determined; * D values ± CI 95%.
Table 5. D values (minutes) determined for 0 mmol·l⁻¹ and 0.8 mmol·l⁻¹ of free fatty acids with different numbers of unsaturated bonds added to heating media.
ND: not determined; * D values ± CI 95%.
Table 6. D values (minutes) determined for different molarities of free fatty acids with different numbers of unsaturated bonds added to recovery media.
ND: not determined; * D values ± C.I. 95%.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2015-11-16T00:00:00.000
|
1909435
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/s11671-015-1148-0",
"pdf_hash": "7b9540093a052dcbb543cdac4f4a4d3362819d85",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44211",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"sha1": "7b9540093a052dcbb543cdac4f4a4d3362819d85",
"year": 2015
}
|
pes2o/s2orc
|
Focused Role of an Organic Small-Molecule PBD on Performance of the Bistable Resistive Switching
An undoped organic small molecule, 2-(4-tert-butylphenyl)-5-(4-biphenylyl)-1,3,4-oxadiazole (PBD), and a nanocomposite obtained by blending poly(methyl methacrylate) (PMMA) into PBD are employed to implement bistable resistive switching. For the bistable resistive switching device indium tin oxide (ITO)/PBD/Al, the ON/OFF current ratio can reach 6. Moreover, an ON/OFF current ratio approaching 10⁴ is obtained with the storage layer PBD:PMMA at a chemical composition of 1:1 in the bistable resistive switching device ITO/PBD:PMMA/Al. The data retention of more than 1 year and the endurance performance (>10⁴ cycles) of ITO/PBD:PMMA(1:1)/Al demonstrate the stability and reliability of the samples, which underpins the technique and application of organic nonvolatile memory.
Background
Organic memory, a multidisciplinary and flourishing frontier of nanotechnology, has achieved significant breakthroughs [1][2][3][4]. As an emerging information medium, such devices perform the transmission and manipulation of data based on organic semiconductors, embracing both small molecules and polymers. In contrast to inorganic resistive random access memory [5][6][7][8], organic resistive random access memory (ORRAM) can meet the requirements of large-scale, low-cost, flexible nonvolatile data storage for commercialization and practical use.
Not only single organic materials, such as the widely accepted poly(N-vinylcarbazole) (PVK), but also nanocomposites based on polymer-polymer or polymer-inorganic blending have been explored in research on organic memory. Progressively, ORRAM based on small molecules is attracting great attention. The spin-coated PBD and PBD:PMMA nanocomposite films were first characterized by means of Raman spectroscopy, scanning electron microscopy (SEM), UV-Vis spectroscopy, cyclic voltammetry (CV), and transmission electron microscopy (TEM). The following work highlights the tunable effect of the organic material PBD and its PMMA-blended nanocomposite on the electrical properties, and the retention and endurance of the resistive switching were additionally examined.
Methods
2-(4-tert-butylphenyl)-5-(4-biphenylyl)-1,3,4-oxadiazole (PBD), or a PBD:PMMA blend with a chemical composition ratio of 1:1, was dissolved in chloroform at a concentration of 0.5 wt.%. At ambient temperature, the solution was stirred with a magnetic stirrer for more than 24 h. Impurities were then removed by filtration through a 0.45-μm filter. The glass substrate, on which indium tin oxide (ITO, 2000 Å thick) had been deposited, was sequentially cleaned with acetone, methanol and ethanol, and was then kept at 40°C in a vacuum furnace for 30 min. After spin-coating the solution uniformly at 3000 rpm to fabricate the active layer, the solvent was removed from the coating in a vacuum furnace at 70°C for 2 h. Later, the top aluminum electrode, 300 nm thick, was deposited on the PBD or PBD:PMMA hybrid film through a mask layer with a pattern diameter of 2 mm. The ITO electrode acts as the anode while the Al electrode acts as the cathode.
Results and Discussion
The sandwiched configuration of both ITO/PBD/Al and ITO/PBD:PMMA/Al is illustrated in Fig. 1a. As shown in Fig. 1b, a Horiba Jobin Yvon LabRam HR800 Raman spectrometer was adopted to record the Raman spectra of the PBD film and of the PBD:PMMA composite with a chemical composition ratio of 1:1 coated on the ITO substrate. For the PBD film and the PBD:PMMA nanocomposite, the maximum peak is located at 1625 and 1623 cm⁻¹, respectively, which is derived from the C-C stretching vibration of the aromatic ring. The rocking vibration peak of the hydrogen atoms on the benzene ring ranges from 500 to 900 cm⁻¹ and shows weak Raman activity. Owing to the C-O-C stretching mode of the 1,3,4-oxadiazole ring, an obvious peak appears at 1011 cm⁻¹. Between 1514 and 1625 cm⁻¹, stronger Raman activity is found, corresponding to the C-C stretching pattern of the benzene ring. Besides, the peak of the C-H asymmetric stretching vibration of the methyl group for the PBD and PBD:PMMA hybrid films is at 3074 and 3080 cm⁻¹, respectively, with weak Raman activity. Therefore, compared with the Raman spectrum of the undoped PBD film, the stretching peaks can be enhanced by blending in PMMA. The cross sections of the undoped PBD nanofilm and the doped PBD:PMMA(1:1) nanocomposite film, both below 50 nm, were characterized with a HITACHI S3400-N scanning electron microscope (SEM), as exhibited in Fig. 1c, d. In addition, a Nano-Map 500LS profilometer (aep Technology) was used to measure the thickness of the fabricated PBD and PBD:PMMA(1:1) hybrid films. A 45.7-nm-thick PBD film was spin coated, while the PBD:PMMA(1:1) hybrid film was 37.4 nm thick.
UV-Vis spectroscopy and a CHI660B electrochemical workstation were utilized to determine the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) of PBD. The PBD solution and its spin-coated film underwent optical detection. As observed from the UV-Vis absorption spectra in Fig. 2a, the absorption peak λmax is located at 309 and 332.5 nm and the absorption edge λedge at 356 and 374 nm for the PBD film and solution, respectively. From Eg = hc/λedge, the band gap Eg of PBD in the form of the film and solution is 3.48 and 3.32 eV, respectively. The cyclic voltammetry (CV) analysis is shown in Fig. 2b, in which the PBD film was measured in acetonitrile with TBAP (0.1 mol/L). Its onset oxidation potential Eox vs. the Ag/AgCl reference electrode is determined to be 1.58 eV. HOMO and LUMO can be expressed as follows:

EHOMO = −(Eox − EFOC + 4.8) eV
ELUMO = EHOMO + Eg

where the reference energy level of ferrocene (FOC) is 4.8 eV and the external standard potential of the ferrocene/ferrocenium ion couple EFOC vs. the Ag/AgCl reference electrode is 0.43 eV, as measured by CV. EHOMO (−5.95 eV) and ELUMO (−2.47 eV) can thus be obtained; their distribution is depicted in the inset of Fig. 2b. JEM-2100 transmission electron microscopy (TEM) was adopted to characterize the surface of the PBD:PMMA(1:1) nanocomposite film in Fig. 2c, d, from which the PBD in the PBD:PMMA(1:1) film presents a zonal distribution. Thus, the energy band diagram of ITO/PBD:PMMA/Al can be described as in Fig. 2e. 2-(4-tert-butylphenyl)-5-(4-biphenylyl)-1,3,4-oxadiazole, as an electron-transport and hole-blocking material, was used to fabricate the ORRAM, inserted between the ITO and Al electrodes. The I-V characteristics of the resistive switching device ITO/PBD/Al were measured with a KEITHLEY 4200-SCS semiconductor characterization system, for which the compliance current was restricted to 0.1 A, as indicated in Fig. 3. At first, the scan depicts the OFF-state. A steep current change appears when the device is swept up to VSET = 1.6 V. The transfer from turn-off to turn-on is denoted as the write process. The following scan corresponds to the nonvolatile storage that manifests the "read" process. With the resistive switching device reverse-biased, the sample shifts from the ON-state to the OFF-state when the voltage goes down to VRESET = −4.2 V. This is the erase process, with a negative differential resistance characteristic. All of the experiments above perform the procedure "write-read-erase-read", which confirms that ITO/PBD/Al exhibits bistable I-V characteristics with a threshold voltage of 1.6 V. The current ratio of the low resistive state (LRS) to the high resistive state (HRS) can reach 6. The 1,3,4-oxadiazole moiety is regarded as an electron-transport group. Polymer nanocomposites are able to exert an exceptional influence on nanoelements in terms of long-term stability, specifically, to enhance the ON/OFF current ratio of the bistable resistive switching. Thus, the PBD:PMMA nanocomposite can be treated as an organic storage material for bistable electrical performance. One hundred consecutive cycles of I-V characteristics for ITO/PBD:PMMA(1:1)/Al were carried out and plotted in Fig. 4a. The cumulative distribution of VSET and VRESET is displayed in Fig. 4b. Compared with ITO/PBD/Al, this device has a smaller threshold voltage as well as a much larger ON/OFF current ratio, close to 10⁴. Insulator-like PMMA has a higher Eg (approximately 5.6 eV), so carriers require more energy to inject into the PBD:PMMA nanocomposite film.
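The energy-level arithmetic above can be reproduced with a few lines of Python. This is a minimal sketch that simply plugs in the values quoted in the text (λedge, Eox, EFOC); it is an illustration of the relations, not part of the original analysis workflow.

```python
# Estimate HOMO/LUMO of PBD from the optical band gap and the CV onset
# oxidation potential, following the relations quoted in the text.

H_C = 1239.84  # eV*nm, product of Planck constant and speed of light

def band_gap(lambda_edge_nm: float) -> float:
    """Optical band gap Eg = hc / lambda_edge (in eV)."""
    return H_C / lambda_edge_nm

def homo_lumo(e_ox: float, e_foc: float, e_g: float):
    """E_HOMO = -(E_ox - E_FOC + 4.8) eV; E_LUMO = E_HOMO + Eg."""
    e_homo = -(e_ox - e_foc + 4.8)
    return e_homo, e_homo + e_g

e_g_film = band_gap(356.0)                     # ~3.48 eV for the PBD film
e_homo, e_lumo = homo_lumo(1.58, 0.43, e_g_film)
print(f"Eg = {e_g_film:.2f} eV, HOMO = {e_homo:.2f} eV, LUMO = {e_lumo:.2f} eV")
# Expected output: Eg = 3.48 eV, HOMO = -5.95 eV, LUMO = -2.47 eV
```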
The current ratio of the resistive switching device based on PBD blended with PMMA is boosted because the original state of the device is a higher resistive state. The mechanism of the resistive switching based on the PBD:PMMA nanocomposite can be explained by the trapping and detrapping of electrons in the PBD material. Electrons are injected from the Al electrode into the LUMO of PMMA during the forward sweep by a tunneling mechanism through the PMMA molecules [15]; these electrons fill the traps of PBD to accomplish the write process at positive bias. Under reverse bias, the electrons captured by the traps of PBD are detrapped and released through the PMMA matrix by Fowler-Nordheim tunneling into the Al electrode. Thus, the resistive switching from turn-on to turn-off originates from the rupture of the conductive path, which is the erase process of the storage cell. With regard to the bistability of the resistive switching devices ITO/PBD/Al and ITO/PBD:PMMA(1:1)/Al, the conduction mechanism should be analyzed further. Figure 5a, b illustrates the "write-read" process and its fitting curves in logarithmic coordinates. Below 0.3 V, the slope for ITO/PBD/Al and ITO/PBD:PMMA(1:1)/Al is equal to 1.0, in line with Ohmic conduction. Before the abrupt current transition, the slopes of the fitting lines are 1.2 and 1.5, respectively. In particular, the slope for the resistive switching device ITO/PBD:PMMA(1:1)/Al can reach 2.0 when the sweep increases from 0.9 to 2.0 V. Therefore, space-charge-limited conduction (SCLC) corresponds to the I-V characteristics when the applied bias is above 0.3 V in the write process. During the read process, the current increases linearly with the voltage, in agreement with Ohm's law. The results above demonstrate that the undoped and doped PBD bistable resistive switches are jointly governed by SCLC and localized filaments. For the Fowler-Nordheim tunneling mechanism, the tunneling current is an exponential function of 1/V:

I ∝ V² exp(−κ/V)

where κ is a parameter related to the shape of the potential barrier. For a triangular potential barrier,

κ = 8π(2m*)^(1/2) φ^(3/2) / (3qh)

where φ is the height of the potential barrier, m* is the effective mass of holes in the polymer semiconductor, and q and h are the elementary charge and the Planck constant, respectively. As depicted in the correlation between ln(I/V²) and 1/V in Fig. 5c, d, the write process shows negative resistance conduction because the current increases dramatically during the forward scan, consistent with the Fowler-Nordheim (FN) tunneling model. At ambient temperature, Figure 6a, c presents the data retention of the bistable resistive switching device ITO/PBD/Al at a bias of V = 1 V and under read pulses, respectively. Over a retention time in excess of 1 year, the sample, whose ON/OFF current ratio is nearly 6, does not evidently decay. Under constant voltage and read pulses, the retention of ITO/PBD:PMMA(1:1)/Al is presented in Fig. 6b, d. With a retention time of more than 1 year, this bistable resistive switching device in the LRS or HRS appears more stable, and the current ratio approaches 10⁴. Aside from that, the bistable switching devices ITO/PBD/Al and ITO/PBD:PMMA(1:1)/Al were subjected to consecutive endurance cycles (>10⁴), for which Fig. 7a, b exhibits IHRS and ILRS varying with the cycles at Vread = 0.1 V. The cumulative distribution for ITO/PBD/Al is displayed in Fig. 7c, where the mean (standard deviation) of IHRS and ILRS is 0.24 (0.05) mA and 1.84 (0.02) mA, respectively.
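As a small illustration of the ln(I/V²) vs. 1/V analysis mentioned above, the following Python sketch fits a straight line to that transformed representation and extracts κ from the slope. The current and voltage arrays are hypothetical placeholders, not the measured data.

```python
import numpy as np

# Hypothetical I-V data (forward sweep, write region); placeholders only.
voltage = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5])        # V
current = 1e-6 * voltage**2 * np.exp(-8.0 / voltage)       # A, synthetic FN-like curve

# Fowler-Nordheim representation: ln(I/V^2) should be linear in 1/V,
# with slope -kappa for I ~ V^2 * exp(-kappa/V).
x = 1.0 / voltage
y = np.log(current / voltage**2)

slope, intercept = np.polyfit(x, y, 1)
kappa = -slope
print(f"kappa = {kappa:.2f} V (from linear fit of ln(I/V^2) vs 1/V)")
```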
The statistical analysis of ITO/PBD:PMMA(1:1)/Al indicated in Fig. 7d illustrates that the mean (standard deviation) of IHRS and ILRS is 0.37 (0.17) μA and 1.83 (0.02) mA, respectively. It is shown that the resistive switching device ITO/PBD:PMMA(1:1)/Al has a higher ON/OFF current ratio and better read endurance and retention ability, which results from the wide band gap of PMMA that holds back carrier transfer. The fact that the initial resistance of the pristine device is far higher than that of ITO/PBD/Al leads to the enhanced memory performance, such as the ON/OFF current ratio and retention.
Conclusions
This paper evaluates the electrical properties of the undoped and doped PBD films. Although the components of the PBD:PMMA nanocomposite are relatively independent, its performance is not a simple sum of theirs: the ON/OFF current ratio is significantly enhanced. The bistable behavior of the resistive switching obtained by blending in PMMA clearly surpasses that of the pure PBD film, with a higher ON/OFF current ratio and better retention and endurance performance at the chemical composition ratio PBD:PMMA(1:1). Consequently, it provides a broad prospect for the application of nonvolatile memories.
|
v3-fos-license
|
2021-01-07T09:10:03.919Z
|
2020-01-01T00:00:00.000
|
231853392
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.journals.aiac.org.au/index.php/IJKSS/article/download/6414/4489",
"pdf_hash": "869d6c08d2e2099fdd9a73fac8718c0d34ac5c4d",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44212",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "aea21c9ee7eadb3aed4e9a6baab4488244ccba4e",
"year": 2020
}
|
pes2o/s2orc
|
The Influence of Time-dependent Surface Properties on Sprint Running Performance between Male and Female Athletes
Background: The body of research on field-based player-surface interaction contains some contradictory findings, and the comparison of male and female physiological responses on different surfaces is limited. Objective: The study investigates the influence of surface properties on sprint running before and after completing a muscle-fatiguing intervention. Methodology: Muscle activity was recorded using surface electromyography (EMG). The vastus medialis (VM), biceps femoris (BF), medial head of the gastrocnemius (MG), and tibialis anterior (TA) sites were selected for analysis. The mechanical properties (MPs) of each field were shown to be different using the ASTM F-3189 protocol. Results: A statistically significant three-way repeated-measures ANOVA interaction between field properties, sprint trial and muscle groups was determined, F(3,36) = 10.82, p = .006, ηp² = .474. Further analyses revealed an interaction effect between field properties and sprint trial, F(1,12) = 26.57, p = .001, ηp² = .689, between muscle groups and field properties, F(1,12) = 8.78, p = .012, ηp² = .422, and between muscle group and sprint trial, F(1,12) = 7.29, p = .019, ηp² = .378. In addition, the pre-intervention mean sprint time was 9.1% lower on the field possessing more energy return. Post-intervention sprint test results show a significant difference for BF peak muscle activity on the field displaying greater force attenuation. Conclusion: Both pre- and post-intervention sprint results suggest that the time-dependent properties associated with a sport field could potentially influence muscle activation patterns differently for males and females.
INTRODUCTION
Each time the foot contacts the ground during a competitive event the MPs of the playing surface has the potential to influence both biomechanical measures and physiological responses. This suggests the interaction between the athlete and a specific combination of energy storing materials and structural design of a playing surface could produce unique human performance outcomes. In other words, presumably the player-surface interface produces an identifier such as a 'finger print' -metaphorically. If athlete development professionals are able to predict how the body is going to respond when playing on a surface with specific properties, the appropriate athlete preparation protocol could potentially be implemented. The creation of a database with player-surface information could be a useful tool for player development centering on performance enhancement and surface-induced injury prevention. The interaction between the foot and the playing surface which is of most interest due to its influence on performance and injury potential occurs during skills which exhibit large accelerations. Our hypothesis states the MPs of a surface, determined by commercially available instrumentation, has an influence on human performance which makes examining the time-dependent properties of a playing surface a necessity for establishing whether or not a relationship exists between sport surfaces and human factors. It can no longer be assumed that an athlete's performance or surface-induced injury is simply related to the surface type -for example synthetic turf versus natural grass. Classifying sport surfaces based on type or composition must be reconsidered because it has been shown that synthetic turf systems or natural grass fields do not always demonstrate the same mechanical parameters from either an intra-classification or inter-classification perspective. This is supported by the findings from recent studies, which examined random samples of synthetic turf fields, quantified differences for selected properties when compared across samples (Sanchez-Sanchez et al., 2018;Villacanas et al., 2017;Sanchez-Sanchez et al., 2016;Sanchez-Sanchez et al., 2014a).
For analyzing athletic performance, there are several biomechanical and physiological factors to consider when investigating sport specific activities exhibiting explosive unidirectional or multidirectional movements on a playing surface. Factors such as muscle activation patterns, footground contact time, loading rate, ground reaction force, energy restitution, surface deformation, viscoelasticity, kinetics and kinematics provide the foundation for quantifying performance enhancement and surface-induced injury causation. When investigating the player -surface interaction and its influence on human performance we should not only rely on surface type but include the MPs of the playing surface as well. The importance of our study and others that have examined human movement patterns while performing on playing surfaces with known mechanical values is to determine the relationship between the MPs of the surface and biomechanical and physiological responses (Hales & Johnson, 2019;Lopez-Fernandez et al. 2018). In particular, the relevance of a surface's ability to store and return energy, inputted by the performer, back to the performer is of great interest due to its ability to influence performance and surface-induced soft tissue injuries. The interrelationship between the athlete and the playing surface has been well established and this provides support for considering both components when attempting to determine a cause and effect relationship. The evolution of surface testing instrumentation offers a better opportunity to easily incorporate surface testing during the same human data collection session. By incorporating instrumentation commonly used to examine sports field properties provides an opportunity to not only validate the testing devices but also identify a correlative relationship between a playing surface and human factors. Lastly, this is important because without a defined athlete -surface relationship the surface measuring device values really have no applicable meaning. Sport performance differences between male and female athletes have been reported using various performance indicators and conditions. However, to our knowledge physiological measures between sexes before and after engaging in a sport specific agility course on surfaces possessing different MPs has not been investigated. Previous studies have shown muscle fatigue influences males and females differently during isometric contractions but sex differences are diminished during high-intensity dynamic activities (Senefeld et al., 2013;Hicks et al., 2001;Semmler et al., 1999;Hicks & McCartney, 1996). Others have indicated that the specific type of activity, muscle group involved, and age can influence the magnitude of muscle fatigue between males and females. (Enoka & Duchateau, 2008;Hunter et al., 2004). These groups conclude task specificity muscle fatigue is due to sex-related differences within the neuromuscular system. This suggests males and females adopt different neuromuscular strategies to compensate for muscle fatigue and by examining the electrical signal generated during a muscle contraction an alteration in muscle recruitment and patterning could be identified.
In order to analyze the effect of a playing surface on human performance we chose to evaluate myoelectric activity using surface electromyography (sEMG) and 30 meter sprint times. Surface EMG provides invaluable information on muscle activation amplitude and the timing or patterning of muscle activity relative to the sprinting gait cycle phases (Howard et al., 2018;Mastalerz et al., 2012;Kyrolainen et al., 2005;Nummela et al., 1994;Mero & Komi, 1987). These outcome measures can provide a better understanding of the role the MPs of a surface have on the myoelectric activation patterns between sexes. In this study, we examined the influence of a sports field MPs on muscle electrical activity. Our analysis focused on identifying neuromuscular and physiological response differences between male and female athletes while performing a sprint before and after completing a high-intensity bout of sport specific running and agility skills.
Participants
Seven male and 6 female athletes signed a consent form before participating in the Kennesaw State University Internal Review Board approved research study. The test protocol was designed to analyze sprint running performance before and after a bout of high-intensity sport specific exercises so a high level of fitness was mandatory. The participants could only be included in the study if they were at minimum two years post injury and were not currently using orthotics or any type of joint supportive device. In addition to meeting a stringent qualification criterion, the participants completed a physical fitness questionnaire to ensure their safety. The participants were instructed to follow the prescribed pre-test nutritional guidelines beginning 48 hours prior to testing. The athletes were instructed to fast 2 hours prior to testing and not to participate in any type of strenuous activity 48 hours prior to testing (Hales & Johnson, 2019).
Study Design
The study follows a quasi-experimental design. Two outdoor athletic fields demonstrating different MPs were selected to assess the physiological responses and sprint times. The participants were randomly assigned to either Group 1 or Group 2. The test protocol began on Field X for Group 1 and Group 2 tested on Field Y. Four days later, Group 1 was tested on Field Y and Group 2 performed on Field X. Myoelectric activity was selected as a dependent variable to measure the vastus medialis (VM), biceps femoris (BF), gastrocnemius medial head (MG), and tibialis anterior (TA) during each sprint trial. Sprint times (dependent variable) were recorded for the pre and post neuromuscular fatigue protocol. The agility test course used for pre-exhaustion was adopted from a previous study (Hales & Johnson, 2019). The ASTM Standard F3189 Specification F1936 was used to measure three mechanical variables associated with sports fields: 1) force reduction is a measure of impact reduction percentage when compared to a standard concrete surface; 2) vertical deformation measures vertical displacement of the object impacting the surface; and 3) restitution of energy is a measure of energy percentage returned from a surface to the performer.
Test preparation protocol
At the beginning of the test session, the participant was given a properly fitted multipurpose training shoe (Men's Ultimate Turf Trainer; Under Armour, Baltimore, MD, USA). Next, the lead-practitioner attached the surface EMG electrodes to the properly measured skin locations coinciding with the muscles of the lower extremity under investigation. To ensure maximizing signal strength, the lead practitioner prepped the electrode sites by shaving, lightly abrading, and wiping with alcohol. The electrodes were secured in place with pre-wrap and athletic tape. This was followed by the participant performing a predetermined set of agility and running skills at a low-intensity pace for 5 minutes. This provided the opportunity for the participant to warm-up while familiarizing themselves with the agility course. Lastly, the participant performed a maximum voluntary isometric contraction (MVIC) protocol targeting the VM, BF, MG, and TA muscles. (Hales & Johnson, 2019).
Exercise protocol
The muscle fatigue intervention consisted of performing 4 consecutive agility course trials with a 60-second rest between each trial. The participants performed the test protocol on each field, 7 days apart. Each athlete was tested at the same time of day to minimize any effect on performance due to potential temperature and humidity differences between test days. (Hales & Johnson, 2019). Each testing protocol included a 30 meter sprint trial performed prior to the fatigue intervention and another 30 meter sprint trial conducted immediately following the last agility based trial.
Instrumentation
Four digital (120 fps@1080 p) cameras (Hero4, GoPro, San Mateo, CA, USA) were positioned in the same location for each data collection session to record the agility course and two cameras were positioned perpendicular to the start and finish lines to record the 30-m sprint trials. The cameras were synchronized using a GoPro Wi-Fi remote and video editing software (Adobe Premiere Pro, San Jose, CA, USA) was used to ensure timing accuracy since we initiated and completed all data collection events when the first body part crossed the start and finish lines. Myoelectric activity of the VM, BF, MG, and TA was recorded using bipolar surface electrodes with a direct transmission system (1000 Hz) incorporated with myo Research software (Noraxon USA, Inc., Scottsdale, AZ, USA). EMG data was filtered using a fourth-order Butterworth band pass (high pass with 20 Hz cutoff and low pass with 500 Hz cutoff) processed with 20 ms root mean square smoothing window algorithms and MATLAB R2017a (MathWorks, Inc., Natick, MA, USA) was used for EMG signal analysis and processing (Hales & Johnson, 2019). A protocol specified by the American Standards for Testing and Materials (ASTM) was used to analyze mechanical properties of the athletic fields. The Advanced Artificial Athlete recorded force reduction (FR; in percentage), standard vertical deformation (stV; in millimeters), and energy restitution (ER; in percentage) (Labosport France, Le Mans, France). The field test procedure (ASTM-1936) was performed in each quadrant of the test areas prior to data collection. The instrument was calibrated according to the manufacturer's guidelines. Baseline measurements were conducted by dropping the spring dampened system onto concrete three times consecutively.
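The EMG conditioning described above (a fourth-order Butterworth band-pass filter followed by a 20 ms RMS smoothing window) can be sketched in a few lines of Python with SciPy. This is an illustrative reimplementation, not the Noraxon/MATLAB pipeline used in the study; the 450 Hz upper cutoff is an assumption, since a 500 Hz cutoff coincides with the Nyquist frequency of a 1000 Hz recording and cannot be realized directly by the digital filter.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate of the EMG system, Hz

def process_emg(raw_emg: np.ndarray, fs: float = FS) -> np.ndarray:
    """Band-pass filter raw EMG, then smooth with a moving RMS window."""
    # 4th-order Butterworth band-pass; upper cutoff kept below Nyquist (assumption).
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw_emg)

    # 20 ms root-mean-square smoothing window.
    win = int(0.020 * fs)
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(filtered ** 2, kernel, mode="same"))

# Example with synthetic data standing in for one sprint trial.
t = np.arange(0, 1.0, 1.0 / FS)
raw = np.sin(2 * np.pi * 80 * t) + 0.1 * np.random.randn(t.size)
envelope = process_emg(raw)
print(f"peak envelope amplitude: {envelope.max():.3f}")
```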
Statistical Analysis
Group means and SDs were used to determine physiological and temporal differences influenced by sport field MPs. An alpha level of .05 (n = 13) was adopted to minimize type I statistical errors. A three-way repeated-measures analysis of variance (ANOVA) examined muscle activation pattern (VM, BF, MG, and TA) differences while sprinting on the different test fields. Data sets were analyzed using Shapiro-Wilk's test for normality. The sample design consisted of one between-subject independent variable (field type) and two within-subject independent variables (muscle group × sprint trial). Between-subject parameter estimates compared the treatment (field type) effect on muscle activation patterns. For the sample design, within-subjects effects (trial), between-subjects effects (field), and between-subjects interaction effects (field type × trial) were examined. A Mann-Whitney U test determined differences between males (N = 7) and females (N = 6) for the dependent variables. Follow-up pairwise comparison tests were performed where appropriate. Data were analyzed with the statistics software SPSS v 27.0.
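For the sex comparison, the Mann-Whitney U computation itself is straightforward to reproduce; the sketch below uses SciPy with hypothetical 30 m sprint times in place of the study's measurements.

```python
from scipy.stats import mannwhitneyu

# Hypothetical 30 m sprint times (s), for illustration only.
male_times = [4.21, 4.35, 4.18, 4.40, 4.29, 4.33, 4.25]
female_times = [4.78, 4.91, 4.85, 4.70, 4.88, 4.95]

# Two-sided Mann-Whitney U test between the two independent groups.
u_stat, p_value = mannwhitneyu(male_times, female_times, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```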
Environmental conditions
The test fields are referenced as either Field X or Field Y distinguished by their MP. Ambient temperature for Field X data collection was 22.3°C (3.1°C) and 23.6°C (2.5°C) for Field Y. Relative humidity for Field X testing was 58.3% (3.3%) and 61.4% (3.9%) for Field Y during data collection. No significant difference was determined for ambient temperature or relative humidity.
Surface mechanical properties
The AAA device was checked for accuracy by following the manufacturer's calibration protocol prior to data collection. The intraclass correlation coefficient (ICC) results for the concrete drop-test determined instrument reliability was excellent
Surface Properties Effect
A significant three-way interaction between field properties, sprint trials and muscle groups was calculated, F(3,36) = 10.82, p = .006, η ρ 2 = .474. Follow-up analyses identified statistical significance in two-way interactions and main effects. A two-way repeated-measures ANOVA revealed an interaction effect between field properties and sprint trial, F(1,12) = 26.57, p = .001, η ρ 2 = .689, for mean muscle activity from the group of muscles under investigation. A two-way repeated-measures ANOVA showed an interaction effect between muscle groups and field properties F(1,12) = 8.78, p = .012, η ρ 2 = .422 and also between muscle groups and sprint trials F(1,12) = 7.29, p = .019, η ρ 2 = .378. Further analysis showed significant main effect difference between muscle groups F(3,36) = 45.39, p = .001, η ρ 2 = .791. Figure 1 shows muscle activity differences for the participants performing a sprint on fields with different MP prior to the muscle fatiguing intervention.
Muscle Fatigue Factor
MVICs were recorded prior to each field data collection session. The maximum muscle contractions were compared to examine test-retest reliability from session to session. The ICC for MVIC electromyogram recordings were good (0.71 -0.88) to excellent (> 0.89) and SEMs were moderate to good (< 10%) for session to session comparisons. Table 2 presents peak means and SDs percentages of muscle activity relative to the maximum recorded values for the sprint test. Comparing peak muscle group activity means between pre-intervention sprint trials and post-intervention sprint trials across the different fields shows a significant difference F(3, 36) = 3.54, p = .016, η ρ 2 = .053. Further analyses were conducted to identify specific differences between individual muscle groups. A post hoc analysis for post-fatigue sprinting on field X depicts a significant difference for the BF t(12) = 3.56 , p = .004 and TA t(12) = 2.39, p = .034, peak muscle activation. A post hoc analysis for post-fatigue sprinting on field Y shows significant differences for MG t(12) = 9.13, p < .001 peak muscle activation.
Sprint Running Times
A repeated-measures ANOVA determined an interaction effect between field properties and sprint trial, F(1,12) = 7.08, p = .02, η ρ 2 = .346. The mean pre-fatigue test 30-m sprint time was slower on Field X than Field Y, t(12) = 19.634, p = .01, d = 1.44. The follow-up analysis, t(12) = 13.23, p = .01, d = 1.76, also revealed the 30-m mean post-fatigue test time on Field X was slower than the sprint time on Field Y. The pre-fatigue and post-fatigue test times were analyzed independently to identify any differences associated with the time-dependent field properties in Table 3. Pre-fatigue analysis on Field X and Field Y showed a difference between sexes (U = 28.00, p < .001) and (U = 4.50, p = .014), respectively. Male and female sprint times following the fatigue intervention were also different for both fields, (Field X, U = 1.50, p < .001; and Field Y, U = 2.00, p < .001).
DISCUSSION
The study identified a viable means for investigating and quantifying the athlete-surface interaction for a group of college athletes while sprinting short distances before and after implementing a series of high-intensity sport specific activities. The results provide supportive evidence for our hypothesis stating that time-dependent properties associated with a sports field elicit different myoelectric activation patterns for males and females under both pre-fatigue and post-fatigue conditions. Sprint times were also used as an additional performance indicator. For our analysis, the test fields MPs were determined to be significantly different, and both male and female athletes ran 10% and 11% faster, respectively, on the field which yielded greater ER-%. These pre-fatigue sprint times suggest the mechanical properties associated with a sports field can influence running speeds. The participants were not intentionally fatigued prior to the initial sprint test and a standardized warm-up and flexibility protocol was followed, we feel confident the pre-fatigue sprint condition was a valid means for determining the influence of a sport field's MP on performance. Previous studies have used sprint times to determine performance differences between field types. A study involving young soccer players used sprints as a performance indicator and found the children (12 years) were significantly faster on AT compared to NG in both dribbling and non-dribbling trials, while the adolescents (14 years) were only significantly faster on AT without the inclusion of dribbling (Kanaras et al., 2014). Another study analyzing a group of rugby players performing sprints on NG and AT reported significantly different sprint times on the different surfaces (Choi et al., 2015). In contrast, a study investigating a group of American football players performing a 40 yard dash on NG and AT found no significant difference in sprint times on different fields (Gains et al., 2010). These early studies which assessed the influence of sport fields on sprinting were categorized solely by surface type so comparing these findings to studies which measure playing surface MPs must be done with considerable caution. More recent studies in this area investigating physiological responses involving sprint running have done so on surfaces with known MPs. One study which analyzed the testing fields using sprint times as an indicator for performance found mean sprint times were influenced by ER-% (Sanchez-Sanchez et al., 2014a). Another study analyzed running performance on AT and NG surfaces where the fields demonstrated similar mechanical behavior. The investigators concluded the fields did not differ enough to cause different physiological and neuromuscular responses. Under this circumstance playing on AT should cause similar neuromuscular responses to NG (Lopez-Fernandez et al., 2018). Running which involves straight-head or multi-directional accelerations is an easily implemented performance indicator with practical application and should be included in future athlete-surface research studies. However, combining sprint and agility times with other instrumentation capable of examining biomechanical and physiological responses could provide a more complete understanding of the relationship between surface MPs, performance enhancement and surface-related soft tissue injury.
The study included two important components making it unique compared to other investigations in this area. These include the objective analysis of the sports fields and controlling footwear. Firstly, the test fields were analyzed using ASTM Standard 3189 test protocol which determined significant differences between fields based on the selected variables: force reduction (FR-%), standard vertical deformation (stVD-mm), and energy restitution (ER-%).
The Influence of Time-dependent Surface Properties on Sprint Running Performance between Male and Female Athletes 47
Field-Y demonstrated 32% greater ER-% than Field-X, and both stVD-mm and FR-% were also significantly greater for Field-Y, indicating a more resilient surface. Studies analyzing surface stiffness and the role it plays in human energy expenditure and energy returned back to the performer have reported differences based on surface stiffness (Kerdok et al., 2002; Nigg & Yeadon, 1987). Another study showed that excessive field rigidity actually elicits an increase in running times (Sanchez-Sanchez et al., 2014b). A group investigating physiological responses for athletes performing sprints and sport specific activities on surfaces possessing different MPs was able to report correlative arguments regarding the influence of surface properties on human performance (Lopez-Fernandez et al., 2018). Studies of this nature provide evidence that a surface possesses properties which have the potential to either enhance or inhibit human performance. More importantly, these studies demonstrate the importance of analyzing the interaction between the athlete and the playing surface. Secondly, controlling footwear is another consideration when evaluating the athlete-surface interface. Several studies support the notion that the interaction between footwear and the playing surface can influence human outcome measures for field based activities which entail large accelerations or abrupt changes in acceleration (Willwacher et al., 2014; Wannop et al., 2009; Heidt et al., 1996; Andreasson et al., 1986). The footwear-surface interface is important to consider when assessing performance on sports fields because of the role friction plays during high speed movements (Schrier et al., 2014; Severn et al., 2011; Potthast et al., 2010). Traction during running can enhance speed development, whereas slippage will inhibit performance. The findings from one study suggest the average college football player would attain approximately the same straight-ahead sprint speed on the new generation AT as achieved on NG, but demonstrated that players' change-of-direction speed was faster on AT (Gains et al., 2010). These differences could be attributed to the variety of footwear worn by the participants since a standardized shoe was not used. A similar study investigating surface-footwear traction performance reported significant variability in slalom run times and attributed those differences to the various stud types or stud geometrical shapes (Sterzing et al., 2009). Our decision to control footwear was influenced by previous research which demonstrated inconsistent outcomes when footwear was not controlled. The athletic shoe we chose offered separate male and female versions with multi-surface compatibility. One of the most important factors to consider when analyzing the athlete-surface relationship is identifying an appropriate mechanism for quantifying performance. Our selection of instrumentation was partly based on the fact that previous studies support the use of sEMG while incorporating a similar methodology to the one used in our study (Hewett et al., 2005; Jonhagen et al., 1996). Our findings revealed that male and female athletes demonstrated significantly different sEMG patterns during the sprint trials. During the pre-fatigue sprint trial, the BF showed the greatest difference between sexes, by 79%, followed by TA at 60% on Field-X, while VM and BF showed significant differences on Field-Y, by 73% and 50%, respectively.
The male athletes demonstrated the greatest muscle electrical activity for each of the selected muscle groups across test fields during the initial sprint trial. There was a high correlation between the sEMG recordings during the sprint and the MVICs. Even though it is necessary to be cautious when examining surface electromyogram data, it appears to be valid for analyzing the surface effect on neuromuscular activity during running activities (Fauth et al., 2010). We followed a stringent preparation procedure which produced consistent electrode site locations based on anthropometric measurements. Additional care was taken in securing the electrodes to the skin so the high-speed movements would not cause electrode detachment, and to minimize artifact due to electrode movement. A limitation of our sEMG analysis is that, regardless of the care taken to ensure consistency, the differences in muscle electrical activity reported could be due to co-activation of adjacent muscles or to a person's unique fiber-type arrangement instead of the influence of the playing surface. Besides sEMG, other types of instrumentation have been used to quantify the athlete-surface interrelationship. One such study utilizing tensiomyography (TMG) quantified the muscular response in the lower extremity for a group of amateur soccer players after completing a sport specific running protocol and found no significant difference whether performing on artificial turf or natural grass (Lopez-Fernandez et al., 2018). Several other studies that analyzed blood lactate changes following a bout of sport specific activities (Stone et al., 2016; Hughes et al., 2013) also reported no differences in performance between field types. Another study analyzing blood biomarkers found significant differences in running performance on different surfaces (Ammar et al., 2018). Blood specimen studies provide useful insight into blood-substrate differences after sprinting on AT and NG; unfortunately, the influence of the MPs of the testing fields could not be correlated with the blood-substrate concentration since the test fields were not examined. Other studies using sprint times as a performance indicator on different surfaces (Hales & Johnson, 2019; Choi et al., 2015; Kanaras et al., 2014; Chan et al., 2014; Gains et al., 2010) reported that MPs did influence sprint speed. The increasing number of research studies in this area and the varying test protocols have produced information on the athlete-surface relationship which is contradictory in many instances. Our research study included a sprint test on the different fields following a series of high-intensity agility drills to analyze lower extremity myoelectric activity while in a state of neuromuscular fatigue. A study using a 30 meter sprint as a performance indicator found a 16% difference in mean sprint times for a group of soccer players after performing a muscle fatiguing activity (Sanchez-Sanchez et al., 2014b). We found a similar sprint time reduction in our study: sprint performance for the male group was 11% slower following the agility course, whereas females showed a 9% decrease. A further aim was to determine if the MPs of the selected sports fields could influence neuromuscular fatigue. Following the initial sprint trial, athletes completed a sport specific activity course at maximum effort to induce a state of muscle fatigue. The athletes' HR and VO2 were recorded using a portable gas exchange system while performing the agility course protocol.
Within 60 seconds of completing four bouts of the course, athletes performed a sprint on the test field. We were confident the athletes were putting forth maximum effort during the sport specific drills since HR mean was 80% (+4.1) and peak HR was 90% (+5.2) of mean HRmax, mean VO2 was 82% (+5.6) of the measured maximum oxygen consumption mean, and mean RER was > 1.1. These physiological parameters and the sEMG frequency domains were used to conclude neuromuscular fatigue was achieved by the group of participant. Our analysis revealed greater difference in sprint times for field-Y which produced less ER-% and demonstrated greater FR-%. These findings suggest the athletes displayed more neuromuscular fatigue after performing the sport specific drills on the softer surface. This supports the idea that the MPs of a surface can influence the rate and magnitude of mechanical work placed on the musculoskeletal system thus being a precursor to muscle fatigue. During the post-fatigue sprint trial, the results show females produced 51% less BF muscle activity following a bout of high-intensity of sport specific sprint and agility drills. This particular finding has a great deal of practical significance since a role of the hamstring muscle group acts to stabilize the knee joint during running activities. If the hamstring muscle is fatigued and unable to provide the needed stability it could result in a greater potential for knee injuries. Research indicates females experience greater incidences of knee injuries than males when participating in field sports (Voskanian, 2013;Ireland, 2002;Malinzak et al., 2001;Arendt et al., 1999). Previous research has shown fatigability differences between male and female participants (Hicks et al., 2001;Hunter, 2009;Yoon et al., 2007) suggesting males are more susceptible to a more dramatic reduction in performance when fatigued. The sprint speed decrease was slightly less for both male and female participants on the field exhibiting greater ER-%. The sprint time decreases are statistically different as well as practicality different considering the sprint distance was 30 meters. The sprint time difference between pre-fatigue and post-fatigue conditions were similar for both groups, however, the electromyogram data showed quite different muscle electrical activity patterns between sexes. This suggests the male and female participants in our study adapted to the different surfaces using different neuromuscular strategies. These findings are relevant because the probability of a musculoskeletal injury occurring is more likely to occur when the neuromuscular system is fatigued (Enoka & Duchateau, 2008;Malinzak et al., 2001;Kallenberg et al., 2007;Chappell et al., 2005). Our data set suggests the combined effect of a field's MPs and muscle fatigue effects sprint performance differently than the pre-fatigue situation.
CONCLUSION
The selected sport fields, which differed based on their mechanical properties, influenced muscle activation patterns associated with sprint running. The myoelectric activity was significantly different between the male and female performers while sprinting under both pre and post neuromuscular fatigue conditions. Sport fields should not be simply classified by their composition or type. Coaches need to understand MPs can differ between heterogeneous sports fields as well as many homogeneous sport field systems. This suggest competition preparation, or team practices, should be conducted on a playing surface with the same MPs represented by the competition or game field.
|
v3-fos-license
|
2023-12-08T16:04:48.222Z
|
2023-07-16T00:00:00.000
|
266064322
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://ejournal.uin-suska.ac.id/index.php/sitekin/article/download/23152/9369",
"pdf_hash": "67cbd9cca34518a69e1091a199c6ca9ec380d4a2",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44215",
"s2fieldsofstudy": [
"Business",
"Engineering"
],
"sha1": "71d04810b6bb52c68c82ee5dd792d49af28da5d5",
"year": 2023
}
|
pes2o/s2orc
|
Optimization of Trade Product Inventory Using Activity Based Costing Analysis
Merchandise inventory is one of the assets that requires significant capital investment. If inventory is not managed properly, it can result in high storage costs, the risk of inventory shortages, or even expired products. Therefore, it is important to have the right strategy for managing inventory. This study aims to classify the inventory items owned by UD. XYZ in order to minimize inventory costs by focusing inventory procurement on high-value priority items. The method used in this study is Activity Based Costing (ABC) analysis. Based on the ABC classification, 7 types of goods fall into category A (priority): Aoka, Roma sandwich, My jelly, Slai olai, Trick, Choco pie, and Chiki balls. These 7 types of goods need special attention in terms of inventory management because they absorb the most investment funds of the total goods owned. By knowing these items, companies can focus on more precise inventory management, including procurement, storage, and inventory control. This can help UD. XYZ reduce unnecessary storage costs and ensure optimal availability of goods. The results of this study can contribute to the understanding of the application of the ABC method in the context of merchandise inventory management.
Introduction
Good inventory management is important for a company so that inventory investment remains balanced and service to consumer satisfaction is maintained [1] [2]. Poorly managed merchandise inventory can cause financial losses for the company. If too much merchandise is stored, the company will face high storage costs. On the other hand, if inventory is too low, the company can experience stock shortages that affect its ability to meet customer demand [3]-[8].
UD. XYZ is a trading business engaged in the sale of various snacks, located in the city of Nganjuk, East Java. The company was founded in 2016, starting as a small-scale snack shop with only a few types of snacks. Nganjuk City is an industrial area and the largest producer of shallots in East Java [9]. This is in line with the rapid mobility of the population, which greatly affects business development, especially the trade in various snacks in the Nganjuk city area. The positive impact of industrial development in Nganjuk was also felt by the owner of this snack shop, which later became the XYZ trading business. The company now sells a variety of snacks of various brands wholesale. To date, this business has become the only official distributor of Aoka brand toast and one of the largest snack suppliers in the city of Nganjuk.
This trading business has a fairly large turnover, ranging from Rp 200,000,000 to Rp 400,000,000 per month. However, according to the business owner, the accumulated inventory costs are also quite large, reaching 15-20% of total monthly revenue. This happens because the business has a fluctuating number of orders, with inconsistent order frequencies that tend to increase before holidays. The business faces difficulties in understanding the value and priority of each trade item it owns. Without a clear understanding of the value and sales volume of each merchandise item, inventory management is difficult to do effectively [7], [8], [10]-[12]. Companies need to prioritize products that have high sales value and a high risk of damage. The aim is to focus inventory control on high-value types of inventory rather than low-value ones [13]-[15], [15]-[19]. No research or action has previously been taken by the business owner or other parties to solve this problem.
Some previous studies have shown that merchandise inventory can be classified using the Activity Based Costing (ABC) method [20] [21]. The ABC method can optimize inventory costs by improving inventory management, identifying the items that have the greatest impact on costs, and finding improvement opportunities that can reduce storage costs and increase overall profits [22] [23]. Based on these findings, the purpose of this study is to classify the inventory items owned by UD. XYZ in order to minimize inventory costs by focusing inventory procurement on high-value priority items. The results of this study are expected to help UD. XYZ optimize inventory management, allocate resources more effectively, make better decisions, and identify improvement and cost-saving opportunities. All of this will help the company achieve higher efficiency, increase profits, and strengthen its competitive position in the market.
Research Design
This research is quantitative descriptive research using the Activity Based Costing analysis method. The object of this study is UD. XYZ, which is engaged in buying and selling snacks in Nganjuk, East Java. This study uses secondary data containing the types of goods and the buying and selling data for January-December 2022. The data were collected through interviews with the owner of UD. XYZ and through direct observation at the UD. XYZ location.
Stages of Data Analysis
The stages in sorting groups of goods using ABC analysis are based on the cumulative percentage of fund absorption and the percentage of the types of goods managed. The steps are as follows:
- Calculate the fund absorption for each item: Mi = Di x Pi
- Order the items by fund absorption from largest to smallest and determine the cumulative percentage of fund absorption.
- Classify the items into classes A, B and C, accounting for approximately 80%, 15% and 5% of fund absorption respectively, which can be drawn in the form of a Pareto curve.
Possible policies based on ABC analysis include the following:
- The purchasing resources spent on supplier development should be much higher for class A items than for class C items.
- Class A items, unlike class B and C items, need stricter physical inventory controls. They can be placed in a more secure location, and the accuracy of inventory records for class A items should be verified more often.
- Forecasts for class A items need to be validated more carefully than forecasts for class B and C items.
Results and Discussion
The data cover all 60 types of goods sold by UD. Anugerah Snack. The data recorded for each item are the item name, sales volume, and selling price, from which total sales for one year are obtained. Fund absorption is calculated by multiplying the sales volume (Di) by the price per unit (Pi), using the following formula.
Mi = Di x Pi (4)

As an example, the fund absorption value of the Aoka item is calculated as 25,700 x Rp 105,000 = Rp 2,698,500,000. Detailed calculations for all types of goods can be seen in the following table.
Percentage of fund absorption = (Mi / ∑Mi) x 100% = 63.04%

The Aoka item thus accounts for 63.04% of total fund absorption. The calculated percentages are first sorted from largest to smallest. After sorting, the cumulative percentage of fund absorption is calculated to determine which items absorb the most to the least funds. Once the cumulative percentage values are known, the goods can be classified according to the ABC rules. Based on the Pareto principle, ABC analysis distinguishes 3 categories. Class A goods are goods that provide high value [24]: although group A is represented by only about 20% of the total inventory, it accounts for 80% of total fund absorption. Class B goods provide moderate value; this group is represented by 30% of the total inventory and accounts for 15% of total fund absorption. Class C goods provide low value; this group is represented by 50% of the total inventory and accounts for 5% of total fund absorption [25] [26].
In making this classification, the value used as a benchmark is the cumulative percentage. If the cumulative percentage is 0%-80%, the goods fall into class A; if it is between 81%-95%, the goods are categorized as class B; and if it ranges from 96%-100%, the goods fall into class C. Based on these provisions, the results of the ABC classification of inventory goods at UD. XYZ can be seen in the following table. The goods included in class A consist of 7 types, namely Aoka, Roma sandwich, My jelly, Slai olai, Trick, Choco pie, and Chiki balls. These 7 types of goods need special attention in terms of inventory management because they absorb the most investment funds of the total goods owned. The results of this ABC classification can be important information for the owner of UD. Anugerah Snack to prioritize inventory management for class A products, so as not to incur large cost burdens, tie up large idle funds, or increase storage costs in category A.
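The classification procedure described above can be expressed as a short script. The sketch below is illustrative only and is not part of the original study: the thresholds follow the cumulative-percentage rules given above (0-80% class A, 81-95% class B, 96-100% class C), the Aoka volume and price come from the worked example, and the other items with their volumes and prices are hypothetical placeholders.

```python
# Minimal sketch of the ABC classification described above.
# Thresholds follow the paper: cumulative share 0-80% -> A, 81-95% -> B, 96-100% -> C.
# Only the Aoka figures come from the worked example; other entries are hypothetical.

def classify_abc(items):
    """items: list of dicts with 'name', 'volume' (Di) and 'unit_price' (Pi)."""
    # Step 1: fund absorption Mi = Di x Pi for every item
    for item in items:
        item["Mi"] = item["volume"] * item["unit_price"]

    total = sum(item["Mi"] for item in items)  # ∑Mi

    # Step 2: sort by fund absorption, largest first
    items.sort(key=lambda it: it["Mi"], reverse=True)

    # Step 3: cumulative percentage of fund absorption and class assignment
    cumulative = 0.0
    for item in items:
        item["share_pct"] = 100.0 * item["Mi"] / total
        cumulative += item["share_pct"]
        item["cumulative_pct"] = cumulative
        if cumulative <= 80.0:
            item["class"] = "A"
        elif cumulative <= 95.0:
            item["class"] = "B"
        else:
            item["class"] = "C"
    return items


if __name__ == "__main__":
    inventory = [
        {"name": "Aoka", "volume": 25_700, "unit_price": 105_000},          # from the worked example
        {"name": "Roma sandwich", "volume": 12_000, "unit_price": 60_000},  # hypothetical
        {"name": "My jelly", "volume": 10_000, "unit_price": 45_000},       # hypothetical
        {"name": "Slai olai", "volume": 8_000, "unit_price": 40_000},       # hypothetical
        {"name": "Chiki balls", "volume": 3_000, "unit_price": 30_000},     # hypothetical
    ]
    for it in classify_abc(inventory):
        print(f'{it["name"]}: Mi=Rp {it["Mi"]:,}, '
              f'{it["share_pct"]:.2f}% (cum. {it["cumulative_pct"]:.2f}%) -> class {it["class"]}')
```

With these placeholder figures, Aoka alone accounts for roughly 63% of fund absorption and lands in class A, while the lowest-value items fall into class C, mirroring the pattern reported in the tables above.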
To represent the results of the ABC analysis of the inventory, a Pareto curve can be drawn containing the 3 groups of goods based on annual fund absorption, as shown in the following figure. From the Pareto curve, the division of classes can be seen clearly. The goods included in category A must be reviewed more strictly than the goods in categories B and C in order to improve program efficiency and reduce inventory costs. In addition, priority in procurement activities is focused on high-value goods with high consumption quantities, which refers to class A goods.
Conclusion
By using the ABC (Activity Based Costing) method, UD. XYZ can give higher focus to 7 types of goods, namely Aoka, Roma sandwich, My jelly, Slai olai, Trick, Choco pie, and Chiki balls. These 7 types of goods need special attention in terms of inventory management because they absorb the most investment funds of the total goods owned. The results of this ABC classification provide important information for the owner of UD. Anugerah Snack to prioritize inventory management for class A products, so as not to incur large cost burdens, tie up large idle funds, or increase storage costs in category A. By applying the ABC method, UD. XYZ can improve operational efficiency, reduce storage costs, improve responsiveness to customer demand, and optimize profits.
∑Mi = total absorption value of funds. Order the inventory items by annual value in rupiah from largest to smallest, then determine the cumulative percentage: (annual volume in monetary value per unit / ∑ annual volume in monetary value per unit) x 100%.
Table 1 .
Results of the Annual Fund Absorption Calculation. After obtaining the fund absorption for each type of goods and its annual total value, the next step is to calculate the percentage of annual fund absorption, using the percentage formula given above.
Table 2 .
ABC Analysis Results for all inventory items. From the table above, it can be seen that of the 60 types of goods classified, only 7 fall into class A, the category of goods that absorb the most investment funds, at 78.78%. The total fund absorption in class A is Rp 3,372,461,500. For more detail, the classification and total distribution of funds can be seen in the following table.
|
v3-fos-license
|
2017-06-24T17:12:06.757Z
|
2007-06-12T00:00:00.000
|
10486671
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://retrovirology.biomedcentral.com/track/pdf/10.1186/1742-4690-4-40",
"pdf_hash": "1e9be6e6a68ffc6c133d83c2af6978aac716319c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44216",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "1e9be6e6a68ffc6c133d83c2af6978aac716319c",
"year": 2007
}
|
pes2o/s2orc
|
The control of viral infection by tripartite motif proteins and cyclophilin A
The control of retroviral infection by antiviral factors referred to as restriction factors has become an exciting area in infectious disease research. TRIM5α has emerged as an important restriction factor impacting on retroviral replication including HIV-1 replication in primates. TRIM5α has a tripartite motif comprising RING, B-Box and coiled coil domains. The antiviral α splice variant additionally encodes a B30.2 domain which is recruited to incoming viral cores and determines antiviral specificity. TRIM5 is ubiquitinylated and rapidly turned over by the proteasome in a RING dependent way. Protecting restricted virus from degradation, by inhibiting the proteasome, rescues DNA synthesis, but not infectivity, indicating that restriction of infectivity by TRIM5α does not depend on the proteasome but the early block to DNA synthesis is likely to be mediated by rapid degradation of the restricted cores. The peptidyl prolyl isomerase enzyme cyclophilin A isomerises a peptide bond on the surface of the HIV-1 capsid and impacts on sensitivity to restriction by TRIM5α from Old World monkeys. This suggests that TRIM5α from Old World monkeys might have a preference for a particular capsid isomer and suggests a role for cyclophilin A in innate immunity in general. Whether there are more human antiviral TRIMs remains uncertain although the evidence for TRIM19's (PML) antiviral properties continues to grow. A TRIM5-like molecule with broad antiviral activity in cattle suggests that TRIM mediated innate immunity might be common in mammals. Certainly the continued study of restriction of viral infectivity by antiviral host factors will remain of interest to a broad audience and impact on a variety of areas including development of animal models for infection, development of viral vectors for gene therapy and the search for novel antiviral drug targets.
Background
The control of viral infection by intracellular antiviral proteins referred to as restriction factors has become an important and challenging focus of infectious disease research. A clearer understanding of the role of restriction factors in immunity and the control of retroviral replication promises to reveal details of host-virus relationships, allow improvement of animal models of infection, identify targets for antiviral therapies, and further facilitate the use of viral vectors for clinical and investigative gene delivery. The tripartite motif protein TRIM5α has recently emerged as an important restriction factor in mammals blocking infection by retroviruses in a species-specific way. Early evidence for TRIM5α's antiviral activity included the species-specific infectivity of retroviral vectors, even when specific envelope/receptor requirements were obviated by the use of the VSV-G envelope. Notable examples include the poor infectivity of certain murine leukemia viruses (MLV) on cells from humans and primates and the poor infectivity of HIV-1 on cells from Old World monkeys [1][2][3]. The notion that a dominant antiviral factor was responsible was suggested by the demonstration that the block to infection could be saturated, or abrogated, by high doses of retroviral cores [4][5][6]. The putative human antiviral factor was named Ref1 and the simian factor Lv1 [1,6]. TRIM5α was identified in 2004 by screening rhesus cDNAs for those with antiviral activity against HIV-1 [7]. Shortly after, several groups demonstrated that Ref1 and Lv1 were encoded by species-specific variants of TRIM5α [8][9][10][11]. TRIM5α therefore represents a hitherto undescribed arm of the innate immune system, blocking infection by an incompletely characterised mechanism. Its expression is induced by interferon via an IRF3 site in the TRIM5 promoter, linking it to the classical innate immune system [12].
The tripartite motif

TRIM5 has a tripartite motif, also known as an RBCC domain, comprising a RING domain, a B-box 2 domain and a coiled coil [13,14]. The RING is a zinc-binding domain, typically involved in specific protein-protein interactions. Many RING domains have E3 ubiquitin ligase activity and TRIM5 can mediate RING-dependent auto-ubiquitinylation in vitro [15]. B-boxes are of 2 types, either B-box1 or B-box2, and TRIM5 encodes a B-box2. B-boxes have a zinc-binding motif and are putatively involved in protein-protein interactions. The two types of B-box have distinct primary sequences but similar tertiary structures and are structurally similar to the RING domain. This suggests that they may have evolved from a common ancestral fold, and perhaps have a similar function, such as ubiquitin ligation [16,17]. It is also possible that B-boxes contribute to ligation specificity, i.e. have E4 activity [16,17]. The coiled coil is involved in homo- and hetero-multimerisation of TRIM proteins [14,18]. TRIM5 exists as a trimer, with the coiled coil facilitating homo- and hetero-multimerisation with related TRIM proteins [18][19][20].
TRIM5 RNA is multiply spliced, generating a family of isoforms, each shorter from the C terminus. The longest, TRIM5α, encodes a C terminal B30.2 domain that interacts directly with viral capsid and determines antiviral specificity [18,21,22]. The shorter isoforms, TRIM5γ and TRIM5δ, do not have B30.2 domains and act as dominant negatives to TRIM5α and rescue restricted infectivity when over-expressed [7,23]. It is assumed that the shorter forms form heteromultimers via the coiled coil and titrate the viral binding B30.2 domains. It is therefore possible that TRIM5's antiviral activity is regulated by splicing.
The B30.2 domain comprises a combination of a PRY motif followed by a SPRY motif [24]. Whilst SPRY domains are evolutionarily ancient, B30.2 domains, found in butyrophilin and TRIM proteins, appeared more recently. B30.2 domains are unlikely to have a single precise function; rather, they are involved in protein-protein interactions such as substrate recognition. A series of TRIM5 mutagenesis studies demonstrated that the TRIM5 B30.2 domain determines antiviral specificity and defined the specific regions of the B30.2 responsible [18,21,22,25,26]. In vitro capsid/TRIM5 binding assays have been developed and these demonstrate that, at least in the case of wild type TRIM5α proteins, binding correlates well with the ability to restrict infection [27,28].
The recent solution of the structure of several B30.2 domains allows us to interpret the conservation and variation between TRIM5 B30.2 domains [29-31]. The structures indicate that the B30.2 core is formed from a distorted 2-layer beta sandwich with the beta strands in an anti-parallel arrangement. Extending from the core are a series of loops and it is these surface loop structures that vary between the TRIM5 sequences from each primate and between different B30.2 domains of TRIM5 homologues. The loops form 3 or 4 variable regions, all of which appear to impact on antiviral specificity [32]. The TRIM21 structure in complex with its ligand, IgG Fc indicates that there are 2 binding surfaces, one in the PRY (V1) and 1 in the SPRY (V2-V3) and this is likely to be true for TRIM5α.
TRIM5 and the Red Queen
B30.2 mutagenesis studies, as well as sequence analysis of TRIM5α from related primates, suggested that the differences defining antiviral specificity are concentrated in patches in the B30.2 domain [33]. The patches, which correspond to the surface loops, have been under very strong positive selection as evidenced by a high dN:dS ratio. dN:dS ratios have been calculated by comparing TRIM5 sequences from primates and comparing the number of differences that lead to a change in the protein sequence (non-synonymous, dN) to the number of differences that do not (synonymous, dS). A high ratio indicates positive selection and is evidence of the host-pathogen arms race known as the Red Queen hypothesis [34]. This phenomenon, named after Lewis Carroll's Red Queen who claimed 'It takes all the running you can do to keep in the same place', refers to the selection-driven genetic change that occurs in both host and pathogen as each alternately gains the advantage. Whether selection pressure on TRIM5 has been from pathogenic retroviruses or from endogenous retroviruses and retrotransposons is unclear. The relative youth of lentiviruses, as compared to other retroviruses and endogenous elements, is thought to preclude them from impacting on TRIM5 selection, although the discovery of an endogenous lentivirus in rabbits [35] has recently extended their age from less than 1 million years to greater than 7 million years, and it certainly seems possible that this age will extend further as we better understand lentiviral history.
The other side of the Red Queen's arms race is the change in the retroviral capsids to escape restriction by TRIM5. TRIM5 molecules can generally restrict widely divergent retroviruses including gamma retroviruses as well as lentiviruses. For example, Agm and bovine TRIMs restrict MLV-N, HIV-1, HIV-2 and SIVmac [36][37][38]. It is now clear that retroviral capsid structures are conserved and capsid hexamers are found in both lentiviruses and gamma retroviruses [39,40], so we imagine that the TRIMs recognise a conserved shape. Paradoxically, point mutants can often escape strong restriction. MLV-N CA R110E escapes human, simian and bovine TRIMs, SIVmac CA QQ89-90LPA escapes rhesus and squirrel monkey TRIM5s, and HIV-1 G89V escapes owl monkey TRIMCyp [1,38][41][42][43][44]. It therefore remains unclear how TRIM5 can be effective if a small number of changes in CA can rescue infectivity, especially given that retroviral capsid sequences appear quite plastic.
The antiviral mechanism
We are beginning to understand TRIM5α's antiviral mechanism. TRIM5α is trimeric [19,45] and interacts with hexameric capsids [46]. TRIM5α is ubiquitinylated within cells and is rapidly turned over by the proteasome in a RING domain-dependent way, suggesting that autoubiquitinylation might drive this process [15,47]. We imagine that the rapid turnover of TRIM5α and presumably TRIM5α-virus complexes leads to an early block to infection, before the virus has had the opportunity to reverse transcribe (Figure 1A). This notion is supported by the observation that inhibition of the proteasome during restricted infection allows the virus to reverse transcribe, when it is protected from degradation [48,49] (Figure 1B). However, infection is not rescued by inhibition of the proteasome, indicating that the TRIM5α-virus complex remains uninfectious, even when protected from degradation. How exactly TRIM5α renders the virus uninfectious remains unclear, but it may be that by simply coating the core with multivalent complexes TRIM5α trimers are able to disrupt the rearrangement/uncoating and/or trafficking required to continue to the nucleus and to integrate. Other possibilities include TRIM5α rapidly uncoating incoming HIV-1 capsids. In fact, this has been observed using an assay of capsid density to measure uncoating [46,50] and it will be interesting to perform this assay in the presence and absence of proteasome inhibitors to address whether the proteasome has a role in this process. Proteasome-independent degradation of capsids by TRIM5α has also been described [51]. Importantly, DNA circles remain inhibited, even in the absence of proteasome activity, suggesting that the restricted TRIM5α-virus complex cannot access the nucleus [48,49] (Fig 1). It is possible that these observations indicate several independent antiviral activities of TRIM5α, but we prefer the interpretation that there are several possible fates for a restricted virion. It may be degraded by the proteasome, it may inappropriately uncoat, or it may remain intact, make DNA but not have access to the nucleus. The different fates are likely to be influenced by factors such as the particular virus, the particular TRIM5α, as well as virus dose, TRIM5α expression levels and the cellular background. Understanding the contribution of these activities to restriction by TRIM5α will require further study but the field continues to make steady progress.
A Role for Cyclophilin A in restriction
The relationship between Cyclophilin A (CypA) and HIV-1 has a long history. CypA is a peptidyl prolyl isomerase that performs cis/trans isomerisation of proline peptide bonds in sensitive proteins. CypA interacts with gag in infected cells, leading to its recruitment into nascent HIV-1 virions [52,53]. Recent data have shown that CypA also interacts with incoming HIV-1 cores in newly infected cells and that this interaction is more important for infectivity than that occurring as cores assemble. In human cells, the interaction between incoming HIV-1 cores and CypA is important for maximal infectivity. Preventing this interaction reduces HIV-1 infectivity independently of TRIM5 expression [59,62]. It is suspected that in the absence of CypA activity, HIV-1 gets restricted by a TRIM5-independent antiviral activity. This suspicion stems from the fact that the requirement for CypA is both cell-type- and species-specific, suggesting that CypA is not required simply to uncoat the core. This notion is further supported by the observation that CA point mutants close to the CypA binding site, such as HIV-1 CA A92E or G94D, appear to lead to restriction of HIV-1 in human cells [55,56]. A92E or G94D infectivity is reduced in some human cell lines but not others and, strikingly, infectivity is rescued by inhibition of CypA. It is possible that these mutants become sensitive to human restriction factor(s) and that the interaction between the factor and the virion is sensitive to the activity of CypA on the peptide bond at P90.
How might CypA impact on recognition of CA by TRIM5α? One possibility is that in some cases, capsid with CypA attached may make a better target for TRIM5α. This possibility has been discounted on the basis that HIV-1 mutated to prevent CypA binding (HIV-1 CA G89V) remains restricted by TRIM5α from Old World monkeys [59,61]. Importantly, this mutant is not restricted by TRIM-Cyp, which relies on the CypA domain to recruit it to the HIV-1 capsid [43]. A second possibility is that recruitment of TRIM5α to capsid is improved by the prolyl isomerisation activity of CypA on HIV-1 capsid. Prolyl isomerisation has been shown to regulate protein-protein interaction in diverse biological systems including the control of cell division by cdc25C and signalling by Itk. The prolyl isomerase Pin1 catalyses the cis/trans isomerisation of a proline peptide bond in cdc25C. Cdc25C activity is regulated by phosphorylation and since its phosphatase PP2A only recognises the cdc25C trans isomer, Pin1 activity leads to dephosphorylation and cdc25C activation [63]. A similar molecular switch has been described for Itk signalling and CypA. CypA catalyses cis/trans isomerisation of proline 287 in the Itk SH2 domain, impacting on interaction with phosphorylated signalling partners and regulating Itk activity [64,65]. NMR measurements have shown that HIV-1 CA contains around 86% trans and 14% cis at G89-P90 in both the presence and absence of CypA [57]. However, in the presence of CypA, CA is rapidly isomerised between the two states [57]. It is therefore possible that OWM TRIM5α binds preferentially to CA containing G89-P90 in the cis conformation [59]. In this case, in the presence of TRIM5α, CypA maintains the percentage of cis at 14% even as TRIM5α sequesters it from the equilibrium. In this way the trans form is isomerised to cis and becomes bound by TRIM5α. Blocking CypA activity would limit the availability of the cis conformation and therefore TRIM5α's ability to see the CA, resulting in rescued infectivity. This model is summarised in Fig 2. CypA also appears to impact on replication of feline immunodeficiency virus in feline and human cells, although whether TRIM5 is required for this remains unclear [66].
Surprisingly, in the New World species the owl monkey, a CypA pseudogene has been inserted into the TRIM5 coding region, replacing the viral-binding B30.2 domain with CypA and leading to a molecule called TRIMCyp [43,44]. This restriction factor strongly restricts HIV-1, SIVagm and FIV by recruitment of the incoming capsid to the RBCC domain, facilitated by interaction between the CypA domain and the capsid [20,66,67]. Viral infectivity is rescued by inhibition of CypA-CA interactions with CsA, indicating the dependence on CypA binding to capsid for robust restriction. We assume that at some point in owl monkey evolution the modification of TRIM5 to TRIM-Cyp provided a significant selective advantage. We can only speculate on what might have provided the selection pressure, but a pathogenic virus that recruited CypA is a possibility. It is worth noting that a TRIMCyp in the human genome would be a useful antiviral as we face the current AIDS pandemic.
Figure 1 A putative mechanism for restriction of retroviruses by TRIM5α.

The role of CypA in sensitivity to TRIM5, its fusion to TRIM5 in owl monkeys and its role as a target for immunosuppression implies that CypA might have a general role in immunity. Viruses are likely to be under considerable pressure to alter their shape and become invisible to antiviral shape recognition systems such as TRIMs. Molecules, such as CypA, that induce shape changing may have an important role in making escape difficult. For example, HIV-1 appears to be invisible to OWM TRIM5 in the absence of CypA, but in its presence HIV-1 is strongly restricted [59-61]. Conversely, HIV-1 is highly infectious in human cells in the presence of CypA but appears to become restricted in its absence [42]. It seems that HIV-1 is invisible to human TRIM5 whether CypA is active or not but becomes restricted by something else in the absence of CypA activity [59,62]. HIV-1 appears to have adapted to tolerate CypA activity and this adaptation has made it dependent on CypA. Why can't HIV-1 simply avoid recruiting CypA? The answer to that is not clear, but a clue can be found in alignment of the CypA binding region of lentiviruses (Figure 3). All primate lentiviruses have conserved the proline-rich CypA binding loop and many encode glycine-proline motifs within it. This suggests that the motifs that recruit CypA are important, conserved and cannot easily be mutated. The loops and glycine-proline motifs are also conserved in the equine lentivirus EIAV and the feline FIV [67]. Their purpose however remains unclear, and this loop is not conserved in MLV [40] (Figure 4).
Polymorphism and TRIM5 in other species
The fact that TRIM5 restricts retroviral infection so potently, at least in monkeys, has suggested that polymorphism in human TRIM5 might impact on HIV-1 transmission and/or pathogenesis in vivo. Several studies have addressed this issue and shown, at best, only weak association of any particular TRIM5α allele with disease progression [68-71]. Importantly, human TRIM5α is not polymorphic in the regions of the B30.2 domain known to impact on viral recognition, and its over-expression does not reduce HIV-1 infectivity by more than a few fold [7,9,10,72]. Furthermore, under in vitro conditions where rhesus TRIM5 efficiently binds the HIV-1 capsid, the human protein binds only poorly [46]. It therefore seems likely that TRIM5 doesn't significantly impact on HIV-1 replication and pathogenesis in humans. Indeed, we imagine that HIV-1's insensitivity to TRIM5 has been an important factor in its success as a pathogen in humans.
Conversely, the TRIM5 gene in rhesus macaques and sooty mangabeys is relatively polymorphic, with a number of polymorphisms occurring in the variable loops that dictate antiviral specificity. Indeed, expression of these alleles in permissive feline cells followed by challenge with retroviral vectors derived from HIV-1, SIVmac, MPMV or MLV-N demonstrated that the different alleles have slightly different antiviral specificities [72].
The antiviral activity of TRIMs in mammals other than primates remains less well characterised. A bovine TRIM (BoLv1) with broad antiretroviral activity suggests that TRIM-mediated restriction of retroviruses is widespread amongst mammals [37,38]. BoLv1 is closely related to primate TRIM5 genes, suggesting that they are orthologs derived from an ancestral antiviral TRIM. Cattle encode at least 4 genes closely related to TRIM5, in addition to homologs of TRIM34 and TRIM6. The fact that one of these proteins has antiviral activity supports the notion that these genes are derived from an ancestral sequence with antiviral activity. It is likely that antiviral TRIMs will be identified in more mammals soon. Indeed, antiviral TRIMs are probably responsible for the poor infectivity of cells from pigs and bats to MLV-N and those of rabbits to HIV-1 [1,3,5,73].
Figure 2 A putative mechanism for the activity of CypA on HIV-1 infectivity in cells from Old World monkeys [20].

Whether this is because they have an alternate function or whether they are simply not active against this selection of viruses is difficult to say. It is worth noting however that comparison of the sequences of these TRIMs from primates shows that

Figure 3 Similarity between the sequences of retroviral capsids. Alignment of primate lentiviral capsid protein sequences demonstrates that they have conserved the proline-rich Cyclophilin A binding loop on their outer surface. Glycine-proline motifs are common (red arrow). Conserved prolines at the extremes of the loop are shown (black arrows). The alignment from which this selection was taken is available from the Los Alamos HIV sequences database [93]. Retroviruses are named according to the species from which they were isolated. Genbank accession numbers are shown. Species abbreviations are as follows: cpz chimpanzee, deb De Brazza's monkey, den Dent's Mona monkey, drl drill, gsn greater spot nosed monkey, sm sooty mangabey, stm stump tailed macaque, mac rhesus macaque, lst L'Hoest monkey, mnd mandrill, mon Cercopithecus mona, mus Cercopithecus cephus, rcm red capped mangabey, gri African green monkey grivet, sab African green monkey sabaeus, tan African green monkey tantalus, ver African green monkey vervet, sun sun tailed monkey, syk Sykes monkey.

There is an increasing body of evidence, gathered over many years, suggesting that TRIM19, otherwise known as PML, may have antiviral activity. PML exists in subnuclear structures called PODs, ND10 or PML bodies, which are of unclear function. It has long been known that a number of diverse viruses including influenza, SV40 and papilloma virus form replication complexes in close association with PML bodies, reviewed in [75,76]. Infection by other viruses, including herpes viruses and adenoviruses, causes degradation of PML protein and dispersal of the body components. The molecular details of PML degradation by herpes simplex type 1 (HSV-1) have been partially solved. The HSV-1 protein ICP0 is responsible for inducing proteasome-dependent degradation of PML, and HSV-1 deleted for this protein replicates poorly, leaving PML bodies intact [77][78][79][80]. Importantly, mutant HSV-1 (ICP0-) becomes almost fully infectious if PML expression is reduced using RNA interference, indicating that an important function of ICP0 is to eliminate PML [81]. An antiviral role for PML is also suggested by a real-time microscopy study demonstrating that PML is recruited to incoming HSV-1 (ICP0-) replication complexes [82]. Such active recruitment is strongly suggestive of an antiviral response. Furthermore, reduction of PML expression increases permissivity of human cells to human cytomegalovirus infection [83], and over-expression of PML reduces permissivity to vesicular stomatitis virus and influenza A [84,85]. These data, along with the observation that PML expression is stimulated by type 1 interferon, strongly support an antiviral role for TRIM19 (PML) [86], and the observation that the expression of most of these genes is upregulated by influenza infection [87] suggests that they might have a role in immunity.
TRIM20, otherwise known as pyrin, presents as an intriguing antiviral possibility. Polymorphism in the TRIM20 B30.2 domain can cause familial Mediterranean fever, a disease characterised by recurrent attacks of fever and inflammation. Sequencing TRIM20 from a variety of primates revealed that many encode the disease causing mutations as wild type sequence [88]. Furthermore, phylogenetic analysis suggested episodic selection in the B30.2 domain, similar to that seen for TRIM5, suggesting the intriguing possibility that viral infection underlies this disease. Rather strikingly in 2001 these authors suggested that the B30.2 domain of pyrin might interact directly with pathogens and that the mutations are counter evolutionary changes selected to cope with a changing pathogen [88]. Such a model is remarkably close to what we believe to be true for TRIM5, retroviruses and the Red Queen 6 years later.
Concluding Remarks
Just as we considered that the important aspects of TRIM5 biology had been largely described, the Ikeda lab described tantalising findings that make a complicated subject significantly more complicated [89]. They show that rhesus TRIM5 causes degradation of gag in infected cells. Importantly, this activity is independent of the C-terminal B30.2 domain, suggesting that it acts via an alternative specificity determinant, perhaps the coiled coil. It is worth noting that APOBEC3G has also been described as being able to restrict infection of both incoming as well as outgoing HIV-1 [90,91]. It may be therefore that such dually active restriction factors are not uncommon.

Figure 4 Similarity between the structures of retroviral capsids. Superimposition of the structures of the N-terminal domains of HIV-1 (red) and MLV (blue) capsids demonstrates overall structural conservation, although the Cyclophilin A binding loop (yellow) is absent in MLV. The pdb files for HIV-1 (1M9C) [94] and MLV (1UK7) [40] were superimposed using pairwise structure comparison [95].
Whether the study of host factors influencing viral infection will translate into improvements in antiviral therapy in the foreseeable future remains uncertain. However, it is likely to allow the improvement of animal models for HIV/AIDS as we enhance our understanding of the viral and cellular determinants for viral replication and disease [92]. This work is also likely to improve our ability to transduce cells, therapeutically and experimentally, with viral gene delivery vectors, particularly poorly permissive primary cells and stem cells. It certainly promises to remain an active and exciting field in infectious disease research.
|
v3-fos-license
|
2021-12-30T16:04:11.617Z
|
2021-12-27T00:00:00.000
|
245547100
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8777690",
"pdf_hash": "6ad539a82a2379b92c49692f953818c8e5e95941",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44217",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "f25e382e306c83a25d8d0146d706960f1f476a26",
"year": 2021
}
|
pes2o/s2orc
|
Advances in Editing Silkworms (Bombyx mori) Genome by Using the CRISPR-Cas System
Simple Summary One of the most powerful gene editing approaches is the CRISPR (clustered regularly interspaced short palindromic repeats)-Cas (CRISPR-associated) tool. The silkworm (Bombyx mori) has a great impact on the global economy, playing a pivotal role in the sericulture industry. However, B. mori came into the spotlight by representing one of science’s greatest contributors, being used to establish extraordinary bioreactors for the production of target proteins and illustrating a great experimental model organism. Herein, we focus on progress made in the field of B. mori’s genome manipulation by using CRISPR-Cas. In order to edit B. mori’s genome, remarkable advances were made, such as exposing gene functions and developing mutant lines that exhibit enhanced resistance against B. mori nucleopolyhedrovirus (BmNPV). We also discuss how CRISPR-Cas accelerated the fundamental investigation in B. mori, and beyond, thus highlighting the great potential of the insect’s biotechnology in numerous scientific fields. Abstract CRISPR (clustered regularly interspaced short palindromic repeats)-Cas (CRISPR-associated) represents a powerful genome editing technology that revolutionized in a short period of time numerous natural sciences branches. Therefore, extraordinary progress was made in various fields, such as entomology or biotechnology. Bombyx mori is one of the most important insects, not only for the sericulture industry, but for numerous scientific areas. The silkworms play a key role as a model organism, but also as a bioreactor for the recombinant protein production. Nowadays, the CRISPR-Cas genome editing system is frequently used in order to perform gene analyses, to increase the resistance against certain pathogens or as an imaging tool in B. mori. Here, we provide an overview of various studies that made use of CRISPR-Cas for B. mori genome editing, with a focus on emphasizing the high applicability of this system in entomology and biological sciences.
Introduction
The life sciences research fields were revolutionized by the outstanding development of various genome editing tools. By using specific genome editing techniques, the genomic DNA of every living organism can be subjected to guided changes, such as deletions, insertions, and sequence substitutions [1].
In recent years, several genome editing tools have been in the spotlight. Among them, there are three remarkable technologies, namely those relying on programmable nucleases: the transcription activator-like effector nucleases (TALENs), zinc finger nucleases (ZFNs), and clustered regularly interspaced short palindromic repeat-associated nucleases. Initially, only four distinct Cas proteins (1-4) were reported; due to the rapid evolution of biological sciences, numerous Cas proteins have since been described [25,26], Cas1 being the most analyzed [27].
CRISPR-Cas has a great adaptability, with host-related specificities; thus, it exhibits a significant diversity. The varying feature is defined by the CRISPR array and the cas gene sequences. The classification of these types of systems is based on the signature Cas proteins. Currently, there are two major classes of CRISPR-Cas systems, each also divided in several groups [28]. Regarding the leader nucleotide sequence, it has been shown that it has a key role by carrying the essential promoter sequences for the transcription of CRISPR loci. Besides the promoter, the leader contains specific signals that are crucial for the adaptation stage from the first phase of CRISPR-Cas activation [29].
The adaptation is the first functional stage of the CRISPR-Cas mechanism, during which the foreign nucleic acid is recognized by several Cas proteins [30] and consequently integrated next to a leader sequence. Through this mechanism, in evolution, the spacers are arranged chronologically, and this feature helps bacteria and archaea to enhance their protection against the genetic material of the latest foreign encounter [31]. Each newly acquired spacer is accompanied by a repeat sequence; therefore, the CRISPR array expands with every invasion [32].
The CRISPR array is transcribed in the second step, specifically in the biogenesis phase [33]. First, it is transcribed into a precursor CRISPR RNA (crRNA). At the end of this phase, there are numerous mature crRNA molecules, resulting from the action of RNase III, which processes the precursor crRNA. Each crRNA includes a spacer and a repeat sequence [31,34,35].
The last step of the system's mechanism is the interference phase. It involves the degradation of the foreign nucleic acid by targeting and cleaving it [36]. The products of the biogenesis phase, the crRNAs, act as guides for targeting the invader, which is then cleaved by a cascade of Cas proteins that act like molecular scissors [37].
The CRISPR-Cas System as a Genome Editing Tool
When it comes to leading tools in genetic engineering, the CRISPR-Cas system can be considered the foremost instrument. After elucidating its function in various organisms, scientists aimed to exploit its versatility in order to overcome the disadvantages of other available genome editing tools [38]. Even if scientific studies still report the use of ZFNs and TALENs as editing tools, the CRISPR-Cas system is the most effective genome editing instrument, standing on top with regard to efficiency, cost-effectiveness, and relative simplicity of use [39] (Table 1). Another considerable advantage of this system is its capacity to simultaneously target multiple genes [40]. Of the numerous CRISPR-Cas systems, CRISPR-Cas9 is currently the most used instrument in laboratories across the world [47]. The Cas9 nuclease is the signature protein of class II CRISPR-Cas systems and it is responsible for double-strand DNA breaks [27].
Three different methods to deliver the Cas9 endonuclease have been described. It can be directly delivered by microinjection into the embryos, while the other two delivery methods involve a plasmid that expresses the Cas9 enzyme, or a messenger RNA (mRNA) sequence that encodes it. Of the three techniques, in terms of genome engineering, the first mentioned is the best option due to certain advantages. By directly delivering the protein, low immunogenic effects were observed. Furthermore, the off-target activity is minimized compared with the other two methods [48]. CRISPR-Cas9 is a simple but powerful genome editing tool with various implementations, and its impact on new research trends has been reviewed elsewhere [49].
The CRISPR-Cas9 mechanism relies on the Cas9 nuclease and a guide sequence (gRNA). As the name implies, the gRNA serves to guide the Cas9 nuclease to a target site in order to cleave the DNA. The key feature of the gRNA is its extensive complementarity with the target sequence [50]. The protospacer-adjacent motif (PAM) bordering the target complementary sequence has a key role, since in its absence the CRISPR-Cas systems would degrade their own CRISPR loci. In order to perform a cleavage, the Cas9 protein scans for the PAM sequence. Even if the gRNA is complementary with the target sequence, the Cas9 endonuclease will not cleave it in the absence of PAM [51].
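To make the targeting requirement concrete, the following sketch scans a DNA sequence for candidate target sites where a 20-nt protospacer matching the guide is immediately followed by an NGG PAM, the motif recognized by the commonly used SpCas9. This is an illustrative search routine written for this overview rather than a tool from the cited studies; it checks the forward strand only with exact matching, and the example sequences are hypothetical.

```python
# Illustrative sketch: find SpCas9-style target sites (20-nt protospacer + NGG PAM).
# Cleavage is only possible where both the guide match and the PAM are present,
# mirroring the requirement described above. Example sequences are hypothetical.
import re

PAM = re.compile(r"[ACGT]GG")  # SpCas9 recognizes an NGG motif 3' of the protospacer

def find_target_sites(genome: str, guide: str):
    """Return 0-based positions where `guide` is immediately followed by an NGG PAM
    on the given (forward) strand."""
    genome, guide = genome.upper(), guide.upper()
    hits = []
    for i in range(len(genome) - len(guide) - 3 + 1):
        protospacer = genome[i : i + len(guide)]
        pam = genome[i + len(guide) : i + len(guide) + 3]
        if protospacer == guide and PAM.fullmatch(pam):
            hits.append(i)
    return hits

if __name__ == "__main__":
    # Hypothetical sequence containing one protospacer followed by an AGG PAM.
    genome = "TTACGGATCCGGTACCGAGCTCGAATTCACAGGACGTACGTACGTACGTACGT"
    guide = "GGTACCGAGCTCGAATTCAC"  # hypothetical 20-nt guide
    print(find_target_sites(genome, guide))  # -> [10]: guide followed by AGG at position 10
```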
The central factor that influences the success of the gene editing process is the repair path of the double-strand breaks produced by Cas9. There are two main repair pathways: homology-directed repair (HDR) and nonhomologous end joining (NHEJ) [52]. More often, NHEJ is exploited in order to acquire indel mutations, especially small deletions. These deletions are extremely useful for disclosing gene functions [53]. However, the HDR machinery is used not just to obtain knock-out or knock-down mutations, the expected output following NHEJ, but to generate targeted knock-ins. Therefore, by using HDR, exogenous sequences can be successfully integrated into the host's genome. Currently, major efforts are being made in order to enhance sequence replacement via the HDR mechanism [54].
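As a purely conceptual illustration of the two repair outcomes described above (not a biological simulator and not taken from the cited studies), the toy sketch below resolves a cut site either by NHEJ, leaving a small deletion that typically disrupts the gene, or by HDR with a donor template, which places an exogenous insert precisely between two homology arms. All sequences and positions are hypothetical.

```python
# Toy illustration of the two double-strand-break repair outcomes discussed above.
# All sequences are hypothetical; this is a conceptual sketch, not a biological model.

def nhej_repair(seq: str, cut: int, deletion: int = 2) -> str:
    """Error-prone end joining: rejoin the break after losing a few bases (an indel/knock-out)."""
    return seq[:cut] + seq[cut + deletion:]

def hdr_repair(seq: str, cut: int, insert: str, arm_len: int = 6) -> str:
    """Homology-directed repair: a donor with homology arms templates a precise knock-in."""
    left_arm = seq[cut - arm_len : cut]
    right_arm = seq[cut : cut + arm_len]
    donor = left_arm + insert + right_arm  # donor template layout: arm + insert + arm
    # Copying the donor across the break places the insert exactly at the cut site.
    return seq[:cut - arm_len] + donor + seq[cut + arm_len:]

if __name__ == "__main__":
    target = "ATGGCTACCGGAGTTCAAGCT"   # hypothetical locus
    cut_site = 10                      # hypothetical Cas9 cut position
    print(nhej_repair(target, cut_site))                  # small deletion -> likely knock-out
    print(hdr_repair(target, cut_site, insert="GAATTC"))  # precise knock-in of GAATTC
```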
CRISPR-Cas9 is currently used in multiple research fields, such as agriculture (editing of various agricultural plant genomes or pest insects' genomes) [55][56][57], biotechnology, the food industry, and medicine (modeling diseases using HeLa cells, deciphering HIV infection mechanisms, using various experimental models, such as Danio rerio, to tackle cancer and neurological diseases, etc.) [58,59], just to mention a few (Figure 1) [60].
CRISPR-Cas9 in Entomology
Being the most diverse and numerous category of organisms for decades [61], insects have been intensively studied. Countless studies have been performed due to insects' key roles in ecology, agriculture, and medicine [62,63]. Considering this, numerous research groups aimed to use the CRISPR-Cas9 system to manipulate the insects' genome. The first application of CRISPR-Cas9 was performed in Drosophila melanogaster [64] due to its strategic importance as arguably the main experimental model organism for life sciences [65]. Besides D. melanogaster, the researchers also used the CRISPR-Cas9 applicability on B. mori, Apis mellifera, Aedes aegypti, and Tribolium castaneum [66,67]. Gratz et al. (2013) [68] programmed CRISPR-Cas9 to edit Drosophila's genome. The authors targeted the yellow gene, which is commonly used in various studies. First, they aimed to determine if this genome engineering tool could be efficient and could fulfill its role to induce breaks in the target sequence. By using the CRISPR-Cas9 system in Drosophila, not only was the yellow gene successfully knocked out, but the genome's alterations were also germline transmitted. Subsequent to the deletion of the target gene, a donor sequence was designed. This sequence provided the template for the HDR repair pathway and its use was to test the accuracy of specific replacement of yellow gene with an exogenous sequence. These sequence replacements were transmitted to descendants as well. Their data showed that there was no off-target activity and it highlighted the feasibility of using the CRISPR-Cas9 technology in eukaryotes [68]. Aiming to further highlight the feasibility of choosing this system to perform genome alteration in Drosophila, Yu et al. (2013) [69] designed two gRNAs to induce mutations in two regions of the yellow gene. In addition, they targeted other six sequences, both euchromatic and heterochromatic loci. Remarkably, a definite mutation in ms(3)k81 was transmitted to descendants in a proportion of 100%. By successfully targeting heterochromatic loci, their result showed that the CRISPR-Cas9 system is efficient for altering the heterochromatin [69]. Drosophila have been used in numerous studies in order to examine the insecticide resistance [70][71][72]. In this direction, Douris et al. (2020) [56] notably summarized the progress in using CRISPR-Cas9 to explore the genetic basis of this mechanism.
The CRISPR-Cas9 technique was used to perform functional analysis concomitantly on two genes belonging to the cricket (Gryllus bimaculatus) [73]. G. bimaculatus is an important insect for experimental studies; for example, it plays an important role for evolutionary developmental studies and comparative biology, but it is also a relevant model organism for neurobiology and behavioral sciences [74]. The efficiency of inserting a donor sequence via a homology-independent technique was tested in two hox genes, namely Gb-Ubx and Gb-abd-A. After inserting the donor fragment into essential exons of both genes, their function was lost. Thus, functional investigations of hox genes could be carried out by using the knock-in/knock-out approaches [73].
Being one of the most important social insects [75] and as it plays a crucial role as a pollinator, the honeybee (Apis mellifera) has been intensively studied. It also plays a pivotal role in various therapeutic areas due to honey production. This natural product has extraordinary benefits for human health, exhibiting antioxidant, antiviral, and antibacterial effects [76]. Due to its special characteristics, the use of honey is not limited to humans, but this natural product is being used to improve certain features of other insects, such as silkworms [77]. There are numerous studies that detail functional analysis of A. mellifera genes by exploiting the CRISPR-Cas9 system. For instance, Hu et al. (2019) [67] reported the successful utilization of this system for knocking out the mrjp 1 gene from the honeybee genome. The CRISPR-Cas9 complex was delivered through microinjection and they tested two specific regions of embryos, for identifying the most convenient structure for delivering the gRNA and the Cas9 endonuclease. By microinjecting the construct at the dorsal posterior side, there was a low rate of successful manipulation (11.8%); however, when choosing the ventral cephalic side, the results showed a great rate of gene editing (93.3%). Trying to validate the previous results, the authors also targeted pax6. Based on the previously obtained results, they microinjected the CRISPR-Cas9 construct at the ventral cephalic side. The results showed an editing rate of 100% [67]. Targeting the same gene, mrjp 1, similar results have been obtained in another study [78]. Thus, functional analysis of A. mellifera genes can be effectively performed by using the CRISPR-Cas9 system.
Considering the same topic of gene function research, Nie et al. (2021) [79] used the CRISPR-Cas9 technology to determine if the yellow-y gene plays a crucial role in the process of cuticular melanin synthesis in A. mellifera. They targeted this gene due to its great potential for mutant screening, as it is a selectable marker. By disrupting it, the phenotype of the worker cuticle changed, mainly due to decreased black pigment, thus confirming the critical role of the yellow-y gene in melanin pigmentation. Looking ahead, this could be a great genetic marker for upcoming genomic research [79].
A. mellifera sex determination is controlled by heterozygosity at a particular locus that harbors the key complementary sex determiner (csd) gene. Bees that are heterozygous at this specific locus are females, while males are homozygous or hemizygous [80]. In a recent study, Wang et al. (2021) [81] used the CRISPR-Cas9 tool in order to knock out the csd gene and thus eliminated the genetic difference between females and males. Subsequently, they aimed to observe the transcriptome difference between the two sexes in this particular genetic background. They also successfully induced target mutations in mutant haploid individuals. It was observed that the expression level of several male-biased genes was higher in the mutant males. On the other hand, the expression level of several specific female-biased genes was lower. Their data also confirmed that csd interacts with certain genes, such as fruitless, troponin T, and transformer-2, just to mention a few [81].
Bombyx mori
For numerous reasons, B. mori is one of the most studied insects and is of real interest to the scientific community. It has been completely domesticated and plays a pivotal role in sericulture, being reared principally for large-scale silk production [82]. Considering the importance of silk in the textile industry and its use as a biomaterial in medicine, major efforts are being made to enhance silk quality and to increase its quantity [12]. Numerous studies describe how silk with improved properties can be obtained by genetically manipulating silkworms [83][84][85][86]. For instance, by knocking the major ampullate silk protein gene from spiders into the B. mori genome, a research group obtained silk with superior mechanical characteristics [84].
B. mori is an oligophagous insect whose main food source is mulberry leaves, a nutritional preference that influences its biological and economic parameters. A major drawback of this food source is its limited availability. To allow silkworms to be reared outside spring and summer, artificial diets are currently used for their nourishment [77,87].
Despite advances in human medicine, certain microorganisms have developed survival strategies, and numerous infectious diseases remain a crucial problem worldwide [88]. In this context, B. mori is a reliable experimental model with the great advantage of a short development cycle consisting of four stages. The first period of growth is the egg phase, followed by the larval, pupal, and moth phases. The larval phase, the longest and comprising five different stages, plays a crucial role in the silkworm's development. Another significant advantage of using B. mori is that last-instar larvae reach a body size of nearly 5 cm, which makes them easy to manipulate and exploit for various purposes. This size also facilitates dissection, so multiple target tissues or organs can easily be obtained [89,90].
B. mori as a Model Organism
B. mori has received increasing attention as an experimental model organism since several groups made its genome data available [91]. Various mutant strains have been described, and genetic analyses have confirmed numerous genetic traits. Another important aspect is that its manipulation is not associated with ethical concerns [90].
Being susceptible to various infectious agents, B. mori is one of the most widely used experimental models for drug screening, evaluation of different virulence factors, and identification of the pathogen genes responsible for virulence [90]. Hitherto, several studies have used B. mori to evaluate the effectiveness of antibiotics against certain human pathogens [92][93][94]. In a recent study, silkworms were used to examine the efficacy of three different glycopeptide antibiotics against Staphylococcus aureus infection; the authors highlighted the feasibility and efficacy of using B. mori to mimic bacterial infections when examining the therapeutic potential of antibiotics [95]. Silkworms were also recently used to evaluate the impact of several antibacterial compounds against Cutibacterium acnes [96]. As an experimental model organism, B. mori is currently being used intensively for various other purposes, as detailed in Table 2.
Table 2.

| Type of model organism | Brief description | Purpose | References |
|---|---|---|---|
| Model for drug toxicity | Injecting the silkworms with three different pharmacologically active agents (4-methyl umbelliferone, 7-ethoxycoumarine) | Evaluation of the metabolic pathway of these compounds | [92] |
| | Exposing the silkworms to fungal infections | Exploring pharmacokinetic parameters of an antifungal agent, Voriconazole | [105] |
| | Injecting cytotoxic drugs into B. mori larvae | Evaluation of the impact of cytotoxic drugs | [106] |
| Model for nanomaterial toxicity | Spreading silver nanoparticles on mulberry leaves | Toxicity evaluation of silver nanoparticles | [107][108][109][110] |
| | Injecting zinc oxide nanoparticles subcutaneously | Evaluation of zinc oxide nanoparticle toxicity, accumulation, and distribution | [111] |
| | Injecting into the dorsal vein different nanoparticles of interest in various life science branches | Investigation of the toxicity of different silicon and carbon nanomaterials toward hemocytes | [112] |
Applications of CRISPR-Cas in B. mori
The first report of successful manipulation of the B. mori genome using the CRISPR-Cas9 tool came from Wang et al. (2013) [113]. The authors targeted an essential gene [113], BmBlos2, which is orthologous to the human Blos2 gene [114]. Two sgRNAs (23 bp) were designed to induce mutations leading to loss of target gene function. Each complex formed by one sgRNA and the Cas9 nuclease was injected at the preblastoderm embryonic stage. Ordinarily, the larval integument is opaque, but when BmBlos2 function is lost the integument becomes translucent, an effect that can be used as a phenotypic marker for mutant detection. Of all individuals, 94% and 95.6%, respectively, were successfully edited with the two sgRNAs. This study highlighted the feasibility of using CRISPR-Cas9 not only in B. mori but also in other lepidopteran insects, and the findings are of great interest because they reveal the applicability of the CRISPR-Cas9 system to pest control approaches [113].
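As an illustration of the kind of target-site selection underlying such sgRNA designs, the sketch below scans a DNA fragment for 20-nt protospacers immediately followed by an NGG PAM on either strand, which is the basic constraint for SpCas9 guides. The sequence, function names, and parameters are hypothetical and are not taken from the cited studies.

```python
# Minimal sketch (hypothetical sequence and names, not from the cited studies):
# enumerate candidate SpCas9 target sites by scanning both strands of a DNA
# fragment for a 20-nt protospacer immediately followed by an NGG PAM.

def revcomp(seq: str) -> str:
    """Reverse complement of an upper-case DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_candidate_sites(seq: str, protospacer_len: int = 20):
    """Yield (strand, start, protospacer, pam); '-' strand coordinates refer
    to the reverse-complemented sequence."""
    seq = seq.upper()
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for i in range(protospacer_len, len(s) - 2):
            pam = s[i:i + 3]
            if pam.endswith("GG"):  # NGG PAM
                yield strand, i - protospacer_len, s[i - protospacer_len:i], pam

# Toy example on a made-up fragment (not BmBlos2):
fragment = "ATGCCGTACGGTTAGCCGGAATTCCGGATCCTTGGCAGTACCGGTACCGGAGGTT"
for strand, pos, protospacer, pam in find_candidate_sites(fragment):
    print(strand, pos, protospacer, pam)
```

In practice, candidate guides found this way would still be filtered for GC content, homopolymer runs, and predicted off-target sites before synthesis.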
The multiplexing potential of CRISPR-Cas9 was highlighted by Liu et al. (2014) [115]. First, the BmBlos2 gene was targeted for site-specific mutagenesis to confirm the feasibility of using the CRISPR-Cas9 system in B. mori. Six other genes were then targeted to confirm the multiplexing capability of the system: tyrosine hydroxylase, red egg, yellow-e, kynureninase, ebony, and flugellos. Mutations were induced in every target gene, with no evidence of off-target activity [115]. The multiplexing capability of CRISPR-Cas9 facilitates genome engineering by inducing mutations precisely and simultaneously at different sites, enabling researchers to perform elaborate targeted mutagenesis in a time-effective manner [116].
Other approaches aimed to knock out the BmKu70 gene with CRISPR-Cas9 by targeting its second exon [117]. BmKu70 encodes a highly conserved protein, Ku70, which plays a key role in numerous mechanisms, including cell adhesion, apoptosis, and the maintenance of telomere length. In addition, several studies have reported that inactivating BmKu70 increases the frequency of homologous repair. To test this hypothesis, the authors performed a transient analysis in genetically manipulated embryos; they knocked in the Bm702 gene, which is found on the Z chromosome. Their results confirmed that knocking out BmKu70 increases the frequency of homologous repair, a promising outcome for fundamental research in B. mori [117]. Fujinaga et al. (2017) [118] demonstrated that in B. mori the insulin-like growth factor-like peptide (IGFLP) is closely related to the genital disc and is particularly involved in its growth. Because of the lack of in vivo studies on this topic, the same research group aimed to confirm the role of IGFLP by inactivating it [119], using the CRISPR-Cas9 genome editing tool. The absence of IGFLP led to smaller ovaries and a lower number of laid eggs compared with the wild type, whereas the size and development of the laid eggs were not affected, indicating that this hormone has no impact on B. mori's fertility. Ecdysteroids play a crucial role in IGFLP production by inducing gene expression, and they have previously been shown to have a key role in ovary development. The authors therefore initially assumed that a low ecdysteroid titre caused the reduced ovary weights; however, analysis of the transgenic females showed that the ecdysteroid titre was unchanged. This study provides insights into the impact of IGFLP on ovarian development [119].
In a recent study [120], a research group explored the effect of activating the BmFibH gene in B. mori embryonic cells. For this purpose, they constructed a complex consisting of the inactive form of the Cas9 nuclease (dCas9) fused to a VPR activation domain, driven by a particular promoter. This activation domain consists of several activators: VP64, p65, and Rta. Three sgRNA constructs were designed to target the promoter of the gene of interest. To confirm successful BmFibH activation, they first determined its expression in normal embryonic cells and found that BmFibH is strongly downregulated in untransformed cells, whereas transfected cells exhibited a higher BmFibH expression level. Their data also showed that activation of the target gene affected cellular stress responses [120]. Cui et al. (2018) [121] targeted the BmFibH gene to explore its role in silk gland development. After the CRISPR-Cas9 construct was designed, a total of 630 eggs were microinjected, but only 12.5% hatched. Analysis of the unhatched eggs showed that all embryos had been genetically edited. Knocking out BmFibH caused severe changes, such as naked pupae or thin cocoons, and all individuals exhibiting naked pupae died. Moreover, inactivating this gene upregulated several other genes involved in degradation processes, such as autophagy. These findings offer a better understanding of the role of the FibH protein in silk gland development [121].
Building on the feasibility of expressing spider silk genes in B. mori [84] to obtain enhanced silk, the authors of [122] used CRISPR-Cas9 to acquire high-performance fibers. Using this technique, they successfully knocked spider silk genes into the silkworm genome. They designed two systems, FibL-CRISPR-Cas9 and FibH-CRISPR-Cas9, and to avoid disrupting protein production, the spider silk genes were knocked into one of the introns of BmFibL or BmFibH. They demonstrated the feasibility of employing CRISPR-Cas9 in B. mori to obtain silk with enhanced mechanical properties on an industrial scale. The strategy described in this study can be further used to produce numerous exogenous proteins of great interest for medical applications and beyond [122].
MicroRNAs (miRNAs) are key regulators of gene expression; they recognize specific mRNAs by complementarity and inactivate them [123]. It has been shown that miR-2 is one of the most important miRNAs for wing morphogenesis in D. melanogaster and strongly influences the Notch pathway. The BmAwd and BmFng genes are known positive regulators of this signaling pathway and are also potential miR-2 targets. On this topic, Ling et al. (2015) [124] used CRISPR-Cas9 to investigate the function of miR-2 in B. mori. In the first phase of the study, the authors used the Gal4/UAS system to overexpress miR-2, resulting in deformed adult wings. In the second phase, the CRISPR-Cas9 system was used to knock out the two miR-2 target genes; loss of function of BmFng and BmAwd also led to deformed wings. Both phases of the study confirmed that miR-2 plays a crucial role in wing development in silkworms [124].
In another study, Liu et al. (2020) [125] used the CRISPR-Cas9 system to explore the function of miR-34, another miRNA with a great impact on insect development. First, they overexpressed miR-34 in a transgenic line constructed using a pBac plasmid; miR-34 overexpression negatively affected body size and wing morphology. Second, the CRISPR-Cas9 system was used to inactivate miR-34 using two different gRNAs, and its ablation led to a larval developmental delay. Using several bioinformatic tools, they predicted BmE74, BmCpg4, BmLcp, BmWcp11, and BmBrc-z2 as potential miR-34 target genes, but further analyses confirmed only BmE74 and BmCpg4 as genuine targets. While BmE74 is well known to play a key role in growth and morphogenesis, the second gene required functional analysis; accordingly, CRISPR-Cas9 was used to knock out BmCpg4, which resulted in defective wings and thus highlighted the gene's role in wing development [125].
Regarding the silkworm's innate defense mechanisms, there are two major activation pathways involved in the expression of numerous evolutionarily conserved antimicrobial peptide (AMP) genes, namely the Toll and the Imd pathways [126,127]. When Gram-negative bacterial or fungal contamination occurs, the Toll pathway is activated. One of the key genes involved in the Toll pathway is BmCactus; as a negative regulator, its product is phosphorylated and inactivated once infection with these pathogens occurs. Considering this, Park et al. (2018) [128] used CRISPR-Cas9 to perform site-targeted mutagenesis of the BmCactus gene in a specific B. mori cell line. The authors designed six different gRNAs and transfected the CRISPR-Cas9 complex into the B. mori ovarian cell line by electroporation, but observed a very low survival rate of only 24%. Their data showed that all gRNAs induced site-specific mutagenesis. Disrupting the BmCactus gene stimulated the expression of several antimicrobial proteins (e.g., lysozyme and lebocin) [128]. Given the great importance of AMPs in clinical research [129], this study underlined the outstanding potential of B. mori in the life sciences and the feasibility of using the CRISPR-Cas9 genetic scissors to edit genomes for different purposes [128].
Ecdysteroids are steroid hormones that play a crucial role in molting and metamorphosis in insects. The most important molting hormone is 20-hydroxyecdysone (20E) [130]; although its biosynthesis has been intensively studied, 20E metabolism is not well documented. Several genes are believed to be involved in the inactivation of 20E, but their biological functions are not fully understood. Therefore, the authors of [131] used B. mori to investigate the biological function of one particular 20E-inactivating enzyme, ecdysone oxidase (EO). Because ecdysteroids affect key processes in insects, their concentration must be tightly regulated, and EO participates in ecdysteroid oxidation [132]. The CRISPR-Cas9 system was used to deplete the BmEo gene, and it was observed that the duration of the fifth instar larval phase was prolonged by 24 h [131].
Another key element involved in insect development is the juvenile hormone (JH) [133]. The central role in JH degradation is played by juvenile hormone esterase (JHE). Zhang et al. (2017) [134] used CRISPR-Cas9 in B. mori to deplete the gene encoding JHE, BmJhe, in order to investigate its function. Their data showed that knocking out this gene prolonged the fourth and fifth instar stages because JH metabolism was delayed. These findings are important not only for functional analysis but also for the sericulture industry: extending the larval stages leads to larger larvae and thus larger cocoons, which benefits silk production. They also highlight the feasibility of using genome editing tools for economic purposes [134].
Moreover, CRISPR-Cas has been used in B. mori to perform epigenetic modifications. In this context, Liu et al. (2019) [135] explored the impact of methylation on silkworm development. This study is of great significance because it provides a strategy for investigating the importance of DNA methylation at a target locus, and it represents a starting point for exploring the impact of DNA methylation status on different phenotypes in silkworms and beyond [135]. Furthermore, Xing et al. (2019) [136] used this technology to label endogenous genomic regions in a B. mori embryonic cell line, targeting the BmFibH gene. Using CRISPR-dCas9 as an imaging tool has a major impact on fundamental research and can also provide insights into insecticide resistance [136].
As for the detection of CRISPR-Cas-induced mutations, a plethora of screening methods is available. On this topic, in a recent study, Brady et al. (2020) [137] described a new approach and provided a protocol for screening, characterizing, and stabilizing mutant silkworm lines. The protocol involves several molecular methods that allow the recognition of induced mutations on both autosomes and sex chromosomes [137].
In Table 3, several studies are described that used the CRISPR-Cas technology in order to genetically edit B. mori.
Applicability of CRISPR-Cas in Anti-BmNPV Therapy
Viruses, which are among the most rapidly mutating biological entities, represent a major threat to numerous hosts, including humans and insects. There is therefore an urgent need to develop new methods to combat these pathogens. As one of the most valued molecular tools, the CRISPR-Cas system is currently being used to develop antiviral strategies in various organisms [155][156][157]. Of all the nucleases described in the specialized literature, different variants of Cas9, Cas12, and Cas13 have shown the most promising results for antiviral approaches [155].
With respect to insect virology, BmNPV was the first pathogen discovered (1998) [158]. This baculovirus causes extensive economic losses in the sericulture industry and has therefore been intensively studied. A member of the Baculoviridae family, it causes the most severe silkworm disease, and only a few silkworm strains are resistant to this virus [159]. Several traditional methods currently help to enhance silkworm resistance to BmNPV, but they have serious limitations [158,160,161].
However, CRISPR-Cas has been successfully used as an antiviral therapy in B. mori, especially against BmNPV. Chen et al. (2017) [162] selected two genes involved in baculovirus replication and propagation: immediate early-1 (ie-1) and me53. The authors designed two gRNAs for each target gene. The engineered plasmid contained three expression cassettes: one for the Cas9 nuclease, a second for the gRNAs, and a third harboring a selection marker, the enhanced green fluorescent protein (EGFP). Although silkworm viability and fecundity were not affected, the transgenic homozygotes showed delayed larval development. After viral inoculation of both the wild-type and the transgenic group, the transgenic animals showed strong resistance to BmNPV [162]. Likewise, another group of researchers targeted two other genes, ie-0 and ie-2, using CRISPR-Cas9; after inoculation with BmNPV occlusion bodies, the survival rate of the transgenic silkworms reached 65% [163].
The multiplexing capability of the CRISPR-Cas9 technology was underlined in a study performed by Dong et al. (2019) [164]. The researchers targeted three different BmNPV genes essential for viral replication: ie-1, the major envelope glycoprotein, and the late expression factor-11. The success of this study revealed a promising strategy for inhibiting BmNPV in B. mori using the powerful CRISPR-Cas method.

Table 3.

| Target gene | Type of mutation | Delivery format | Purpose | Reported function | Reference |
|---|---|---|---|---|---|
| BmTudor | Deletions, insertions | Plasmid | Investigating the frequency of homologous recombination | Included in stress granule formation | [145] |
| BmIdgf | Deletions | mRNA | Analyzing the pigmentation mechanism | Plays a key role in the melanization mechanism | [146] |
| BmBngr-a2 | Deletions | Plasmid | Exploring functional studies of certain ion transport peptides | Involved in water homeostasis | [147] |
| BmTctp | Deletions | Plasmid | Functional analysis | Involved in different cell processes, such as growth, development, and proliferation | [148] |
| BmGr66 | Deletions | Plasmid | A better understanding of the specific feeding preference | Involved in silkworms' specific feeding preferences | [149] |
| BmOvo | Deletions | Plasmid | Functional analysis | Involved in germline sex determination and wing metamorphosis | [150] |
| BmPhyhd1 | Deletions | Protein | Functional analysis | Exhibits a great impact on certain features of the epithelial cells | [151] |
| BmWnt1 | Deletions | mRNA | Functional analysis | Involved in embryogenesis | [152] |
| BmE75b | Deletions | mRNA | Functional analysis | Controls developmental timing | [153] |
| BmOrco | Deletions | Plasmid | Exploration of adult mating behavior | Odorant receptor co-receptor involved in the silkworm olfactory system | [154] |
Conclusions
B. mori is one of the most important domesticated insects, both because of its great potential as a biotechnological platform for producing recombinant proteins and because of its success as an experimental model organism [92,110]. Owing to these prospects, a wide range of studies has been performed that accelerated fundamental research and beyond in silkworms. As the most practicable genome editing technology, the CRISPR-Cas system is currently used in many laboratories specializing in medicine, agriculture, the food industry, and entomology research [34,165]. A hallmark of CRISPR-Cas is the relative ease of designing CRISPR-based experiments. As reviewed elsewhere, numerous bioinformatics tools facilitate guide RNA design, as well as the prediction and evaluation of editing results [166]. In our experience, a thorough guide RNA design can also be achieved by using standard sequence alignment tools and manual inspection of potential target regions. Considering the above, the use of the CRISPR-Cas system as a gene editing tool has greatly advanced research in B. mori. Most studies have focused on using the CRISPR-Cas system to perform functional gene analysis, to elucidate certain mechanisms [135,148], or to enhance silkworm resistance to BmNPV [164]. By reviewing the most remarkable work in this field, we provide insights that support future research not only in B. mori but also in other insect experimental models.
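To make the manual-inspection step mentioned above concrete, the sketch below shows one simple form it can take: counting mismatches between a candidate guide and every same-length window of a reference sequence on both strands. The guide and reference shown are hypothetical, and the approach is deliberately naive; dedicated off-target prediction tools additionally weight PAM-proximal mismatches, allow bulges, and search genome-wide indices.

```python
# Minimal sketch (hypothetical guide and reference, illustrative only):
# a naive off-target pre-screen that reports every window of a reference
# sequence matching a 20-nt guide with at most `max_mismatches` mismatches.

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def off_target_hits(guide: str, reference: str, max_mismatches: int = 3):
    guide = guide.upper()
    hits = []
    for strand, s in (("+", reference.upper()), ("-", revcomp(reference.upper()))):
        for i in range(len(s) - len(guide) + 1):
            window = s[i:i + len(guide)]
            mismatches = hamming(guide, window)
            if mismatches <= max_mismatches:
                hits.append((strand, i, window, mismatches))
    return hits

# Hypothetical 20-nt guide and reference fragment:
guide = "GATCCTTGGCAGTACCGGTA"
reference = "TTGATCCTTGGCAGTACCGGTACCGGAGGATCATTGATCATTGACAGTACCGGTAAA"
for hit in off_target_hits(guide, reference):
    print(hit)
```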
There are interesting future prospects for using CRISPR-Cas technology in silkworms. By genetically manipulating the B. mori genome, major progress is being made toward a better understanding of fibroin and sericin synthesis and of other key processes in silkworms. Notable applications of CRISPR-Cas in B. mori include the development of enhanced silk fibers and the production of recombinant proteins of importance to various scientific fields. Although CRISPR-Cas9 exhibits lower off-target activity than other genome editing techniques, it cannot yet be confirmed that these unwanted effects are completely eliminated; there remains a clear need to eliminate, or at least further reduce, off-target activity.
|
v3-fos-license
|
2018-04-03T00:32:11.896Z
|
2006-06-01T00:00:00.000
|
263947234
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcanesthesiol.biomedcentral.com/counter/pdf/10.1186/1471-2253-6-6",
"pdf_hash": "426c579518d2e34de4711a70194e02fb5ee60495",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44218",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "37276ff389ae80acb055b38a129216f0b73461f2",
"year": 2006
}
|
pes2o/s2orc
|
Anesthesiologists' practice patterns for treatment of postoperative nausea and vomiting in the ambulatory Post Anesthesia Care Unit
Background When patients are asked what they find most anxiety provoking about having surgery, the top concerns almost always include postoperative nausea and vomiting (PONV). Only until recently have there been any published recommendations, mostly derived from expert opinion, as to which regimens to use once a patient develops PONV. The goal of this study was to assess the responses to a written survey to address the following questions: 1) If no prophylaxis is administered to an ambulatory patient, what agent do anesthesiologists use for treatment of PONV in the ambulatory Post-Anesthesia Care Unit (PACU)?; 2) Do anesthesiologists use non-pharmacologic interventions for PONV treatment?; and 3) If a PONV prophylaxis agent is administered during the anesthetic, do anesthesiologists choose an antiemetic in a different class for treatment? Methods A questionnaire with five short hypothetical clinical vignettes was mailed to 300 randomly selected USA anesthesiologists. The types of pharmacological and nonpharmacological interventions for PONV treatment were analyzed. Results The questionnaire was completed by 106 anesthesiologists (38% response rate), who reported that on average 52% of their practice was ambulatory. If a patient develops PONV and received no prophylaxis, 67% (95% CI, 62% – 79%) of anesthesiologists reported they would administer a 5-HT3-antagonist as first choice for treatment, with metoclopramide and dexamethasone being the next two most common choices. 65% (95% CI, 55% – 74%) of anesthesiologists reported they would also use non-pharmacologic interventions to treat PONV in the PACU, with an IV fluid bolus or nasal cannula oxygen being the most common. When PONV prophylaxis was given during the anesthetic, the preferred PONV treatment choice changed. Whereas 3%–7% of anesthesiologists would repeat dose metoclopramide, dexamethasone, or droperidol, 26% (95% confidence intervals, 18% – 36%) of practitioners would re-dose the 5-HT3-antagonist for PONV treatment. Conclusion 5-HT3-antagonists are the most common choice for treatment of established PONV for outpatients when no prophylaxis is used, and also following prophylactic regimens that include a 5HT3 antagonist, regardless of the number of prophylactic antiemetics given. Whereas 3% – 7% of anesthesiologists would repeat dose metoclopramide, dexamethasone, or droperidol, 26% of practitioners would re-dose the 5-HT3-antagonist for PONV treatment.
Background
When patients are asked what they find most anxiety provoking about having surgery, the top concerns almost always include postoperative nausea and vomiting (PONV) [1,2]. Anesthesiologists agree that PONV is an important issue for patients [3]. Since PONV is important to patients, improving the quality of anesthesia care includes reducing the incidence and severity of PONV. A large number of prospective randomized clinical trials have been completed to evaluate the efficacy of drugs and non-pharmacologic interventions to prevent PONV [4][5][6][7][8]. Data also exist on what prophylaxis interventions anesthesiologists in routine clinical practice actually administer for PONV [9].
However, fewer studies investigate the efficacy of antiemetics for the treatment of PONV once it occurs in the Post Anesthesia Care Unit (PACU). For example, a quantitative systematic review of treatment of established PONV published in 2001 found that metoclopramide, droperidol, isopropyl alcohol vapor, and midazolam were tested in one trial only, each with a limited number of patients [10]. That review also found that 5-HT3 antagonists had absolute risk reductions compared with placebo of 20% -30%, with a less pronounced anti-nausea effect.
The discrepancy between the plethora of trials on prevention of PONV and the paucity of trials on treatment of established symptoms is due, in part, to the difficulty in performing PONV treatment studies: a large number of patients must be enrolled for enough of them to eventually experience PONV and thus reach the required target sample size. In fact, only recently have there been any published recommendations, mostly derived from expert opinion rather than clinical trials, as to which regimens to use once a patient develops PONV [11].
Few USA data exist on practice patterns for treatment of PONV once it occurs in the PACU. PONV treatment data could be collected prospectively or abstracted retrospectively from the anesthesia and medical record. However, these methods make it difficult to compare practice patterns among practitioners, as neither method controls for differences in patient's severity of illness, demographics, or practice type. Other disadvantages of the chart review methodology include recording bias (e.g., some interventions may be provided but not documented) and the skilled (and costly) experts required to accurately collect data from the medical record.
To isolate physician practice from confounding variables, simple case vignettes have been validated as a method to elicit medical practice treatment patterns [12]. Vignettes are written cases that simulate actual clinical practice. Educators, demographers, and health service researchers have used these vignettes to measure processes in a wide range of settings [13][14][15].
The goal of this study was to assess the responses to a written questionnaire (with short hypothetical clinical vignettes) to address the following questions regarding PONV in the PACU: 1) If no prophylaxis is administered to an ambulatory patient, what agent do anesthesiologists use for treatment?; 2) Do anesthesiologists use non-pharmacologic interventions for PONV treatment?; and 3) If a PONV prophylaxis agent is administered during the anesthetic, do anesthesiologists choose an antiemetic in a different class for treatment?
Methods
Approval for this study was obtained from the Stanford University Human Subjects Committee.
Physician sample
We mailed a written questionnaire to 300 anesthesiologists selected at random from the 2002 American Society of Anesthesiology Directory available to us in printed form. A random number generator was used to select names, the number indicating how far down the list to go on each page. We chose 300 because, based on previous studies, we expected approximately a 33% response rate, which would generate our goal of 100 surveys returned. Questionnaires were sent by U.S. mail to each subject's address during December 2004. A stamped self-addressed envelope was included to enhance the response rate.
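The following sketch illustrates the sampling arithmetic described above: drawing a simple random sample of addressees from a roster and projecting expected returns under an assumed response rate. The roster, seed, and names are placeholders, and the sketch samples from the whole list rather than using the per-page offset procedure the authors describe.

```python
# Minimal sketch (hypothetical roster and seed, not the authors' procedure):
# draw a simple random sample of 300 addressees and project expected returns
# under an assumed 33% response rate.
import random

roster = [f"anesthesiologist_{i:04d}" for i in range(1, 10001)]  # placeholder directory

random.seed(2004)                      # arbitrary seed for reproducibility
mailing_sample = random.sample(roster, 300)

assumed_response_rate = 0.33
expected_returns = round(len(mailing_sample) * assumed_response_rate)
print(expected_returns)                # ~99-100 completed questionnaires expected
```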
Survey measurement methods
The survey instrument consisted of three parts. The first page was a cover letter, the second page requested basic demographic data about the respondent, and the third page contained the clinical scenarios or vignettes. [see Additional file 1] For example, the stem, or base case, for vignette #1 was, "A 22-yr-old woman status post outpatient pelvic laparoscopy under general anesthesia. She received no PONV prophylaxis. In the PACU, she reports PONV. What would your antiemetic order(s) be?" Questionnaire instructions included: "Assume all other relevant clinical history and exam is negative. Assume patients have received adequate analgesics." We did not specify whether responders could use monotherapy or combination therapy.
To assess how prophylaxis choice affected treatment, the above vignette stem stayed the same for vignettes #2, #3, #4 and #5, but the number of prophylactic anti-emetics increased from one to four. Vignette #2 had the patient receive a 5-HT3 antagonist for prophylaxis, vignette #3 a 5-HT3 antagonist and metoclopramide, and vignette #4 a 5-HT3 antagonist, metoclopramide and dexamethasone. Vignette #5 stated, "A 22-yr-old woman status post outpatient pelvic laparoscopy under general anesthesia. She received a 5-HT3 antagonist, metoclopramide, dexamethasone, and droperidol for prophylaxis. In the PACU, she reports PONV. What would your antiemetic order(s) be?" We also aimed to assess the second choice for treatment if initial PONV treatment fails. For each vignette we asked, "What is your second choice for treatment if the first treatment fails?" Four senior, board-certified anesthesiologists in the Stanford Department of Anesthesiology reviewed the vignettes to ensure adequate content.
Five other anesthesiologists (convenience sample) were asked to take the questionnaire twice, two days apart, in a non-random, non-anonymous fashion to assist with checking the internal reliability of the questionnaire. All five respondents answered every question. Of 90 eligible responses (demographic questions were excluded), 82% were answered the same way the second time.
Results
Of the 300 questionnaires mailed out, twenty-one were returned as undeliverable because the person was no longer at that address, and three were returned with the respondent stating they were no longer in active clinical practice. The 106 completed questionnaires returned gave a response rate of 106/276, or 38% (see Table 1 for demographics). Sixty-seven percent (95% confidence interval, 62%-79%) of the anesthesiologists we surveyed reported they would administer a 5-HT3 antagonist as first choice if no prophylaxis had been administered (Table 2) (ondansetron (53%) + dolasetron (13%) + granisetron (1%) = 67%). Metoclopramide, chosen as first option by 11% of anesthesiologists (95% confidence interval, 6.3%-18%), and dexamethasone (8% of anesthesiologists, 95% confidence interval, 3.3%-14%) were the next two most popular agents for PONV treatment when no prophylaxis had been given.
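As an illustration of how interval estimates for such reported proportions can be computed, the sketch below applies the Wilson score method to an assumed count of 71 positive responses out of 106 respondents (roughly 67%). The counts are illustrative only, and the resulting interval will differ somewhat from the published one depending on the exact counts and the interval method used.

```python
# Minimal sketch (illustrative counts, not the authors' analysis): Wilson score
# 95% confidence interval for a reported proportion.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

low, high = wilson_ci(71, 106)          # assumed: 71 of 106 respondents
print(f"{71/106:.0%} (95% CI {low:.0%} - {high:.0%})")
```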
Sixty-five percent (95% confidence interval, 55%-74%) of anesthesiologists would use non-pharmacologic interventions for treatment; an IV fluid bolus or oxygen via nasal cannula were the two most common choices (Table 2). PONV treatment choice changed depending on the prophylaxis agent given (Table 2). For example, only approximately 5% of anesthesiologists reported they would repeat dose metoclopramide, approximately 3% would repeat dexamethasone, and 7% would repeat droperidol. In contrast, when a 5-HT3 antagonist was used for monotherapy prophylaxis, a repeat dose of the 5-HT3 antagonist would be administered by 26% (95% confidence interval, 18%-36%) of survey responders. Promethazine utilization as a treatment choice increased as the number of other drugs (the 5-HT3 antagonist, metoclopramide, dexamethasone, and droperidol) given for prophylaxis increased across our vignettes (Table 2). If no prophylaxis was administered and initial therapy for PONV failed, the most common next choice for treatment (reported by 24% of anesthesiologists) was still a 5-HT3 antagonist, followed by promethazine.
Thirty-seven percent of respondents wrote some free text under the comments section. Forty-four percent of the comments explained or reinforced answers given in the main part of the questionnaire, while 33% of the written-in comments related to droperidol availability and the FDA Black Box warning. The remaining 23% of comments referred to a variety of issues such as formulary availability, "I examine every patient. What I do depends on my exam," "Need more treatment studies," and "Treatment and choice are often driven by the nursing staff."
Discussion
Our written questionnaire study found that 5-HT3-antagonists are the most common choice for treatment of established PONV when no prophylaxis is used. This pattern holds true following PONV prophylaxis with a regimen including a 5-HT3 antagonist, regardless of the number of prophylactic antiemetics received by the patient. Overall, anesthesiologists reported administering a total of eighteen different drugs for PONV treatment and twelve different non-pharmacologic interventions. Variations in medical practice such that physicians treat similar patients differently may be created by uncertainty about efficacy of interventions, formulary restrictions, or recent visits by the pharmaceutical sales representative, as well as differences in practitioners' residency training, judgment, and beliefs about drug acquisition costs and side-effect profiles [17][18][19]. Enhanced education and individualized feedback can change anesthesiologists' practice patterns [20,21].
Initial treatment
According to responses to a specific question on the questionnaire, almost all anesthesiologists (96%) preferred pharmacologic interventions for treatment, instead of non-pharmacologic ones (e.g., hydration, oxygen, acupuncture). The four anti-emetics chosen for prophylaxis in the hypothetical patient vignettes - a 5-HT3 antagonist, droperidol, dexamethasone, and metoclopramide - were intended to represent major receptor systems involved in the etiology of PONV, as well as agents commonly used in clinical practice. We did not specify the doses of the four chosen antiemetics because we were mainly interested in the choice, not the dose. To keep the survey length reasonable, we opted not to have a vignette with promethazine as a prophylaxis agent because, in our outpatient practice, promethazine is infrequently given for prevention. (Table 2 note: "Other" includes encouraging emptying of the oropharynx/spitting, assuring there is no bleeding, transfer to an inpatient ward, or adding glucose to the IV.)
Two-thirds of the anesthesiologists reported they would administer a 5-HT3 antagonist as first choice for PONV treatment if no prophylaxis had been given. The efficacy of the 5-HT3 antagonists may be more pronounced when a patient is vomiting than as treatment for nausea. There is weak evidence of dose-responsiveness with these drugs [22,23]; therefore, small doses of the 5-HT3 antagonists (ondansetron 1 mg) have been recommended for treatment. Interestingly, less than 15% of anesthesiologists reported using a combination of several agents for treatment, despite the fact that combinations of agents, or multi-modal therapy, may be increasingly used for prophylaxis.
Repeat dosing
A majority of anesthesiologists reported they changed to a different agent for PONV treatment than the one(s) used for prophylaxis. However, 26% of practitioners would administer a second dose of the 5-HT3 antagonist (ondansetron (22%) + dolasetron (3%) + granisetron (1%)) if initial 5-HT3 antagonist prophylaxis failed. This is despite consensus guidelines, mostly derived from expert opinion rather than clinical trials, which suggest that if PONV occurs within six hours postoperatively, patients should not receive a repeat dose of the prophylactic antiemetic. Prescribing information for ondansetron states that a second dose does not provide additional control if the first prophylactic dose has failed; a drug from a different class should be used for treatment [24].
Pharmacogenomics may affect the success of a 5-HT3 antagonist because some patients have extra copies of the CYP2D6 gene, a genotype consistent with ultrarapid metabolism [25]. A separate study of patients who failed prophylaxis with ondansetron found the complete response rate was significantly higher after treatment with promethazine (78%) than after treatment with repeat ondansetron (46%) [26]. A third study of 428 patients (of 2,199 prophylactically treated with ondansetron) with PONV in the PACU found that an additional dose of ondansetron was no better than placebo for reducing PONV two hours postoperatively [27].
Interestingly, anesthesiologists in our survey study were less likely to redose metoclopramide, dexamethasone, or droperidol for treatment (than a 5-HT3 antagonist) if any of those agents were administered for prophylaxis.
One quarter of anesthesiologists reported not having preprinted PACU orders specifically for PONV. This may increase the variability in PONV clinical practice and make it difficult for evidence-based care to be implemented. Better mechanisms for delivering clinical decision support (e.g., evidence-based guidelines) for PONV in the PACU may be possible. Four percent of our sample voluntarily indicated that their group had developed their own PONV treatment guidelines.
For "older generation" antiemetics there are few data on therapeutic efficacy for established PONV. As an example, in patients who failed prophylaxis with droperidol, the complete response rate was significantly higher after treatment with promethazine (77%) than after droperidol (56%) [26].
It may be that anesthesiologists believe that interventions shown to be effective for prevention will be similarly effective for treatment. For example, many of our responders indicated they would use supplemental oxygen to treat PONV, but most studies of oxygen have been for PONV prevention, with varying efficacy [28,29]. Other non-pharmacologic treatments suggested by our respondents such as IV fluid therapy, isopropyl alcohol inhalation and acupuncture/acustimulation have been studied, sometimes for prophylaxis not treatment, while others such as forced air warming have not [30,31].
Beyond six hours, PONV can be treated a second time with any of the agents used for prophylaxis except dexamethasone and scopolamine, which are longer acting. We found that 73% of anesthesiologists reported having preprinted PACU orders for PONV at their primary practice location such that the anesthesiologist can amend the orders via checkbox, or by writing in.
To keep the questionnaire a reasonable length, we did not ask respondents why they chose the different treatments. The next study will assess if choices are based on such items as department policy, cost considerations, perceived lack of evidence or insufficient knowledge on the part of the anesthesiologist, individual patient's condition, or nursing determination.
PONV treatment research requires more precise PONV assessment
The lack of consistent assessment of PONV is an issue because studies often define endpoints differently. Nausea is sometimes defined by patient self-report, and other times by an observer asking the patient for a yes/no answer. Some institutions define PONV as occurring when actual treatment of PONV occurs, which is easily quantified but is confounded because patients' perceptions of nausea severe enough to require intervention vary, and nurses have different thresholds for initiating treatment [32]. Often nausea and vomiting are not distinguished, and the symptoms of PONV are combined into a single PONV endpoint [33]. The challenge of multiple endpoints and heterogeneity of definitions needs to be addressed before aiming to establish the optimal management of PONV once it occurs in the PACU. The entire observation period should cover 24 hrs [34]. The treatment responses we obtained might have varied between a patient developing nausea alone or having vomiting.
Limitations
To control for the potential impact of biases from differing case-mix, we employed a postal questionnaire vignette methodology. The limitations of this method include that the subject sample depended on anesthesiologists' willingness to participate [35]. While not significantly different from other national surveys of professional organizations, the response rate of 38% is low and nonresponse bias may exist. This bias reflects the fraction of eligible subjects that do not respond and the difference in their answers compared to responders. Since it is unknown whether the physicians answering the questionnaire were systematically different from non-responders, there is no absolutely acceptable level of response.
The study had a relatively small sample size. Determination of adequate sample size may be difficult and depends on the desired precision of the results. A larger number of respondents is always possible (to enable subgroup analyses about differences among practice types, academic vs. private practice, for example), but we obtained a reasonable sampling of current practice patterns to help design larger studies of PONV treatment.
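As an illustration of how the desired precision drives sample size, the sketch below applies the standard formula for estimating a proportion p with margin of error d at 95% confidence, n = z^2 p(1 - p)/d^2. The proportions and margins shown are assumptions chosen for illustration, not values from this study.

```python
# Minimal sketch (illustrative assumptions, not the authors' calculation):
# sample size needed to estimate a proportion p with margin of error d
# at 95% confidence, n = z^2 * p * (1 - p) / d^2.
from math import ceil

def sample_size_for_proportion(p: float = 0.5, d: float = 0.05, z: float = 1.96) -> int:
    return ceil(z * z * p * (1 - p) / (d * d))

print(sample_size_for_proportion())         # worst-case p = 0.5, +/-5% margin: 385
print(sample_size_for_proportion(d=0.10))   # +/-10% margin, nearer this survey's scale: 97
```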
Also, our result that a 5-HT3 antagonist is the most commonly prescribed for PONV treatment may not be applicable in other countries.
Although vignettes are suitable for comparative analyses because they control for case-mix, further studies are needed to confirm that the results from vignette-based questionnaires are in fact a valid measure of the real-life clinical care provided by anesthesiologists. The open-ended comments section in our questionnaire did not uncover any problems with anesthesiologists stating they did not understand the questionnaire or that key elements were missing. Since our vignettes were hypothetical, the answers provided by the anesthesiologists may not reflect what they actually do.
Conclusion
5-HT3-antagonists are the most common choice for treatment of established PONV for outpatients when no prophylaxis is used, and also following prophylactic regimens that include a 5HT3 antagonist, regardless of the number of prophylactic antiemetics given. Whereas 3%-7% of anesthesiologists would repeat dose metoclopramide, dexamethasone, or droperidol, 26% of practitioners would re-dose the 5-HT3-antagonist for PONV treatment. PONV guidelines may help reduce this unnecessary redosing.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
0001-01-01T00:00:00.000
|
5320194
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/jir/1995/014923.pdf",
"pdf_hash": "593ae04dffa46221070afe4e7d054dc681e9e930",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44220",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "593ae04dffa46221070afe4e7d054dc681e9e930",
"year": 1995
}
|
pes2o/s2orc
|
Thymic Microenvironment and Lymphoid Responses to Sublethal Irradiation
Sublethal irradiation of the murine thymus has been a useful tool for depleting the thymus of dividing immature thymocyte subsets, to sequence thymocyte differentiation events occurring from radiation-resistant precursors. This massive reduction in thymocytes also represents a model in which the bidirectional interplay between the thymic stromal cells and lymphocytes can be investigated. The purpose of this study was thus twofold: to precisely map the initiation of thymopoiesis as a prelude to assessing the effects of injected mAb to novel thymic antigens; and to use a panel of mAbs to determine the alterations in the thymic stroma during the T-cell depletion and reconstitution phases. The striking finding from this study was that following T-cell depletion, there was a marked upregulation of specific stromal antigens, which retracted with the reappearance of T cells. Thus, following sublethal irradiation, there are modifications in the thymic microenvironment that may be necessary to support renewed thymopoiesis, and the complete restoration of the thymus involved the synchronous development of both the stromal and lymphocytic components.
INTRODUCTION
The thymic microenvironment consists of specialized cells that are both epithelial and nonepithelial in nature. Together these cellular elements are organized into well-defined cortical and medullary regions, throughout which developing thymocytes reside. As T cells mature within the thymus, they migrate from the cortex to the medulla and undergo phenotypic changes, including the acquisition of TcR, cytokine receptors, and the modulation of differentiation antigens such as CD4 and CD8 (Boyd and Hugo, 1991). The contribution of the thymic microenvironment toward this complex process, and that which thymocytes themselves impart on the thymic microenvironment, is gradually becoming defined. Thymic stromal cells such as TNC, macrophages (Mφ), and DC form complexes with developing thymocytes (Andrews and Boyd, 1985; Kyewski, 1986; van Ewijk, 1988; Shortman and Vremec, 1991; Gao et al., 1993), and stromal cell lines instruct some, but not all, specific stages of thymocyte development (Palacios et al., 1989; Nishimura et al., 1990; Tatsumi et al., 1990; Nagamine et al., 1991; Hugo et al., 1992; Watanabe et al., 1992; Anderson et al., 1993). Accordingly, damage to the microenvironment itself via drug treatment or ionizing irradiation impairs thymocyte differentiation (Adkins et al., 1988; Kanariou et al., 1989).
That the constitution and organization of the thymic microenvironment are in turn dependent on T-cell development has been well illustrated in a number of experimental systems. In SCID mice and in mice deficient in RAG-1 or p56lck, the medulla fails to develop due to the absence of mature TcR+ thymocytes (Shores et al., 1991; Molina et al., 1992; Suhr et al., 1992; Ritter and Boyd, 1993; van Ewijk et al., 1994). Similarly, in TcR knockout mice, where thymocyte development is blocked beyond CD4/CD8 cells, the medulla is greatly reduced in size (Philpott et al., 1992; Palmer et al., 1993; Mombaerts et al., manuscript in preparation). This effect has also been demonstrated in mice that have been injected with anti-CD3 mAb (Kyewski, 1991). Furthermore, a thymocyte-stromal cell signaling pathway has been identified in which medullary epithelial cells are activated via the phosphorylation of a 90-kD medullary epithelial-cell glycoprotein upon contact with CD4+CD8 thymocytes (Couture et al., 1992). Thus, it is evident that a bidirectional relationship exists between thymocytes and stromal cells (reviewed by Ritter and Boyd, 1993).
In the present study, we have mapped thymocyte reconstitution post-irradiation (PIrr) to ultimately devise a model for examining the in vivo effects specific mAbs have on thymopoiesis. Using a panel of mAbs recognizing mouse thymic stromal (MTS) antigens derived in this laboratory (Godfrey et al., 1990; Tucek et al., 1992), we also observed the effects of irradiation and the associated loss of T cells on the thymic microenvironment. In agreement with others (Huiskamp et al., 1983; Adkins et al., 1988), the initial loss of lymphocytes following irradiation resulted in the collapse of the stromal architecture, in particular, that of the cortex. Most importantly, however, it was clearly illustrated that the thymic stroma was "dynamic" immediately following irradiation, with marked upregulation in the expression of specific cortical and medullary stromal antigens. These selective changes in the thymic stroma may have been a direct result of the irradiation itself, revealed as a consequence of the rapid depletion of thymocytes, or a prerequisite for the initiation of thymopoiesis following irradiation.
Effect of Irradiation on Thymocyte Subpopulations
Total Thymocyte Cell Number. Total cell yields dramatically decreased immediately following sublethal irradiation (Fig. 1), from the normal adult value (day 0) of 6.0 ± 1.4 × 10^7 to the nadir of 4.2 ± 1.2 × 10^5 by day 5. Between days 6 and 7, however, there was extensive thymocyte proliferation, as indicated by the tenfold increase in total cell number to 4.4 ± 0.31 × 10^6, with an additional tenfold increase in number by day 12 to 3.3 ± 0.2 × 10^7. (Fig. 1 caption fragment: "... (day 0) mice and sublethally irradiated mice, 1-28 days postirradiation. Results represent the mean ± standard deviation of four independent experiments, except for days 8, 9, and 13, which are represented as the mean of two experiments.")

CD3, CD4, and CD8 Expression. To assess thymocyte subpopulations and their subsequent reconstitution, thymocytes on days 1-14, 21, and 28 PIrr were stained simultaneously for CD3, CD4, and CD8 antigen expression. By day 1, the thymus is virtually devoid of CD4/CD8 double-positive (DP) thymocytes (Fig. 2), the small remnant population being mainly of the CD4/CD8/CD3hi subset (Fig. 3). The resistance of mature populations to the irradiation was further reflected in the mature (CD3+) CD4 and CD8 single-positive populations. Although these cells markedly increased in proportion within the first 4 days (Figs. 2 and 3), they progressively decreased in number (from 4.3 ± 1.0 × 10^6 to 7.5 ± 0.7 × 10^4 and from 1.2 ± 0.4 × 10^6 to 1.8 ± 3.0 × 10^4, respectively) until day 10, presumably due to emigration (K. Kelly, personal communication). Similarly, CD3+CD4-CD8 thymocytes were unaffected proportionally (Fig. 3) but decreased in number from 3.0 ± 0.8 × 10^5 to 1.0 ± 1.0 × 10^4 by day 5.
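For orientation, the sketch below computes the fold changes implied by the mean total thymocyte counts quoted above; only the means are used and the error terms are ignored, so the figures are approximate.

```python
# Minimal sketch (uses only the mean counts quoted in the text):
# approximate fold changes in total thymocyte number after irradiation.
mean_counts = {
    "day 0": 6.0e7,
    "day 5": 4.2e5,
    "day 7": 4.4e6,
    "day 12": 3.3e7,
}

print(mean_counts["day 0"] / mean_counts["day 5"])   # ~143-fold depletion by day 5
print(mean_counts["day 7"] / mean_counts["day 5"])   # ~10-fold rebound between days 5 and 7
print(mean_counts["day 12"] / mean_counts["day 7"])  # ~7.5-fold further increase by day 12
```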
Thymic reconstitution began around day 4, with sequential, progressive increases in CD3-CD4-CD8- and then CD4+CD8+ cells. Thus, between days 5-7 following irradiation, all immature thymocyte populations markedly expanded. By day 7, the thymocyte profile with respect to CD3, CD4, and CD8 expression approximated that of the adult thymus (Fig. 2), although it is not clear whether the mature single-positive cells were remnant radiation-resistant cells that had not yet migrated or were derived from de novo production of differentiating precursors. From day 7, all thymocyte populations increased in cell number, and by day 21, normal adult values were virtually attained (data not shown).
MTS 32, 33, 35 and 37. MTS 32, 33, 35, and 37 identify antigens with the unusual property of being expressed on both thymocytes and thymic stromal cells, thereby providing an additional phenotypic profile of thymocytes distinct from that defined by CD3, CD4, and CD8 expression (Godfrey et al., 1990; Tucek et al., 1992). On thymocytes, MTS 32 stains all T cells except a subset of CD3hiCD4 CD8- cells, which exhibit a Th0/Th2-type cytokine profile (Vacari et al., 1994); MTS 33 and 37, which detect ThB and HSA, respectively, stain immature thymocytes and mature thymocytes, but are negative on peripheral T cells; MTS 35 detects the antigen TSA-1/Sca-2, which is restricted to immature thymocytes and has an expression inverse to that of CD3. Expression of these antigens increased to approximately the adult values by day 12. In accordance with the dramatic loss of immature thymocytes PIrr, ThB, TSA-1/Sca-2, and HSA expression were markedly reduced from approximately 90% to 2-5% by day 4, despite the fact that HSA and ThB are expressed on mature CD3+ cells, albeit at low levels, which are virtually the only cells remaining after irradiation.
Microenvironment
The effects of sublethal irradiation on the thymic microenvironment were examined using an extensive panel of mAbs reactive with epithelial and nonepithelial stromal cells and the vasculature; their reactivities in the normal adult thymus have been described elsewhere (Godfrey et al., 1990). The data are summarized in Tables 1 to 4. Epithelial Cells. Following irradiation, the loss of rapidly dividing thymocytes and the decrease in thymus size were associated with a collapse of the thymic stromal network, particularly that of the cortex. This was well illustrated by anti-CD4 and anticytokeratin labeling of the thymic epithelium (Fig. 5). Similarly, MTS 44 is pan-cortical epithelium, but also weakly stains infrequent, isolated medullary epithelial cells in the normal adult thymus. By 3 days PIrr, MTS 44+ epithelial cells formed a compact zone surrounding the medulla.
DISCUSSION
In view of the recent realization that a major feature of thymic organogenesis is the bidirectional, symbiotic developmental relationship between thymic lymphocytes and stromal cells, the purpose of this study was to examine the responsiveness of these stromal cells to a dramatic loss of T cells. That is, does the microenvironment encompassed within the stromal elements have the flexibility to alter in response to the need to reinitiate or enhance T-cell differentiation?
If so, can such a study reveal which specific stromal molecules are involved in this important process?
In addition to the anticipated dramatic loss of thymocytes, the most striking feature of the present study was the enhanced staining of selective thymic stromal-cell antigens following sublethal irradiation. There are three plausible explanations for this. Antigenic determinants, normally camouflaged by thymocytes, may have simply been exposed due to the loss of lymphocytes; the stromal cells may be responding directly to the effects of the irradiation; or the sudden loss of thymocytes may have caused an upregulation in the stromal-cell expression of these antigens. In accordance with clearly established data from previous studies (Takada et al., 1969; Huiskamp et al., 1983), following irradiation there was indeed a dramatic decrease in thymic mass and a collapse of the cortical stromal network through the loss of immature thymocytes. The developmental status of the remnant and regenerating T cells was verified not only by the changes in CD3, CD4, and CD8 profiles and the collapsed cortex, but also by the loss of TSA-1/Sca-2 (MTS 35), ThB (MTS 33), and HSA (MTS 37) positive cells. MTS 35 detects the antigen TSA-1/Sca-2, which is expressed only on immature thymocytes and is normally absent on peripheral T cells. Similar to HSA, ThB is present on most thymocytes, including the mature (CD3+) CD4 and CD8 single-positive populations, but is absent on peripheral T cells. By days 3-4 PIrr, there was virtually no staining for ThB and HSA. This was surprising, as phenotypically mature thymocytes were still present. It is possible that these antigens are radiosensitive or are cleaved by macrophage-released proteases, but this would imply that the stromal cells still stained by the mAbs express different, more stable conformations of the antigens. There is no evidence to suggest immigration of mature peripheral (HSA- and ThB-negative) T cells into the irradiated thymus. The apparent buildup of CD3+CD4+CD8- and CD3+CD4-CD8+ thymocytes presumably indicates a lag between phenotypic and functional (at least in terms of migration ability) maturity, or it reveals a population of cells that may never leave the thymus, reflecting the very low migration rate (~1%/day; Scollay et al., 1980). Given the dramatic loss of cortical thymocytes, this could in principle expose the stromal antigens and hence give the appearance of increased expression. This may in part be an explanation, but it does not account for the selective nature of the upregulated antigens and in any case would only be relevant in the cortex, yet several antigens were enhanced in the medulla (MTS 9, 14, 16, 17, and 29). From a purely technical viewpoint, it is also unlikely that the lymphocytes would normally sterically mask stromal antigens because they are nonoverlapping, independent cell types and were examined in the same plane of the section. Direct irradiation-induced enhancement of the stromal antigens is also unlikely because we have observed similar, albeit less pronounced, increases following thymocyte depletion with hydrocortisone and 5-fluorouracil (unpublished observations). We thus favor the hypothesis that although some of the increases may be part of the "housekeeping" repair mechanisms following architectural damage to the thymus, they are essentially a direct response of the stromal-cell subsets to specifically reestablish a microenvironment geared to reinitiating or elevating basal levels of T-cell differentiation.
Indeed, such specific upregulation of these molecules may be a generic feature of the thymus undergoing elevated thymopoiesis because we have found similar selective upregulation of stromal antigens during early ontogenesis (E14) and in the postcastration reversal of thymic atrophy (Price et al., manuscript in preparation).
Within the first 2 days following irradiation, TN thymocytes did not alter significantly, indicating their relative resistance to the irradiation; whether there were subtle shifts in the CD44/CD25/c-kit-defined TN subsets was not determined.
It is from within these TN cells that reconstitution begins around day 4, expands rapidly by day 7, and is virtually complete by day 21. High levels of [3H]TdR incorporation (Kadish and Basch, 1975; Sharp and Thomas, 1975) and increased lymphocyte reactivity of the mAbs MTS 35, 32, 37, and ThB also indicated this to be a time of active cell division and increased immature thymocyte content.
Based on the findings reported herein, a synopsis of the intrathymic events following sublethal irradiation would be as follows. The initial and probably sole direct effect of the irradiation is the immediate loss of immature thymocytes, excluding the nondividing TN precursors. This causes an associated collapse of the cortex, the epithelium of which is still present, but as a compact zone of MTS 44+ cells surrounding the medulla. From 24 to 72 h, there was a dramatic increase in macrophages and/or myeloid cells, revealed by enhanced MAC-1, MTS 17, MTS 37, and MTS 28 (data not shown) staining, presumably in response to the need to remove the dead and dying thymocytes. This increase in phagocytic cells is in agreement with earlier reports (Duijvestijn et al., 1982). The loss of thymocytes was also associated with a marked increase in expression of antigens associated with the vasculature. Because it is unlikely that significant angiogenesis occurred during this time, the apparent increase in vascular endothelium (MTS 12) was more likely due to its being exposed. There was, however, an upregulation in the expression of the secreted endothelial antigen detected by MTS 15 and of the extracellular matrix associated with the vasculature (MTS 16). Although the latter may be part of a tissue-rebuilding process, we have also observed such endothelial activation in the regenerating male mouse thymus following castration (Price et al., manuscript in preparation). Hence, as TN thymic precursors are localized around the thymic endothelium, it is possible that the vasculature initiates their proliferation and/or differentiation, evident from day 4 PIrr. The dynamic nature of the thymic stromal cells was further demonstrated by the dramatic increase in MTS 9 expression, which stained virtually the entire thymus from 24 to 72 h and gradually retracted, with the increase in newly generated T cells, to the predominantly medullary epithelium pattern by day 6.
Hence, despite the widely held view that thymic stromal cells are a sessile population, they are capable of rapid alteration at both the cellular and molecular levels in response to the sudden depletion of thymocytes or need for renewed thymopoiesis. The post-irradiation model provides a valuable means of revealing this and we are currently using it to test the functional significance of the antigens by injecting the appropriate purified mAbs.
Animals
CBA/CaH male mice, 4-6 weeks of age, were obtained from the Monash University Central Animal House. The mice were exposed to 7.5 Gy of whole-body γ-irradiation at a dose rate of 0.3 Gy/min at the Walter and Eliza Hall Institute, using an Eldorado 6 teletherapy unit (Atomic Energy of Canada, Commercial Products) charged with 5000 Ci of 60Co.
Mice were subsequently killed at 12 h, days 1-14, 21, and 28 after the irradiation. Results are based on a minimum of four independent experiments and a pool of one to six mice per experiment, depending on the time point.
Cell-Surface Staining
All labeling for CD3, CD4, and CD8 was performed using anti-CD3-FITC (145-2C11), anti-CD4-phycoerythrin (PE) (GK1.5), and anti-CD8-Biotin (B) (53-6.7) (CD4 and CD8 both from Becton Dickinson, CA). For three-color labeling, a streptavidin-phycoerythrin-Texas red conjugate (Tandem TM, Southern Biotech, AL) was used to detect the biotinylated mAb. To control for labeling efficiency, freshly prepared thymocyte suspensions from normal adult mice were routinely used. For MTS mAb thymocyte reactivity, cells were indirectly labeled with the desired mAb followed by FITC-conjugated sheep anti-rat Ig (Silenus Laboratories). Stained cells were monitored using a FACScan (Becton Dickinson). Dead cells and nonlymphoid cells were excluded from data acquisition on the basis of 0° and 90° scatter profiles. Analyses were performed using Lysys II research software (Becton Dickinson).
Immunohistology
Thymi were snap-frozen in liquid nitrogen, and 4 μm sections were cut using a cryostat. MTS mAb (Godfrey et al., 1990) labeling and the coexpression of epithelial cell determinants were assessed by double-labeling sections with the desired mAb and a polyvalent rabbit anti-cytokeratin Ab (Dako, Carpinteria, CA), respectively. Bound mAb was revealed by FITC-conjugated sheep anti-rat Ig (Silenus Laboratories) and TRITC-conjugated goat anti-rabbit Ig (Silenus Laboratories). By using these species combinations, no intermediate blocking steps were necessary. For three-color immunofluorescence, sections were labeled with anti-CD4-FITC, anti-CD8-Biotin (both from Becton Dickinson), and the polyvalent rabbit anti-cytokeratin Ab, washed, and stained with a mixture of streptavidin-phycoerythrin-Texas red conjugate (Tandem TM, Southern Biotech) and AMCA anti-rabbit Ig (Jackson Immunodiagnostics, WG). Stained sections were washed and mounted under a coverslip using veronal-buffered glycerol (pH 8.6) and examined using a Zeiss Axioskop fluorescence microscope. All photography was performed using Kodak 1000 ASA print or 1600 slide films.
|
v3-fos-license
|
2018-04-03T01:40:29.029Z
|
2015-10-21T00:00:00.000
|
17705824
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.pmedr.2015.10.003",
"pdf_hash": "695c1f8f7f4352602340123a46b16519c6e937f2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44222",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"sha1": "695c1f8f7f4352602340123a46b16519c6e937f2",
"year": 2015
}
|
pes2o/s2orc
|
Human papillomavirus vaccine initiation among 9–13-year-olds in the United States
The quadrivalent and 9-valent human papillomavirus (HPV) vaccines are licensed for administration among 9–26-year-old males and females, with routine vaccination recommended for 11–12-year-olds. Despite the availability of the vaccine at younger ages, few studies have explored vaccine uptake prior to age 13, and national HPV vaccination surveillance data is limited to 13–17-year-olds. Our objective was to examine rates and predictors of HPV vaccine initiation among 9–13-year-olds in the United States. A national sample of mothers of 9–13-year-olds in the United States (N = 2446) completed a 2014 Web-based survey assessing socio-demographic characteristics, child's HPV vaccination history, provider communication regarding the vaccine, and other attitudes and behaviors pertaining to vaccination and healthcare utilization. The main outcome measure was child's initiation of the HPV vaccine (i.e., receipt of one or more doses). Approximately 35% of the full sample and 27.5% of the 9–10-year-olds had initiated HPV vaccination. Females were more likely than males to have initiated HPV vaccination by the age of 13 but not by younger ages. Strength of health provider recommendation regarding HPV vaccination was a particularly salient predictor of vaccine initiation. Approximately a third of children may be initiating the HPV vaccine series before or during the targeted age range for routine administration of the vaccine. Because coverage remains below national targets, further research aimed at increasing vaccination during early adolescence is needed. Improving providers' communication with parents about the HPV vaccine may be one potential mechanism for increasing vaccine coverage.
Introduction
Human papillomavirus (HPV) is a highly prevalent sexually transmitted infection affecting both males and females (Hariri et al., 2011;Forhan et al., 2009;Giuliano et al., 2008). HPV is the primary cause of cervical cancer and leading cause of other anogenital and oropharyngeal cancers, in addition to causing genital warts (Schiffman et al., 2007;Munoz et al., 2004;Watson et al., 2008;Jayaprakash et al., 2011). Vaccination provides effective protection against HPV and its associated adverse health outcomes (Baandrup et al., 2013;Giuliano et al., 2011;Hariri et al., 2013;Markowitz et al., 2013). The three available HPV vaccines each protect against two "high-risk" HPV types (HPV16 and 18) associated with the majority of HPV-related cancers. The quadrivalent and 9-valent vaccines also protect against two "low-risk" types (HPV6 and 11) associated with 90% of genital warts. The 9-valent vaccine protects against five additional "high-risk" types (HPV31, 33, 45, 52, and 58) responsible for 10% of HPV-related cancers (Petrosky et al., 2015). While the three-dose vaccine series may be initiated as early as age nine (Centers for Disease Control and Prevention, 2010), the Advisory Committee on Immunization Practices (ACIP) has recommended that the vaccine be routinely administered to 11-12-year-old females and males (quadrivalent and 9-valent only) since 2006 and 2011, respectively. However, only 60% of female and 41.7% of male 13-17-year-olds have received at least one dose of the vaccine as of 2014 (Reagan-Steiner et al., 2015). Limited research exists regarding rates of vaccine uptake at ages 9-12.
In addition to socioeconomic status, race/ethnicity, and general and vaccine-specific healthcare utilization behaviors (Centers for Disease Control and Prevention, 2013;Kessels et al., 2012;Reiter et al., 2013), healthcare provider recommendation appears to be a key factor in parents' decision to vaccinate their children against HPV (Dorell et al., 2013;Holman et al., 2013;Stokley et al., 2014;Donahue et al., 2014). However, most studies of predictors of vaccine uptake have focused on adolescents ages 13 and older. Greater understanding of factors influencing initiation prior to and during the recommended age range for routine HPV vaccination may identify potential targets for intervention to increase vaccination coverage.
Our objective was to explore rates and predictors of HPV vaccine initiation among 9-13-year-old males and females in the United States. Because limited data are available regarding HPV vaccine initiation prior to age 13, one aim of the study was to estimate the vaccine initiation rate among 9-12-year-olds. We were also able to compare our observed rate of initiation by age 13 to that of previously published national estimates for 13-year-olds from the same time period. An additional aim of the study was to examine characteristics of "early" initiators (i.e., individuals receiving at least one dose of the HPV vaccine by ages 9-10, prior to the targeted age for routine recommendation) as well as predictors of initiation among children who had already reached the targeted age range for routine administration (i.e., ages 11 and up).
Sample
Data were collected in August 2014 as part of a larger Web-based survey assessing attitudes and behaviors related to HPV and influenza vaccination in a national sample of mothers of 9-13-year-olds in the United States. The study was approved by the IRB at Indiana University. Data collection was facilitated by Survey Sampling International (SSI), a survey research company that maintains national panels of adults in 37 countries. Each panel member may participate in up to four surveys annually, and participants are entered into a lottery to win a monetary prize through SSI. E-mail invitations were sent at random to members of SSI's U.S. panel meeting the study's target demographic. Initially, 3208 panelists responded to a generic e-mail invitation to participate in a survey, with 2860 women (89%) agreeing to complete the survey after being presented with a brief description of the study. Of those agreeing to participate, 2446 (86%) met the eligibility criteria for participation (i.e., they were 18 years of age or older and the mother or female legal guardian of at least one 9-13-year-old child who lived in their household). Participants with more than one 9-13-year-old child were prompted to answer questions about their youngest child in this age range. Although the participants were recruited nationally, the sample does not constitute a nationally representative sample.
Measures
We assessed HPV vaccination history using mother report of the number of doses of the HPV vaccine received by the child. Children who had received at least 1 dose of the vaccine were categorized as HPV vaccine initiators.
Additional items related to HPV vaccination included mother report of whether the child's healthcare provider had discussed with her that her child could receive the HPV vaccine, and, if so, the strength of recommendation (i.e., "In your opinion, how strongly did your child's healthcare provider recommend that your child receive the HPV vaccine?"), which was reported on a 5-point scale ranging from strongly discouraged to strongly recommended. These two items were combined and collapsed into four categories: "did not discuss," "no recommendation" (combining responses of "neither recommended nor discouraged," "discouraged," or "strongly discouraged" due to the low frequency of the individual categories), "recommended," and "strongly recommended." Mothers also reported whether the target child had any older siblings and whether any of these siblings had received the HPV vaccine.
Mothers responded to five items pertaining to general beliefs about the benefits of immunization (e.g., "It is important that people get vaccinated so that they can protect their health") using a 5-point response scale (Rickert et al., 2014) (Cronbach's alpha = .78), with a higher score on each item reflecting stronger beliefs about the benefits of vaccination. The scale reflects the perceived benefits construct of the Health Belief Model (Skinner et al., 2015), with items chosen based on previous reports assessing factors influencing parents' immunization of early adolescents (Rickert et al., 2014). "Perceived benefits of vaccination" was included as a continuous variable in our analyses, calculated as the mean of the five items.
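The measure construction described above amounts to a categorical recode plus a simple scale score. The sketch below illustrates one possible implementation; it is not the authors' code, and the column names (provider_discussed, provider_strength, benefit_1 to benefit_5) are hypothetical.

```python
# Illustrative sketch only: recode provider communication into the four analysis
# categories and score the perceived-benefits scale. All column names are assumed.
import pandas as pd

def recode_provider_communication(df: pd.DataFrame) -> pd.Series:
    """Collapse the discussion and strength items into four categories."""
    def recode(row):
        if row["provider_discussed"] == "no":
            return "did not discuss"
        if row["provider_strength"] in ("strongly discouraged", "discouraged",
                                        "neither recommended nor discouraged"):
            return "no recommendation"
        return row["provider_strength"]  # "recommended" or "strongly recommended"
    return df.apply(recode, axis=1)

def score_perceived_benefits(df: pd.DataFrame) -> pd.Series:
    """Mean of the five 1-5 items; higher = stronger perceived benefits."""
    items = [f"benefit_{i}" for i in range(1, 6)]
    return df[items].mean(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a multi-item scale (reported as .78 in the text)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```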
Mothers reported on multiple healthcare-utilization behaviors such as whether or not the child had visited a doctor, nurse, or other healthcare provider in the past year; whether the child has a regular healthcare provider (defined as a healthcare provider who knows the child and his/her health history), the type of location where the child typically visits a healthcare provider; and whether the child received the influenza vaccine during the most recent influenza season (approximately September 2013 to March 2014).
Sociodemographic characteristics included child's age, gender, race/ethnicity, geographic region, and health insurance type, as these variables have previously been associated with disparities in adolescent HPV vaccination.
Statistical analyses
Children whose mothers reported uncertainty regarding the number of doses received or for whom vaccination history was missing (n = 261) were excluded from analyses predicting HPV vaccine initiation, resulting in a total sample size of 2185, or 89.3% of eligible participants. HPV vaccination data were more likely to be missing among target children (n = 261) who were male, belonged to racial/ethnic minorities, and did not have private insurance, and also more likely to be missing among 9-year-olds and less likely among 13-year-olds. No differences were found with regard to geographic region.
First, we compared HPV vaccine initiators to non-initiators with regard to categorical variables using chi-square tests of independence. We conducted an independent-samples t-test to assess whether there were differences between initiators and non-initiators with regard to mothers' perceived vaccine benefits. All variables found to have significant bivariate associations with initiation were then included in a multivariate logistic regression model comparing initiators to non-initiators. Lastly, we used chi-square tests of independence to examine differences between initiators and non-initiators separately among two age groups: 9-10-year-olds (those younger than the age range targeted for routine recommendation) and 11-13-year-olds.
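A minimal sketch of this analysis sequence is given below, assuming an analysis data set with hypothetical variable names (initiated, perceived_benefits, provider_communication, and so on); it is not the authors' code.

```python
# Illustrative analysis pipeline: bivariate tests followed by multivariate
# logistic regression, with odds ratios obtained by exponentiating coefficients.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def bivariate_tests(df, categorical_vars):
    results = {}
    for var in categorical_vars:
        table = pd.crosstab(df[var], df["initiated"])
        chi2, p, dof, _ = stats.chi2_contingency(table)
        results[var] = {"chi2": chi2, "df": dof, "p": p}
    # Continuous predictor: mothers' perceived benefits of vaccination
    initiators = df.loc[df["initiated"] == 1, "perceived_benefits"]
    non_initiators = df.loc[df["initiated"] == 0, "perceived_benefits"]
    results["perceived_benefits"] = stats.ttest_ind(initiators, non_initiators)
    return results

def multivariate_model(df):
    # Predictors retained after the bivariate screen (illustrative list)
    model = smf.logit(
        "initiated ~ child_age + C(provider_communication) + flu_vaccine "
        "+ sibling_hpv_vaccinated + C(insurance) + private_office + female",
        data=df,
    ).fit()
    odds_ratios = np.exp(model.params)   # OR = exp(beta)
    ci_95 = np.exp(model.conf_int())     # 95% CI on the odds ratio scale
    return model, odds_ratios, ci_95
```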
Results
Descriptive statistics for all variables are presented in Table 1. Frequencies and means correspond to the sample of 2185 participants providing initiation data. Of these participants, 34.9% had initiated the vaccine series (i.e., one or more doses had been received). Table 1 presents the results of chi-square tests of independence comparing HPV vaccine initiators and non-initiators with α = .05. A significantly higher percentage of initiators were age 12 or 13, whereas a significantly higher percentage of non-initiators were age 9, 10, or 11. A significantly higher percentage of initiators were female, belonged to a racial/ethnic minority, had public health insurance or were uninsured, had an older sibling who received the HPV vaccine, received the flu vaccine during the most recent flu season, had visited a healthcare provider in the past year, and typically received healthcare services in a location other than a private office. Mothers of initiators reported significantly greater perceived general benefits of vaccination. A significantly higher percentage of initiators had mothers who reported that their child's healthcare provider had recommended or strongly recommended HPV vaccination, while a significantly higher percentage of non-initiators had mothers who reported that their child's healthcare provider had not discussed the possibility of vaccinating their child against HPV.
Comparing HPV vaccine initiators and non-initiators
All variables found to have significant bivariate associations with initiation were then included in a logistic regression model predicting vaccine initiation. Table 2 presents odds ratios representing the effect of each measure in univariate models (Table 2, Model 1) followed by a multivariate model (Table 2, Model 2). In the multivariate model, relative to mothers who reported that a healthcare provider did not discuss the HPV vaccine with them, mothers were more likely to report initiation if they reported that their child's healthcare provider (a) discussed but did not specifically recommend vaccination, OR = 8.97 (5.74-14.00), (b) recommended vaccination, OR = 21.88 (15.38-31.12), or (c) strongly recommended vaccination, OR = 38.60 (26.61-56.00). Increased odds of initiation were also found with every one-year increase in age, OR = 1.23 (95% CI 1.13-1.35); having an older sibling who received the HPV vaccine, OR = 2.52 (1.94-3.27); and having received the flu vaccine last flu season, OR = 2.51 (1.91-3.31). However, decreased odds of initiation were found for those receiving services in a private office, OR = .39 (.28-.54); for those with private insurance, OR = .72 (.55-.94); and for females, OR = .76 (.59-.98). Minority status, perceived benefits of vaccination, and past-year healthcare provider visit were not significant predictors of initiation in the multivariate model.
Because observed data suggest a substantial gender difference in vaccination, with females more likely to initiate vaccination than males, we conducted additional exploratory analyses to clarify the negative effect of female gender in our multivariate logistic regression model. We found that this effect was likely due to the association between gender and provider communication: when included in the multivariate logistic regression model, a gender × communication interaction term was statistically significant. To illustrate the interaction between child gender and healthcare provider communication (Fig. 1), we estimated the predicted probability of HPV vaccine initiation as a function of gender at each level of provider communication, using estimates obtained from multivariate logit regression models and including all remaining variables from the multivariate model in Table 2 (Long and Freese, 2005). The pattern observed in Fig. 1 suggests that as the strength of provider recommendation increased, the predicted probability of initiation also increased. Provider discussion of HPV vaccination, regardless of recommendation, appeared to have a stronger effect on initiation among males than among females. The predicted probability of initiation without discussion for females (.05) was significantly lower than for males (.07, 95% CI for difference: .001, .030). The predicted probability of initiation was significantly higher for males than for females following discussion without recommendation (.39 vs. .33, 95% CI for difference: .003, .122), recommendation (.61 vs. .54, 95% CI for difference: .005, .129), and strong recommendation (.73 vs. .68, 95% CI for difference: .004, .109).
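The predicted probabilities described above can be reproduced in outline with a model that adds a gender × communication interaction and then predicts over a grid of gender and communication levels while holding the remaining covariates at fixed values. The sketch below uses the same hypothetical variable names as before and simplifies the handling of the categorical covariate; it is not the authors' code.

```python
# Illustrative sketch: interaction model and predicted probabilities by gender
# and provider communication, with other covariates held at sample means.
import pandas as pd
import statsmodels.formula.api as smf

def predicted_probabilities(df: pd.DataFrame) -> pd.DataFrame:
    model = smf.logit(
        "initiated ~ female * C(provider_communication) + child_age "
        "+ flu_vaccine + sibling_hpv_vaccinated + C(insurance) + private_office",
        data=df,
    ).fit()

    levels = ["did not discuss", "no recommendation",
              "recommended", "strongly recommended"]
    grid = pd.DataFrame(
        [(g, lvl) for g in (0, 1) for lvl in levels],
        columns=["female", "provider_communication"],
    )
    # Hold the remaining covariates at their sample means (a simplification:
    # the categorical insurance variable is fixed at its most common value).
    for cov in ["child_age", "flu_vaccine", "sibling_hpv_vaccinated", "private_office"]:
        grid[cov] = df[cov].mean()
    grid["insurance"] = df["insurance"].mode()[0]

    grid["pred_prob"] = model.predict(grid)
    return grid
```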
Chi-square tests of independence also suggested significant differences in healthcare provider communication by gender (χ2 = 40.13, df = 3, p < .001). Among mothers of daughters, 41.2% (n = 503) reported that no discussion about HPV vaccination occurred; 8.6% (n = 105) reported that the HPV vaccine was discussed but either discouraged or not recommended; and 26.3% (n = 321) and 23.9% (n = 291) reported that their child's healthcare provider recommended or strongly recommended HPV vaccination, respectively. Among mothers of sons, 54.2% (n = 485) reported that no discussion about HPV vaccination occurred; 8.9% (n = 80) reported that the HPV vaccine was discussed but either discouraged or not recommended; and 20.4% (n = 183) and 16.4% (n = 147) reported that their child's healthcare provider recommended or strongly recommended HPV vaccination, respectively.
Comparing HPV vaccine initiators and non-initiators by age group
Initiation was reported by 27.5% of 9-10-year-olds and 42.4% of 11-13-year-olds. We examined factors associated with HPV vaccine initiation vs. non-initiation by ages 9-10 as well as initiation vs. non-initiation by ages 11-13 (Table 3) using chi-square tests of independence, α = .05. A gender difference was found in initiation by ages 11-13 but not by ages 9-10. Because existing research shows that there are significant gender differences in HPV vaccination among older adolescents, we conducted additional chi-square tests to explore whether the rate of initiation differed by gender at each age within the target age range. No significant differences were found with regard to the rates of initiation between females and males by age 9 (25.4% vs. 26.0%, respectively, p = .88), age 10 (32.7% vs. 26.4%, p = .15), or age 11.
Table notes: Mothers were asked to select multiple race/ethnicities when applicable; participants were categorized as belonging to a racial/ethnic nonminority (i.e., White) or minority (including 21.0% of participants who reported a single minority race/ethnicity and 12.8% who indicated multiple race/ethnicities). (c) Geographic location was determined from the mother's reported zip code and categorized based on the U.S. census region. (d) Category includes mothers reporting that the vaccine was "neither recommended nor discouraged," n = 174; "discouraged," n = 9; or "strongly discouraged," n = 2. (e) Includes community health clinic, university-based health clinic, emergency room or urgent care clinic, or other. (f) For variables consisting of more than 2 categories, denotes significant group differences for that category at p < .05.
Chi-square tests of independence also indicated differences in health insurance type by initiation status among 9-10-year-olds, but not among 11-13-year-olds, with a significantly greater percentage of those who had initiated the vaccine by ages 9-10 either receiving publicly funded health insurance or being uninsured (45.8%) compared to non-initiated 9-10-year-olds (39.8%). A significantly higher percentage of those initiated by ages 11-13 had visited a healthcare provider in the past year (96.5%) compared to those not initiated by ages 11-13 (91.4%), while this difference was not observed among 9-10-year-olds. Those initiating by ages 11-13 also had mothers who reported significantly higher perceived benefits of vaccination than those not initiating by ages 11-13, M(SD) = 3.68 (.75) vs. 3.45 (.81).
Table 2. Odds ratios (OR) from univariate (Model 1) and multivariate (Model 2) logistic regression models predicting HPV vaccine initiation among 9-13-year-olds.
Fig. 1 (caption fragment). Estimated values of all remaining variables held at their means. Error bars represent the 95% confidence interval of the predicted probability. Data were collected via a Web-based survey in 2014.
Discussion
We explored HPV vaccine initiation among 9-13-year-olds in the United States. To our knowledge, this is one of the first studies subsequent to the ACIP recommendation for routine immunization in early adolescence that assesses rates of HPV vaccine initiation among both males and females in this age range. This study adds to literature suggesting that provider communication and/or recommendation regarding HPV vaccination is a key factor in parents' vaccination decisions, suggesting that this may also be a key factor at earlier ages. We were also able to explore potential differences between children who initiated vaccination prior to and during the targeted ages for routine recommendation.
Our multivariate analyses indicated that child age, provider communication regarding HPV vaccination, flu vaccine history, sibling receipt of the HPV vaccine, health insurance type, and typical location of provider visits were significant predictors of initiation among 9-13-year-olds. Older age was associated with increased likelihood of having initiated the vaccine series, which is in keeping with previous estimates of coverage across later adolescence (Stokley et al., 2014; Dempsey et al., 2010; National Center for Health Statistics and National Health Interview Survey, 2010). The results of our study suggest that this effect is also present in the early adolescent period. Higher initiation rates across adolescence occur in conjunction with an increase in healthcare provider recommendation of vaccination across the age range (Vadaparampil et al.). Our results suggest that encouraging provider recommendation of vaccination at earlier ages could result in increased initiation rates at earlier ages, as discussed below. Additionally, parents who opt to vaccinate their children against the seasonal flu may be more accepting of other non-mandated immunizations such as the HPV vaccine, and those who have older vaccinated children may also be more accepting of the HPV vaccine for their younger children. High acceptance of HPV vaccination may be more common among parents of children receiving care outside of private clinics and/or who lack private health insurance benefits (Zimet et al., 2005). Again, our findings extend previous research among older adolescents into the younger end of the vaccine-eligible range. Interestingly, mothers' perceived benefits of vaccination, while associated with likelihood of initiation in bivariate analyses, were no longer a significant predictor of initiation in the multivariate analysis, suggesting that other factors may ultimately be more salient to parents' decisions regarding immunization.
Healthcare provider communication regarding HPV vaccination was a particularly strong predictor of initiation across our sample. This finding regarding younger children is consistent with previous research among parents of older adolescents indicating the significant impact of provider recommendation on parents' decision making surrounding HPV vaccination (Dorell et al., 2013; Holman et al., 2013; Stokley et al., 2014; Donahue et al., 2014; Alexander et al., 2014). Unfortunately, almost half of mothers in our sample reported that their child's healthcare provider did not discuss HPV vaccination. When discussion did occur, however, it most commonly involved a recommendation or strong recommendation; Fig. 1 suggests that the majority of mothers would elect to vaccinate their son or daughter if it was recommended. Provider recommendation may increase parent perception of the vaccine as safe and effective, leading to increased rates of initiation (Staras et al., 2014). Our finding that initiation among males may be more strongly affected by healthcare provider recommendation could reflect the relative recency of the ACIP's routine recommendation for males compared to females and the associated barriers to vaccination, such as less awareness of the importance of male vaccination among providers as well as among parents of sons. Such barriers are commonly reported by parents of sons, who may be willing to vaccinate once such barriers are addressed (Donahue et al., 2014).
Table 3. Comparison of HPV vaccine initiation status by ages 9-10 and 11-13. p value is for two-sided test, α < .05. Chi-square and t-tests were used to compare vaccine initiators and non-initiators with regard to categorical and continuous variables, respectively. Note: Data were collected via a Web-based survey in 2014. (a) For variables with more than two categories, this denotes significant group differences for the specific category at p < .05.
Interestingly, the gender gap in initiation did not begin to emerge in our sample until around ages 12 to 13. Similarities in initiation rates among males and females at younger ages may reflect a trend toward increased availability and equality of administration of the vaccine for males, as licensure and routine recommendation of the vaccine for males (2009 and 2011, respectively) occurred more recently than for females (2006 and 2007, respectively).
Although the rates were similar among both genders at ages 11-12, coverage remains discouragingly low given that this is the age range in which the vaccine should be routinely administered. Missed opportunities for vaccination are frequent: the Centers for Disease Control and Prevention estimates that 91.3% of females born in 2000 would have received at least one dose of the HPV vaccine by age 13 if it had been administered during healthcare visits when they received another immunization (Stokley et al., 2014). In our own sample, 92.3% of 9-13-year-olds who had not yet initiated the HPV vaccine had seen a healthcare provider within the past year, highlighting the importance of healthcare provider communication regarding vaccination whenever such an opportunity arises. In conjunction with previous research, our findings suggest that provider-focused interventions aimed at promoting communication about the HPV vaccine may reduce such missed opportunities and increase HPV vaccination coverage among youth (Perkins et al., 2015). Improving provider communication about the HPV vaccine during the early adolescent age range may be particularly beneficial, as older adolescents tend to seek preventive care services less frequently (Rand et al., 2007) and are also at greater risk for experiencing HPV exposure prior to vaccination.
The study has several limitations. First, data were not collected from a nationally representative sample. However, the demographics of our sample are comparable to the most recent available U.S. census estimates pertaining to the distribution of individuals across geographic regions (U.S. Census Bureau) as well as the distribution of racial groups among 9-13-year-olds (U.S. Census Bureau, 2015a,b). Second, parental recall of vaccination history rather than immunization records may produce accurate estimates of overall coverage but less accurate estimates among racial minorities or individuals of lower socioeconomic status (U.S. Census Bureau, 2015a,b). However, mother-reported vaccination status may be more accurate than reports from other caregivers (Ojha et al., 2013;Attanasio and McAlpine, 2014). Of note, the initiation rate among 13-year-olds in our 2014 sample (females, 53.8%; males, 40.7%) is similar to coverage estimates for 13-year-olds in the 2014 NIS-Teen sample (females, 51.1% ± 4.1; males, 38.9% ± 4.2), confirmed via vaccination records (Reagan-Steiner et al., 2015). Third, it is possible that parents who initiated vaccination and/or have more favorable attitudes toward vaccination may be more likely to recall having received positive provider communication about the vaccine. Similarly, healthcare providers familiar with a family's vaccination attitudes and history may be more likely to recommend the vaccine to vaccinefriendly parents, and parents may seek out providers with whom they share opinions about the importance of immunization practices. Fourth, we did not assess age at initiation, meaning that some individuals categorized as having initiated the vaccine by ages 11-13 may have in fact initiated by ages 9-10, which would result in an underestimation of individuals as early initiators. Finally, data were unavailable regarding panel members who viewed the initial e-mail invitation for participation and opted not to participate, which prevented us from identifying possible patterns of characteristics among nonresponders.
Conclusions
In one of the first studies to assess rates of HPV vaccine initiation among 9-13-year-olds following the ACIP recommendation for routine immunization in early adolescence, we found that one quarter of 9-10-year-olds had initiated the vaccine series, with coverage increasing across the age range, and no gender differences in initiation at younger ages. Strength of health provider recommendation to parents about HPV vaccination emerged as a key factor in parents' decision to vaccinate their children. Given the ubiquity of HPV and the vaccine's effectiveness against adverse health consequences, the frequency of initiation occurring prior to the targeted age for routine administration holds promise; however, vaccination coverage among early adolescents remains well below national health goals. Improvements in provider communication with patients and their parents could substantially contribute to HPV vaccination during early adolescence becoming truly routine.
Conflict of interest statement
Gregory Zimet has been an investigator on investigator-initiated research funded by Merck, Inc., has served as a consultant to Merck, Inc., and has received an unrestricted program development grant from GlaxoSmithKline. The other authors have no conflicts of interest to disclose. The funding organizations did not play a direct role in the design and conduct of the study; management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
|
v3-fos-license
|
2023-03-08T16:21:13.803Z
|
2023-02-27T00:00:00.000
|
257388337
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2304-8158/12/5/1015/pdf?version=1677494269",
"pdf_hash": "ac7c1ce081836812d80e8683e49e9c88bd252e97",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44224",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "d984dc8d371a849f37e2193fc1763381e9e0546e",
"year": 2023
}
|
pes2o/s2orc
|
Marine Capture Fisheries from Western Indian Ocean: An Excellent Source of Proteins and Essential Amino Acids
The Republic of Seychelles is located in Western-Central Indian Ocean, and marine capture fisheries play a key role in the country’s economic and social life in terms of food security, employment, and cultural identity. The Seychellois are among the highest per capita fish-consuming people in the world, with a high reliance on fish for protein. However, the diet is in transition, moving towards a Western-style diet lower in fish and higher in animal meat and easily available, highly processed foods. The aim of this study was to examine and evaluate the protein content and quality of a wide range of marine species exploited by the Seychelles industrial and artisanal fisheries, as well as to further assess the contribution of these species to the daily intake recommended by the World Health Organization (WHO). A total of 230 individuals from 33 marine species, including 3 crustaceans, 1 shark, and 29 teleost fish, were collected from the Seychelles waters during 2014–2016. All analyzed species had a high content of high-quality protein, with all indispensable amino acids above the reference value pattern for adults and children. As seafood comprises almost 50% of the consumed animal protein in the Seychelles, it is of particular importance as a source of essential amino acids and associated nutrients, and as such every effort to sustain the consumption of regional seafood should be encouraged.
Introduction
The significance of global food and nutrition security is anchored in the United Nations Sustainable Development Goals (SDGs) SDG2 "Zero Hunger" and SDG 3 "Good Health and Well-Being" [1]. It is strongly encouraged that an increased food production should come from well-managed ocean resources. Land-based resources are limited, and agricultural food production is one of the major greenhouse gas (GHG) emitters [2,3].
Seafood plays an important role in food and nutrition security, particularly in lowand middle-income countries [4]. The nutritional recommendations to eat fish are based on their lipid content and fatty acid composition [5], although seafood is also an important source of vitamins and minerals [6,7] and high-quality proteins [8,9] that are important for human health and disease prevention. Seafood is also recognized as a rich source of taurine [10], considered to have a positive impact on cardiovascular diseases [11,12]. Seafood may also be a source of toxic heavy metals such as mercury, arsenic, lead, and cadmium [6,13,14], as well as persistent organic pollutants such as dioxins and dioxin-like polychlorinated biphenyls (PCBs) [15][16][17][18]. However, public agencies in Europe have reviewed available evidence through 2021 and concluded that the possible adverse effects of mercury, dioxin, and dioxin-like PCB exposure are offset by the benefits of seafood consumption on cardio-metabolic diseases in general [16,17] and that seafood consumption during pregnancy is likely to benefit the neurocognitive development of children [15]. Additionally, a relatively recent review conducted by Hibbeln et al. [19] concluded with moderate and consistent evidence that seafood consumption during pregnancy and childhood had beneficial associations with neurocognitive outcomes. Since 1986, an ongoing research project in the Seychelles (Seychelles Child Development Study, SCDS; https://www.urmc.rochester.edu/labs/seychelles.aspx, accessed on 1 January 2023) has been examining associations between maternal methylmercury exposure and neurodevelopment in children [20][21][22].
The Republic of Seychelles, one of the 38 United Nations member states of the Small Island Developing States' group, is located in the Western-Central Indian Ocean. It includes a land surface of only 459 km² divided into 115 tropical islands scattered within an Exclusive Economic Zone (EEZ) of 1.3 million square kilometers [23]. The majority of the population resides on three islands of a large submerged mid-oceanic shelf called the Mahé Plateau. Marine capture fisheries play a key role in the country's economic and social life. In addition to the industrial tuna fisheries being a major pillar of the economy, artisanal fisheries are continuously of great importance to the local population in terms of food security, employment, and cultural identity. Fish is seen not only as a staple food but also as a delicacy in the local Creole cuisine, and the Seychellois are among the highest per capita fish-consuming people in the world, with a high reliance on fish for protein, consuming about 59 kg per year measured as live weight [23], which is equivalent to 48% of the animal protein consumed [24]. Pregnant women and mothers have been reported to consume as many as 12 meals consisting of fish per week [25]. However, the diet in the Seychelles, as elsewhere, is in transition, moving towards a Western-style diet lower in fish and higher in animal meat and easily available, highly processed foods [26]. This has contributed to the increase in the prevalence of obesity (BMI ≥ 30 kg/m²) between 1998 and 2004 from 4 to 15% in men and from 23 to 34% in women [27], and it highlights the importance of fish in the diet.
Adequate protein intake is essential for tissue maintenance and growth, with amino acids being important as building blocks of proteins and as intermediates in various metabolic pathways. The nutritional quality of a protein is dependent on the content of indispensable, also called essential, amino acids, i.e., amino acids that are not synthesized in our body to meet the human requirements. The World Health Organization (WHO) recommends a daily dietary intake of protein of 830 mg protein/kg body weight for healthy adults; an additional 1, 9, and 31 g protein/day for pregnant women in the first, second, and third trimester, respectively; and 910 mg/kg body weight for children, in addition to specific recommendations for each of the indispensable amino acids [28,29].
The protein content of marine capture fisheries can significantly vary between species and even within species depending on habitat, region, and season [30]. Access to local and up-to-date food composition data is therefore essential for dietary counselling, clinical nutrition, and improvements in nutrition security and the development of effective foodand nutrition-related policies [31]. To our understanding, the protein contents and quality of different marine capture species from the Seychelles have not been investigated, nor have any data been published.
The objectives of this work were to examine the amino acid composition and to evaluate the protein content and protein quality of a wide range of marine species exploited by Seychelles industrial and artisanal fisheries, as well as further to assess the contribution of these species to the daily intake of proteins and essential amino acids recommended by the WHO.
Sample Collection and Preparation
A total of 230 individuals from 33 marine species, including 3 crustaceans, 1 shark, and 29 teleost fish, were collected from the Seychelles waters during 2014-2016 (Table 1). Nearshore species were caught on the Mahé Plateau, where most of the artisanal fishing grounds are located [32], and offshore species were caught around the Mahé Plateau within the exclusive economic zone (EEZ) (Figure 1). After their capture, all organisms were measured (cephalothorax length (CL) for crustaceans, lower jaw-fork length (LJFL) for swordfish, and fork length (FL) and total length (TL) for other species) and weighed, and a piece of the edible part was collected from the tail for crustaceans and from the dorsal muscle for other species before being immediately stored at −80 °C. Samples were then freeze-dried for 72 h and ground to powder before amino acid analyses.
Amino Acid Composition and Protein Content
Amino acid composition was analyzed by dissolving approximately 40 mg of dried samples in 0.7 mL of distilled H2O and 0.5 mL of 20 mM norleucine (internal standard), which was then hydrolyzed as previously described [33,34]. Following hydrolysis, 100 µL aliquots of the hydrolysates were evaporated under nitrogen gas until complete dryness and re-dissolved to a suitable concentration in a lithium citrate buffer at pH 2.2. All amino acids were chromatographically analyzed using an ion exchange column followed by ninhydrin post-column derivatization on a Biochrom 30 amino acid analyzer (Biochrom Co., Cambridge, UK). Amino acid residues were identified using the A9906 physiological amino acid standard (Sigma Chemical Co., St. Louis, MO, USA), as described previously [35]. The concentrations of the following amino acids (histidine, his; isoleucine, ile; leucine, leu; lysine, lys; methionine, met; phenylalanine, phe; threonine, thr; valine, val; alanine, ala; β-alanine, β-ala; arginine, arg; asparagine, asn; aspartic acid, asp; cysteine, cys; glutamine, gln; glutamic acid, glu; glycine, gly; hydroxyproline, hyp; proline, pro; serine, ser; tyrosine, tyr; taurine, tau) were converted from dry weight to wet weight using a mean moisture percentage of 72-81%, depending on species, and expressed in mg per 100 g of raw edible portion (noted mg/100 g). Tryptophan is denatured during acid hydrolysis, while glutamine and asparagine deaminate during acid hydrolysis and were therefore included in the categories of glutamate and aspartic acid.
Protein content (g/100 g) was determined as the sum of the individual amino acid residues (the molecular weight of each amino acid after subtraction of the molecular weight of H2O), as recommended by the FAO [36], using norleucine as internal standard.
Table 1. Marine species collected from the Seychelles waters, with associated details. Length (presented as mean ± SD) refers to the mean carapace length for crustaceans, the mean lower jaw-fork length for swordfish, and the mean fork length for other teleost fish and for sharks. N = number of individuals.
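A minimal sketch of the protein calculation just described is given below; the molecular weights shown are standard values, while the example concentrations, moisture fraction, and function names are purely illustrative and are not taken from the study.

```python
# Protein (g/100 g wet weight) as the sum of amino acid residues: each amino acid
# is scaled by (MW - 18.02)/MW to remove the water lost in the peptide bond, the
# residues are summed, and the dry-matter value is rescaled by moisture content.
WATER_MW = 18.02  # g/mol

# Molecular weights (g/mol) of a few amino acids, for illustration only
AA_MW = {"lysine": 146.19, "leucine": 131.17, "threonine": 119.12}

def residue_mass(aa_mg: float, mw: float) -> float:
    """Convert free amino acid mass to amino acid residue mass."""
    return aa_mg * (mw - WATER_MW) / mw

def protein_wet_weight(aa_mg_per_100g_dry: dict, moisture_fraction: float) -> float:
    """Protein in g per 100 g of the raw edible (wet) portion."""
    residues_mg = sum(residue_mass(conc, AA_MW[aa])
                      for aa, conc in aa_mg_per_100g_dry.items())
    protein_g_dry = residues_mg / 1000.0
    return protein_g_dry * (1.0 - moisture_fraction)

# Hypothetical dry-weight concentrations (mg/100 g) of a partial profile, 76% moisture
example = {"lysine": 8000.0, "leucine": 7000.0, "threonine": 4500.0}
print(round(protein_wet_weight(example, 0.76), 1), "g/100 g wet weight")
```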
Statistical Analyses
Statistical analyses were performed using IBM SPSS statistics 27. All samples were measured in duplicate, and the number of individuals analyzed from each marine species is presented in Table 1.
Protein Content
The protein content, calculated as the sum of amino acids minus the molecular weight of water, was relatively constant among all fish species (Figure 2), varying between 13 and 17 g/100 g. Crustaceans had a lower protein content of approximately 11-12 g/100 g. The total amount of essential amino acids (EAAs) constituted half of the protein content for all species (Figure 2).
Figure 2. Protein and sum of essential amino acid (threonine, valine, methionine, isoleucine, leucine, phenylalanine, lysine, and histidine; tryptophan is denatured during acid hydrolysis and is thus not included) content (g/100 g) in species caught in the Seychelles waters.
Distribution of Essential Amino Acids
The distribution of EAAs was similar for all fish species (Figure 3), with leucine and the commonly limiting amino acid lysine being the most abundant amino acids (1500-1700 mg/100 g and 1800-2000 mg/100 g, respectively). These two amino acids were also the most abundant in crustaceans, although their contributions were slightly lower than in fish. The concentrations of histidine were highly variable among the 33 studied species (from 250 to 1350 mg/100 g), with the highest relative content being measured in tunas and mackerels. The contents of threonine (776-1093 mg/100 g), valine (867-1091 mg/100 g), methionine (450-697 mg/100 g), isoleucine (799-1071 mg/100 g), and phenylalanine (650-908 mg/100 g) were higher in the fish species compared with the crustaceans (on average 569, 646, 402, 623, and 578 mg/100 g, respectively).
Figure 3. Distribution of essential amino acids (mg/100 g) in species caught in the Seychelles waters. Thr, threonine; Val, valine; Met, methionine; Ile, isoleucine; Leu, leucine; Phe, phenylalanine; Lys, lysine; His, histidine (tryptophan is denatured during acid hydrolysis and is thus not included).
Taurine Concentration
The concentration of taurine considerably varied within and among the studied species (Figure 4). Skipjack tuna and common dolphinfish were lowest in taurine (<20 mg/100 g), while humpback red snapper and peacock hind were highest in taurine (440 mg/100 g).
Protein Content
The protein content was calculated based on the amount of total amino acids minus the molecular weight of water, as recommended by the FAO [36]. This procedure efficiently hydrolyzes most of the peptide bonds while also degrading some amino acids. Tryptophan is denatured during acid hydrolysis, while glutamine and asparagine deaminate during acid hydrolysis and were therefore included in the categories of glutamate and aspartic acid [37]. This may have resulted in a potential underestimation of the actual protein content and a lower protein content compared with that measured with the commonly used Kjeldahl method [33]. The protein content was similarly high in all marine species, with the exceptions of spanner crab and lobsters, which showed slightly lower protein contents.
Contribution to Daily Recommended Intake
The Codex nutritional reference values for protein are based on the best available scientific knowledge of the daily amount needed for good health (830 mg protein/kg body weight for adults and 910 mg/kg body weight for children). Based on these reference values and considering a portion size of 150 g for adults and 75 g for children, the contributions of one portion of each capture fishery species to the recommended dietary intake (RDI) for a 65 kg adult person, a 65 kg pregnant woman in the third trimester, and a 10-year-old child (average body weight of 30 kg) were estimated (Figure 5). One portion of swordfish and crustaceans (spiny lobster and spanner crab) would cover 30% of the adult, pregnant woman, and child RDIs. All other species would contribute 40-45% of these RDIs. The FAO and WHO have recommended dietary intakes of each of the indispensable amino acids, based on growth and nitrogen balance. The percentage coverage of each of these amino acids for a 65 kg person by a 150 g portion of different species is illustrated in Figure 6. One portion of crustacean from the Palinuridae (spiny lobster) and Raninidae (spanner crab) families covered 50% of phenylalanine; 60-67% of valine, leucine, isoleucine, and histidine; and 90% of threonine, methionine, and lysine. One portion of fish covered approximately 70% of the daily recommended amount of phenylalanine and around or above 100% of the daily recommended amount of the other indispensable amino acids.
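The portion-contribution estimate above is simple arithmetic, sketched below with the reference values quoted in the text; the 16 g/100 g protein content is an illustrative mid-range figure rather than a value for any particular species.

```python
# Percentage of the daily protein requirement covered by one portion.
# Reference values from the text: 830 mg/kg (adults), 910 mg/kg (children),
# +31 g/day in the third trimester of pregnancy; portions of 150 g and 75 g.
def rdi_coverage(protein_g_per_100g: float, portion_g: float,
                 body_weight_kg: float, rdi_mg_per_kg: float,
                 pregnancy_extra_g: float = 0.0) -> float:
    portion_protein_g = protein_g_per_100g * portion_g / 100.0
    daily_requirement_g = rdi_mg_per_kg * body_weight_kg / 1000.0 + pregnancy_extra_g
    return 100.0 * portion_protein_g / daily_requirement_g

adult = rdi_coverage(16.0, 150.0, 65.0, 830.0)           # ~44% for a 65 kg adult
child = rdi_coverage(16.0, 75.0, 30.0, 910.0)            # ~44% for a 30 kg child
pregnant = rdi_coverage(16.0, 150.0, 65.0, 830.0, 31.0)  # third trimester adds 31 g/day
```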
Protein Quality
In addition to being building blocks for protein synthesis, each amino acid has its own metabolic pathway. The 20 proteogenic amino acids are classified as non-essential or essential. The nine essential amino acids (threonine, valine, methionine, isoleucine, leucine, phenylalanine, lysine, histidine, and tryptophan) cannot be synthesized in the human body from naturally occurring precursors at a rate needed to meet the metabolic requirements.
In this work, protein quality was determined based on the amount of essential amino acids. All analyzed fish species were high in lysine and threonine, which are strictly indispensable [1]. One portion of crustacean meat or fish filet was found to contribute significantly to the daily requirements of the indispensable amino acids, and one portion of fish filet was found to meet the requirements of threonine, methionine, and lysine. However, it is important to mention that all samples were analyzed raw, and several factors may influence amino acid contents during processing and household preparations such as boiling, baking, frying, and smoking [38,39], which may affect the amino acid contribution to the diet. These values thus indicate the amount available in pre-processed food and not the exact amount actually absorbed. The chemical score of amino acids, used to assess the amount of limiting amino acids, can be used to determine whether a diet meets the required amount of indispensable amino acids. The chemical score equals the ratio between each indispensable amino acid in the food protein and the corresponding amino acid in a reference protein proposed by the FAO/WHO. The protein-digestibility-corrected amino acid score can thereafter be calculated as the amino acid score multiplied by the true digestibility in humans [40]. Proteins of animal source normally have a chemical score of 1.0, while the scores of cereal proteins normally range from 0.4 to 0.6. All species analyzed in this work had a high protein quality, with the contents of all indispensable amino acids above the reference scoring pattern for adults [40], indicating that the protein quality was superior. As tryptophan was denatured during the acid hydrolysis of the samples, it was not possible to assess whether it is a limiting amino acid.
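The scoring calculation described above can be expressed compactly; the sketch below uses placeholder numbers rather than the FAO/WHO reference pattern, and the conventional cap of the score at 1.0 is an assumption stated in the comment.

```python
# Chemical (amino acid) score and protein-digestibility-corrected amino acid
# score (PDCAAS), using placeholder values (mg amino acid per g protein).
def chemical_scores(food: dict, reference: dict) -> dict:
    """Ratio of each indispensable amino acid to the reference scoring pattern."""
    return {aa: food[aa] / reference[aa] for aa in reference}

def pdcaas(food: dict, reference: dict, true_digestibility: float) -> float:
    """Limiting amino acid score times true digestibility, capped at 1.0 by convention."""
    limiting = min(chemical_scores(food, reference).values())
    return min(1.0, limiting * true_digestibility)

fish = {"lysine": 91.0, "threonine": 46.0, "sulfur_aa": 40.0}   # placeholder profile
ref  = {"lysine": 45.0, "threonine": 23.0, "sulfur_aa": 22.0}   # placeholder pattern
print(pdcaas(fish, ref, true_digestibility=0.95))  # -> 1.0, i.e., no limiting amino acid
```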
Taurine
Accumulating evidence supports the idea that an increased dietary intake of taurine, a naturally occurring sulfonic acid, may be beneficial, as it has been documented to attenuate hypertension, suppress atherosclerosis, and exhibit antioxidative and anti-inflammatory properties [41][42][43]. Fish is recognized as a rich source of taurine [44], and urinary taurine may be used as a marker of the level of fish consumption [45]. In this study, taurine content greatly varied not only between species but also within different specimens of a given species. As a free amino acid, taurine is easily lost in handling and preparation, and its content is often found to significantly vary, even within one fillet [38,46]. A stricter control of all parts of the value chain would be necessary to avoid such variation. The levels of taurine measured in this study were within normal ranges compared to seafood in general, but as this is the first report of the levels of taurine for many of these species, comparison is challenging. The highest content was analyzed in the demersal species, humpback red snapper and tomato hind.
Regional Food and Nutrition Security
Food traditions are important. Food provides nutrients, and changes in lifestyle include nutrition transitions; the decreasing consumption of local foods is often associated with an increase in the consumption of carbohydrate-dense and highly processed foods. Such foods are normally cheaper and high in sugar, fat, and additives. The high intake of refined food products has led to a worldwide elevated burden of overweight and obesity [47], and the Seychellois are not an exception [26]. Malnutrition, excessive caloric consumption, and coexisting micronutrient deficiencies, combined with declining activity levels, may lead to increases in, and the earlier onset of, lifestyle diseases, and global food systems may be leading to the poorer health of many [48][49][50].
Conclusions
This study provides detailed information on the concentrations of essential and nonessential amino acids and the protein content and quality of a wide range of tropical capture fishery species from the Seychelles (Western Indian Ocean) caught in both nearshore and offshore waters. The species' contributions to the recommended daily intake values of indispensable amino acids from the WHO were assessed, and implications for regional food and nutrition security were discussed.
The captured fish species analyzed in this work had high contents of high-quality protein, with all indispensable amino acids above the reference value pattern for adults and children. Such species with high protein contents of superior quality are perceived as healthy foods. As fish makes up as much as 48% of the consumed animal protein in the Seychelles, it is of particular importance as a source of essential amino acids and associated nutrients such as fatty acids, taurine, vitamins, and minerals. Accordingly, every effort to sustain the consumption of regional fish should be encouraged.
Acknowledgments: The authors would like to thank all fishermen and crews who assisted with sampling. Special thanks go to the SFA staff (in alphabetic order: Clara Belmont, Dora Lesperance, Kettyna Gabriel, Maria Rose, Natifa Pillay, Rodney Melanie, Rona Arrisol, and Stephanie Hollanda) for their help in processing the samples and to Emmanuel Chassot (IOTC) for assisting with data management. The authors are also grateful to chief engineer Guro K. Edvinsen and senior engineer Hanne K. Maehre for analytical contributions.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2018-09-22T23:12:49.874Z
|
2018-09-14T00:00:00.000
|
52271369
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1049732318798358",
"pdf_hash": "beff28f0072d14b55f84050f59f9f73211849258",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44227",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "beff28f0072d14b55f84050f59f9f73211849258",
"year": 2018
}
|
pes2o/s2orc
|
Wandering as a Sociomaterial Practice: Extending the Theorization of GPS Tracking in Cognitive Impairment
Electronic tracking through global positioning systems (GPSs) is used to monitor people with cognitive impairment who “wander” outside the home. This ethnographic study explored how GPS-monitored wandering was experienced by individuals, lay carers, and professional staff. Seven in-depth case studies revealed that wandering was often an enjoyable and worthwhile activity and helped deal with uncertainty and threats to identity. In what were typically very complex care contexts, GPS devices were useful to the extent that they aligned with a wider sociomaterial care network that included lay carers, call centers, and health and social care professionals. In this context, “safe” wandering was a collaborative accomplishment that depended on the technology’s materiality, affordances, and aesthetic properties; a distributed knowledge of the individual and the places they wandered through; and a collective and dynamic interpretation of risk. Implications for design and delivery of GPS devices and services for cognitive impairment are discussed.
Introduction
Of the many problems faced by people with cognitive impairment and their carers, wandering is considered one of the most challenging (Cipriani, Lucetti, Nuti, & Danti, 2014;Lai & Arthur, 2003). The term "wandering," though rarely defined, is used to describe a number of different behaviors based on the attributes of walking and movement. Attempts to understand wandering in people with cognitive impairment generally fall under two main perspectives. On one hand, it is seen as a symptom of cognitive impairment, defined according to observable actions (Martino-Saltzman, Blasch, Morris, & McNeal, 1991;Tariot, 1997). A contrasting perspective presents wandering as a social practice, through an appreciation of how it relates to people's identity and sense of place (Brittain, Degnen, Gibson, Dickinson, & Robinson, 2017;Graham, 2015;Martin, Kontos, & Ward, 2013).
In this article, we draw on the existing literature on wandering and on technological developments for the management of wandering through the use of global positioning system (GPS) tracking technology, to propose a further shift: from viewing wandering as a social practice to viewing it as a sociomaterial practice. We argue that it is the mutual configurability of the social and the material that is critical for successful and appropriate solutions to the challenges of wandering. We apply strong structuration theory (Greenhalgh & Stones, 2010) to analyze how GPS tracking technology is used in practice to care for people with cognitive impairment. We elaborate wandering as a sociomaterial practice through detailed case studies, illustrating how wandering is (and is likely to be increasingly) mediated by technology. In addition, we highlight the importance of applying a sociomaterial perspective to the development of GPS tracking solutions.
The frequency of wandering episodes is difficult to quantify, due to difficulties defining and recording such instances. It is estimated that 5% of wandering instances result in the person becoming physically harmed (Petonito et al., 2013), but wandering often causes great anxiety to carers (Brittain et al., 2017).
In contrast with this literature, research within the person-centered care tradition (academic nursing and the critical social sciences) takes the view that identity and sense of self reside at the level of the body and are enacted through habitual embodied actions and routines (Graham, 2015; Martin et al., 2013). In short, walking is not merely a way to travel but a social practice. Graham (2015), for example, uses Ingold's (2011) concept of wayfaring (people inhabit the world through the embodied experience of walking) to understand the significance of movement for people with dementia living in a residential care home.
The nonbiomedical perspectives on wandering underscore the importance (and the ethical implications) of enabling freedom of movement for people with cognitive impairment. But this freedom is also associated with risks. Brittain et al. (2017), for example, found that outdoor spaces often provided positive experiences for people with cognitive impairment, but such spaces could also be threatening and unfamiliar to the person. They also found that exploring outside the home was invariably entangled with caregivers' fears about a person's well-being and safety. Therefore, when devising practical steps to facilitate wandering, it is important to pay particular attention to ways in which these contrasting perspectives-wandering as a healthy, meaningful practice to be supported and wandering as a dangerous and problematic practice, whose risks need to be carefully managed-may be reconciled.
The concepts of risk and risk management have become central to everyday life in late modernity (Beck, 1992). The dominant view in health care holds that risk should be avoided where possible and, if it cannot be, managed within acceptable limits; who gets to set those limits, and how, then becomes an important issue. The field of risk assessment is founded on the implicit assumption that evaluating risk is a technical matter to be resolved through objective and rational means in order to minimize uncertainty. The National Service Framework for Older People in the United Kingdom, for example, talks about "risk management strategies" to reduce the risk of falling or becoming lost (Department of Health, 2001). More broadly, there have been a number of governmental risk management initiatives in health and social care, with increasing attention to ensuring standards and compliance in key areas such as consent to treatment, personal safety, and supervision. However, such pressure can lead to a greater focus on minimizing harm to patients at the expense of more positive approaches to promoting health and social well-being that involve greater inherent risk (Taylor, 2006).
In contrast to this "objective assessment and management" approach to risk, a classic text by anthropologist Mary Douglas highlighted the ways in which hazards and dangers come to be defined by the local social and cultural context (Douglas, 1992). Risk is not part of objective reality but a multidimensional, social construct, perceived in different ways by different people in different social contexts and circumstances. This sociocultural approach to risk perception is important when exploring the management of wandering behavior and considering how to maximize freedom of movement for the individual, while helping maintain safety. Take the example of Atul Gawande's moving account of the decline and death of his father (including a review of the literature), in which the tension between autonomy (the father's priority) and safety (the priority of both his children and care professionals) loomed large (Gawande, 2014). Gawande describes this tension as one of the most important ethical conundrums of our age and makes a cogent case for careful, individualized trade-offs between support for autonomy and protection from harm. Containing and constraining the vulnerable older person on the grounds of "safety" to the exclusion of their dignity, personhood, and fulfillment, especially in the face of loss of mental capacity, is dehumanizing. A contemporary ethics of care can and must rise above such approaches. As the delivery of care increasingly relies upon technology-based interventions, the ethics of care must be woven into both its social and material practices.
In the case of wandering in the cognitively impaired, mobile devices in particular have assumed a growing role. Hence, to understand how these technologies are used (or why they are not used to their full potential) to support meaningful and fulfilling walking by the individual with cognitive impairment, it behoves us to reframe wandering: No longer can it be understood simply as a social practice. Instead, it has become a sociomaterial practice, in which the material properties and affordances of the technology, and the negotiation of the relationship between these devices and the social context, become key elements of the analysis.
Using GPS Tracking to Manage the Risks of Wandering
A potential technological support for those who wander involves the person wearing a GPS tracking device (e.g., on a wristband or belt) that alerts relevant caregivers (often a remote monitoring center in the first instance, who in turn contact a nominated carer) when the device leaves a predefined geographical area (a "safe zone" bounded by a "geofence"). The use of GPS tracking to locate people with cognitive impairment is ethically controversial and divides opinion (Landau, Auslander, Werner, Shoval, & Heinik, 2010;Robinson et al., 2007). There is a perceived need in some circles to work toward greater consensus on ethical principles on who should be offered such devices and when, with calls to develop clear policies and strict procedures to protect against the "misuse" of GPS tracking (Landau & Werner, 2012;Rialle, Ollivet, Guigui, & Hervé, 2008;Welsh, Hassiotis, O'Mahoney, & Deahl, 2003). Although such efforts are laudable, it is arguable whether any attempt to resolve ethical tensions through rational assessment criteria and standardized procedures, underpinned by a set of agreed ethical principles, could possibly succeed, given that the tensions between autonomy and safety will play out differently for different individuals in different situations. The philosophical question of whether a situated, narrative approach to the ethics of GPS tracking may be more appropriate than a focus on universal principles is beyond the scope of this article (but see Pols, 2010). That aside, we argue that the balance between autonomy and safety is more likely to be achieved as a situated accomplishment, justified by a narrative account of the person-in-context, than via the technocratic application of generic principles or criteria.
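To make the alerting mechanism just described more concrete, the sketch below shows, in schematic form, the kind of check that underlies a geofence breach alert. It is a minimal illustration only, assuming a circular safe zone defined by a centre point and a radius in metres; the coordinates, radius and names are hypothetical, and the commercial devices and monitoring centres discussed in this study implement location filtering and alerting in their own, more sophisticated ways.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_breached(fix, safe_zone):
    """Return True if a GPS fix lies outside a circular safe zone (simplified model)."""
    distance = haversine_m(fix["lat"], fix["lon"], safe_zone["lat"], safe_zone["lon"])
    return distance > safe_zone["radius_m"]

# Hypothetical safe zone covering the home and a few familiar nearby places.
safe_zone = {"lat": 51.5407, "lon": -0.1430, "radius_m": 400}
latest_fix = {"lat": 51.5461, "lon": -0.1390}  # latest position reported by the device

if geofence_breached(latest_fix, safe_zone):
    print("Raise alert: wearer appears to have left the safe zone")
```

As the case studies below illustrate, the technical check is only the starting point: whether an out-of-boundary reading warrants action is interpreted socially, by carers who know the person and the places they move through.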
Little if any research on the use of GPS devices has centered on in-depth study of the actual experience and uses of the technology. But as we and others have previously shown more generally in relation to assisted living technologies, it is necessary to understand how such technologies are actually used "in the wild" and how people come to obtain meaning and function through their use (Gibson, Dickinson, Brittain, & Robinson, 2015; Greenhalgh et al., 2013; Greenhalgh et al., 2015; Pols, 2010; Procter et al., 2014; Roberts, Mort, & Milligan, 2012). The current literature on the application of GPS tracking to address problems of wandering has ignored this sociomaterial dimension. Studying the interplay between the technology (and its material properties) and social agency will require research strategies that reveal how the technology can shape, and become shaped by, the social roles, relationships, and perceptions involved in the management of wandering.
A Case Study of GPS Tracking in Cognitive Impairment
The analysis presented in this article is based on an ethnographic study of the lived experience of wandering by people with cognitive impairment and of how this was managed in practice through lay and professional care networks using GPS tracking technology. The study, funded mainly by the National Institute for Health Research (NIHR), was linked to a wider program of research, Studies in Co-creating Assisted Living Solutions (SCALS), funded by the Wellcome Trust Society and Ethics Program. The SCALS program is following six organizational case studies of technology-supported health and social care, described in detail elsewhere. Each of these case studies involves a health or social care organization that seeks to improve care through the use of technologies; it includes an ethnographic component on the patient/client experience as well as action research with the organization to support delivery of the technological (or sociotechnical) solution.
The GPS tracking case study was conducted in partnership with the Inner City Borough (ICB) Adult Social Care service, which provides assisted living equipment and technology to people in a London borough. ICB Adult Social Care initially provided two GPS devices, which later increased to six different devices (from five technology providers) during the study. All devices included GPS tracking functionality, with tracking of the location of the user and capability of raising an alert when the wearer exits a predefined safe zone. The alerts were raised by a monitoring center operator, and the carer could also view the location using a digital map on an online portal. Beyond the GPS tracking and alert features, the devices varied with regard to functionality, design, and other material properties (see Table 1 for a summary of devices). Working with ICB and the local dementia care team, we explored the lived experience of GPS tracking technology users, their caregivers, and support service staff. We were particularly interested in their experiences with using (or choosing not to use) the GPS devices provided and the technology-supported opportunities offered by the service to better meet their needs.
In this article, we describe our methodology and report our findings on how wandering was experienced and understood by persons with cognitive impairment and their carers. Using a sociomaterial theoretical perspective informed by strong structuration theory, we then explore the networks and practices involved in managing the risks of wandering using GPS technology. The key research questions were as follows:
Research Question 1: How and why do people with cognitive impairment engage in and experience wandering activities?
Research Question 2: How do members of the formal and informal care network balance the tension between autonomy and safety, with or without the aid of technologies?
Research Question 3: What kind of knowledge and social relations are needed to support the effective and ethical use of GPS tracking for people who "wander"?
Theoretical Orientations
As described in detail elsewhere , our theoretical approach rejects the prevailing technological determinism assumed by many policy makers and biomedical researchers (the assumption that the introduction of a technology as part of a health or care service will "cause" particular intended effects such as empowerment of the patient/client, better or safer care, improvement in health outcomes, greater efficiency, and so on). Rather, we view technologies as elements in complex, dynamic systems that are typically unstable; the behavior of these systems depends on human actions, interactions, and relationships as well as on the material properties, affordances, and symbolic meanings of the technologies. Furthermore, any sociotechnical system that delivers technology-supported care has a history (and therefore a degree of path dependency); it sits within wider social structures including regulatory and political systems, and it evolves dynamically over time. Researchers who study such systems are broadly agreed that their empirical study requires naturalistic methods, particularly ethnography, but they differ in their choice of analytic approaches.
Our preferred approach is strong structuration theory (Greenhalgh & Stones, 2010), an adaptation of Giddens' structuration theory (Giddens, 1979) that emphasizes the networked nature of social relations and the need for rigorous and detailed empirical study of small-scale social situations (conjunctures). Strong structuration theory analyzes the reciprocal and dynamic relationship between social structure and human agency; it divides social structures into external (meaning-systems, prevailing moral codes, political economic realities, and so on) and internal (internalized versions of these realities that are held by individuals in the form of habitus and knowledge, studied from a subjective, phenomenological perspective). Technologies, similarly, are viewed as not only generated in and by society but also as possessing inscribed internal social structures (e.g., role assumptions and access controls built into software) and as both creating and constraining possibilities for human action.
In common with actor-network theory, strong structuration theory holds that an individual's social role (or position-practice) depends on their position in the sociotechnical network. A telecare call center staff member, for example, only becomes a "carer" because (and to the extent that) he or she is connected to a wider network of individuals and technologies involved in the support of the individual with cognitive impairment. In what actor-network theorists call translation, individuals within a sociotechnical network seek to mobilize other individuals and technologies to relate to one another in particular ways, so as to produce a more or less stable arrangement to achieve an ulterior goal (e.g., in this instance, safe wandering by someone with cognitive impairment). Our analysis sought to study the relationship between the (changing) network and how the individuals and technologies "acted" within it.
An important aspect of strong structuration theory is the detailed study of how each individual (and each technology) fits into the network and what assumptions they make about the other people and technologies in the network. An individual's knowledge may be incomplete or flawed (e.g., a carer may believe, wrongly, that the person with cognitive impairment finds the technology intrusive or that the technology is 100% reliable). But whether flawed or not, this knowledge is an important influence on their action. Similarly, flawed assumptions built into technology (e.g., a flashing light indicating "charging" will ensure that the user keeps it plugged in) may have unintended consequences (e.g., drawing attention to the device, leading to unplugging by the cognitively impaired individual-see example in "Findings" section).
The analysis was supported by existing literature on embodied selfhood and movement in cognitive impairment (Graham, 2015;Martin et al., 2013), human geography (Middleton, 2009), social construction of risk (Hillman, Tadd, Calnan, Calnan, Bayer & Read., 2013;Tulloch & Lupton, 2003), and sociotechnical systems (Bijker & Law, 1992;Leonardi & Barley, 2008;Williams & Edge, 1996). The latter is characterized by very distinct accounts of the relationship between technology and society, ranging from technological determinism on one hand to social constructivist on the other. Leonardi and Barley (2008) adopt a position that claims a conceptual middle ground between purely deterministic or constructivist positions-technologies are adaptable but there are limits and these then push back on practices. It is this recursive relationship between technology and practice at the micro, meso, and macro levels on which strong structuration theory aims to shed some analytical light.
Sample and Recruitment
The sample consisted of seven participants (index cases) with complex multimorbidity (cognitive and physical impairment). Participants presented different levels of severity of cognitive impairment and different physical comorbidities; they were also diverse in terms of ethnicity, family settings, and social networks. Each index case was identified by the care practitioners as a client who might benefit from the provision of GPS technology and was provided with the technology as part of their usual care.
In the action research component, the first author was directly involved in supporting users and addressing problems with the technology provided. Five cases were enrolled in this phase of the study, in which the researcher worked alongside service staff to resolve issues and improve the solution in place for the client, while also generating generic insights to feed into organizational learning. To this end, the researcher met with the ICB team following home visits to discuss the types of problems faced by service users and opportunities to adapt the technology or service to address them. These meetings gave the ICB team a more detailed insight into the everyday experience of service users and provided a context for identifying work practices that would better meet users' needs, as well as the broader organizational challenges that would need to be addressed to perform these practices routinely.
NHS Ethics approvals were granted by the NRES Committee London-Camden and Kings Cross (15/LO/0482). All participants and at least one carer provided written consent. If there was evidence that the index case participant lacked capacity to consent, then the carer would be asked to provide consent as their personal consultee. Participant and organization names and other identifiable information have been removed to maintain confidentiality.
Data Collection and Analysis
To investigate the use and nonuse of GPS technology using strong structuration theory, we collected small-scale, detailed ethnographic data on individual technology users (micro) as well as organizational-level data (meso), and wider data on the sociocultural and policy context (macro). Qualitative data were collected longitudinally for each index case using semistructured and narrative interviews, observations, and "tours" of indoor and outdoor spaces that participants wanted to show the researcher. Participants and their carers were visited on up to six occasions over a period of 6 to 8 months to build a rich picture of their lives, focusing mainly on specific incidents and challenges (conjunctures in the language of strong structuration theory). These data were supplemented by interviews with the service staff involved in the individual's care and by relevant paper and electronic documentation (e.g., assessment forms, GPS activity data). We also undertook a detailed study of the material properties of the GPS technologies in use, focusing on the affordances and constraints that shaped how they were used and how they mediated interaction across the care network.
Ethnographic study of work practices included observations and naturalistic interviews to map the people and processes involved in providing and supporting the GPS technology. This included health and social care staff, as well as staff within collaborating organizations (technology suppliers, monitoring center operators). For the action research component, the researcher engaged in discussions with service staff to explore how problems could be addressed. This aspect of data collection focused on how staff drew on their accumulated general experience and existing knowledge (flawed or otherwise) and mobilized new sources of conjuncturally specific information and knowledge to move the problem on.
Data for each index case were drawn together using narrative synthesis to produce a case summary as described previously (Greenhalgh et al., 2013). Each narrative covered (a) the participant's social, cultural, and historical background; (b) their experience of aging and ill health; (c) the people and technologies in their life and how these were linked in relevant networks; (d) their perspective (and caregivers interpretation) on "what mattered" about outdoor and public spaces; (e) the specific GPS technology that had been offered (and which may or may not have been in use) to support them; and (f) the problems that emerged, how these were resolved (or not) over time and any unintended consequences of the efforts to resolve them.
The case narratives were used both practically (to identify service user needs in relation to activity outside the home and the roles of technological and social support, thereby informing the action research) and also theoretically (as the raw material for theorization of the lived experience of the technology and how ethical challenges emerged and were addressed).
Our analysis sought, first, to map relevant external social structures (what Stones, 2005 calls the strategic terrain) and the internal structures that were embodied by individuals and inscribed in the material properties and affordances of technologies. Second, we sought to document how people (the index individual with cognitive impairment and the members of his or her care network[s]) assessed particular situations and drew on their knowledge of the situation (including their assumptions and beliefs about what was ethical in the circumstances and about what other people knew and believed) and on the functionality of technologies to take particular action(s), and what the consequences (intended and unintended) of those actions were. Finally, we sought to theorize how the actions of individuals-and whether the technology "worked" [acted] as intended-fed back in the longer term to influence wider social structures (including policy assumptions and prevailing views on the ethics of surveillance).
Our interest lay in determining whether safe wandering for people with cognitive impairment was achievable through the introduction of GPS technologies and-if so-how, and how this might explain when nominally identical technical artifacts lead to quite different outcomes. This required understanding the nature of the changes in both artifacts and social practices to support safe wandering, that is, how this shaping or coevolution of the technical and the social was explored, negotiated, and achieved, by whom, and what this meant for the practice of wandering as experienced by the participants and their carers.
The starting point of this process may be characterized by a number of social and material conditions that are constitutive of the external and internal structures. In the setting of our study, these included health and social care policies and their political economic drivers; organizational rules and practices for assessing and managing risks of wandering and supporting technologically enabled care interventions to minimize those risks; the rules and practices of telecare call centers and their operators; designers' assumptions about participants and their requirements and how these are inscribed into the artifacts; and the habitus and lived realities of the person and their families. These structures may then recursively evolve, driven by participants' discovery of what kinds of adaptations the technology affords (not necessarily those intended or foreseen by designers) and what kinds of reconfigured social practices, both at the organizational and personal level, are necessary and feasible to deal with the limits of the technology (and vice versa). It is this dynamic that we are particularly interested in exploring, recognizing that implementation is a key site for the study of the exercise of human agency and how this is shaped by-and shapes-the artifact and the social practices within which it is being embedded.
Overview of Data Set
Data collection included twenty-two ethnographic visits with the seven index cases and eight lay carers (approx. 50 hr), 30 hr of ethnographic visits with organizational staff (including shadowing and meeting with occupational therapists, ICB telecare coordinators, and call center operators), six interviews with health and social care staff, three interviews with other stakeholders (a technology supplier and two monitoring center managers), and approximately 40 pages of documents (including national and local policy on assisted living, business plans, extracts from websites, emails, and correspondence with technology suppliers). The seven individual case studies, structured under the six headings listed above, were between 4 and 6 pages long. Table 2 presents the seven cases, their living and care arrangements, and the GPS tracking technology provided. All cases were males aged 72 to 89 years. Five participants lived in their own home (one living alone) and two lived in a formal care (group care home) setting. The participants had mild to moderate cognitive impairment and were considered at risk of wandering and becoming lost outside. Three participants were diagnosed with Alzheimer's disease, two with mixed-type dementia, one with vascular dementia, and one with Korsakoff syndrome.
GPS Tracking in Its Social and Historical Context
At the time of our empirical work (2015-2017), U.K. health and social care services were severely stretched as a result of "austerity measures" in the public sector (Glasby, 2017), with tightening of resources in every sphere of social work (Fenton, 2016). There was strong pressure at national policy level for local providers to identify and implement innovations to improve efficiency of service provision. Digital technologies were viewed as one important way of achieving improved services and reduced costs; they were also widely viewed as representing progress and linked in policy discourses to economic and scientific progress for the country (Greenhalgh, Procter, Wherton, Sugarhood, & Shaw, 2012). Indeed, the assumed ability of technology in general to improve the effectiveness and efficiency of services was so pervasive that government initiatives set out to encourage "digital by default" across public services and the NHS to go "paperless" by 2018 (Cabinet Office, 2012;NHS England, 2014).
Although many people with severe cognitive impairment are cared for in institutions, mild to moderate cognitive impairment is far more common, and such individuals usually live independently or with families (Parkin & Baker, 2016). Searching for a missing person is estimated to cost the police force £2,400 per case (Greene & Pakes, 2012).
Local providers of care services were thus considering GPS tracking of the cognitively impaired in a context of falling real budgets and rising need, with the threat of high and unpredictable search costs if wandering clients became lost. In addition, the London Metropolitan Police were working with the Adult Social Care team to promote the use of GPS tracking to reduce the cost of conducting search operations. The imagined solution, certainly in the minds of administrators and managers, was that all or most individuals with a propensity to wander would accept a GPS tracking device, that they would use this device whenever they wandered outside the home, that the device would be programmable with a suitable geofence clearly delineating "safe" from "unsafe" territory, that the alert would be triggered reliably when the individual ventured into the latter, and that a search and rescue response by members of the user's care network would follow logically from the alert.
The key influences on the local policy of introducing GPS tracking in our case study site were thus the bad and worsening financial situation along with a prevailing discourse of modernism (technology as efficient, clean, rational, and reliable) and increasing bureaucratic controls on social care. Social care staff took account of these influences but were also influenced by professional values and ethical principles, most notably the goal of enabling people with cognitive impairment to remain living at home for longer, reducing caregiver stress, and providing clients with greater freedom outdoors.
The Individual Cases
There was wide variation in how participants experienced cognitive decline and how this related to their mobility and engagement with outdoor and public spaces. Wandering was closely tied to changes in the person's mental and physical capabilities, chronic health conditions such as diabetes and edema (swollen feet), and other recurring illnesses (e.g., urinary tract infections). During the study, six of the seven participants engaged in wandering activities outside the home, which they often sought to do alone and independently. One participant did not leave the house alone during the study (to anyone's knowledge) because he had recently experienced a fall outside the front entrance of the house. He was unsteady on his feet, owing to low blood pressure and edema. However, he continued to move around the house when alone, resulting in a series of falls and heightened anxiety for the family.
The wandering we observed consisted of activities inside and outside the home, including repetitive movement (e.g., visiting or walking around a particular area) and actions or gestures (e.g., manipulating, dismantling, or moving objects). Carers' attempts to control these activities or accompany them outdoors were sometimes met with resistance and conflict.
Each case was distinct in terms of clinical, social, biographical, and geographical contexts, and wandering was experienced and managed in different ways. Of particular interest for us as researchers was how this management evolved over the course of the study. For example, at the start, the ICB team offered a choice of only two GPS devices; by the end of the study, this had expanded to six different GPS device options (from five different suppliers), which varied in design and functionality (e.g., some designed to be worn or locked on the wrist, and others on a lanyard or key ring). These all had the same GPS tracking and geofence features, but different material properties that turned out to be important in terms of their acceptability.
Four of the seven participants abandoned the GPS technology they had been provided with at some point during the study. Five cases required active involvement by the researcher to help the service identify and adapt solutions in use, to address problems that would have negatively affected the sustained and effective use of the technology.
The analysis revealed three themes related to the use of GPS tracking. The remaining part of this section describes these themes using field note extracts from the case studies.
Wandering as a Meaningful and Worthwhile Practice
Our study design, based on in-depth and longitudinal ethnographic observation, allowed our lead researcher to develop a detailed biographical and tacit (informal and implicit) knowledge of the index case. In all seven cases, it became evident over time that engagement in wandering was a meaningful and worthwhile activity for that person (and that different individuals found different kinds of meaning and fulfillment from wandering). "What mattered" to the individual powerfully shaped the ways in which the GPS technology was used, as the following fieldwork extracts illustrate.
First, wandering was important for maintaining habitual practices that were linked to particular places and that reinforced the person's identity. It involved spending much of the day moving through, and acting in, socially and culturally familiar spaces. In the extract below, the participant, who is in his late eighties with Alzheimer's type dementia, centered his daily routine around visits to the local betting shop, supported by his son who lived with him: [Participant's name] tells me he's been to the bookmakers today "I stay there till my money runs out [laughs] . . . . I bet on horses and dogs . . . As long as my money lasts." His son later explains that he does not actually follow the races. His mild cognitive impairment means that he cannot select runners, nor does he follow the race and know if he has won any money. But he still places bets by taking the betting slip from the counter, writing "FAV" (bookies' favourite) across the front of the slip and handing it to the cashier with his cash. His son will limit how much money he takes with him, so that he is free to use whatever cash he has in his pocket. He relies on the cashier to tell him when he has won and return the winnings. But sometimes this isn't done. This annoys his son, but he feels it important that his father goes to the bookmakers on his own and enjoys placing bets. As his son is often at home and the bookmakers is a short walk away, he can occasionally pop into the bookmakers to check on him, or collect him when necessary. This participant's meaningful social practice of placing bets at the local bookmakers included interacting-in ways he had done all his adult life-with the staff. This practice was actively enabled and managed by the efforts of his son, who understood the biographical significance of this practice for his father, someone who spent much of his working life as a lorry driver, traveling independently and away from home and enjoying a bet as part of the way he relaxed when away. For his son, the value of this activity significantly outweighed any monetary losses. But, it also relied on his personal and tacit knowledge of the local betting shop, how his father would act within this space, and his capacity to coordinate his own activities alongside this. For the participant, the familiar and habitual actions and gestures were important in forming place attachment (Degnen, 2016), in that it is not only about being physically present in this place but is also an interactional process with social and physical aspects of the environment.
The GPS solution developed for this participant was aligned with these practices, with the geofence encompassing the home, betting shop, and local pub (which the landlord would occasionally invite him to if he saw him in the betting shop), allowing him to come and go as he pleased. In this case, wandering was depicted as a problem (to be stopped) if he breached these parameters. This occurred on a number of occasions, including an incident when he went to find another betting shop nearby (with which he was less familiar and the staff and other clients did not know him) and when he went to the post office to try to withdraw his pension money.
Second, the case studies highlighted wandering as an aesthetic practice, in which the destination was less important than the experience of moving through, and interacting with, the outdoor and public spaces. Sometimes, the places to and through which the individual walked were richly evocative of positive memories (past) and/or linked to positive dreams and plans (future). In the next extract, this participant, who is in his late 80s and has mild cognitive impairment, highlights how his wandering elicits memories of his life growing up and living in Jamaica and his dreams to return there. The GPS device was requested by his wife because he was spending long periods of time out and about but could not say where he had been: As we walked through what he calls his "plantation" (outdoor garden space where his wife has planted fruits and vegetables), he talks in a great amount of detail about the vegetation, touching and smelling them as we pass through. He stops at the sweetcorn plant to explain, in minute detail, how it is cooked and eaten in the Jamaican way. In detail he explains, with hand gestures, how the sweetcorn is cooked on an open fire, and then mixed with crushed dried coconut: "Its heaven . . . You know that God must be a good god because wherever you go the food is different . . . We are all brothers on this earth and we will all go to heaven." As he talks freely and energetically about the plants and his culture, you wouldn't know he had cognitive impairment. He is knowledgeable of each plant, what they are, stage of growth and when the fruit would be ready to pick. As we continue to walk, he stops suddenly, and points up at the sky, telling me to look up, quickly. Unaware of what I am meant to be looking at, and unable to follow his direction, he puts his right arm across my shoulder, positions my head with his left hand, and points at a cloud for my eye line to follow: "Wait, it's coming . . . where is it? . . . hang on." We stand for some time, looking up at the sky. Then, emerging from the cloud is a plane, barely visible, flying high in the distance. When I eventually see it and understand, he laughs out loud. He says he loves looking up and watching out for planes. He spends lots of time standing or sitting outdoors, looking up at the sky, watching out for them, wondering where the people are going. He used to love travelling and dreams of going back to Jamaica one day. This extract illustrates how this participant's wandering in the garden links him powerfully to his early roots in Jamaica and also how being outdoors makes possible his satisfying fantasies about people traveling the world and (perhaps one day) his own return to his homeland.
Third, wandering provided purpose and occupation of time, helping satisfy a need to feel useful. The following extract describes the wanderings of a participant, who is in his late seventies and originally from Pakistan. His wandering largely consisted of searching the ground and gathering objects found along his path, an activity also observed in another case in this study: As we approach the house to visit [participant's name] and his family, the occupational therapist tells me that this is a particularly challenging case, as he routinely engages in "searching" activity outside the home, without paying attention to people and traffic around him. She recalls her last visit, when she was getting into the car preparing to leave and saw him walk straight out of the house and across the main road. He was looking down at the ground, saw a plastic bag on the pavement and stopped to pick it up. He continued to walk, holding the plastic bag, scanning the ground by his feet as he moved, completely fixated, as if searching for something important. Hunched right over to get as close to the ground as possible, he would occasionally stop to pick something up, inspect it and place it into the bag. [Later at the house] the participant's granddaughter tells us that his "searching" behaviour has got worse. He brings back all sorts, and even rummages through bins along the street, taking out discarded and rotten food and bringing it back to the house.
Initially, this behavior appeared to fulfill the biomedical terms often associated with wandering: purposeless, disoriented, and risky. But over the course of the study and as a result of repeated discussions with the participant and his family, the researcher and social care staff collectively came to realize the significance of this activity and how it related to his previous occupation in the textile industry: The occupational therapist asks the family if there is anything he can do at home. There's nothing. He doesn't even watch television and most of the family are at work during the day. He only likes walking. There's a back garden, but he doesn't go out there. The occupational therapist asks the family what his previous occupation was. His granddaughter tells us that he worked in textiles, mainly sewing buttons onto bus seats. At this point she realises that, although he gathers all sorts of things, he particularly likes finding buttons, "they are like treasure to him." The occupational therapist turns to him: "We need to find you a job." She comes up with an idea for the family to place buttons and other interesting materials around the garden for him to search and collect.
Through their understanding of the biographical context, the family explored how this participant could continue to do the searching activity that gave him a sense of purpose and filled his time, but do so safely in the family garden rather than out in the streets.
Finally, wandering was often characterized by a spatial temporal rhythm, providing continuity and structure to the person's daily life. In some cases, carers were attuned to these patterns, which helped monitor and support their activities (e.g., expecting them to return home at particular time, finding them at their "usual haunts"). As others have previously observed (Brittain et al., 2017), spatial temporal rhythms also shaped how the carer perceived wandering as meaningful and purposeful or considered it "aimless" or hazardous when the person moved out of these areas. These understandings formed the basis of the geofence configurations and carers' decisions about how to act. In the extract below, the granddaughter of one participant describes how the family monitor his movements and synchronize their own activities with him to help use the GPS device: "He doesn't go out after six o'clock. After his dinner, after he has eaten, he stays in. He doesn't go out till eight in the morning. So at six o'clock my mum will take it [GPS device] off him, because he's in the house. And then when he wakes up, sometimes he can go out at eight o'clock, so my mum will put it in his pocket, where he will keep it all day . . . . He goes around [the park] after breakfast. And then comes back for lunch and then out again before tea time."
Freedom of Movement and Social Construction of Risk
Family carers' concerns about wandering were dominated by threats to the person's physical safety (e.g., traffic, personal security). In six of the seven cases, GPS tracking was introduced following a significant event, including occasions when the person had gone missing for a long period and found (by family or police) in an area they would not usually go to.
The various uses and configurations of the GPS technology were shaped by the differing interpretations of the risks associated with wandering, which were socially situated and influenced by different (and often changing) knowledge, values, and beliefs among carers. One key difference in the perceptions of risk associated with wandering emerged between formal care arrangements (i.e., group care or assisted living facilities) and those living with family members. In the formal setting, the focus was on mitigating potential harm to the person and knowing where they were at all times. The extract below is from a case, in which the participant, who is in his late seventies with mild cognitive impairment, lives in a group care home. The GPS geofence was set tightly around the house, so that staff could be alerted if he left the premises and ensure he did not leave unaccompanied: [Participant's name] will often attempt to leave the house. The care manager has been granted official authorisations to lock the front entrance, as it is believed that he will quickly become lost outdoors and presents a lack of road traffic awareness. However, the GPS tracker is still seen to be needed, as he makes attempts to leave the house when people open the front door, and he has even managed to climb out of a ground floor window. So, the tracker does not enable him to walk within a "safe zone," but rather to help care workers to respond if he leaves the house. The geo-fence is set tightly around the house, as there are only two carers on duty at any one time, so would need to respond quickly.
In the two formal care arrangements studied, the notions of risk were constructed in relation to what group care staff considered to be competent and responsible practices for mitigating the potential harm to the individual. As described by one care service manager in the extract below, this was underpinned by fear of litigation if the person was harmed or considered to be put at unnecessary risk, as well as the need to justify resource allocations to effectively manage the person's wandering: Using the lens of strong structuration theory, the strategic terrain as viewed by these paid care staff was dominated by regulatory social structures (the legal and contractual conditions associated with their professional role) and by a strong sense of professional duty, as set out in the code of ethics for social workers (British Association of Social Workers, 2014) to protect their vulnerable client from harm. These social structures (as perceived and internalized by the professionals concerned) served to both create accountabilities and limit what was (believed to be) possible for them in their professional role. From the perspective of the care worker in the case above, the GPS technology had a very specific potential-to help ensure that the participant would not leave the building unaccompanied. The actual technical potential of the GPS device (illustrated by how it was used by other participants in our sample) was much wider, but because of their particular position-practice, these wider choices were not open to them.
In contrast, the informal care arrangements involved a reciprocal care relationship and a sense of responsibility by families to ensure the person's safety and material comfort while also helping them achieve fulfillment and happiness and respecting their independence. This involved a pragmatic and ongoing balancing of the tension between autonomy and safety. In the following extract, the granddaughter of one participant illustrates how the family's relationship with him and biographical knowledge of him informed their judgments about how and when to control his movements, following an episode in which he became lost and tired while wandering and then fell: "It came to a decision, should we lock the main doors and not let him out? But he would get really angry and upset. And we can't do that because he really enjoys walks. It is the only thing he can do and he really enjoys it . . . . He's always liked to walk. He never used public transport. One day we thought to lock the door. But that wouldn't work because he'd get frustrated and angry." Decisions to balance autonomy with safety were not limited to concerns about the individual's physical safety. Some carers had wider concerns, including a need to protect their own time or psychosocial resources, and opted for situations that were more manageable. Family members typically had multiple accountabilities (job commitments, other dependents) and/or limited physical strength or patience; it was simply not practically possible to be "on call" for the wandering individual 24 hr a day. Using the lens of strong structuration theory, each family member was situated in a social network; their multiple accountabilities were defined by the social expectations associated with particular kinship ties (e.g., father-daughter) or other social relationships (e.g., employer-employee). Different solutions to the challenges of the individual's wandering would put different kinds of strain on the network of accountability.
Within the network, technologies (telephone, email, GPS tracking device) were used pragmatically and creatively to support activity by and around the index case, but-as with the formal carers-technology use was limited by how the lay carer saw the wider strategic terrain. The (perceived) possibilities for how the GPS device could be used were limited, for example, by his relatives' culturally shaped views on how a British Asian daughter or granddaughter should behave toward an older male relative. In this case, the women considered it highly appropriate for them to allocate many hours per day to walking with and searching for him. Relatives of some other index cases held different perspectives, depending on their cultural background and competing accountabilities.
As others have previously found (Kindell, Sage, Wilkinson, & Keady, 2014;Oyebode, Bradley, & Allen, 2013), lay carers can also be concerned about other people's perceptions of their relative. Our data illustrated that (unlike professional care workers), lay carers' actions around wandering were sometimes influenced by their perceptions of how other people viewed the behavior. In the extract below, for example, one family member was concerned about how their relative's wandering behavior might be perceived or misunderstood by members of the public: "They [call centre operator] called, and he was on Queen Road. What I panicked about, because it was half-past eight in the morning, and he was just sitting there, on the road. It's a school day, and that's a girl's school. And the parents might see him there, thinking he's watching them. That's not what he's doing. He's lost. But my worry is what they are thinking, what he is doing there. That was my concern then." In this example, the participant's meaningful wandering routine takes him past a girls' school. As he stops near to the school his behavior is open to the very different interpretation of sexual predation. In a society alert to pedophiles, the knowledge of how to (in Giddens' terminology) "go on" in society would include an awareness among older men not to linger outside a girls' school. The participant's family are concerned that as he has lost this awareness because of his cognitive decline, his behavior could generate the unintended consequence of confrontation or even arrest. This example illustrates how the individual's "freedom to wander" was, in reality, restricted by the fact that his behavior was socially inappropriate and open to misinterpretation by people who did not know him well.
Our data set also contained examples of neighbors and acquaintances who did know the individual well enough to interpret their behavior and take account of their cognitive impairment, thus enhancing the person's freedom of movement. This knowing well included recognizing the individual, understanding where they tended to go, their familiarity with the setting, and how they were likely to act in these places. In some cases, the safe zones on the GPS device were set to local areas that the person was familiar with and also where they were known to the various people in these settings. In the extract below, the daughter of one participant describes how the wider social network (that went far beyond immediate family) could "keep an eye" and assist if needed: "He is pretty lucky here because all the neighbours know him, mostly. And if they did see him walking [outside the estate], they would probably fetch him back. The neighbours next door, they have lived here since we've lived here. So, they would fetch him or hopefully someone will see him." Such tacit knowledge of the person and the local area played an important role in carers' interpretation of GPS information and response to alerts. For example, in the extract below the granddaughter of one participant describes how the shared knowledge and capabilities across the family network supported their capacity to enable greater flexibility in response to geofence breaches: "If he has gone to the corner shop, they [call operator] will say he's out of his boundary, but we know he comes back. If he doesn't come back within ten minutes, we will look where he is . . . About three times a week [they get a call], and twice out of that three we know where he is . . . . If they say three roads away or further, we know he's not familiar with the area."
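Purely to make the logic of this graduated response explicit, the sketch below renders the family's described practice as a simple decision rule. It is a schematic illustration under stated assumptions (the ten-minute grace period and the road name echo the accounts above, while the list of familiar places and the distance threshold are hypothetical), not a suggestion that such judgments are, or should be, automated; as the analysis emphasizes, they rest on tacit, situated knowledge of the person and the area.

```python
from datetime import timedelta

# Hypothetical knowledge held by the family: places the person visits and usually returns from.
FAMILIAR_PLACES = {"corner shop", "local park"}
GRACE_PERIOD = timedelta(minutes=10)   # "if he doesn't come back within ten minutes, we will look"
FAMILIAR_RADIUS_M = 300                # assumed: more than a few roads away counts as unfamiliar

def respond_to_breach(reported_place, distance_from_home_m, time_outside_zone):
    """Return the family's likely response to a geofence alert, as described in the accounts above."""
    if reported_place in FAMILIAR_PLACES and distance_from_home_m <= FAMILIAR_RADIUS_M:
        if time_outside_zone <= GRACE_PERIOD:
            return "wait: he usually comes back from here on his own"
        return "check his location and go to meet him"
    return "respond now: he is not familiar with that area"

# Example alerts
print(respond_to_breach("corner shop", 150, timedelta(minutes=5)))   # wait
print(respond_to_breach("corner shop", 150, timedelta(minutes=20)))  # check and go
print(respond_to_breach("Queen Road", 900, timedelta(minutes=2)))    # respond now
```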
Everyday Risk Management of the GPS System
Although GPS devices were implemented to minimize risk, they also introduced new risks associated with the complexity of both technical aspects (e.g., false alerts due to erroneous GPS readings) and social aspects (e.g., wearing the device, charging it in the house). Resolving such problems demanded a great degree of "tinkering" and adaptation by carers, often while the device was in use. For example, some carers resorted to "covert" use of the device (e.g., by hiding it in clothing) and avoided talking about it with the participant, to minimize the risk of confusion, distress, and fiddling with the device. The son of one participant described his efforts to minimize his father's awareness of the device as he attempted to charge it overnight.
Everyday risk management strategies extended beyond the informal care network. Drawing on their own knowledge of the practical challenges, the ICB telecare coordinators used the GPS device portal (initially accessed to set up the device settings and alert parameters) to monitor battery levels and GPS readings, to confirm that the device was being adequately charged and used. If they detected that the device was not being charged regularly, they would contact the carer to encourage use or resolve any issues. They were well aware that such calls could be intrusive and engender a sense of being under surveillance. This was further managed through their social interactions and relationship formation with carers over the phone, as explained by one of the ICB Telecare Coordinators below: "So we just call them to say "Hi," just be quite general. It's not like a telling off. It's just, "Oh, remember that Mr Smith has a GPS device and it needs to be charged every night, and we can see on the system that it's only got 30% battery left, so you might want to put it on charge for a while". It's just a gentle approach that we take with them. We don't want them to think "We're watching you.""
For the GPS monitoring center operators, strategies evolved to overcome the challenges of communicating the person's location to the "responder" on the ground. The following extract highlights an ICB Telecare Coordinator's account of two very different practices (and outcomes) with similar GPS technology. In one monitoring center, staff learnt to cross the boundaries of standardized protocols to inform, and work with, the responder on the ground. In the other, interactions with responders were structured along fixed protocols: "We have had positive feedback from carers about [monitoring center A]. The operator doesn't close the call until the person is found, so that makes the difference. The family is in constant contact and the operator was actually willing to call the person and say, "Okay, well this person has now moved from here to there . . ." With the other [monitoring center B], they weren't very helpful-just said oh, he's at [road name] and that was it. Obviously, he's not going to stay there. There was no "Okay, I'll stay on the phone" or "I'll call you back in another ten minutes." Because remember, they've got to go and retrieve their relative. So, by the time they get there, they could be somewhere else. And when they phone back, they had to go through the whole process again. They called the center and it was a different person who answered . . . there was no continuity."
Another challenge was introducing and explaining the GPS device to the person and their family.
In particular, support staff and carers were faced with the sensitive challenge of gaining the person's agreement to wear or carry the device, while enabling them to feel that they were maintaining their dignity and freedom of choice. In one case, an ICB team member visited the participant after he had refused to wear it on his wrist, saying his wife was tagging him "like a dog." The ICB team member decided to meet him in person (something they rarely had the time or resources to do), present an alternative option (to carry the device on a key ring), and represent it in a way that he would find more acceptable. The extract below presents the ICB team member's experience of this encounter and how it supported continued use: "There was a connection. We are both from the Caribbean. He wanted to take me on holiday [laughs]. He didn't really want to talk about the GPS. We kept bringing him back onto it. He knew he had it, but couldn't see the reason why he should take it with him. I was telling him-he called me his girl lollipop-I said he must take it on his keys. I said "You need to put it on your key and keep it on there!" [laughs] . . . . But we have had other conversations after that. Because I could see he wasn't using it, about a month later, and I phoned the wife and said "I can't see him using it-what's going on?" And she goes "Oh he doesn't want to put it on the key" . . . . So I got him on the phone. He was giving me these stories, "It's too big, is it ok if I just keep it in my pocket." I said "Yes! Just keep it on you."" These interpersonal aspects of the delivery and use of GPS devices were achieved by working around the formal work processes and duties (the regulatory structures mentioned above) intended to manage and regulate risk and deliver efficient and standardized care. In many cases, it was felt that the organizational structures and protocols actually impeded their efforts to help users on a personal level in relation to the management of wandering.
The tacit knowledge (know-how) and hidden work (work not formally recognized or remunerated) involved in supporting the use of the technology were not the product of the service protocols or structures but of the informal and personal interactions and relationships that developed over time as the care workers got to know the clients and family members. As is strikingly illustrated by the quote "we're both from the Caribbean" and the very informal and humor-laden exchange that ensued, this care worker's relationship with the service user is not merely a "system" (social worker-client) one but also a shared understanding or lifeworld one (common ethnic ties; Habermas, 1987). The latter brought different accountabilities and reciprocities, which the care worker used to help cajole him into using the device.
Meaningful Wandering and Telecare Surveillance
There is a growing body of literature in the person-centered care movement that puts the body and embodied practices at the center of exploring how dementia is experienced. This has progressed understanding of cognitive impairment beyond the deficit-focused accounts that characterize biomedical thinking and has directed attention to the significance of embodied agency in dementia care (Kontos, 2005). This work has largely focused on long-term residential care settings, in which institutional regimes for order and efficiency (e.g., clothing, sleep/wake patterns, and mealtime routines) preclude important embodied and social aspects of these activities (Martin & Bartlett, 2007; Twigg, 2010). Kontos and Martin (2013) also warn of the biomedical assumptions underpinning the deployment of monitoring technology to help control and manage "pathological" behavior patterns in residential care.
Following previous studies on wandering as an embodied practice in cognitive impairment (Brittain et al., 2017; Graham, 2015), the findings from our study of real cases in context have highlighted the need to acknowledge wandering as a potentially meaningful and worthwhile activity. Our findings highlight, first, that wandering may support habitual activities linked to particular places and social settings that the person can make sense of and belong to. Actions and gestures that may be defined as "disinhibition" or "agitation" from the cognitive perspective can carry meaning when considered as an embodied practice. Second, participants draw attention to the intrinsic value of esthetic, rather than purposive, walking (Wunderlich, 2008), which shapes a person's mood and thoughts and preserves links with the past. Third, following Feil and de Klerk-Rubin (2002), wandering appeared to help satisfy an unmet need to feel useful, as one may have felt when engaging in family, home, and working life. The absence of meaning in a situation leads to boredom: an anxious, restless feeling that there is a need to get on with something interesting (Barbalet, 1999). As the example of the participant (an ex-textile worker) collecting buttons illustrates, actions and gestures which at face value appear aimless could be viewed as holding (at least momentarily) symbolic occupation and a productive use of time.
To date, the biomedical notion of wandering has formed the basis of GPS tracking development, including recent technical advances applying machine learning algorithms to distinguish between "mobility trajectories," such as pacing and lapping (Lin, Zhang, Huang, Ni, & Zhou, 2012). However, the individual case narratives highlight how GPS tracking interventions may be limited if their design and implementation are not grounded in an understanding of the ways that people experience and live with wandering. In addition, they highlight that individual use (and nonuse) of GPS tracking technology is embedded within, and dependent on, a particular network of social relationships and position-practices. Indeed, technology-supported wandering for our participants was revealed as a sociomaterial practice, in which the mutual configurability of the social and the technical was critical for success.
Key to the collaboration and decisions made by particular actors in the sociotechnical network was what those actors knew (correctly or incorrectly) about other actors, both human and technological. The social relations and accountabilities within the network and the material properties of the technologies both created opportunities and constrained choices, making some options appear more feasible and/or more or less ethical.
Embodiment and Cognitive Impairment: Care at a Distance
The study of wandering in the community setting has illustrated that, as Beck (1992) observed, risk-taking permeates many aspects of ordinary, daily life. Studies on the "governmentality of risk" in health and social care work practices have shown how organizational structures intended to eradicate uncertainty around patient safety and standardize care provision can inadvertently erode person-centered interactions with patients and shift focus onto the systems of risk regulation. Through her analysis of interactions between care workers and patients in acute care wards, Hillman et al. (2013) showed how governance strategies and patient safety regulations ended up impacting the caring relationships in ways that compromised patients' autonomy and dignity. In the present study, risk governance affected formal care workers' and managers' priorities in relation to patient safety, underpinned by fears of litigation and amplified by their interpretation of the GPS device as an organizational risk management system (with geofence parameters and alert protocols), which they were responsible for implementing effectively to maximize the client's safety. Furthermore, risk governance structures affected care practitioners' capacity to engage with users on a personal and ongoing basis, as their attention turned to administrative duties and paperwork whenever they engaged with and supported the user and family. Such restrictions are, arguably, a more or less inevitable consequence of the current regulatory and professional structures within which these workers were positioned. Our longitudinal analysis showed how, over time, some care workers were able to build informal, kinship-like relationships with clients and their families based on interpersonal ties rather than organizational roles, and to use these relationships to persuade clients to use the GPS technologies.
We have seen how telecare (specifically GPS tracking) in a community setting involves numerous people supporting the individual, with distributed roles, responsibilities and knowledge, and working across different organizations. In many such cases, achieving a meaningful understanding of the person's wandering in context will be very challenging. Different people will have varying degrees of interaction with, and knowledge and/or representations of, the person and their wandering activities, and most will have little or no human contact with the individual. For example, to GPS monitoring center operators, the person is represented by minimal personal information and a coordinate on a digitized map. Despite this, they are often drawn into undertaking emotional labor (Procter et al., 2016), providing social contact and developing ways to support intersubjective sensemaking. Similarly, the ICB team's initial interactions with users were framed by their distant role of setting up and configuring the GPS technology using the online portal. But over time, they devised ways to use this information to understand what was happening at home, supported by relationship formation over the phone and, in some cases, by visiting service users and their carers at home. This human engagement is encouraging given Bauman's (1989) warning of the dangers of modernist bureaucracies. He argued that through hierarchical structures, the use of technology to achieve control at a distance, and a highly formalized division of labor, humans can become detached from the reality of their work and fail to take moral responsibility for the consequences of their actions.
The sociomaterial perspective highlights how supporting wandering as a meaningful practice would require greater attention to the interactions and relationships that help harness knowledge and understanding of the person and their wandering activities.
Situated Judgments on Autonomy and Risk
Previous studies have shown how the introduction of new technology in care settings can challenge existing assumptions, values, and ways of working. Dealing with moral conflict and change is part of the fitting and tinkering of technological applications within everyday care activities (Kamphof, 2017; Pols, 2010). Kamphof (2017) draws on the notion of reflection-in-action (Schön, 1983) to describe the ways in which care home workers engaged with new telemonitoring technology, which challenged their ethical values on the privacy and dignity of residents. Carers initially felt a need to be absolutely open with clients about observations in order to be respectful. But over time, they discovered that working with the technology required weighing what to watch or ignore, and deciding what and how to communicate this information to residents, while maintaining a relationship of trust.
Similarly, in this study, the formal organizational roles and regulatory structures initially dominated the actions and the perceived capabilities of the GPS solution. But over time, relationships developed with users (and across services), the organizational structures exerted less influence, and lifeworld expectations and values became more significant. Carers and practitioners developed ways to deal with the complex social and technical realities of everyday use of GPS devices. This included ways to talk about and represent the technology so as not to undermine the person's dignity, while also addressing issues of acceptability and continued use. In some cases, "covert" strategies were employed by carers to avoid distress or disruption to the dependability of the system. We also observed how the ICB team took the initiative to routinely monitor the status and movement of each individual's device to check it was being sufficiently used and maintained, while making efforts to minimize the risk of carers feeling as though they were being watched. This complex and evolving relationship between the people and the technology was necessary to deal with the uncertainty and risk that the tracking system introduced.
Evetts (2006) distinguishes two types of professionalism in decision making: organizational professionalism (where control lies with rational-legal forms of decision making) and occupational professionalism (collegial authority, drawing on knowledge and values beyond formal procedures). In studying social care practices, Fenton (2016) warns of the threat that the increasingly dominant organizational professionalism framework presents to a working knowledge of the "right thing to do": the ability to work with service users and put their interests first. She proposes that social care needs to become more conducive to occupational values and a sense of agency, rather than relying on procedural sources of knowledge for decision making.
This suggests that ethical debate on the use of GPS should shift from a pursuit of consensus and instead focus on the ways in which people deal with the complexities of use and how occupational professionalism can be enhanced. This will require greater attention to people's position-practices across the sociotechnical network and to enabling the capability to adapt and support appropriate and workable solutions. The exact nature of such adaptations, however, must remain open to continuing review if they are to remain effective in the face of changes in the external structures (that is, social care policies and their political economic drivers) and the internal structures (that is, the material affordances of the technologies), and of how these shape, and are shaped by, the knowledge held within the care network.
Conclusion
In this article, we explore how GPS tracking technology is used in practice to support people with cognitive impairment. The application of a sociomaterial perspective seen through a strong structuration lens has revealed the ways in which members of the care network dealt with the social and technical realities of using and supporting GPS tracking solutions.
The findings suggest that current research and debate on the appropriate use of GPS tracking and its ethical implications are misplaced because the technology has been considered in isolation from everyday use. Greater attention needs to be paid to the ways in which people deal with the social and technical complexities of use, and to how this understanding can be fed back into the development of the sociotechnical infrastructure, as embodied in the external and internal structures, so that it is better able to adapt and thereby support meaningful wandering practices more effectively.
|
v3-fos-license
|
2020-01-21T14:08:47.307Z
|
2020-01-20T00:00:00.000
|
210829328
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmars.2019.00826/pdf",
"pdf_hash": "4fb15bf7fe106f178ceee5fc52bbc496ec920c68",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44228",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "4fb15bf7fe106f178ceee5fc52bbc496ec920c68",
"year": 2019
}
|
pes2o/s2orc
|
The Azores: A Mid-Atlantic Hotspot for Marine Megafauna Research and Conservation
The increasing public perception that marine megafauna is under threat is an outstanding incentive to investigate their essential habitats (EMH), their responses to human and climate change pressures, and to better understand their largely unexplained behaviors and physiology. Yet, this poses serious challenges such as the elusiveness and remoteness of marine megafauna, the growing scrutiny and legal impositions on their study, and difficulties in disentangling environmental drivers from human disturbance. We argue that advancing our knowledge and conservation of marine megafauna can and should be capitalized on in regions where exceptional access to multiple species (i.e., megafauna 'hotspots') combines with an adequate legal framework, sustainable practices, and research capacity. The wider Azores region, hosting EMHs of all key groups of vulnerable or endangered vertebrate marine megafauna, is a singular EMH hotspot on a migratory crossroads, linking eastern and western Atlantic margins and productive boreal waters to tropical seas. It benefits from a sustainable development model based on artisanal fisheries with zero or minor megafauna bycatch, and one of the largest marine protected area networks in the Atlantic, covering coastal, oceanic and deep-sea habitats. Developing this model can largely ensure the future integrity of this EMH hotspot while fostering cutting-edge science and technological development on megafauna behavior, biologging and increased ocean observation, with potential major impacts on the Blue Growth agenda. An action plan is proposed.
INTRODUCTION
Marine megafauna, a broad term for large marine vertebrates including marine mammals, reptiles, birds and large fishes, has captivated the human mind since pre-historic times. The petroglyphs and bone carvings depicting whale hunting, the leviathanic scenes in classical art (many inspired by biblical episodes), the sacred nature of sharks, turtles or whales in many cultures, the profusion of Hollywood movies and TV documentaries featuring fearsome or tender sea giants: all of these are cultural manifestations of a genuine human fascination with these creatures. Today, their iconic role and charismatic nature have gained new momentum, as they embody the contemporary challenge of saving wild animals from mass extinction caused by an unsustainable human development model. The increasing public perception that most marine megafauna species have reached a threatened or endangered conservation status, in spite of their great ecological as well as economic value for fisheries and ecotourism, gives them a unique flagship role in both conservation research and citizen science.
This contemporary paradigm represents an unprecedented push to investigate megafauna, including discovering the habitats essential for their survival, gauging their individual and population responses to exploitation, shipping, climate change or pollution, and understanding the many behaviors, physiology and motivations behind the migrations, feeding, mating and other vital functions throughout their lives that are still unknown or remain largely unexplained (e.g., Hays et al., 2016). Yet, obvious as it may seem, this strategic scientific move faces serious challenges.
First, the elusiveness and remoteness of many marine megafauna species make them hard and costly to access and to study in detail. The good news here is that the use, performance and sophistication of electronic tagging devices have increased substantially, and appropriate statistical tools to make sense of the wealth of data retrieved from this equipment have now been developed, allowing the behavior of free-ranging organisms to be observed and measured with a detail and accuracy that we could only dream of a couple of decades ago (e.g., Hussey et al., 2015; Hays et al., 2016). This change was also accompanied by an increasing capacity to collect and analyze large volumes of oceanographic and remote sensing data at the scales needed to understand the environment in which these animals live (e.g., Druon et al., 2016; Braun et al., 2019; Chambault et al., 2019). Second, the growing scrutiny and legal rules imposed on the handling and study of threatened megafauna, including the publication of results, require proven high standards in research, especially with respect to captivity facilities and at-sea procedures (e.g., tagging and restraining, mitigation of behavioral disruption due to human presence). Third, although the same can arguably be said about other animal groups, it is almost impossible to find situations without some sort of potential human interference with megafauna's individual behavior, given their high sensitivity to human activities (including research) brought about by their general characteristics (large size, high mobility, increased sensory capacities). Thus, it becomes very hard to disentangle the key effects of environmental drivers from human disturbance, which in turn limits our capacity to forecast those effects and devise appropriate conservation measures.
There are, however, some areas around the globe where the conditions under which megafauna subsist may be considered less stressful (as opposed to the fabled concept of more pristine), as they profit from environmentally sustainable development models, adopted rules and cultural behaviors. Arguably, these areas should be broadly favorable from the megafauna conservation biology and research perspectives. Advancing our scientific knowledge and conservation progress on marine megafauna can and should also be capitalized on in regions where exceptional access to multiple species (i.e., megafauna research 'hotspots') combines with an adequate legal framework, know-how and research infrastructure. Areas fulfilling these three conditions could, therefore, be targeted for research. In this paper, we argue that the wider Azores region (mid-north Atlantic) is one such area, and discuss possible strategies and measures toward achieving that goal.
A MID-ATLANTIC HUB FOR OCEANIC MEGAFAUNA
The Azores (Portugal) is the most remote oceanic archipelago in the north Atlantic, lying about 1,400 and 2,000 km from continental Europe and North America, respectively. It represents a sub-area of Portugal's Exclusive Economic Zone (EEZ) of around 1 million km2, one of the largest in the European Union. This group of nine volcanic islands and the numerous seamounts surrounding it sits right on the mid-Atlantic ridge at a triple (tectonic plate) junction, and was formed by the high eruptive activity in this region. In climatological-oceanographic terms, the Azores represent an ecotone: their otherwise temperate geographic location is given a subtropical hint by the north Atlantic subtropical gyre via the southeastern branch of the Gulf Stream (the Azores current) and its eddies flowing through the southern part of the region (Santos et al., 1995; Caldeira and Reis, 2017). This unique blend of a dynamic oceanography interacting with high seafloor complexity in the middle of the north Atlantic basin is thought to provide the particular conditions which attract oceanic vertebrate megafauna.
The Azores hosts one of the highest levels of cetacean biodiversity in the world, with 24 species of toothed and baleen whales sighted regularly in the region (Table 1). It includes a mix of resident species (e.g., bottlenose and Risso's dolphins), species that are present year-round (e.g., sperm whales, common and striped dolphins, pilot whales, Mesoplodon beaked whales), and seasonal visitors (baleen whales, Atlantic spotted dolphin, northern bottlenose whale). A common trait seems to be the exceptional access to cetacean prey, which are available either seasonally (e.g., the krill and baitfish upon which baleen whales and dolphins feed during their spring and summer visits, respectively) or year-round (e.g., the deep-sea squid fed upon by sperm whales (Clarke et al., 1993) or the mesopelagic prey targeted by dolphins, beaked whales, pelagic sharks or swordfish (Clarke et al., 1995, 1996)). Some year-round or seasonal visitors also use the region as a nursery, namely sperm whales and common and spotted dolphins. The region also represents an important ornithological transition between tropical and temperate regions. Although not ranking as high in number of nesting species as other archipelagic regions such as the Orkneys or Cabo Verde, ten seabird species (six procellariiformes and four charadriiformes) use the Azorean islands and islets as a primary nesting area (Table 1). The region holds 100% of the world's breeding population of Monteiro's storm petrel (Bolton et al., 2008), almost 75% of Cory's shearwater, up to 33% of Barolo shearwater and nearly half the European breeding population of roseate tern (BirdLife International, 2019), the most oceanic population of this species globally. Studies have also revealed that breeding adults and their reproductive success depend on the epi- and mesopelagic feeding resources around the Azores (Monteiro et al., 1996; Granadeiro et al., 1998; Magalhaes et al., 2008; Amorim et al., 2009; Neves V.C. et al., 2012; Paiva et al., 2018). Four out of seven species of sea turtles occur in Azorean waters (Table 1). The area is used as a prime oceanic juvenile (growth) habitat by the loggerhead turtle population nesting in the southeastern United States (Bolten et al., 1993, 1998) and lies along the migratory corridor used by leatherback turtles migrating between feeding and nesting areas (e.g., Fossette et al., 2010). The region's oceanic and ecotonic position favors the year-round blooming of a wide range of gelatinous organisms (Lucas et al., 2014), the main staple of sea turtles in the open ocean (e.g., Frick et al., 2009; Dodge et al., 2011).
Large bony and cartilaginous fishes are another key component of the megafauna ensemble occurring in the region, including six tropical and temperate tuna, five billfishes/spearfishes, five sun/moon fishes, three large groupers (one endemic to Macaronesia) and over 60 species of benthic and pelagic sharks and rays (Porteiro et al., 2010;Das and Afonso, 2017) (Table 1). In the case of tuna/billfishes and pelagic/deepsea sharks, this represents a relatively high diversity (e.g., Das and Afonso, 2017). Some are mostly visitors during the warmer season, i.e., June to November (e.g., tropical tuna and billfishes, mobulid rays, whale shark), but others apparently use the area throughout their lives (e.g., groupers, several deepwater sharks, Afonso et al., 2011;Reid et al., 2019) or as a long-term nursery ground for juvenile growth (e.g., blue, smooth hammerhead and tope sharks, Afonso et al., 2014b).
Collectively, these taxa constitute by far the most vulnerable and protected group of animals occurring in the region, including the terrestrial realm (Table 1). Some 80, 29, and 17% of the sea turtles, sharks/fishes and marine mammals that occur in the region, respectively, are classified as Critically Endangered, Endangered or Vulnerable by the International Union for Conservation of Nature (IUCN), and a large number of cetaceans and sharks/fishes are still Data Deficient (Table 1). Their catch, trade and use, as well as their disturbance and habitat degradation, are strictly forbidden by national and international laws and conventions including the EU Common Fisheries Policy (CFP), the Natura 2000 and Marine Strategy Framework (MSFD) Directives, the Convention on Biological Diversity (CBD), the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the Convention on the Conservation of Migratory Species of Wild Animals (CMS). Nearly all of the large fishes, including sharks, are of commercial interest worldwide. Large groupers, tuna and most elasmobranchs are IUCN red-listed, protected by international law (e.g., CITES, CBD) and managed tightly by regional marine fisheries organizations, namely the International Council for the Exploration of the Sea (ICES) and the International Commission for the Conservation of Atlantic Tunas (ICCAT), in some cases with their catch and trade forbidden globally (e.g., mobulid rays, hammerhead and thresher sharks) or in the northeast Atlantic (e.g., most deepwater sharks) (Table 1).
In short, the Azores hosts multiple essential megafauna habitats (EMH) for the north Atlantic populations of all four key groups of vulnerable/endangered marine megafauna combined (marine mammals, seabirds, sea turtles, fishes), be they feeding, mating, spawning, pupping, or even resting grounds during their large-scale migrations. In addition, documented large-scale migrations, from both Azorean- and non-Azorean-based tracking studies, directly connect these EMH in the Azores to the eastern and western north Atlantic and/or to Arctic waters and the tropical/equatorial regions at the individual spatial ecology level for several whales (Silva et al., 2013; Prieto et al., 2014, 2017), seabirds (González-Solís et al., 2007; Neves et al., 2015; Ramos et al., 2015), turtles (Bolten et al., 1998), sharks (Afonso et al., 2014a; Thorrold et al., 2014; Vandeperre et al., 2014) and tuna/billfishes (Druon et al., 2016) (Figure 1).
From the broader Atlantic-scale perspective, the wider Azores emerge as a singular multispecies oceanic EMH hotspot on a migratory crossroads, linking the eastern to the western basin margins as well as the cold productive boreal waters to the tropical and equatorial seas. Yet, we still lack basic knowledge of the population dynamics, spatial ecology and fine-scale behavior of most of these species, and therefore do not know the full extent of the region's role (or that of any other region, for that matter) in marine megafauna conservation. Nevertheless, it is clear that the relevance of this Atlantic hotspot results from (1) the diversity of meso- and local-scale EMH hotspots located along Azorean island shores and in the adjacent deep sea and open ocean, some of which are concurrently utilized by multiple species, and (2) the valuable resources (food, shelter, mates, nests) they offer for the survival of the resident and visiting megafauna.
HARNESSING MEGAFAUNA TO SPEARHEAD AN INTEGRATED MARINE CONSERVATION, RESEARCH AND DEVELOPMENT STRATEGY
Marine megafauna populations face growing threats at the broad scale of their ocean-basin distribution and movements, including: (1) targeted or accidental capture by longlining and purse-seining industrial fishing (e.g., Bolten et al., 1998; Ferreira et al., 2001; Amandè et al., 2011; Filmalter et al., 2013); (2) the degradation of their habitat due to chemical, noise and light pollution (Halpern et al., 2008; Fontaine et al., 2011; Peng et al., 2015; Rodríguez et al., 2017; Romagosa et al., 2017), to introduced predators and diseases (e.g., Fontaine et al., 2011; Hermosilla et al., 2016; Neves et al., 2017) or to traffic (Tournadre, 2014); and (3) the effects of climate change, such as rising sea temperatures (Sundby et al., 2016) and the expansion of oxygen minimum zones (Stramma et al., 2012), which may lead to physiological stress, reduced foraging opportunities or higher parasite loads, and to the subsequent reduction of their physiological condition and reproductive success. These threats are recognized in current European (MSFD and Natura 2000) and global (CBD, Ramsar, OSPAR Convention for the Protection of the Marine Environment of the North-East Atlantic) policies, which commit signatory countries, including Portugal, to establishing effective protection measures and rigorous scientific monitoring programs.
The global oceans already support few areas of wilderness, and even fewer in the northern hemisphere and the Atlantic Ocean (Jones et al., 2018). The wider Azores region is one area where those threats, taken together, are less severe and show a slower annual change than elsewhere in the north Atlantic (Halpern et al., 2008, 2019). The region hosts a small (quarter of a million) human population and promotes a sustainable development model, with ecotourism now being the fastest growing sector. Fisheries are essentially artisanal and, although the Azores was once an arena for whaling, there has been no taking of cetaceans, seabirds or turtles for decades. A moratorium put in place by the European Commission in 2005, as a result of the region's previous policies (independent of the EU CFP), bans all trawling inside the Azores EEZ (Probert et al., 2007). Tuna are an important fishery but are caught exclusively using one-by-one line fishing. The bottom hook-and-line fishery by-catches very small quantities of elasmobranchs compared to continental fisheries (Torres et al., 2016; Fauconnet et al., 2019). Industry is very small in scale, and direct sources of human pollution are considered to be of minor concern. There are also conservation policies and best-practice programs implemented by the region that target or benefit megafauna: the Azores has one of the largest and most diverse networks of marine protected areas (MPAs) in Europe and the Atlantic, covering a mix of coastal, oceanic and deep-sea habitats (including several seamounts and pelagic seabird foraging areas), although many still require specific regulations and proper enforcement (Abecassis et al., 2015); whale and shark watching are limited to legally defined carrying capacities and codes of conduct are broadly followed by operators; and several public and civil environmental education and impact mitigation programs are now well established, such as the annual rescue campaign for seabird fledglings (Fontaine et al., 2011), marine litter cleaning events, and catch-and-release in big-game fishing.
Yet, the region's megafauna also faces some local threats. The most evident is the high by-catch of pelagic sharks and sea turtles in the EU pelagic longlining occurring within Azorean waters (Pham et al., 2013; Afonso et al., 2014b) (Table 1). The increasing marine traffic and noise produced by international cargo vessels, inter-island fast ferries and whale-watching vessels are also a potential problem for cetaceans and other marine megafauna (Romagosa et al., 2017). Documented areas of megafauna aggregation, such as the cetacean ground south of Pico and Faial islands and the large pelagic fish aggregations on the summits of the Princess Alice, Condor and Formigas banks, still lack effective protection even when already declared as MPAs (Abecassis et al., 2015; Afonso et al., 2018). Marine litter is, as elsewhere, a growing and pervasive problem all the way up the food web to megafauna (Pham et al., 2014).
We argue that the current international-to-local push for an integrated conservation approach and the full implementation of a sustainable development model in the Azores, where sustainable harvest levels based on low-impact gear and effort may coexist with ecotourism, can support the future integrity of this EMH hotspot. This model could also have major impacts in promoting an innovative Blue Economy agenda leveraged on R&D, where hybrid research programs based on new technological developments could foster cutting-edge science on megafauna behavior and biologging, and vice-versa. Some already existing examples demonstrate the feasibility of developing this concept (e.g., Fontes et al., 2018a,b). Importantly, it could create substantial opportunities for studying and testing the ecosystem approach to the management of marine resources and for understanding ecosystem-level impacts of climate change. The multispecific nature of this megafauna hotspot also offers an added opportunity in that it allows the concurrent study of both patterns and processes, and transversal hypothesis testing involving evolutionarily contrasting species, thus partially overcoming the traditional limitations of single-species approaches to understanding those mechanisms.
Thus, the Azores fulfills the three major conditions to qualify as a priority area for research and development on megafauna conservation biology. The strategic centrality of the region, its exceptional access to multiple megafauna species and hotspots very close to harbor, and its historically low levels of (artisanal) fisheries impact, pollution, and habitat degradation compared with most other regions make it a realistic opportunity with substantial gains and few, if any, downsides.
AN ACTION PLAN
In order to promote and materialize this vision, we propose an integrated action plan.
First, this plan should ensure the long-term survival of effective measures already in place, including an unequivocal political commitment to enforce and periodically reassess current management and conservation measures. On the legal side, these measures include the maintenance of the current legal conservation status of most megafauna species (cetaceans, seabirds, turtles, some elasmobranchs) as well as their associated protection actions (e.g., protection and restoration of seabird nesting sites, mandatory release of listed turtle and shark species), the maintenance of the trawling ban and the prohibition of high-impact tuna fishing practices in the region, and the maintenance of the broad protection status of some offshore areas, including seamounts (Table 1).
Second, the region should adopt new protection measures and expand existing ones when necessary in order to ensure an effective contribution to the conservation of megafauna populations. Among the most obvious is a set of measures to protect pelagic and coastal sharks, which currently have little protection, including the banning of shark landings and of gears with higher shark by-catch (i.e., pelagic longlining and coastal gillnetting) and the adoption of best practices to release sharks and turtles in conditions that favor survival (Table 1). Both these fisheries have a minor socio-economic impact in the Azores, as they contribute a very small fraction of landings and employment (Carvalho et al., 2011; Pham et al., 2013). Fifteen coastal countries in the Atlantic, Indian, and Pacific Oceans have already opted to ban commercial shark fishing altogether, and have laws that prohibit the possession, trade or sale of sharks and shark products (Ward-Paige and Worm, 2017). Another would be a set of measures targeting cetaceans, such as tightening and effectively enforcing the whale-watching codes of conduct and legislation, and establishing stringent regulations to reduce noise (including seismic surveying) and the risk of ship strike in areas of high cetacean concentration. Finally, the region should establish no-take MPAs in areas known to serve as multispecific EMHs. The very few currently existing no-take areas in the Azores are all coastal and very small in size (Abecassis et al., 2015; Afonso et al., 2018) and, consequently, have very little, if any, impact on megafauna populations. This measure could be easily achieved by updating the current legislation and zoning of some partially protected MPAs that are known to host multiple megafauna, such as the Condor, D. João Castro, Formigas and Princess Alice seamounts.
Third, this plan requires an ambitious research agenda that can ensure the acquisition of relevant knowledge from local to global scales in support of megafauna conservation while effectively promoting R&D. For example, a thorough multidisciplinary investigation of where those multispecific hotspots are located (patterns) and why they are important (processes) for diverse megafauna is needed in order to better understand which sites should be prioritized for full protection, and what the relative contribution of creating a 'megafauna sanctuary' would be to the health of the populations. However, achieving that goal will take several years to decades, even in a relatively well-studied area such as the Azores. This agenda should thus focus on ensuring an adequate level of multidisciplinary research infrastructure and funding in the region for the next decade. Essential to the feasibility and broader benefits of this agenda is that it be anchored in international collaborations and partnerships that can ensure state-of-the-art scientific and technological developments.
Such an action plan could not only benefit many highly migratory megafauna populations that live in and depend on the broader Atlantic Ocean basin, but also leverage the Azores and its marine megafauna as a case study for raising global environmental awareness among stakeholders and the wider public about the urgent need for an effective ecosystem approach to marine management. It can serve as a flagship political program to change practices, techniques, policies and options while promoting the ocean literacy needed to help reverse the problems threatening marine conservation.
DATA AVAILABILITY STATEMENT
The datasets for this study will not be made publicly available since the data has not yet been published. The data used in the broad mapping presented in Figure 1 is part of a separate publication and will be available once finalized.
AUTHOR CONTRIBUTIONS
PA designed the study and drafted the manuscript. All other authors improved the draft and critically reviewed the manuscript. PA, MS, and MM provided additional data for
|
v3-fos-license
|
2022-02-09T16:12:36.585Z
|
2022-02-01T00:00:00.000
|
246659664
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3425/12/2/224/pdf",
"pdf_hash": "e1ac821c975f773475f25e1d5f39d862e76aa2ec",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44233",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b46e3eb14d4defc983107c35906c90b5968cb855",
"year": 2022
}
|
pes2o/s2orc
|
Wilhelm von Waldeyer: Important Steps in Neural Theory, Anatomy and Cytology
Heinrich Wilhelm Gottfried von Waldeyer-Hartz is regarded as a significant anatomist who helped the entire medical world to discover and develop new techniques in order to improve patient treatment and decrease death rates. He discovered the fascia propria recti in 1899, a structure important in total mesorectal excision, a procedure that improves rectal cancer treatment and outcomes. He played an important role in developing the neuron theory, which states that the nervous system consists of multiple individual cells called neurons and which currently stands as the basis of our understanding of neural impulse transmission. Waldeyer was also interested in cytology, where he made a substantial contribution as the first to adopt the name "chromosome". He thereby accelerated the progress of what is now known as genetics. In conclusion, starting from the fascia propria recti and continuing with great discoveries in cytology and neuron theory, Wilhelm von Waldeyer represents a key figure in what we today call medicine.
Introduction
Heinrich Wilhelm Gottfried von Waldeyer-Hartz (Figure 1) was a well-known German anatomist, recognised for his efforts in introducing both the medical world and humanity to multiple fields of study, such as anatomy, embryology and pathology, which today play a vital role in treating genetic diseases and cancer.
He was born on 6 October 1836 in Hehlen, a small village near Brunswick. He completed his studies at the Gymnasium Theodorianum in Paderborn, where he obtained a graduation diploma in 1856, which attested to his eligibility to attend university courses. He then attended the Universität Göttingen, where he focused his studies on mathematics and the natural sciences [1]. This place played an important role in Waldeyer's future, because there he met the renowned anatomist Jakob Henle (1809-1885), who discovered the loop of Henle, a structure of great importance in kidney physiology. Waldeyer was so impressed with Henle's work that he entered medical school in 1857. In 1861, he obtained his doctorate based on his thesis entitled "De claviculae articulis e functione" [2]. Later, in 1867, he became a professor of pathological anatomy in Breslau; he then became a full professor in Strasbourg in 1872, and in 1883 he moved to Berlin, where he lived for more than 33 years working at the Institute of Anatomy [3].
Before his death on 23 January 1921, his desire was for his hands, skull and brain to be preserved at the Institute of Anatomy in Berlin in order to be studied and examined. Hans Virchow was the one who dissected the hands and published a detailed description of their anatomy (Figure 2) [4]. However, the studies of his brain and skull were not assigned to Virchow and have not been found. The idea of donating one's body parts to the Institute was a popular decision among well-known people from the medical world of that era, owing to a belief that distinguishing signs could be seen in the brains of tremendously intelligent people [2].
Waldeyer's Medical Contributions
One of Waldeyer's biggest anatomical contributions was the "fascia propria recti", which at the time of its description in 1899 did not attract great medical interest, but whose importance in surgical practice has grown over the last century owing to its role in total mesorectal excision, a surgical procedure used in rectal cancer treatment. Total mesorectal excision was described for the first time by Professor Bill Heald in 1982, at the Basingstoke District Hospital in the United Kingdom [5].
The fascia propria recti is regarded as a thin layer of connective tissue which lies between the presacral fascia and the rectal proper fascia. It is also known, according to its position, as the rectosacral fascia, and it divides the retrorectal space into two compartments: a superior and an inferior one [6].
The great debate that arose around the fascia propria recti is whether Waldeyer was in fact the first to describe it. In the first edition of the anatomy textbook "Traité d'Anatomie Humaine", revised by P. Poirier and published in Paris, probably in 1894 but certainly between 1892 and 1896, Toma Ionescu described this fascia for the first time under the name "rectal sheath", about five years before the name "fascia propria recti" spread throughout the medical world [7].
It is not clear why Toma Ionescu was not perceived as the first anatomist to describe it, but the most probable explanation lies in the large age difference between the two. Waldeyer was at that time 25 years older than Toma Ionescu and was already one of the best-known anatomists in the world, with wider influence and an established reputation [7].
Nevertheless, French authors deserve full credit for giving Toma Ionescu recognition for his discovery, considering him the first to claim a name for the rectosacral fascia.
The Neuron Theory
The neuron theory, also called the neuron doctrine, is an idea of Santiago Ramón y Cajal which states that the nervous system consists of multiple individual cells called "neurons", each with its own structure and function, working together to create a single, refined machinery that controls the entire human body [8].
However, the path to this concept was not an easy one, and Wilhelm von Waldeyer played an important role in formulating the neuron theory.
The history of the neuron theory starts back in 1873, when Camillo Golgi invented a new staining method, known as "la reazione nera" ("black reaction"), later called the Golgi staining technique in his honour [9].
This method was used for microscopic research, which at that time was difficult due to the lack of staining techniques. The new method discovered by Golgi therefore played a vital role in the study of the nervous system, because it allowed the dendrites of the neuron to be differentiated from the axon.
Thus, he observed an entire network of neurons in the grey matter and proposed what came to be called "the reticular theory". This concept held that the entire cerebrospinal axis was one continuous neural network acting as a single organ. For a time this theory represented the prevailing idea of how the nervous system works, but in reality the truth was quite different from what Camillo Golgi proposed [10].
In 1887, Santiago Ramón y Cajal used the Golgi staining technique to study the neural network, making a discovery that would change the entire approach to the nervous system (Figure 3). He found that there is no continuous link between neurons; instead, there is a space between them, now known as the synaptic cleft. This was the moment the neuron theory was born [11].
Golgi's concept of a continuous nervous system was therefore rendered obsolete, and even though Golgi never agreed with Cajal's theory, Waldeyer was a firm supporter of it. Moreover, the impact of Waldeyer's contribution is mostly represented by his naming of the nerve cells "neurons", from the Greek word for "sinew" [12].
The neuron theory therefore constituted a solid basis for subsequent discoveries concerning impulse transmission, as well as the structural and functional particularities of neurons, which could later be described using electron microscopy in order to make a clear statement about the way the entire nervous system works [13].
However, we should not assume that Golgi's reticular theory was entirely wrong. Studies have now determined that there is an intense interconnectedness between neurons and astrocytes, and even if the nervous system can be described as a network composed of many independent cells, it is more important to recognize that it works as a unitary and perfectly coordinated system.
Waldeyer's Contributions to Cytology
Cytology in the 19th century was a controversial subject of study due to the lack of information, as well as the absence of the laboratory techniques needed to analyse the structure of the cell and its mechanisms. However, various studies were conducted and, step by step, the researchers of that time made rapid progress. Waldeyer played an important role in refining cytology.
In 1888, Waldeyer published an article entitled "Über Karyokinese und ihre Beziehungen zu den Befruchtungsvorgängen" ("About karyokinesis and its relationships with the fertilization processes") [14], which represents what is now called a review article, since it has 210 references and consolidates a vast amount of information in a single paper.
Among the scientists of that century cited in this extended review, Rudolf Virchow, Theodor Boveri, Oskar Hertwig, Edouard-Gerard Balbiani, Walther Flemming and many others provided both theoretical and experimental information which was used to enhance the explanation of the fertilization and karyokinesis processes.
One of the most relevant pieces of information that can be extracted from this article is the word "Chromosomen" (in German) [14], which was translated into English as the form we use today, "chromosome". Before this name was introduced by Wilhelm von Waldeyer, the term "Chromatinelemente" (chromatic elements), proposed by Theodor Boveri, was used by the entire scientific community [2].
However, there was still a long way to go before the discovery of the chromosome. First of all, near the middle of the 19th century, Theodor Schwann (1810-1882) and Matthias Schleiden (1804-1881) were regarded as the originators of cell theory, proposed in 1838-1839. They strongly believed that cells were produced de novo from a substance called "cytoblastem", which did not have a specific structure [15].
Even though this theory sounds aberrant these days, the scientific community at the time did not have any strong experimental or theoretical information to draw on, and this lack of data led to a mistaken perception of cytokinesis.
The theory was not rejected until 1855, when Rudolf Virchow (1821-1902) stated "omnis cellula e cellula" [15], meaning that every new cell is created from a pre-existing one through division (Figure 4). Nevertheless, once Virchow's statement became accepted by the entire scientific community, the new debate about nuclear division became more and more thought-provoking for all researchers. It could be seen at that time that during cell division the nucleus disappears and then reappears along with the birth of the new daughter cells.
Some of the scientists of that time considered that karyokinesis was actually a "generatio spontanea" inside the cells, while others, such as Walther Flemming, were supporters of the indirect division (mitosis) of the nucleus. Later, in 1917, Oscar Hertwig discovered that the role of the chromosomes lies in carrying hereditary information, helping the cells to become specialised according to the instructions carried by the chromosomes [16].
Therefore, the 19th century was a century of discovery that laid the foundation for future research. Starting from cell theory and going up to the central role of chromosomes in deciding the fate of the cell, many experimental advances were made, opening a new path towards understanding the basic functions of the cell.
Moreover, Waldeyer was a key figure in the development of cytology; in addition to popularising the name "chromosomes" throughout the world, his exceptional microscopy methods led to important observations on fertilization and karyokinesis, enabling him to describe even the way polar bodies are formed during oogenesis.
|
v3-fos-license
|
2020-12-14T20:14:05.817Z
|
2020-11-07T00:00:00.000
|
234358413
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2020SW002657",
"pdf_hash": "4a293d303aabf92c69d0b555a9a0ac7d5f95f33d",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44234",
"s2fieldsofstudy": [
"Geology",
"Physics"
],
"sha1": "d0d0de6dc4f6fecb23c4407c505193b2989f30f7",
"year": 2021
}
|
pes2o/s2orc
|
Comparing Three Approaches to the Inducing Source Setting for the Ground Electromagnetic Field Modeling due to Space Weather Events
Ground‐based technological systems, such as power grids, can be affected by geomagnetically induced currents (GIC) during geomagnetic storms and magnetospheric substorms. This motivates the necessity to numerically simulate and, ultimately, forecast GIC. The prerequisite for the GIC modeling in the region of interest is the simulation of the ground geoelectric field (GEF) in the same region. The modeling of the GEF in its turn requires spatiotemporal specification of the source which generates the GEF, as well as an adequate regional model of the Earth’s electrical conductivity. In this paper, we compare results of the GEF (and ground magnetic field) simulations using three different source models. Two models represent the source as a laterally varying sheet current flowing above the Earth. The first model is constructed using the results of a physics‐based 3‐D magnetohydrodynamic (MHD) simulation of near‐Earth space, the second one uses ground‐based magnetometers’ data and the Spherical Elementary Current Systems (SECS) method. The third model is based on a “plane wave” approximation which assumes that the source is locally laterally uniform. Fennoscandia is chosen as a study region and the simulations are performed for the September 7–8, 2017 geomagnetic storm. We conclude that ground magnetic field perturbations are reproduced more accurately using the source constructed via the SECS method compared to the source obtained on the basis of MHD simulation outputs. We also show that the difference between the GEF modeled using laterally nonuniform source and plane wave approximation is substantial in Fennoscandia.
In spite of the fact that the importance of performing simulations using fully 3-D conductivity models is currently widely recognized (Kelbert, 2020), such simulations are still rather rare in the GIC community (e.g., Liu et al., 2018; Marshalko et al., 2020; Marshall et al., 2019; Nakamura et al., 2018; Pokhrel et al., 2018; Wang et al., 2016), mostly due to the lack of credible 3-D conductivity models of the regions of interest as well as the unavailability of adequate tools to model the problem in its full complexity.
As for the source, approximating it by plane waves still prevails in GIC-related studies (e.g., Campanya et al., 2019; Kelbert & Lucas, 2020; Kelbert et al., 2017; Lucas et al., 2018; Sokolova et al., 2019; Wang et al., 2020). This approximation seems reasonable in low and middle latitudes, where the main source of anomalous geomagnetic disturbances is the large-scale magnetospheric ring current. However, the plane wave assumption may not work at higher latitudes, where the main source of the disturbances is the auroral ionospheric current, which is extremely variable both in time and space, especially during periods of enhanced geomagnetic activity (Belakhovsky et al., 2019). Marshalko et al. (2020) provided some evidence for that by comparing ground EM fields modeled in the eastern United States using the plane wave approximation and the excitation by a laterally variable source which was constructed using outputs from a 3-D magnetohydrodynamic (MHD) simulation of near-Earth space. The authors found that the difference increases toward higher latitudes, where the lateral variability of the source expectedly enlarges. However, their modeling was mostly confined to the midlatitude region; thus it is still unclear how pronounced the difference between the plane wave and "laterally varying source" results could be in auroral regions. In this paper, we investigate this problem using Fennoscandia as a study region. The choice of Fennoscandia is motivated by: (a) the high-latitude location of the region; (b) the availability of a 3-D ground electrical conductivity model of the region (Korja et al., 2002); (c) the existence of the regional magnetometer network (International Monitor for Auroral Geomagnetic Effects, IMAGE (Tanskanen, 2009)), allowing us to build a data-based model of a laterally variable source, which is a natural alternative to a physics-based (MHD) source model in areas with a reasonably dense net of observations.
(Figure caption: Snapshots of the magnitude and direction of the equivalent current computed using the SECS method at an altitude of 90 km above the surface of the Earth at 23:16 and 23:52 UT on September 7, 2017. Locations of IMAGE magnetometers (including Abisko (ABK) and Uppsala (UPS)), the data from which were used for the equivalent current construction, are marked with squares. The location of the Saint Petersburg (SPG) geomagnetic observatory is marked with a circle. Note that SPG is not part of the IMAGE magnetometer network and its magnetic field data were not used for the equivalent current construction.)
Specifically, we perform 3-D modeling of ground electric and magnetic fields in Fennoscandia using three different source models, taking the September 7-8, 2017 geomagnetic storm as the space weather event. Two models approximate the source by a laterally varying sheet current flowing above the Earth's surface. One of the models is built using the results of a physics-based 3-D MHD simulation of near-Earth space; the other model uses the IMAGE magnetometer data and the Spherical Elementary Current Systems (SECS) method. The third model is based on a "plane wave" approximation, which assumes that the source is locally laterally uniform. Note that previous GIC-related studies in Fennoscandia operated with both 1-D (e.g., Myllys et al., 2014; Pulkkinen et al., 2005; Viljanen & Pirjola, 2017) and 3-D (Dimmock et al., 2019, 2020) Earth conductivity models; the magnetic field in most of these papers was allowed to be laterally variable, but the GEF was always calculated implicitly assuming the plane wave excitation.
We compare the modeling results and discuss the differences and similarities found. We also compare the results of magnetic field modeling with observations. The paper is organized as follows. The methodology used is described in Section 2, followed by the presentation of our results in Section 3. Finally, the discussion of our results and conclusions are given in Section 4.
Governing Equations and Modeling Scheme
We compute electric, E(t, r), and magnetic, B(t, r), fields for a given Earth's conductivity distribution σ(r) and a given inducing source j ext (t, r), where t and r = (x, y, z) denote time and position vector, correspondingly. These fields obey Maxwell's equations, which are written in the time domain as
$$\frac{1}{\mu_0}\nabla\times\mathbf{B} = \sigma\mathbf{E} + \mathbf{j}^{\mathrm{ext}}, \qquad (1)$$
$$\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad (2)$$
where μ 0 is the magnetic permeability of free space. Note that this formulation of Maxwell's equations neglects displacement currents, which are insignificant in the range of periods considered in this study. We solve Equations 1 and 2 numerically using the following three-step procedure (a schematic sketch of the workflow is given after the list):
1. Maxwell's equations in the frequency domain are numerically solved for the corresponding angular frequencies ω = 2πf, using the scalable 3-D EM forward modeling code PGIEM2G, based on a method of volume integral equations (IE) with a contracting kernel (Pankratov & Kuvshinov, 2016).
We would like to note here that in our previous papers (Ivannikova et al., 2018; Marshalko et al., 2020) we used the modeling code extrEMe (Kruglyakov et al., 2016), which is also based on the IE method. The distinction between the two codes lies in the different piece-wise bases used: PGIEM2G exploits a piece-wise polynomial basis, whereas extrEMe uses a piece-wise constant basis. We found that in order to properly account for the effects (in the electric field) of the extremely large conductivity contrasts in the Fennoscandian region, extrEMe requires significantly larger computational loads than PGIEM2G. This is the reason why we used the PGIEM2G code rather than extrEMe to obtain the modeling results presented in this paper. Specifically, PGIEM2G was run with the use of first-order polynomials in the lateral directions.
2. The frequency-domain solutions are obtained at the FFT frequencies of the source time series, where L is the length of the (input) time series of the inducing current j ext (t, r), and Δt is the sampling rate of this time series. In this study Δt is 1 min, and L is 8 h.
3. E(t, r) and B(t, r) are obtained with an inverse FFT of the frequency-domain fields.
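For readers who prefer a concrete picture, the following Python sketch outlines the bookkeeping around this three-step procedure. The function solve_frequency_domain is a hypothetical placeholder standing in for a rigorous solver such as PGIEM2G, and the use of numpy's real FFT routines is an illustrative choice rather than a description of the authors' implementation.

```python
import numpy as np

# Minimal sketch of the three-step workflow above. `solve_frequency_domain`
# is a hypothetical per-frequency solver that returns (E, B) on the same
# grid layout as the supplied source harmonic.

def model_em_fields(j_ext_time, dt, solve_frequency_domain):
    """j_ext_time: real array of shape (n_times, ...) sampled every dt seconds."""
    n_times = j_ext_time.shape[0]

    # FFT of the source time series; the frequencies follow from n_times and dt.
    j_ext_freq = np.fft.rfft(j_ext_time, axis=0)
    freqs = np.fft.rfftfreq(n_times, d=dt)

    e_freq = np.zeros(j_ext_freq.shape, dtype=complex)
    b_freq = np.zeros(j_ext_freq.shape, dtype=complex)

    # Solve the frequency-domain equations for each angular frequency omega = 2*pi*f.
    for k, f in enumerate(freqs):
        if f == 0.0:
            continue  # skip the static (zero-frequency) component
        omega = 2.0 * np.pi * f
        e_freq[k], b_freq[k] = solve_frequency_domain(omega, j_ext_freq[k])

    # Inverse FFT back to the time domain.
    e_time = np.fft.irfft(e_freq, n=n_times, axis=0)
    b_time = np.fft.irfft(b_freq, n=n_times, axis=0)
    return e_time, b_time
```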
3-D Conductivity Model
The 3-D conductivity model of the region was constructed using the SMAP (Korja et al., 2002), a set of maps of crustal conductances (vertically integrated electrical conductivities) of the Fennoscandian Shield, surrounding seas, and continental areas. The SMAP consists of six layers of laterally variable conductance. Each layer has a thickness of 10 km. The first layer comprises contributions from the seawater, sediments, and upper crust. The other five layers describe the conductivity distribution in the middle and lower crust. SMAP covers the 0°E-50°E and 50°N-85°N area and has a 5′ × 5′ resolution. We converted the original SMAP database into a Cartesian 3-D conductivity model of Fennoscandia with three layers of laterally variable conductivity of 10, 20, and 30 km thicknesses (Figures 1a-1c). This vertical discretization is chosen to be compatible with that previously used by Rosenqvist and Hall (2019) and Dimmock et al. (2019, 2020) for GIC studies in the region. Conductivities in the second and the third layers of this model are simple averages of the conductivities in the corresponding layers of the original conductivity model with six layers. To obtain the conductivities in Cartesian coordinates, we applied the transverse Mercator map projection (latitude and longitude of the true origin are 50°N and 25°E, correspondingly) to the original data and interpolated the results onto a regular lateral grid. The lateral discretization and size of the resulting conductivity model were taken as 5 × 5 km² and 2,550 × 2,550 km², respectively. Deeper than 60 km we used a 1-D conductivity profile obtained by Grayver et al. (2017) (cf. Figure 1d).
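The conversion described above can be sketched as follows. The use of pyproj for the transverse Mercator projection and of scipy's griddata for the interpolation are assumptions made for illustration, not a statement of the tools actually used by the authors.

```python
import numpy as np
from pyproj import Proj
from scipy.interpolate import griddata

# Sketch: merge six 10-km conductivity layers into layers of 10, 20 and 30 km,
# project the lat/lon grid with a transverse Mercator projection (true origin
# 50N, 25E), and interpolate onto a regular 5 x 5 km Cartesian grid.

def smap_to_cartesian(lon, lat, sigma_layers, cell=5e3, half_size=1275e3):
    """lon, lat: 1-D arrays of grid-point coordinates in degrees;
    sigma_layers: array (6, n_points) of layer conductivities in S/m."""
    merged = np.vstack([
        sigma_layers[0],                    # 0-10 km
        sigma_layers[1:3].mean(axis=0),     # 10-30 km (simple average)
        sigma_layers[3:6].mean(axis=0),     # 30-60 km (simple average)
    ])

    proj = Proj(proj="tmerc", lat_0=50.0, lon_0=25.0, ellps="WGS84")
    x, y = proj(lon, lat)

    xi = np.arange(-half_size, half_size, cell)   # 2,550 km extent in x
    yi = np.arange(-half_size, half_size, cell)   # 2,550 km extent in y
    xg, yg = np.meshgrid(xi, yi)

    model = np.stack([
        griddata((x, y), layer, (xg, yg), method="linear")
        for layer in merged
    ])
    return xg, yg, model
```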
EM Induction Source Settings
In this section, we discuss the construction of two models of a laterally variable source and also explain how the EM field is calculated in the framework of the plane wave (laterally uniform source) concept. The sources are set up for the geomagnetic storm on September 7-8, 2017; more specifically, for the 8-h time period from 20:00 UT on September 7, 2017 to 03:59 UT on September 8, 2017, thus before and during the main phase of the storm. The disturbance storm time (Dst) index during this geomagnetic storm reached −124 nT according to the World Data Center of Kyoto (http://wdc.kugi.kyoto-u.ac.jp/dstdir/). More details on the September 2017 storm can be found in Linty et al. (2018) and Dimmock et al. (2019).
Construction of the Source on the Basis of an MHD Simulation
The first source model is based on the results of a physics-based 3-D MHD simulation of near-Earth space. In this study, we employ the Space Weather Modeling Framework (SWMF; Toth et al., 2005, 2012). The input to this MHD model is solar wind (density, temperature, velocity) and interplanetary magnetic field parameters measured at satellites located at the L1 Lagrange point, such as the Advanced Composition Explorer (ACE) and the Deep Space Climate Observatory (DSCOVR). The other input is the solar radio flux at F10.7 cm (2,800 MHz). The outputs are time-varying 3-D currents in the magnetosphere, horizontal currents in the ionosphere, and field-aligned currents flowing between the magnetosphere and the ionosphere. These output data are then used to calculate (via the Biot-Savart law) external magnetic field perturbations B ext (t, r) at the ground using the CalcDeltaB tool (Rastätter et al., 2014). The external magnetic field perturbations are then converted into an equivalent sheet current density j ext (t, r) confined to a thin shell just above the Earth's surface; in this representation, δ(z − 0+) is Dirac's delta function, e r is the radial (outward) unit vector, and ∇ ⊥ is the surface gradient. The whole scheme of the equivalent current density calculation from the outputs of MHD simulations is discussed in Ivannikova et al. (2018).
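As an illustration of the Biot-Savart step mentioned above, the sketch below shows a straightforward discretized evaluation of the law for a set of volume currents; it is a generic example and not the implementation used in the CalcDeltaB tool.

```python
import numpy as np

MU_0 = 4.0e-7 * np.pi  # magnetic permeability of free space, H/m

# Generic discretized Biot-Savart sum: magnetic field at observation points
# `obs` (N, 3) produced by volume current densities `j` (M, 3) defined at
# positions `src` (M, 3) with cell volumes `dv` (M,).

def biot_savart(obs, src, j, dv):
    b = np.zeros(obs.shape, dtype=float)
    for i, r in enumerate(obs):
        d = r - src                                   # (M, 3) separation vectors
        dist3 = np.linalg.norm(d, axis=1) ** 3        # (M,) cubed distances
        integrand = np.cross(j, d) / dist3[:, None]   # (M, 3)
        b[i] = MU_0 / (4.0 * np.pi) * np.sum(integrand * dv[:, None], axis=0)
    return b
```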
The SWMF run, results of which are used in the current study, was performed at NASA's Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. The version of the SWMF is v20140611. The Rice Convection Model was used to simulate the inner magnetosphere dynamics (Toffoletto et al., 2003). The ionospheric electrodynamics is simulated using the Ridley Ionosphere Model (Ridley et al., 2004). The MHD modeling domain consists of about one million grid cells. The size of the smallest cells is 0.25 R E (where R E is the Earth radius) close to the inner boundary of the modeling domain. The size of the largest cells is 8 R E (close to the outer boundary in the distant tail). The outer boundaries are set at 32 R E in the +x upstream direction, 224 R E in the −x downstream direction, and 128 R E in the ±y and z directions (GSM coordinates). The inner boundary is located at a distance of 2.5 R E from the Earth's center. One-minute OMNI solar wind data were used as an input in this run. The F10.7 cm flux was set to 130.4. Details and results of the run are available at the CCMC website (https://ccmc.gsfc.nasa.gov, run number Naomi_Maruyama_011818_1).
We would like to note that we also performed SWMF simulations with the same input parameters as were used in the CCMC Naomi_Maruyama_011818_1 run, but with different spatial resolutions at the inner boundary of the modeling domain, namely, 0.125 and 0.0625 R E . External magnetic fields (see Figure S1 and Table S1 in the supporting information) from higher-resolution MHD simulations appeared not to differ significantly from those obtained based on Naomi_Maruyama_011818_1 run in the region of our interest. Taking into account that small differences in the external magnetic field should not notably affect modeling results, we construct the "MHD-based" source using Naomi_Maruyama_011818_1 simulation outputs.
Construction of the Source Using the SECS Method
The second model of the source was constructed using the SECS method. In this method, the elementary current systems form a set of basis functions for representing two-dimensional vector fields on a spherical surface. An important application of the SECS method, which is relevant for this study, is the estimation of the ionospheric current system from ground-based measurements of magnetic field disturbances. Note that elementary current systems, as applied to ionospheric current systems, were first introduced by Amm (1997).
With the help of the SECS technique, it is possible to separate the measured magnetic field into external and internal parts, which are represented by two equivalent sheet currents placed in the ionosphere and underground.
To construct the external sheet current, we used IMAGE 10 s vector magnetic field data from all available stations, except for Røst and Harestua, for which the baselines are not yet determined. Baselines are subtracted from the variometers' measurements according to the method of van de Kamp (2013). The ionospheric current density is computed using the 2-D SECS method. Note that extrapolation of the equivalent current density up to 42°E is performed in order to cover the whole region of Fennoscandia, even though the estimates of the equivalent current far from the stations are less reliable. This applies not only to estimates in areas outside of the region covered by the stations but also to estimates inside of the region covered by the stations at locations where the distances between nearby stations are large. We further perform the equivalent current extrapolation in order to ensure its smooth decay outside the region covered by the data. This is done to avoid the occurrence of artifacts from the edges of the current sheet. We also reduce the temporal resolution of the estimated equivalent current from 10 s to 1 min in order to perform a comparison of modeling results obtained via the MHD-based and SECS-based sources. We then project the current density onto the region of interest and perform a vector rotation, which is required for the transition of the results from the spherical to the Cartesian coordinate system. After that we interpolate the current density onto a regular Cartesian grid.
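A minimal sketch of this post-processing (downsampling from 10 s to 1 min and rotating the horizontal current components from spherical to local Cartesian axes) is given below. The averaging-based downsampling and the assumption that the Cartesian x axis points toward geographic north are simplifications made for illustration.

```python
import numpy as np

# Sketch of SECS-source post-processing.

def downsample_to_1min(j_10s):
    """j_10s: array (n_samples, ...) sampled every 10 s; returns 1-min averages."""
    n_min = j_10s.shape[0] // 6
    return j_10s[:n_min * 6].reshape(n_min, 6, *j_10s.shape[1:]).mean(axis=1)

def spherical_to_cartesian_horizontal(j_theta, j_phi):
    """Convert horizontal sheet-current components from spherical
    (theta = southward, phi = eastward) to a local Cartesian frame
    (x = northward, y = eastward); a simplified, north-aligned rotation."""
    j_x = -j_theta
    j_y = j_phi
    return j_x, j_y
```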
Plane Wave Modeling
The scheme of the GEF calculation via the plane wave approach differs from the one described in Section 2.1. The plane wave modeling results are obtained as follows:
1. 3-D EM forward modeling is carried out via the PGIEM2G code with two (laterally uniform) plane wave sources for the SMAP conductivity model at FFT frequencies corresponding to periods from 2 min to 8 h. 3-D MT impedances Z(ω, r) (Berdichevsky & Dmitriev, 2008) that relate the surface horizontal electric field with the surface magnetic field at each grid point r are then calculated for each FFT frequency ω.
2. We then consider the magnetic field modeled using the PGIEM2G code and the SECS-based source as the "true" magnetic field, thus mimicking the actual magnetic field in the region.
3. Further, the horizontal GEF is calculated for each frequency and each grid point r from the corresponding impedance relation.
4. Finally, an inverse FFT is performed for the frequency-domain GEF to obtain the "plane wave" GEF in the time domain.
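The impedance-based steps 3 and 4 can be sketched as follows, assuming the conventional magnetotelluric relation E_h = Z H_h with H_h = B_h/μ0; this convention, along with the array layout and the use of numpy, is an illustrative assumption rather than the authors' stated formulation.

```python
import numpy as np

MU_0 = 4.0e-7 * np.pi  # magnetic permeability of free space, H/m

# Sketch of the plane-wave GEF calculation: convolve the frequency-domain
# surface magnetic field with the impedance tensor at each grid point, then
# transform back to the time domain.

def plane_wave_gef(bx_freq, by_freq, zxx, zxy, zyx, zyy, n_times):
    """All inputs are arrays over (frequency, grid point); returns the
    time-domain horizontal electric field components on the same grid."""
    hx = bx_freq / MU_0
    hy = by_freq / MU_0
    ex_freq = zxx * hx + zxy * hy
    ey_freq = zyx * hx + zyy * hy
    ex_time = np.fft.irfft(ex_freq, n=n_times, axis=0)
    ey_time = np.fft.irfft(ey_freq, n=n_times, axis=0)
    return ex_time, ey_time
```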
Comparing Results at a Number of Locations in the Region
We first compare modeled and recorded magnetic field variations at the locations of the geomagnetic observatories; these locations are shown in Figure 1. The sampling rate of the time series is 1 min. The linear trend was removed from the observatory data before comparing them to the modeling results.
The three upper plots in Figures 4-6 demonstrate time series of the (total, i.e., external + induced) magnetic field modeled using the MHD-based and SECS-based sources (hereinafter referred to as the MHD-based and SECS-based magnetic fields), as well as time series of the observed magnetic fields. We do not show "plane wave" magnetic fields in these plots because by construction they coincide with the SECS-based magnetic field (see the second item in Section 2.3.3). It is seen that the agreement between the SECS-based and observed magnetic fields for the ABK and UPS observatories is very good in all components. This is not very surprising because magnetic field data from these observatories were used to construct the SECS source.
As the construction is based on the least-squares approach, it inevitably attempts to make predictions close to observations. In this context probably the most interesting comparison is for the SPG observatory, because this observatory is not a part of the IMAGE array, and its data were not used for the SECS source construction. Remarkably, the agreement between the SECS-based and observed magnetic fields for SPG is also good, except for the By component. The disagreement in By may be due to a localized geomagnetic disturbance which is not accounted for in the SECS source model. Table 1 supports the above observations quantitatively by presenting correlation coefficients between the corresponding time series and normalized root-mean-square errors (nRMSE) computed from the modeled time series a and the observed time series b, whose elements are a_i and b_i and whose length is n.
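For concreteness, the two metrics can be computed as in the sketch below. Since the exact normalization of the nRMSE is not reproduced in this text, the normalization by the root-mean-square of the observed series used here is an assumption, one common convention in this literature.

```python
import numpy as np

# Correlation coefficient and a normalized RMSE between modeled (a) and
# observed (b) time series. The normalization by the RMS of the observed
# series is an assumed convention, not necessarily the authors' definition.

def correlation(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.corrcoef(a, b)[0, 1]

def nrmse(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2) / np.sum(b ** 2))
```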
It is also seen from the figures and Table 2 that the agreement between the MHD-based and observed magnetic fields is significantly worse for all considered observatories and all components. The agreement is especially bad in the By component. On the whole, the magnitude of the MHD-based magnetic field perturbations is underestimated (compared to the SECS-based and observed magnetic field perturbations). Moreover, the MHD-based magnetic field captures less of the short-period variability. These results are consistent with the results of Kwagala et al. (2020), who carried out SWMF simulations for a number of space weather events and compared SWMF-based (external) magnetic fields with observed ones at a number of locations in northern Europe. According to their modeling results, the SWMF predicts the northward component of external magnetic field perturbations better than the eastward component in auroral and subauroral regions, which is also the case in our modeling of the total magnetic field. As mentioned by Kwagala et al. (2020), poor prediction of the eastward component of magnetic field perturbations is directly related to the northward current density in the ionosphere and may arise from the misplacement of the currents in the SWMF with respect to the magnetometer stations.
Finally, two lower plots in Figures 4-6 show plane-wave-based, SECS-based, and MHD-based horizontal GEF. Note that long-term continuous observations of GEF are absent in the region, thus only modeling results are shown in the plots.
Similar to the MHD-based magnetic field, the MHD-based GEF is underestimated compared to the SECS-based GEF. The correlation between these modeling results is very low and the nRMSE values are high (see Table 3).
On the contrary, the SECS-based and plane-wave-based electric fields are rather close to each other, especially at the locations of the UPS and SPG observatories; Table 4 illustrates this quantitatively. The correlation between the modeling results at the ABK observatory is lower and the nRMSE is higher, most likely due to the fact that this observatory is situated in a region with high lateral conductivity contrasts (resistive landmass and conductive sea). To put more weight on this inference, the last three columns of Table 4 and Figure 7 demonstrate SECS-based and plane-wave-based results for three "sites" also located in regions with high lateral conductivity contrasts (their locations are shown in Figure 1). Now we observe that the difference between the results is even more pronounced, which is, in particular, reflected in lower correlation coefficients and higher nRMSE.
Comparing Results in the Entire Region
Contrary to the previous section where we compared modeled and observed time series of the EM field at a number of locations, in this section, we compare MHD-based, SECS-based, and plane-wave-based electric fields in the entire region for two time instants discussed earlier in the paper.
Top and middle plots in Figure 8 show magnitudes of respective SECS-based and MHD-based GEF. Bottom plots show the absolute differences between corresponding GEF magnitudes. It is seen that SECS-based GEF significantly exceeds MHD-based GEF throughout the Fennoscandian region and for both time instants. The largest differences occur in areas of strong lateral contrasts of conductivity (e.g., at the coastlines) and at higher latitudes.
In a similar manner, Figure 9 presents the comparison of the SECS-based and plane-wave-based GEF. In contrast to the MHD-based results, at first glance the magnitude of the plane-wave-based GEF is overall comparable with the SECS-based GEF (cf. top and middle plots in the figure). However, the bottom plots show that the difference is substantial but more localized (compared to the difference between the SECS-based and MHD-based results), occurring, again, in areas of strong lateral contrasts of conductivity and increasing toward higher latitudes.
Conclusions and Discussion
In this work, we performed 3-D modeling of the EM field in the Fennoscandian region during the September 7-8 geomagnetic storm in 2017. The goal of this model study was to explore to what extent the resulting EM field depends on the setup of the external source which induces this field. We have used three different approaches to the EM induction source setting. The first technique is based on the retrieval of the (laterally variable) equivalent current from the dedicated MHD simulation. In the second method, the laterally variable equivalent current is constructed on the basis of IMAGE magnetometers' data using the SECS approach. The third technique exploits the plane wave concept, which implies that the source is laterally uniform locally.
Two main findings of the paper are as follows. Magnetic field perturbations in Fennoscandia are reproduced much more accurately using the SECS rather than the MHD-based source, constructed using the SWMF. The difference between the GEF modeled using the SECS-based laterally varying source and the plane wave excitation is substantial in Fennoscandia, especially in the areas of strong lateral contrasts of conductivity (e.g., at the coasts), and at higher latitudes where lateral variability of the source becomes more pronounced.
We would like to remind the reader that in order to obtain the MHD-based 3-D EM modeling results presented in this paper we calculated the external magnetic field perturbations on a coarse 5° × 5° grid, which was done to reduce the computational time. Ideally, the resolution of the grid should be much higher to account for the effects of small-scale current structures. However, according to our results the external magnetic field is not reproduced accurately enough at the locations of the geomagnetic observatories ABK and UPS using the SWMF outputs, irrespective of the resolution of the MHD modeling domain (see Figure S1 and Table S1 in the supporting information). That is why increasing the external magnetic field grid resolution most likely will not significantly improve the 3-D EM modeling results, at least in the case of the September 7-8, 2017 geomagnetic storm, the Fennoscandian region, the particular setup of the SWMF described in the current paper, and the three considered resolutions of the MHD modeling domain. However, it is clear that a separate study is required to investigate the influence of the spatial resolution of the MHD model on the external magnetic field perturbations at the Earth's surface.
From our study, the reader may have a biased impression that the SECS-based current system is an ideal source candidate for rigorous modeling (and eventually forecasting) ground EM field due to space weather events. However, our vision of the problem is that each source setting discussed in this study has its own advantages and drawbacks.
The MHD-based approach is the only one of the three considered which allows researchers to forecast the space weather impact on ground-based technological systems. This is possible due to the fact that MHD simulations are run on the basis of satellite solar wind data collected at the L1 Lagrange point. The solar wind has a typical speed of 300-500 km/s and, thus, the geomagnetic disturbance observed at the Earth's surface usually lags the corresponding measurement at the L1 point by 30-90 min (Cameron & Jackel, 2019). This lead time is, obviously, reduced for fast CMEs; the initial speed of one of the fastest recorded CMEs, which occurred on July 23, 2012 (but was not Earth-directed), was estimated at 2,500 ± 500 km/s (Baker et al., 2013). Another advantage of the aforementioned method is the ability to compute the equivalent current and, subsequently, the EM field for any point on the Earth. It is noteworthy that this method does not depend on ground-based geomagnetic field observations. The drawback of the approach is that it is currently the least accurate among the considered modeling techniques. Moreover, significant computational resources (in terms of CPU time and memory) are required to carry out MHD simulations. In spite of the fact that these simulations are still rather far from reproducing actual ground geomagnetic disturbances (as is shown once again in this paper), there are continuing efforts to improve the predictive power of MHD models (e.g., Zhang et al., 2019).
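A quick back-of-the-envelope check of the quoted lead times, assuming an L1-Earth distance of roughly 1.5 million km:

```python
# Rough propagation time from L1 to Earth for typical and fast solar wind,
# assuming an L1-Earth distance of about 1.5 million km.
L1_DISTANCE_KM = 1.5e6

for speed_km_s in (300, 500, 2500):
    minutes = L1_DISTANCE_KM / speed_km_s / 60.0
    print(f"{speed_km_s} km/s -> about {minutes:.0f} min lead time")
# ~83 min at 300 km/s, ~50 min at 500 km/s, ~10 min at 2500 km/s
```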
The SECS-based approach uses ground magnetometers' data and, thus, does not have forecasting capabilities. It is, however, the most accurate among all the considered approaches; on the other hand, in order to properly capture the spatiotemporal evolution of the source, it requires a dense grid of continuous geomagnetic field observations in the region of interest.
The plane wave method is most probably an optimal choice for EM modeling (due to space weather events) in low-latitude and midlatitude regions, provided MT impedances are estimated in these regions on as regular and detailed a grid as practicable. The plane wave approach is the least computationally expensive among the three methods considered in this study, as MT impedances can be computed/estimated in advance and then convolved with the magnetic field, which, again, requires a network of continuous geomagnetic field observations in the region. However, the violation of the plane wave assumption in high latitudes leads to significant differences between the GEF modeled with the use of the SECS-based laterally varying source and the plane wave approximation.
Data Availability Statement
The SWMF model is available from the University of Michigan upon acceptance of license agreement, and SWMF results used here are available at NASA's Community Coordinated Modeling Center (CCMC: https:// ccmc.gsfc.nasa.gov/results/viewrun.php?domain=GM&runnumber=Naomi_Maruyama_011818_1). OMNI solar wind data were used as an input for this run (http://omniweb.gsfc.nasa.gov). We thank Toivo Korja, Maxim Smirnov, and Lisa Rosenqvist for providing the SMAP model. The SMAP model is available via the EPOS portal (http://mt.bgs.ac.uk/EPOSMT/2019/MOD/EPOSMT201_3D.mod.json). We thank the institutes that maintain the IMAGE Magnetometer Array: Tromsø Geophysical Observatory of UiT, the Arctic University of Norway (Norway), Finnish Meteorological Institute (Finland), Institute of Geophysics Polish Academy of Sciences (Poland), GFZ German Research Center for Geosciences (Germany), Geological Survey of Sweden (Sweden), Swedish Institute of Space Physics (Sweden), Sodankylä Geophysical Observatory of the University of Oulu (Finland), and Polar Geophysical Institute (Russia). In this paper, we also used magnetic field data collected at the geomagnetic observatory Saint Petersburg. We thank the Geophysical Center of the Russian Academy of Sciences that supports it and INTERMAGNET (www.intermagnet.org) for promoting high standards of magnetic observatory practice.
|
v3-fos-license
|
2018-04-03T04:46:04.947Z
|
2008-10-01T00:00:00.000
|
5175463
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.4103/1817-1737.42271",
"pdf_hash": "6ab7e4bfe4352076ab43fee1527fc9229afde04d",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44237",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b046e7d6c51370985b45616e0019401caf325baf",
"year": 2008
}
|
pes2o/s2orc
|
Evaluation of brain stem auditory evoked potentials in stable patients with chronic obstructive pulmonary disease
Though there are few studies addressing brainstem auditory evoked potentials (BAEP) in patients with chronic obstructive pulmonary disease (COPD), subclinical BAEP abnormalities in stable COPD patients have not been studied. The present study aimed to evaluate the BAEP abnormalities in this study group. MATERIALS AND METHODS: In the present study, 80 male subjects were included: COPD group comprised 40 smokers with stable COPD with no clinical neuropathy; 40 age-matched healthy volunteers served as the control group. Latencies of BAEP waves I, II, III, IV, and V, together with interpeak latencies (IPLs) of I-III, I-V, and III-V, and amplitudes of waves I-Ia and V-Va were studied in both the groups to compare the BAEP abnormalities in COPD group; the latter were correlated with patient characteristics and Mini–Mental Status Examination Questionnaire (MMSEQ) scores to seek any significant correlation. RESULTS: Twenty-six (65%) of the 40 COPD patients had BAEP abnormalities. We observed significantly prolonged latencies of waves I, III, V over left ear and waves III, IV, V over right ear; increased IPLs of I-V, III-V over left ear and of I-III, I-V, III-V over right side. Amplitudes of waves I-Ia and V-Va were decreased bilaterally. Over left ear, the latencies of wave I and III were significantly correlated with FEV1; and amplitude of wave I-Ia, with smoking pack years. A weak positive correlation between amplitude of wave I-Ia and duration of illness; and a weak negative correlation between amplitude of wave V-Va and MMSEQ scores were seen over right side. CONCLUSIONS: We observed significant subclinical BAEP abnormalities on electrophysiological evaluation in studied stable COPD male patients having mild-to-moderate airflow obstruction.
Chronic obstructive pulmonary disease (COPD) is a disease state characterized by airflow limitation that is not fully reversible. The airflow limitation is usually both progressive and associated with an abnormal inflammatory response of the lungs to noxious particles or gases. COPD is a major public health problem and, currently, the fourth leading cause of death worldwide. [1] A further increase in the prevalence of, and mortality due to, the disease is predicted for the coming decades. COPD is presently regarded as a multi-system disorder. The associated peripheral neuropathy is well described in the medical literature. [2,3] In addition, motor neuron involvement, encephalopathy, and derangement of cognitive function have been observed in patients with chronic respiratory insufficiency. Brainstem auditory evoked potentials (BAEP) are the potentials recorded from the ear and vertex in response to a brief auditory stimulation to assess the conduction through the auditory pathway up to the midbrain. BAEP in patients with COPD have been evaluated in previous studies, but the characteristics of the included patients and the study outcomes have varied greatly. [4][5][6] Kayacan et al. observed that smoking, airways obstruction, and long-lasting COPD may not only cause peripheral neuropathy but may also affect the pontomedullary portion of the brain due to hypoxemia, hypercapnia, and respiratory acidosis. [4] Atis and co-workers studied BAEP in patients with severe COPD and concluded that eighth cranial nerve and brainstem functions were impaired in COPD. [5] Barbieri et al. reported that there was no significant difference in BAEP in mild or moderate chronic respiratory insufficiency, apart from acidosis. [6] It appears that the previous studies included COPD patients having severe airflow obstruction or significant hypoxemia/hypercapnia. The present study was undertaken to find out the prevalence of BAEP abnormalities in stable patients with COPD having no clinical auditory dysfunction/impairment, and to analyze possible correlations of BAEP abnormalities with patient characteristics, including age, duration of illness, quantum of smoking, spirometric indices, and Mini-Mental Status Examination Questionnaire scores.
Materials and Methods
The study was conducted in the departments of Respiratory Medicine and Physiology at Rohtak, India. This was a cross-sectional study and was approved by the Institutional Board of Studies and by the ethics committee. All subjects were male and enrolled between November 2006 and October 2007. The COPD patients fulfilling the criteria of the study, having an age of at least 40 years, attending the COPD clinic run at the Department of Respiratory Medicine, and who gave consent to complete the required investigations as per the study protocol were included in the study. The diagnosis of COPD was based on the modified criteria defined in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines. [7] All the included COPD patients were smokers and had irreversible/partially reversible obstruction of airflow. The patients were included only if they had a stable course of their disease with regular follow-up during the preceding 1 year and no hospitalization for COPD-related illness during the preceding 6 months. Patients with clinical evidence of any neurological deficit/neuropathy or those having concomitant diabetes mellitus, chronic alcoholism, uremia, cystic fibrosis, sarcoidosis, leprosy, malignancy, any hereditary disorders involving peripheral nerves, history of intake of any neurotoxic drug, or history of any traumatic lesion possibly affecting brainstem functions were excluded from the study. The control group comprised an equal number of age-matched healthy volunteers having no risk factor that may lead to neuropathy. All healthy volunteers were nonsmokers. They were selected from the medical/paramedical staff of our institute; some healthy attendants of the patients were also included in the control group.
Smoking pack years were calculated from mode of smoking (bidi, cigarette, or hookah), daily consumption, and the total number of years for which the patient had been smoking.
One pack year was 20 cigarettes smoked every day for 1 year. [8] For bidi, cigarette equivalents were calculated by applying a factor of 0.5 to the number of bidis; [9] and for hookah, 12.5 g of loose tobacco was considered equivalent to one packet of 20 cigarettes. [10] The spirometry was carried out on a Transfer Test Model 'C' (P. K. Morgan, Kent, UK). Inhaled short-acting bronchodilators were withheld for 6 hours before the test; long-acting β-agonists, 12 hours before the test; and sustained-release theophylline, 24 hours before the test. Spirometric indices were calculated using the best of 3 technically satisfactory performances as per the recommendations of the American Thoracic Society. [11] The following parameters were recorded: peak expiratory flow rate (PEFR), forced expiratory volume in the first second (FEV1), forced vital capacity (FVC), and FEV1/FVC%.
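The pack-year bookkeeping described above can be expressed as a short helper; the function name and interface are illustrative, not part of the original study.

```python
# Smoking pack-year calculation following the conversions described above:
# one pack year = 20 cigarettes/day for 1 year; one bidi counts as half a
# cigarette; 12.5 g of loose hookah tobacco counts as one pack of 20 cigarettes.

def pack_years(years_smoked, cigarettes_per_day=0.0, bidis_per_day=0.0,
               hookah_grams_per_day=0.0):
    cigarette_equivalents_per_day = (
        cigarettes_per_day
        + 0.5 * bidis_per_day
        + (hookah_grams_per_day / 12.5) * 20.0
    )
    packs_per_day = cigarette_equivalents_per_day / 20.0
    return packs_per_day * years_smoked

# Example: 10 bidis a day for 30 years -> 0.5 * 10 / 20 * 30 = 7.5 pack years
print(pack_years(years_smoked=30, bidis_per_day=10))
```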
Electrophysiological studies were carried out on a computerized nerve conduction testing equipment: RMS EMG EP MARK II (Recorders and Medicare Systems Pvt. Ltd., Chandigarh, India); the settings were as shown in Table 1.
Procedure of BAEP
The patient was put at ease and was made to lie down with eyes closed, relaxed on a couch, in a soundproof and air-conditioned room. After thorough cleaning of the electrode recording sites on the scalp, electrolyte paste was applied to the recording surface of the disk electrodes, and Ag/AgCl electrodes were then affixed at predetermined positions on the scalp according to the 10/20 international system of electrode placement. [12] The signals were picked up by the electrodes and were filtered, amplified, averaged, displayed on the screen of the RMS EMG EP MK2, and recorded. Subsequently, interpeak latencies (IPLs) were calculated.
Two types of clicks were produced: one moving the earphone diaphragm away from the eardrum (rarefaction click), and the other moving it in the opposite direction (condensation or compression click); in this study, a stimulus with alternating polarity was used. The volume-conducted evoked responses were picked up from the scalp by recording electrodes: two reference electrodes attached to the left and right mastoids, designated A1 and A2 respectively; one active electrode on the vertex, labeled Cz; and one ground electrode on the forehead, termed Fz. All the electrodes were plugged into the junction box, and skin-to-electrode impedance was monitored and kept below 5 kΩ. The recommended montage for BAEP was Channel 1: Cz-A1; Channel 2: Cz-A2; Ground: Fz.
The normal BAEP recording consists of five or more vertex-positive and vertex-negative waves [Figure 1] arising within 10 milliseconds of the auditory stimulus. [13] Latencies of waves I, II, III, IV, and V, together with interpeak latencies of I-III, I-V, and III-V, and amplitudes of waves I and V were measured from the recordings.
Mini-mental status examination
All included subjects, including COPD patients and healthy volunteers, were analyzed for their mental status using the Mini-Mental Status Examination Questionnaire (MMSEQ). [14]
Statistical analyses
The data of healthy volunteers and COPD patients were analyzed as two different groups. The data were examined for normal distribution, and transformations were made where appropriate. The group means and standard deviations for each variable were calculated separately for the healthy volunteers group and the COPD group. The statistical significance of the difference between group means of the various parameters in the healthy volunteers group and the COPD group was analyzed using the independent-sample t test, and a P value <.05 was considered statistically significant. Individual COPD patients having a BAEP abnormality beyond the range of mean ± 3 standard deviations from the healthy volunteers were considered as having a significant BAEP abnormality. The BAEP abnormalities in COPD patients were correlated with patients' characteristics, including age, duration of illness, quantum of smoking, spirometric indices (FEV1, FEV1/FVC%, and PEFR), and the MMSEQ scores. The data obtained were statistically analyzed using Pearson's correlation. All statistical analyses were carried out with the help of SPSS software (version 14.0; Chicago, IL).
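A sketch of these statistical steps using scipy.stats (an illustrative tool choice; the original analysis was performed in SPSS) is given below.

```python
import numpy as np
from scipy import stats

# Sketch of the analysis pipeline described above: independent-sample t test
# between groups, a mean +/- 3 SD abnormality flag, and Pearson correlation
# between BAEP parameters and patient characteristics.

def compare_groups(copd_values, control_values):
    """Independent-sample t test between COPD and control groups."""
    t_stat, p_value = stats.ttest_ind(copd_values, control_values)
    return t_stat, p_value

def flag_abnormal(copd_values, control_values, n_sd=3.0):
    """Flag COPD measurements lying outside control mean +/- n_sd standard deviations."""
    mean = np.mean(control_values)
    sd = np.std(control_values, ddof=1)
    copd_values = np.asarray(copd_values, dtype=float)
    return (copd_values < mean - n_sd * sd) | (copd_values > mean + n_sd * sd)

def correlate(baep_parameter, patient_characteristic):
    """Pearson correlation between a BAEP parameter and a patient characteristic."""
    r, p_value = stats.pearsonr(baep_parameter, patient_characteristic)
    return r, p_value
```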
Results
We included 80 male subjects comprising 40 COPD patients and 40 age-matched healthy volunteers. All subjects were aged 40 years or more. The characteristics of the subjects included in the present study were as shown in Table 2. The COPD patients had a post-bronchodilator FEV1 less than 80% of the predicted value, along with an FEV1/FVC% not more than 70%. They had an increase in FEV1 of less than 200 mL, or less than 12% of the baseline value, 20 minutes after 2 puffs of inhaled salbutamol given via a metered-dose inhaler using a spacer. The duration of symptoms in all patients with COPD was 5 years or more. All healthy volunteers were nonsmokers and had no symptoms suggestive of any disease. As expected, their spirometric indices were statistically different from those of the COPD patients. Table 3 provides summaries [mean ± SD] of the variables of BAEP wave patterns recorded over the left ear and right ear separately in the healthy volunteers group, comparing them with those in the COPD group. Over the left ear, the latencies of waves I, III, and V in COPD patients were significantly prolonged as compared to the healthy volunteers. The latencies of waves II and IV were also increased in the COPD group but without statistical significance. Over the right side, there was significant prolongation of the latencies of waves III, IV, and V in the COPD group as compared to the healthy volunteers.
The interpeak latencies (IPLs) of III-V and I-V were significantly prolonged in the COPD patients as compared to healthy volunteers over both ears; in addition, the interpeak latency of I-III was significantly prolonged in the COPD group over the right ear.
The amplitude of wave I-Ia in the COPD patients was significantly decreased when compared to that in healthy volunteers, over both ears. Similarly, the amplitude of wave V-Va in the COPD patients was significantly decreased when compared to that in healthy volunteers, over both ears.
Individual COPD patients who had any BAEP abnormality were also analyzed, and the details are shown in Table 4. A BAEP abnormality was considered to exist when the value lay beyond the range of mean ± 3 standard deviations of the healthy volunteers. Over the left ear, the latency of wave I correlated negatively with FEV1; the correlation was statistically significant [Figure 2]. Similarly, the latency of wave III over the left ear correlated negatively with FEV1; the correlation was statistically significant [Figure 3]. The correlation between the amplitude of wave I-Ia recorded over the left ear and smoking pack years was negative and statistically significant [Figure 4]. Other correlations were not statistically significant.
The correlations between the variables of BAEP wave patterns recorded over the right ear and the characteristics of COPD patients were as shown in Table 6. The correlation between the amplitude of wave I-Ia and the duration of illness was a weak positive one [Figure 5]; the correlation between the amplitude of wave V-Va and MMSEQ scores was a weak negative one [Figure 6]; both, however, were statistically significant. Other correlations between BAEP variables and the characteristics of COPD patients were not significant.
Discussion
Before we discuss and compare the observations in our study with those of other studies, we feel it is worthwhile to consider the significant differences between the characteristics of the study subjects included in our study and those of the subjects in other studies [Table 7]. Kayacan et al. [4] included 32 patients with COPD having an age of 61 ± 8.8 years. They have not described the details of the inclusion and irreversibility criteria. Atis et al. [5] included 21 patients with severe COPD according to the criteria [15] of the American Thoracic Society (1987). Some of the patients included had clinical evidence of neuropathy. In our study, all COPD patients were significant smokers and had irreversible/partially reversible airflow limitation, a defining characteristic of COPD. Other studies did not have conformity regarding the reversibility criteria as recommended in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines, [7] which were taken into consideration in the present study. Moreover, the quantum of smoking in our study was greater despite a lower mean age of the COPD patients when compared to that in the previous two studies.
In our study, we included stable COPD patients with mild-to-moderate airflow obstruction and with no clinical features suggestive of any neuropathy. Our objective was to assess impaired brainstem auditory evoked potentials in stable COPD patients [and perhaps early in the course of their disease] with no clinical features of any neurological deficiency, that is, the COPD patients usually seen at the level of general clinical practice. This study group was not evaluated in previous studies. It is not reasonable to compare the prevalence of peripheral neuropathy observed in our study with that observed in other previous studies due to differences in the characteristics of subjects included in the various studies. The common BAEP abnormalities observed in COPD patients in our study and previous studies include prolongation of the latencies of waves I, III, and V, and of the interpeak latencies of I-III and III-V. In addition, our study found decreased amplitudes of waves I-Ia and V-Va. Though none of the COPD patients included in the present study had significant hypoxemia or hypercarbia, the existing medical literature suggests that hypoxemia results in peripheral nerve damage by harming the vasa nervorum. In the early stages of ischemia, mechanisms to reduce peripheral neuropathy are activated, but these become insufficient over time and obvious neuropathy is inevitable in chronic hypoxemia. [16] It has been hypothesized that the abnormal BAEP findings are due to brainstem hypoxia, which increases with the severity of COPD. Sohmer et al. demonstrated depression of the auditory nerve-brainstem evoked response, as well as of vestibular and visual evoked potentials, during severe hypoxemia in cats. [17] In addition to chronic hypoxemia and hypercapnia, other associated factors in patients with COPD, including tobacco smoking; malnutrition; and drugs used in COPD treatment, like long-acting inhaled β2 agonists, inhaled anticholinergic agents, inhaled glucocorticoids, and sustained-release theophylline, may possibly be associated with the neuropathy seen in COPD patients. [16,18,19] Though none of our patients had significant hypoxemia, they had a longer duration of illness and more smoking pack years; so, whether the severity of hypoxemia alone or the chronicity and severity of hypoxemia together contribute to the development of peripheral neuropathy needs to be evaluated in future studies. As the COPD patients in our study were heavy smokers, the possibility of the contents of cigarette smoke leading to BAEP abnormalities remains.
We could not find any correlation between the BAEP parameters and pulmonary function test parameters, except for the BAEP latencies of waves I and III with FEV1 on the left side. The poor correlation in spite of significant BAEP abnormalities is probably due to the narrow range of patients' characteristics and pulmonary function parameters in our patients, as we included relatively stable patients during the early course of COPD, having mild-to-moderate airflow obstruction.
To conclude, in the present study, we observed significant BAEP abnormalities on electrophysiological evaluation in 26/40 [65%] of the studied stable male COPD patients with mild-to-moderate airflow obstruction (and with no clinical neuropathy), and these patients had a significant smoking history with no significant hypoxia or hypercapnia.
|
v3-fos-license
|
2018-12-28T07:08:11.175Z
|
2018-05-25T00:00:00.000
|
158580188
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2071-1050/10/6/1730/pdf",
"pdf_hash": "e0d8d4de839341ea33fd6d5caa40523944311e24",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44238",
"s2fieldsofstudy": [
"Education"
],
"sha1": "354bcd1f8cbedbeb747b4b71a319af2989460295",
"year": 2018
}
|
pes2o/s2orc
|
A Nationwide Survey Evaluating the Environmental Literacy of Undergraduate Students in Taiwan
The aim of this nationwide survey was to assess undergraduate students' environmental literacy level in Taiwan. A total of 29,498 valid responses were received from a number of selected colleges and universities in Taiwan, using a stratified random sampling method. A total of 70 items were used to assess environmental literacy, and the results revealed that undergraduate students had a relatively low level of environmental knowledge and behavior, while a moderate level of environmental attitudes was attained. The findings also indicated no significant correlations between knowledge and attitudes or between knowledge and behavior. However, a higher level of environmental knowledge correlated significantly with a higher degree of pro-environmental behavior, and a higher level of environmental knowledge correlated with stronger attitudes. The results also suggested that females outperformed the males in all categories. Results from this study could contribute towards further relevant policy discussion and decision-making, as well as curriculum design and development, for the improvement of environmental education in the higher education sector.
Introduction
The on-going environmental problems of today can be attributed to increasing population, economic development and industrialization, pollution, urbanization, and resource depletion globally. The fundamentals of these on-going problems are predominantly associated with people's lifestyles and their extensive activities in the natural surroundings [1,2], which gives rise to the importance of balancing the relationship between human and natural environments, a need already recognized and supported by the World Commission on Environment and Development in 1987. The development of environmental awareness, knowledge and skills is considered essential to help minimize environmental problems, and environmental education is seen as a key element in creating an environmentally literate society [3,4]. Through this, responsible environmental behavior can be developed, helping to prevent and minimize environmental problems in a sustainable manner [5,6]. The current environmental education literature reveals a range of prior studies that consistently relate to the development of an environmentally literate citizenry [7,8]. Some of these areas of study include: reviews of environmental education literature [9,10], definitions and frameworks [11,12], purpose and goals [13,14], and responsible environmental behavior [15,16]. However, there are limited studies conducted on a national scale and in the higher education context, which is the focus of this study.
Environmental Education in Taiwan
In Taiwan, the government has recognized the importance of environmental education and substantial efforts have been made to promote it in the past decades, with the intention of developing responsible environmental behavior [17]. In fact, environmental education was taught in elementary, junior and middle level schools in the 1980s and 1990s, where the curricula incorporated basic environmental concepts that mainly sought to generate children's awareness about the environment and its related issues. The success of this led to the extension of the curricula to include motivation and commitment elements that further enhance children's knowledge and skills in solving environmental problems. Consequently, environmental education was formally incorporated into the national curriculum framework in 2000. The government's support is evident from the environmental education development plan initiated by the National Science Council in 1993, which recommended that more environmental behavior studies be conducted in order to establish a responsible environmental behavior model appropriate for the Taiwanese. Despite this acknowledgement, there is still inadequate evidence in both practice and research in environmental education in Taiwan, and this may be attributed to the early stage of environmental behavior research and development in the country itself [18]. Although numerous EE studies [15,19] have been conducted and responsible environmental behavior models developed, most of them have focused on Western developed countries and may not be appropriate in the context of Eastern countries, specifically Taiwan.
Accordingly, the formal education system plays an important role in environmental education efforts, in particular at the higher education level, where a transdisciplinary curriculum is offered that can help to further facilitate and enhance university students' environmental literacy and thereby develop their responsible environmental behavior [20,21]. Over the years, significant efforts have been made in the design and development of the environmental education curriculum in Taiwan's higher education sector in order to create the necessary awareness and develop critical knowledge and skills towards achieving responsible environmental behavior. However, there has been inadequate evidence to provide greater insight into the success or failure of these efforts. Given that there is a general lack of existing empirical knowledge about the significant aspects of environmental literacy in Taiwan, this study aims to fill this gap by investigating the level of environmental literacy of university students who have been exposed to a transdisciplinary curriculum that can potentially further enhance the development of their responsible environmental behavior.
The significance of this study lies in providing greater insight into the considerable efforts that the Taiwanese government has made towards environmental education over the years by assessing university students' environmental literacy on a national scale. This contributes to relevant policy discussion and decision-making, as well as to curriculum design and development for the improvement of environmental education, so that an environmentally literate society can be achieved. More importantly, the findings can also provide a benchmark for future studies of environmental literacy at different levels in Taiwan and in other countries, especially those that are developing and characterized by rapid industrialization and urbanization. Furthermore, this study contributes to the literature on environmental literacy as well as to the field of responsible environmental behavior.
Framework and Elements of Environmental Literacy
Although environmental literacy has been investigated in numerous research studies since the 1960s, there is still no single agreed definition of it. Many prior studies have attempted to define environmental literacy according to their scope of research and the context involved. These definitions include: the possession of basic skills, understanding, and feelings for the human-environment relationship [21,22]; an understanding of the interaction between human beings and their natural environment in terms of living and non-living things [23,24]; the cognitive skills and knowledge needed at a macro level for behavioral change towards a better environment [25,26]; and knowledge of the environment that also involves values, attitudes, and skills that can be converted into actions [21,27]. For the purpose of this study, environmental literacy is regarded as an individual's knowledge of and attitudes about the environment and its related issues, together with the acquired skills to help minimize and/or resolve environmental problems, and active participation that contributes towards an environmentally literate society [28].
The key environmental literacy variables to be investigated in this study are based on the environmental literacy framework developed by the Environmental Literacy Assessment Consortium and this framework has been used by researchers to undertake national assessments of environmental literacy in several countries such as South Korea [29], Israel [30], Turkey [31], and the United States [32]. This framework outlines three key elements that need to be considered when evaluating environmental literacy, and these include: (1) cognitive (knowledge and skills), (2) affective, and (3) behavioral.
The cognitive element refers to the ability to identify, investigate, analyze and evaluate environmental problems and issues based on the knowledge of ecological and socio-political foundations. This element also includes having the necessary knowledge and ability to develop and evaluate appropriate action strategies that seek to influence outcomes on environmental problems and issues. The key purpose of this element is to assess people's understanding of natural systems, environmental issues, and action strategies.
The affective element considers an individual's empathetic and caring attitude towards the environment, recognition of the value of environmental quality, and willingness to take appropriate actions to help prevent and resolve environmental problems and issues. This element seeks to evaluate people's environmental awareness and sensitivity, their decision-making attitude towards environmental issues and taking environmentally responsible action, and their environmental values in terms of ethical considerations and reflective thinking about the relationships between humans and the environment.
The behavioral element focuses on the belief of an individual or a group of individuals in their ability to influence the outcomes of environmental problems and issues. There is also an assumption of personal responsibility to take reasonable actions that help influence the environment. These environmentally responsible actions are generally classified into five categories: (1) eco-management, such as recycling and energy conservation; (2) economic/consumer action, involving monetary support or financial pressure, such as donations to environmental groups; (3) persuasion, appealing to others to help minimize or resolve environmental problems/issues; (4) political action, such as voting or lobbying over concern for environmental problems/issues; and (5) legal action, such as lawsuits or reporting pollution violations to the authorities in order to enforce existing laws. The key focus of this element is to investigate people's intentions to act upon environmentally friendly behaviors, their environmental action strategies and skills to identify and evaluate environmental issues, and their involvement in responsible environmental behavior.
Based on the above, the respective elements and components to be explored in this study are as shown in Table 1 below. The key purpose of this study is to examine the undergraduate students' environmental literacy level on a nationwide scale in Taiwan by considering the three key environmental literacy elements as outlined in the literature. Specifically, the following objectives are investigated.
• To assess the level of environmental literacy of undergraduate students in Taiwan on the following elements: (1) cognitive (knowledge and skills), (2) affective, and (3) behavioral.
• To identify any significant correlations in the undergraduate students' scores on the three elements.
• To identify the information sources from which undergraduate students gather environmental information.
Materials and Methods
This study was part of a nationwide survey assessment in Taiwan that was carried out with 32,321 undergraduate students by using an environmental literacy instrument developed based on the established environmental literacy framework. The large-scale survey adopted in this study was regarded as valuable in educational research domains, especially when "education policy debates are framed by questions about 'what works' and how 'big' the effects of specific educational practices are" on learning performance [33].
Participants
The participants of the study consisted of first-, second-, third-, and fourth-year undergraduate students in Taiwan. According to the annual report of Education Statistical Indicators published by the Ministry of Education, the target population was 1,077,396 students from 163 colleges and universities in Taiwan in the 2012 academic year, when this survey was conducted. A sample of 57 colleges and universities was selected using a stratified random sampling method, with key consideration given to geographic (22 regions) and demographic (size and level of colleges and universities) strata. With an average sampling rate of 3%, a total of 32,321 questionnaires were distributed in the selected colleges and universities, of which 29,498 valid responses were received, representing a return rate of approximately 91.3%. To determine the representativeness of the sample, a chi-square (χ2) test was used to compare sample and population demographics, which resulted in a Pearson chi-square (χ2) of 393.901 and a p value of approximately 0.000. On this basis, the sample of 29,498 was deemed representative of the population. The key demographic profiles of the respondents are briefly outlined in Table 2.
The findings revealed that males and females were almost equally represented in this study, accounting for 49.1% and 49.6% of respondents respectively. In terms of year level, first-year students accounted for 28.4%, followed by second-year (25.6%), third-year (23%) and fourth-year (22.4%) students. While the majority of the students lived with their families (40%), other accommodation arrangements included off-campus rentals (30%) and school dormitories (27.5%). The most common type of family structure was the nuclear family (65.9%), with the remainder being three-generation (19.1%) and single-parent (11.1%) families.
Instrument and Instrumentation
When designing the questionnaire items, the following steps were undertaken to acknowledge the differences among existing assessment frameworks and sociocultural contexts; in this case, the environmental literacy instrument was compiled with consideration given to its alignment with contextual issues in Taiwan.
• Step 1: More than 30 research papers and articles related to environmental literacy in Taiwan and abroad [27,31,32,34] were reviewed to establish the item pools to be considered.
• Step 2: Using a process similar to that of Erdogan and Ok [31], items in the pool were selected in accordance with the research objectives, guided by the definition of each element and its related components. The table of specifications and the initially compiled question items were subsequently prepared for panel review.
• Step 3: The initially compiled question items were given to a panel of 10 experts from various areas of specialization, such as environmental education, earth science, geography, and urban planning, for their formal review and expert opinions. The experts were required to evaluate the items for appropriateness, relevance and language use in order to establish content validity. Each question item received at least 80% agreement from the experts.
• Step 4: The instrument was revised based on the experts' opinions and feedback, and subsequently pre-tested with 20 randomly selected undergraduate students. Item analysis with regard to difficulty and discrimination for the knowledge items and factor analysis for the scale items were conducted to determine the question items to be included in the final version of the questionnaire survey.
The final questionnaire survey consisted of two main sections: (1) demographic items, and (2) environmental literacy assessment items. Although there were 12 survey items in the demographic section, only some were used as variables for analysis, as shown in Table 1. The environmental literacy assessment section contained a total of 70 items used to assess the three main elements (i.e., cognitive, affective, and behavioral) discussed in the literature.
Sixteen question items were developed for the cognitive element, aimed at assessing undergraduate students' knowledge and understanding of natural systems, environmental issues, and action strategies. Of these 16 questions, nine were True-False questions and the remaining seven were Multiple-Choice questions. Next, 23 items were included in the affective element, which sought to assess undergraduate students' environmental awareness and sensitivity, values, and decision-making attitude on environmental issues. These 23 question items were designed as a five-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree); scores were reversed when a question item was worded negatively. Lastly, the behavioral element was designed to investigate undergraduate students' intentions to act, their action strategies and skills, and their involvement in responsible environmental behavior. It consisted of 31 question items using a five-point frequency scale (never, rarely, sometimes, often, always) that focused on undergraduate students' environmentally responsible actions, including persuasion, eco-management, consumer and economic action, and legal and political action.
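As a small illustration of the reverse-scoring step mentioned above, the following sketch shows how negatively worded Likert items can be recoded so that higher values always indicate a more pro-environmental response. The data frame and column names are hypothetical and are not taken from the study's dataset.

```python
import pandas as pd

# Hypothetical responses on a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree)
df = pd.DataFrame({"A1": [4, 5, 2], "A2_neg": [2, 1, 4]})

# Items worded negatively are reverse-scored: x -> 6 - x on a 1-5 scale,
# so that higher values always indicate a more pro-environmental response.
negatively_worded = ["A2_neg"]
df[negatively_worded] = 6 - df[negatively_worded]

print(df)
```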
Data Collection and Analysis
The questionnaire survey was conducted face-to-face with undergraduate participants at the respective randomly selected colleges and universities in Taiwan. The study involved data collection, data analysis using descriptive statistics, item analysis, an independent sample t-test, a one-way analysis of variance (ANOVA), and structural equation models (SEMs). Each of these methods is further illustrated as follows.
Descriptive Statistics
The mean and standard deviation (SD) of the test were used to determine the distributions of the participants' background variables and environmental literacy in the cognitive, affective, and behavioral dimensions [35].
Item Analysis
Item analysis was used to determine a set of quality question items suitable for inclusion in the questionnaire. In this study, the critical ratio (CR) and the corrected item-total correlation (ITC) were computed for the items measuring environmental literacy in the three elements, namely cognitive, affective, and behavioral. Through these procedures, unsatisfactory items were eliminated, leaving only question items that were highly relevant to the contextual issues in Taiwan.
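Both statistics can be computed directly from raw item responses. The sketch below uses hypothetical data; the split of respondents into top and bottom 27% groups for the critical ratio is a common convention and is assumed here rather than reported in the text.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 5)),
                     columns=[f"A{i}" for i in range(1, 6)])  # hypothetical Likert items

total = items.sum(axis=1)
hi = total >= total.quantile(0.73)   # top ~27% scorers
lo = total <= total.quantile(0.27)   # bottom ~27% scorers

for col in items.columns:
    # Critical ratio: t statistic comparing the item mean of high vs. low scorers
    cr, p = stats.ttest_ind(items.loc[hi, col], items.loc[lo, col])
    # Corrected item-total correlation: item vs. total score with the item itself removed
    itc = items[col].corr(total - items[col])
    print(f"{col}: CR = {cr:.2f} (p = {p:.3f}), corrected ITC = {itc:.2f}")
```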
Reliability Analysis
Cronbach's α coefficient was used to determine the reliability of the data obtained from the questionnaires at the pretest and actual survey stages. Individual factors with Cronbach's α coefficients of 0.70 or greater were regarded as exhibiting satisfactory reliability, and a total-scale Cronbach's α of 0.80 or greater was taken to indicate overall reliability of the questionnaire.
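For reference, Cronbach's α follows the standard formula α = k/(k − 1) · (1 − Σσ²_item / σ²_total). A minimal sketch with a hypothetical item matrix:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: 2-D array, rows = respondents, columns = items."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)          # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses to 23 affective items from 100 respondents
rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(100, 23)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```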
Independent Samples T-Test
A t-test was used to examine whether there were any differences between two groups. Dichotomous background variables were examined to determine whether significant differences existed between the corresponding groups in their environmental literacy (i.e., the cognitive, affective, and behavioral dimensions). Specifically, gender and club participation were investigated in this study.
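A comparison of this kind can be run with a standard two-sample t-test, as in the brief sketch below (hypothetical scores; the use of Welch's correction is an assumption, not a detail reported in the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
female_scores = rng.normal(3.8, 0.6, size=150)  # hypothetical component scores per respondent
male_scores = rng.normal(3.6, 0.6, size=150)

# Welch's t-test (unequal variances) comparing the two groups
t_stat, p_value = stats.ttest_ind(female_scores, male_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```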
Structural Equation Modelling (SEM)
Structural equation modeling (SEM) is a multivariate statistical technique that combines factor analysis and path analysis to determine the causal relationships among variables [36]. It comprises a diverse set of mathematical formulations and reports a selection of fit measures, such as the chi-squared test, together with confirmatory factor analysis. Of the 29,498 valid questionnaires received, only 27,249 were used in the SEM, since the remaining questionnaires were incomplete and therefore excluded.
General Descriptive Findings
The findings indicated that the majority (85.7%) of the survey participants did not participate in any clubs or societies at their university/college, while only 13.2% were involved. Approximately three-quarters (75.1%) of the undergraduate students spent two hours or less per week obtaining environment-related information, 8.8% spent two or more hours, and the remaining 15.8% spent no time on it. Television news (61.5%), online learning (48.4%), and television programs (32.6%) were the three major sources for acquiring environment-related information. The five most-favored types of environmental education were outdoor experiential learning (45.8%), watching movies (43.7%), visiting museums or conservation centers (35.6%), online learning (29.3%), and attending lectures (23.4%). More detailed descriptive statistics for each of the variables highlighted above are provided in Table 3 below.
Cognitive Element
In the cognitive element, the component on knowledge of natural systems was evaluated by four True-False questions (K1 to K4) and five Multiple-Choice questions (K5 to K9). The participants answered 58.1% of the questions correctly, indicating that their environmental knowledge of natural systems was generally inadequate. As shown in Table 4, only Question K7 on the "description of the function of tropical rainforests" exhibited a correct response rate greater than 80%, whereas the correct response rates for Questions K1, K2, K5, K8, and K9 were below 60%. This indicated that the undergraduate students' knowledge of natural systems was generally insufficient, particularly on the issues of biodiversity, greenhouse gases, natural disasters, and ecological conservation.

The second component, knowledge of environmental issues, was assessed through three True-False questions (K10 to K12) and one Multiple-Choice question (K13). The participants answered 64.9% of the questions correctly, suggesting that their environmental knowledge of social, cultural, political, and economic issues was at a moderate level. Only Question K12, "The wisdom passed down by our ancestors is adequate to help us cope with the current climate problems and environmental changes", achieved a correct response rate greater than 80%, while the correct response rate of Question K11 was below 50% (refer to Table 4). This indicated that the undergraduate students' knowledge of customary cultural issues (such as the Chinese tradition of drinking realgar wine during the autumn season to protect themselves from illness and stay healthy) was generally inadequate.
When assessing the knowledge of appropriate action strategies component in the cognitive element, two True-False questions (K14 and K15) and one Multiple-Choice question (K16) were used. The participants answered 61.5% of the questions correctly, revealing that their knowledge of appropriate action strategies was at a moderate level. Although Question K16 on the "identification of environmental labelling" attained a correct response rate greater than 80%, the correct response rate for Question K15 was below 60% (refer to Table 4). This indicated that undergraduate students' knowledge of appropriate action strategies was generally insufficient, particularly regarding the "cognitive awareness level of resource consumption ratio".
Affective Element
In the affective element, the environmental awareness and sensitivity component was evaluated with seven questions ranked on a five-point Likert scale, all positively worded. The average score was 3.75 ± 0.688 (out of a maximum of 5 points). Question A6, "I have the initiative to learn environmental knowledge", elicited a relatively low average score (3.45 ± 0.844), whereas Question A3, "I believe that toxic emissions from anthropogenic waste can cause a negative environmental impact", achieved a relatively high average score. A moderate degree of environmental awareness and sensitivity was therefore evident among the undergraduate students.
The environmental values component in the affective element was assessed with eight questions ranked on a five-point Likert scale, all positively worded. The average score was 3.95 ± 0.769 (out of a maximum of 5 points). The average scores for Questions A10, A12, and A15 were 4 points or more, and the average scores of the remaining questions were 3.5 points or more. The undergraduate students therefore scored relatively highly on the environmental values component.
The component on decision-making attitude about environmental issues in the affective element involved eight questions ranked on a five-point Likert scale, all positively worded. The average score was 3.71 ± 0.69 (out of a maximum of 5 points), and the average scores for all of the items ranged between 3.5 and 4, indicating relatively high scores for this component of the affective element. However, the items on "discussed with colleagues" (3.46 ± 0.90) and "advised misconducted behavior" (3.54 ± 0.897) showed relatively low scores, suggesting that the undergraduate students gave little weight to environmental justice and to an altruistic perspective when making decisions. A summary of the findings for the affective element is presented in Table 5.
Behavioral Element
In the behavioral element, the intentions to act component was evaluated through eight questions ranked on a five-point Likert scale, all positively worded, covering private acts, shallow green behavior, and altruistic behavior. The average score was 3.54 ± 0.679 (out of a maximum of 5 points). The participants obtained relatively high scores for Questions BEH1 to BEH4, indicating a certain sense of responsibility and mission and a willingness to cooperate with government policy regarding the implementation of environmental actions. In contrast, the relatively low scores for Questions BEH5 to BEH8 indicated that the undergraduate students' behavioral intentions were weaker regarding participation in discussions on environmental concerns, the provision of opinions, and taking the initiative to attend environmental activities.
The assessment of the environmental actions and skills component in the behavioral element consisted of 19 questions ranked on a five-point Likert scale, all positively worded. The average score was 3.16 ± 0.742 (out of a maximum of 5 points). The participants obtained relatively low scores for all of the items in this component, indicating that the undergraduate students tended to remain passive bystanders with regard to environmental action. Only the items on the basic classification of garbage for recycling exhibited higher scores; all of the other items, concerning learning from the environment, suggestions for environmental protection, communication of ideas, and action capabilities, received relatively low scores.
The responsible environmental behavior component in the behavioral element was measured by four questions ranked on a five-point Likert scale, all positively worded. The average score was 3.717 ± 0.878 (out of a maximum of 5 points). Questions R1 to R4 displayed average scores between 3.65 ± 0.882 and 3.74 ± 0.876, and the results did not reveal any particularly prominent pattern for this component. Table 6 provides a summary of the results for the behavioral element.
T-Test and Chi-Square Test
Gender and participation in clubs and societies at universities/colleges were investigated to determine their relationship with environmental literacy. Based on the average scores, three groups were identified: a lower group (the lowest 20% of average scores, from 0.99 to 2.99), a middle group (the middle 60%, with average scores from 3.00 to 3.73), and a top group (the top 20%, with average scores from 3.74 to 4.86). A chi-square (χ2) test of independence was then used to determine the associations of gender (χ2 = 393.901, df = 2, p = 0.000; likelihood ratio test = 396.577, df = 2, p = 0.000) and participation in clubs and societies at universities/colleges (χ2 = 233.102, df = 2, p = 0.000; likelihood ratio test = 232.369, df = 2, p = 0.000) with environmental literacy. The results indicated that both gender and participation in clubs and societies were significantly associated with undergraduate students' environmental literacy.
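A test of independence of this kind operates on the cross-tabulation of group membership against the three literacy-level groups. The sketch below shows the computation with scipy; the cell counts are hypothetical and do not reproduce the study's table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = gender (female, male),
# columns = literacy level (lower 20%, middle 60%, top 20%)
observed = np.array([[2400, 9100, 3200],
                     [3500, 8600, 2500]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")
```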
Female undergraduate students scored higher than the male students in seven of the nine components of environmental literacy investigated in this study, namely (1) knowledge of natural systems, (2) knowledge of environmental issues, (3) knowledge of appropriate action strategies, (4) environmental awareness and sensitivity, (5) environmental values, (6) decision-making attitude on environmental issues, and (7) intentions to act. In contrast, male students performed better in the remaining two components (i.e., environmental action strategies and skills, and involvement in responsible environmental behavior). Table 7 below provides a summary result of the t-test between gender and environmental literacy. As shown in Table 8, undergraduate students who participated in clubs and societies at the university/college exhibited a higher level of environmental literacy in all nine components than their counterparts who did not.
Confirmatory Factor Analysis
To test the integrity of the measurement model, confirmatory factor analysis (CFA) was conducted, and conventional goodness-of-fit criteria were used to evaluate the model [37,38]. Given its size, the sample of 27,249 used in this study could be regarded as vulnerable to the overestimation of significant differences [39].
The composite reliability (CR) of the cognitive element was considerably low (ρ = 0.273; knowledge of natural systems, ρ = −0.025; knowledge of environmental issues, ρ = 0.356; knowledge of appropriate action strategies, ρ = −0.153). This low reliability may be explained by the dichotomous (True-False) and Multiple-Choice nature of these items and by the small number of items, which prevented the removal of unrepresentative items that would otherwise have improved reliability. In contrast, the reliability of the affective element was considerably high (ρ = 0.957), with the respective components as follows: environmental values (ρ = 0.920), decision-making attitude on environmental issues (ρ = 0.883), and environmental awareness and sensitivity (ρ = 0.870). The reliability of the behavioral element was also high (ρ = 0.903), with involvement in responsible environmental behavior at ρ = 0.921, environmental actions and skills at ρ = 0.928, intentions to act at ρ = 0.886, and environmental action experience at ρ = 0.797.
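Composite reliability is commonly computed from standardized factor loadings using ρ = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. The sketch below illustrates that formula with hypothetical loadings; the study's exact item-level computation may differ.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability for one factor from standardized loadings."""
    sum_lambda = loadings.sum()
    error_var = (1 - loadings**2).sum()   # assumes standardized items
    return sum_lambda**2 / (sum_lambda**2 + error_var)

# Hypothetical standardized loadings for a single latent factor
loadings = np.array([0.82, 0.78, 0.74, 0.69, 0.71])
print(f"composite reliability = {composite_reliability(loadings):.3f}")
```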
From Figure 1, the standardized regression weights, which indicate convergent validity, were calculated for knowledge of appropriate action strategies (0.386), knowledge of environmental issues (0.427), and knowledge of natural systems (0.398) within the cognitive element. The standardized regression weights for the affective element (see Figure 2) were relatively high for the decision-making attitude on environmental issues (0.927), environmental values (0.893), and environmental awareness and sensitivity (0.875) components. For the behavioral element (Figure 3), the standardized regression weights for the involvement in responsible environmental behavior (0.710), environmental actions and skills (0.491), and intentions to act (0.980) components were also relatively high.
Discussion and Implications
This study was a nationwide assessment conducted to evaluate Taiwanese undergraduate students' level of environmental literacy in three key elements (i.e., cognitive, affective, and behavioral). On the whole, the undergraduate students showed a relatively low level of environmental knowledge and behavior, while a moderate level of environmental attitudes was attained. The findings revealed no significant correlations between knowledge and attitudes or between knowledge and behavior; however, stronger environmental attitudes were significantly associated with a higher degree of pro-environmental behavior.
Relationships between Environmental Knowledge, Attitudes and Behavior
There have been substantial debates over the years about environmental literacy and the relationships among knowledge, attitudes, and behavior [30,31,40]. Using the SEM technique in this study, it was observed (as shown in Table 9) that there were no significant correlations between the cognitive and behavioral elements (r = 0.215) or between the cognitive and affective elements (r = 0.385). However, the affective and behavioral elements were highly correlated, with an r value of 0.76 [32,41]. These results suggest that possessing environmental knowledge and awareness of environmental issues alone is not always successfully transformed into environmental action [42]. While some studies [2,30,43] have reported no positive relationship between environmental awareness and knowledge and pro-environmental behavior, researchers have continued to investigate the relationships among intrinsic, self-determined, and self-esteem-related motivation induced by personal behavior, since personal self-efficacy is directly associated with pro-environmental behavior [44,45]. In this study, the highest average score was observed in the attitude element (M = 3.73, SD = 0.631), followed by the behavioral element (M = 3.09, SD = 0.546), with the lowest average value in the knowledge element (M = 3.08, average correct response rate = 60.5%). The correlation between students' environmental attitudes and environmental behaviors was moderate, whereas the correlations involving the knowledge element were low. Overall, the undergraduate students' environmental literacy indicated low environmental knowledge, moderate environmental attitudes, and low environmental behavior. Further analysis revealed that these students could be broadly divided into two groups along a continuum of states of environmental literacy: ecocentric and egocentric [46,47].
Ecocentric engaged students accounted for about 60.2% of the entire sample in this study. A high proportion of these ecocentric students were female, lived in school dormitories, enjoyed searching for environmental information, and participated in clubs and societies. The results showed that these students had a moderate level of environmental knowledge (M = 3.35, SD = 0.455), strong environmental attitudes (M = 4.07, SD = 0.398), and moderate environmental behavior (M = 3.03, SD = 0.493). In contrast, the remaining 39.8% of the sample, representing the egocentric engaged students, were mainly male, lived with their families or in rented apartments, did not enjoy acquiring environmental information, and had limited participation in clubs and societies. The findings suggested that these students had a low level of environmental knowledge (M = 2.82, SD = 0.731), moderate environmental attitudes (M = 3.31, SD = 0.581), and a low level of environmental behavior (M = 2.86, SD = 0.512).
As shown in Table 10, there were no significant correlations between the cognitive and affective elements (r = −0.261) or between the affective and behavioral elements (r = −0.392) for the egocentric engaged students. However, the findings indicated a moderate relationship between the cognitive and behavioral elements (r = −0.552), whereby egocentric engaged students might have environmental knowledge and awareness but did not necessarily convert it into environmental action.
Gender Comparison on Environmental Literacy
Overall, female undergraduate students exhibited a higher level of environmental literacy than male undergraduate students, which is consistent with previous studies [48,49]. Females accounted for approximately 56% of the top 20% of students with excellent environmental literacy, while the remaining 44% were males. Conversely, the bottom 20%, regarded as having poor environmental literacy, comprised about 38.5% females and 61.5% males. The higher level of environmental literacy attained by females could be explained by their social status and the norms expected of them in Eastern society; for example, females in Taiwan have traditionally been taught to love and maintain cleanliness, and their caring role of being responsible for household cleaning may also have contributed to this effect. A brief summary of the findings is provided in Table 11 below.
Relationship between Clubs and Societies Participation and Environmental Literacy
Undergraduate students who participated in clubs and societies at universities/colleges generally performed better in their environmental literacy than those who did not. This could be explained by the fact that involvement in activities through clubs and societies provides more opportunities for creative thinking, problem solving, leadership, and prosocial behavior than regular courses do [50]. In addition, participants who had pro-environmental experience in these activities gained considerable self-respect, self-esteem, and self-confidence [51–53]. The majority (91%) of the undergraduate students in the bottom 20%, who had poor environmental literacy, did not participate in clubs and societies, while the remaining 9% did. Table 12 below provides a summary of the findings.
Sources of Environmental Information
In this study, television news (61.5%), online learning (48.4%), and television programs (32.6%) were found to be the three major sources from which undergraduate students acquire environmental knowledge. These results are similar to those of a prior study [54] on the perception of environmental problems by young people at the University of Maria Curie-Sklodowska (UMCS) and the Technical University (TU) in Poland, in which television (53.5% at the UMCS and 70% at the TU) and newspapers (52% at the UMCS and 32% at the TU) were identified as key sources of environmental information. These results indicate that the majority of students perceive television as the mainstream medium for gathering environmental knowledge.
The findings in this study revealed that only a minority (11.5%) of the undergraduate students acquired their knowledge from ecological textbooks, and this was consistent with Pawlowski's study [54]. This suggested a need to further investigate the appropriateness of using the textbook as a medium to disseminate environmental knowledge. With the increasing ease of access and vast amount of information available via the internet, online learning has become an important environmental knowledge source and this was supported by approximately 48.4% of the undergraduate students who participated in this study. Given this, a focus on utilizing the online channel as a source of environmental knowledge is deemed to be critical since students nowadays are more internet savvy than ever.
Study Implications
Firstly, environmental education in Taiwan primarily emphasizes knowledge and cognitive memory; the development of students' capabilities to explore environmental issues and to engage in environmentally friendly and responsible environmental behavior (attitudes, personal investment, locus of control) has therefore been neglected [8,55,56]. Relevant past studies have also suggested that, in Western societies, environmental knowledge acquired from the teaching materials offered by universities/colleges leads learners to generate positive environmental attitudes and behavior automatically. However, the results of this study suggest that environmental knowledge did not create positive environmental behavior and skills; the association between the environmental literacy elements was considerably low. Simply focusing on teaching environmental knowledge does not fully achieve the goal of environmental education, as environmental knowledge based on science alone is insufficient for eliciting the attitudes, values, and behavior that constitute a substantial part of environmental literacy [57]. It is therefore recommended that environmental attitudes be enhanced through interaction with the environment, which enables students to learn useful skills, develop a sense of responsibility, and increase their personal and collective sense of competence for promoting responsible, environmentally friendly behavior [15].
Next, understanding students' perceptions and interpretations of processes, as well as the reasons behind certain behavior, is crucial in assessing sustainable education perspectives [58]. Understanding how knowledge can be converted into a person's actual attitudes, emphasizing altruism in civic actions, and focusing on the affective dimension of learners' goals are thus current challenges facing environmental education in Taiwan. It is recommended that curriculum development be strengthened, specifically in the environmental education content of students' learning materials, in order to enrich their learning. For example, students should be encouraged to explore the outdoors, maintain accurate perceptions, and obtain environmental information from nature. Environmental pedagogy does not always provide exact answers, but it should provide opportunities for students and encourage them to investigate the causes of and solutions to problems [30], to recognize lifelong environmental subjectivity, and to appreciate multiple perspectives [59,60].
Lastly, the university curriculum can be designed to teach students how to differentiate issues of fact from those of value, and how to study various levels of uncertainty based on paradoxical information in a chaotic world [61,62]. Higher education can be facilitated through an educational strategy that enhances environmental stewardship through "greening outdoor curricula", referring to science-based actions for sustainability [63]. In addition, students should be encouraged to participate in and form environment-related organizations and activities (e.g., the International Youth Conference on the Environment, green camps, nature exploration) to enrich their life experiences and learning opportunities.
In conclusion, this built-in framework addresses practical pedagogies for hands-on experience and is crucial to reforming environmental education for undergraduate students in Taiwan, as well as serving as a reference point for other similar investigations in the future.
Study Limitations
This study has three key limitations. The first is that the research focused on a large sample drawn solely from Taiwan [64]. The results are based on the values and skills covered in the current national curriculum and mainly consider the major environmental threats affecting Taiwan; the identified components and corresponding items are therefore closely tied to local issues and dimensions in the Taiwanese context. Nevertheless, it is recommended that the results of this study be used as a benchmark for comparison with similar studies conducted in other countries, especially those that are developing and characterized by rapid industrialization and urbanization.
Next, prior studies investigating predictors of pro-environmental behavior based on moral or ethical elements have found that personal norms play a critical role. However, since this study did not explore the effects of moral and ethical elements, no conclusions could be drawn in this respect. It is therefore recommended that further detailed studies on the impact of moral and/or ethical norms on environmental literacy be conducted in Taiwan in the future, in order to gain further insight and understanding in this field.
Lastly, the low composite reliability of the cognitive element indicated that some items may not have been representative; however, because of the limited number of items, removing them was deemed unsuitable. It is therefore recommended that the number of items be increased (using an interval-measurement, Likert-type scale format) in the questionnaire to improve its reliability.
|
v3-fos-license
|
2019-07-18T19:11:16.782Z
|
2019-09-01T00:00:00.000
|
197436275
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.energy.2019.05.218",
"pdf_hash": "300ed37eac8172dbedd2e6d42d2b656033822eda",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44239",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "aef5ef33120e9661412453f93435e01a30e830a5",
"year": 2019
}
|
pes2o/s2orc
|
Practical heat pump and storage integration into non-continuous processes: A hybrid approach utilizing insight based and nonlinear programming techniques
This paper focuses on industrial heat pump (HP) integration in non-continuous processes. To achieve the necessary time-wise process decoupling of the HP system, heat recovery loops (HRLs) with stratified storages are used. This design type can be modeled as a mixed integer nonlinear programming problem, which often results in expensive mathematical formulations. The challenge is addressed by a practical method that combines the insight based approach of Pinch Analysis with mathematical programming techniques to give the engineer more flexibility in the application of the method and to avoid long computation times. By the use of the insight based methods, the solution space of the mathematical formulation is restricted, and thus its complexity is reduced to a nonlinear programming problem optimizing the temperature levels in the HP-HRL system. As an objective, total annual costs (TAC) of the HP-HRL system are minimized. The developed hybrid method is applied to a dairy site and compared in terms of approach temperatures, temperature lift of the HP, TAC, and greenhouse gas emissions to the existing methods. It is shown that the hybrid method provides realistic approach temperatures in contrast to the existing insight based method. © 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
In many industrial processes, energy efficiency and a reduction in greenhouse gas (GHG) emissions can be achieved by the integration of heat pumps (HPs). The key is the increased heat recovery (HR) that results from upgrading process waste heat with the HP to a higher temperature level in order to reduce the process heating demand. Despite the advantages of HP integration, such as the reduction of operating costs, high investment costs are often a deterrent.
An additional deterrent is the high operational costs that can arise due to non-continuous process behavior [1]. In many industrial sites, differences in the process schedules of individual plants or downtime for cleaning purposes make the integration of a HP a serious challenge. In particular, changes in the operating conditions of the compressor have a significant influence on its efficiency. This leads to higher operating costs, which further hampers the advantageous integration of HPs.
The next sections provide a brief review of existing methods for HP integration into continuous and non-continuous processes and detail the research objectives aimed at alleviating the above-noted deterrents. Owing to its importance and broad application in industry, the focus of this work lies on closed-cycle HPs equipped with mechanical compressors. The literature review is subdivided into insight based methods derived from Pinch Analysis and mathematical programming methods.
Insight based methods
Several key papers focusing on the proper integration of a HP based on Pinch Analysis have been published. The first approach for HP integration using insight based methods was published by Townsend and Linnhoff [2]. Based on the Grand Composite Curve, the authors defined rules for HP integration in continuous processes. It was concluded that only a HP which is integrated across the pinch temperature is thermodynamically beneficial. In order to address economic influences, Ranade [3] developed a formulation to determine a maximum temperature rise, identified as a trade-off between compressor investment costs, operating costs and heat transfer investment costs of the heat exchanger network. A different approach developed by Wallin et al. [4] utilizes the Composite Curves to determine the optimal temperature level of the HP as well as the optimal HP size and type.
Insight based approaches such as those mentioned above have been applied to several case studies, including a whiskey production process [5], a cheese factory [6], and a biomass gasification process by Pavlas et al. [7]. Olsen et al. [8] applied this graphical methodology to candy production to show how to handle discontinuities.
Mathematical programming methods
The first approach to HP integration through mathematical optimization was published by Shelton and Grossmann [9,10], using a mixed integer linear programming (MILP) formulation to highlight the economic potential of correctly integrated HP and cooling system networks. Based on the insight based approach of Townsend and Linnhoff [2], Colmenares and Seider [11] proposed a nonlinear programming (NLP) model for the integration of cascaded HP cycles. Holiastos and Manousiouthakis [12] introduced an analytical approach for determining the optimal temperature levels of a reversible HP. The work was extended by a transportation formulation to determine a heat exchanger (HEX) and HP system [13]. Oluleye et al. [14] developed a MILP approach for the integration of mechanical and absorption HP technologies as well as absorption heat transformers in combination with cogeneration systems, determining optimal temperature levels, heat duties, and GHG emissions by minimizing total annual costs (TAC). Further, a framework for selecting the best HP technology using simplified HP models based on a correlation between real and ideal thermodynamic cycles was published by Oluleye et al. [15] and was extended to minimize exergy degradation [16]. A mixed integer nonlinear programming (MINLP) formulation was published by Wallerand et al. [17], in which detailed HP models are developed considering effects such as sub-cooling, multi-stage phase changes, and refrigerant selection.
Maréchal and Favrat [18] developed a method for optimal energy conversion unit integration by including exergy factors. This work was continued by Becker et al. [19], who considered multiple utility pinches. A multi-objective MILP optimization of operating and investment costs for single, multi-stage and combinations of multiple HPs, including refrigerant selection, was applied by Becker et al. [20]. The model was also extended to non-continuous processes [21] by using the Time Slice Model and a cascaded thermal energy storage (TES) system, and was applied to a case study of a cheese factory [22].
Methodologies for HP integration in non-continuous processes have been applied to several case studies, such as the upgrading of residual shower wastewater [23]. Further, Miah et al. [24] developed a methodology for HP integration in simple and complex factories, which was applied to a large non-continuous confectionery factory with multiple production zones.
Research objectives
HP integration in non-continuous processes is usually tackled with sophisticated mathematical programming formulations. Since such approaches are usually formulated as MINLP problems, they tend to be expensive in terms of computing time and resources and limit the flexibility of the method in its application. In practical applications, the optimization step should be as short as possible to prevent long interruptions in the workflow. In order to address the challenge of practical HP integration in non-continuous processes, the following objectives have to be fulfilled: (1) practicality: a high degree of flexibility in the application of the method; (2) optimality: finding an optimal solution; (3) applicability: a short computation time. Objective (1) is usually achieved using insight based methods. Therefore, in a previous work, an insight based approach [25] using heat recovery loops (HRLs) [26] for time-wise decoupling of the process was developed. The introduced COP Curves, the Time Slice Model [27] and the Supply & Demand Curves [28] serve as graphical design tools for HP and TES design. However, the resulting HP-HRL system tends to have large approach temperatures in the condenser and evaporator, which is not realistic and thus does not fulfill objective (2). This can be attributed to the fact that the insight based method does not consider all economic aspects in the optimization process. Therefore, further work was performed using a nonlinear programming (NLP) formulation for the optimization of the condensation, evaporation and storage temperature levels by minimization of TAC [29].
Objective (3) can only be addressed by avoiding the use of expensive mathematical programming methods. Therefore, in this paper, a hybrid method is developed by unifying the existing insight based [25] and mathematical programming [29] approaches. By utilizing the insight based method for decision making, the original MINLP problem can be reduced to an NLP problem, which sharply reduces computation time. The NLP formulation is also extended with a rebalancing utility within the HP-HRL system to cover changes in condenser and evaporator heat flow caused by the changing coefficient of performance of the HP. To demonstrate the applicability of the hybrid method, the production of raw materials for butter production at a large dairy site is analyzed as a case study using a Total Site Heat Integration approach.
Methods
In this section, a practical method for industrial heat pump (HP) integration into non-continuous processes is introduced. The method consists of seven steps, and an overview is shown in Fig. 1. In particular, steps 3 to 5 are based on pinch techniques and step 6 involves the nonlinear programming (NLP) formulation.
2.1. Step 1 – extraction of process data
First, the process requirements have to be identified. Therefore, the process stream data have to be analyzed to identify heating and cooling requirements. When investigating non-continuous processes, the scheduling of the process is crucial for the identification of the energy targets. In addition, for an economic evaluation, the investment costs for equipment, the operating costs, and economic coefficients such as the interest rate and investment period have to be identified.
2.2. Step 2 – identification of energy targets
By the use of the Time Slice Model, non-continuous processes are broken down into time slices (TSs). Each of these TSs represents a time duration in which the process shows continuous behavior. A change in heating or cooling requirements indicates the end of the current TS and the start of the next one. For the identification of the energy targets, the heat cascade is applied to each TS. By selecting an overall ΔT_min, a specific Grand Composite Curve (GCC) as well as the pinch temperature is obtained for each TS.
2.3. Step 3 – integration of heat pump
Typically, the changes in process requirements identified in step 2 would lead to varying operating conditions for the HP, resulting in lower compressor efficiency. In order to overcome this challenge, constant condensation and evaporation temperatures are sought at which the HP can run throughout the entire process duration.
For the integration of the HP, the GCC is used. The rule of integrating a HP across the pinch, defined by Townsend and Linnhoff [2], is also valid for non-continuous processes. To cover variations in process requirements, heat recovery loops (HRLs) including stratified thermal energy storages (TESs) are used on both sides of the HP to transfer heat from or to the process. As a result, additional heat exchangers (HEXs) are required, since the HP evaporator and condenser are not directly connected to the process. This results in a higher temperature difference ΔT between the process stream and the condensation or evaporation temperature, given as the sum of the contributed temperature differences ΔT_cont of all included streams:

ΔT = Σ_i ΔT_cont,i    (1)

where different contributed temperature differences are defined for each stream of the process(es) (subscript s), the HRL, and the condensation or evaporation of the HP. The shifted temperature of a stream is given by

T* = T − ΔT_cont for hot streams    (2)

T* = T + ΔT_cont for cold streams    (3)

Usually, the contributed temperature difference is given by ΔT_cont = ΔT_min/2, where ΔT_min represents the minimum temperature difference at the pinch. According to Kemp [30], temperature contributions can be adapted individually for each stream having an unusually high or low film heat transfer coefficient. In this work, the temperature contributions are applied as follows:

Process streams: normal heat transfer coefficients are assumed, and therefore the standard temperature contributions of ΔT_s = ΔT_min/2 are applied.

HRL: the medium in the HRL is water, and therefore the standard temperature contribution of ΔT_HRL = ΔT_min/2 is also applied.

Condensation and evaporation in the HP: due to the high film heat transfer coefficients during the phase change of the refrigerant, the temperature contributions are reduced to ΔT_HP = ΔT_min/4.
Therefore, the resulting temperature difference between the process streams and the HP, as given by Eq. (1), amounts to ΔT_min (Eq. (4)). Since in the GCC the condensation and evaporation temperatures of the HP are thus already shifted with respect to the process streams by ±ΔT_min/2, the relation between shifted and real temperatures is given as follows:

T_co = T*_co + ΔT_min/2    (5)

T_ev = T*_ev − ΔT_min/2    (6)

In order to identify optimal operating conditions for the HP, it is first investigated how the absorbed and emitted heat flows at constant condensation and evaporation temperatures match the heating and cooling requirements of the process in each TS. Therefore, a shifted overall condensation temperature T*_co,sel has to be selected at which the heat flow emitted by the HP is as high as possible (maximizing heat recovery) and the condensation temperature is as close as possible to the pinch temperature T_P (minimizing the temperature lift of the HP). To prevent an increase in utility demand, self-sufficient pockets must not be destroyed, as shown in Fig. 2.
To find the optimal evaporation temperature T_ev,opt, the so-called COP Curves are used. These curves describe the absorbed heat flow of the HP, Q̇_a,l, as a function of the condensation and evaporation temperature levels T_co and T_ev. The formulation is based on the coefficient of performance (COP) and the energy balance of the HP. The HP cycle is formulated as a Carnot cycle using a 2nd-law efficiency ζ for each TS (subscript l), with temperatures in K:

COP_l = ζ · T_co / (T_co − T_ev)    (7)

To ensure practicality, the value of the 2nd-law efficiency is set to a conservative value of 0.35. Considering an open-type compressor, the energy balance for the HP is given by

Q̇_e,l = Q̇_a,l + η_drive · P_el,l    (8)

where η_drive describes the drive efficiency of the compressor, with a value of 0.9 [31]. By the use of Eq. (7) and Eq. (8), for a selected condenser temperature T*_co,sel the resulting heat flow absorbed by the evaporator Q̇_a,l can be calculated as a function of T*_ev between the lowest temperature of the GCC, T*_GCC,min = T*_ev,min, and the pinch temperature, T_P = T*_ev,max. In Fig. 2, the resulting COP Curve is plotted as a blue dotted curve (each dot represents a computed point) for each TS.
At the intersection between the COP Curve and the GCC, the cooling requirement of the process and the heat flow absorbed by the HP coincide. If the evaporator temperature is below the intersection, not enough heat flow can be absorbed by the HP to cover the cooling demand of the process, and TESs have to be included; the heat surplus of the process shown in Fig. 2(a) has to be transferred to the TES. If the evaporator temperature is above the intersection, more heat flow is absorbed by the HP than is provided by the process, and additional heat from the TES, ΔQ̇⁻_TES,l, has to be provided to the HP. By solving

Σ_l ΔQ̇⁺_TES,l · Δt_l = Σ_l ΔQ̇⁻_TES,l · Δt_l,

where Δt_l represents the duration of a TS, the optimal shifted evaporation temperature level T*_ev,opt is found where the heat surpluses and heat deficits of all TSs (l = 1…L) are equal. The resulting condensation temperature T_co,sel and evaporator temperature T_ev,opt are used in step 6 as initial conditions for the thermo-economic optimization.
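The sketch below illustrates how such a COP Curve and the storage-balance condition can be evaluated numerically. The values ζ = 0.35 and η_drive = 0.9 and the balance condition follow the description above; the GCC cooling-demand lookup, the emitted heat flow, the slice durations and the temperature bracket are invented placeholders, and the exact energy-balance convention for the open compressor is an assumption.

```python
# Sketch of the COP-curve construction and the search for the optimal
# evaporation temperature (step 3). The GCC lookup below is a placeholder;
# zeta, eta_drive and the balance condition follow the description in the text.

ZETA = 0.35            # 2nd-law efficiency of the HP cycle
ETA_DRIVE = 0.9        # drive efficiency of the open-type compressor
T_CO = 45.0 + 273.15   # K, real condensation temperature (illustrative)
Q_E = 180.0            # kW, emitted heat flow fixed by the GCC at T_co (illustrative)

def q_absorbed(T_ev):
    """Absorbed heat flow of the HP at evaporation temperature T_ev (K)."""
    cop = ZETA * T_CO / (T_CO - T_ev)      # Carnot COP scaled by zeta
    p_el = Q_E / cop                       # electrical power demand (assumed convention)
    return Q_E - ETA_DRIVE * p_el          # energy balance for an open compressor

# Placeholder GCC: cooling demand of the process above T_ev in each time slice.
def gcc_cooling(l, T_ev):
    demand = {0: 220.0, 1: 160.0, 2: 90.0}[l]      # kW at the lowest GCC temperature
    return max(demand - 2.0 * (T_ev - 273.15), 0.0)

DT_SLICE = {0: 6.0, 1: 10.0, 2: 8.0}               # h, duration of each TS

def storage_balance(T_ev):
    """Positive if the process offers more heat than the HP absorbs (net surplus)."""
    return sum((gcc_cooling(l, T_ev) - q_absorbed(T_ev)) * DT_SLICE[l]
               for l in DT_SLICE)

# Bisection between the lowest GCC temperature and the pinch temperature (illustrative).
lo, hi = 0.0 + 273.15, 30.0 + 273.15
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if storage_balance(mid) > 0.0:
        lo = mid       # net surplus: the evaporator can run warmer
    else:
        hi = mid
print(f"optimal evaporation temperature ~ {mid - 273.15:.1f} degC")
```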
Step 4 – design of heat pump and heat recovery loop system
In order to define the required HP and TES sizes, the Supply & Demand Curves developed as a part of the Time Pinch Analysis by Wang and Smith [28] are used. By the use of these curves, the optimal utility system can be designed by minimizing the supplied heat flow.
In contrast to the Time Slice Model, the Supply & Demand Curves consider time constraints first and then temperature constraints, and thus no heat transfer backward in time is possible. In Fig. 3, the optimization procedure for designing the HP-HRL system using the Supply & Demand Curves in the ΔQ,t-chart is shown.
Thereby, the heat transferred between the HP and the process streams over time is shown. The supply curve represents what is provided by the HP, and the demand curve what needs to be provided by the HP-HRL system to the process. For the design of the HP-HRL system, both sides of the HP have to be analyzed, and therefore this procedure has to be applied to the heat emitted by the condenser and the heat absorbed by the evaporator of the HP.
As an example in Fig. 3, the maximal demand from the process is 40 kW in the time interval 8–16 h. If there is no TES, the HP has to be designed for this heat flow, shown as "Supply without TES". As a result, the demanded heat can be transferred in 8 h, whereas the process runs for 24 h. This results in an oversized HP and thus in higher investment costs.
By increasing the operation duration of the HP for the same amount of transferred heat and introducing TESs, the supply heat flow can be reduced until a time pinch occurs, i.e., the supply and demand curves touch. A further reduction in supply would lead to a lack of transferred energy to the process; since heat cannot be transferred backward in time, this results in an infeasible solution.
As shown in Fig. 3, the minimal possible supply for a 24 h operation duration of the HP is 17 kW, resulting in a starting time 6 h before the process (−6 h on the x-axis). Therefore, a TES with a capacity of 198 kWh has to be integrated, which has to store the heat surplus in the time interval −6 to 8 h and release the heat in the time interval 8–16 h to cover the insufficient heat flow provided by the HP.
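A rough sketch of the underlying time-pinch rule is given below: the minimal constant supply is set by the largest ratio of cumulative demand to available operating time, and the storage capacity by the largest accumulated surplus. The demand profile and HP start time are placeholders, so the printed numbers do not reproduce the 17 kW and 198 kWh of the example above.

```python
# Sketch of the Supply & Demand Curve reasoning (step 4): with a constant HP
# supply and a storage, heat may be shifted forward in time but never backward.
# The demand profile below is a placeholder, not the case-study data.

demand_kw = {(0, 8): 10.0, (8, 16): 40.0, (16, 24): 5.0}   # demanded heat flow per interval
hp_start, hp_end = -6.0, 18.0                              # h, 24 h of HP operation in total

def minimal_supply(demand_kw, hp_start, hp_end):
    """Smallest constant supply heat flow that never violates the time pinch."""
    events = sorted({t for interval in demand_kw for t in interval} | {hp_end})
    best = 0.0
    for t in events:
        if t <= hp_start:
            continue
        cum_demand = sum(q * max(0.0, min(t, t1) - t0) for (t0, t1), q in demand_kw.items())
        available_h = min(t, hp_end) - hp_start            # supply stops at hp_end
        best = max(best, cum_demand / available_h)
    return best

def storage_capacity(supply_kw, demand_kw, hp_start, hp_end, step=0.25):
    """Largest stored surplus (kWh) over the day for the chosen constant supply."""
    cap, t = 0.0, hp_start
    while t < hp_end:
        supplied = supply_kw * (t - hp_start)
        demanded = sum(q * max(0.0, min(t, t1) - t0) for (t0, t1), q in demand_kw.items())
        cap = max(cap, supplied - demanded)
        t += step
    return cap

q_min = minimal_supply(demand_kw, hp_start, hp_end)
print(f"minimal constant supply ~ {q_min:.1f} kW, "
      f"storage ~ {storage_capacity(q_min, demand_kw, hp_start, hp_end):.0f} kWh")
```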
The resulting emitted heat flow, the absorbed heat flow, and the capacities of the TESs are used in step 6 as input values for the thermo-economic optimization.

Fig. 2. GCC of two TSs including the heat flow Q̇_e,l emitted by the condenser, the heat flow Q̇_a,l absorbed by the evaporator, and the COP Curves (blue dotted curve) for the identification of the optimal shifted evaporator temperature T*_ev,opt. Fig. 2(a) represents a TS with a higher cooling demand than the HP can provide (the heat surplus from the process, ΔQ̇⁺_TES,l, has to be transferred to a storage) and Fig. 2(b) represents a TS with a lower cooling demand than the HP can provide (the heat deficit of the process, ΔQ̇⁻_TES,l, has to be provided by a storage to the HP). (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Step 5 – design of heat exchanger network
The HEN design is also performed using insight-based methods. The Pinch Design Rules presented by Linnhoff and Hindmarsh [32] are used to create the HEN. Using the commercial software SuperTarget™, these rules may be relaxed within the design process. For inter-plant heat recovery, additional HRLs are added to ensure the flexibility of the individual processes.
2.6. Step 6 – thermo-economic optimization

After step 5, the HEN structure is defined. To avoid unrealistically large approach temperatures in the condenser and evaporator, a thermo-economic optimization is performed. The temperature levels of the condensation, the evaporation, and the HRLs are the optimization variables. In addition, the loading and unloading profiles of both storages are determined. As the objective, the total annual costs (TAC) are minimized, resulting in an NLP formulation. In Fig. 4, the superstructure of the HP-HRL system is shown.
Formulation of objective function: total annual costs minimization
As the objective function, the TAC are minimized. C_inv,a represents the annual investment costs and is estimated from the sum of all equipment costs C_E, annualized using the interest rate i_r and the investment period n. The equipment costs are estimated using factorial methods: to calculate the investment costs of a piece of equipment, the main plant item costs MPIC_E are multiplied by Lang factors F_E [33]. The Lang factor includes installation, piping, control system, building, site preparation, and service facility costs. To account for inflation and deflation, the cost function is adjusted using the plant, machinery and equipment group index I_PMEI of the capital goods price index [34]. For a piece of equipment, MPIC_E is defined as a function of its capacity Q (such as the area A for a HEX) and the degression factor f_E,d. The module cost of the HP-HRL system, C_HP-HRL, is estimated by summation of the individual equipment costs of the HP (compressor C_comp, engine C_en, condenser C_co, evaporator C_ev, hot TES C_TES,H and cold TES C_TES,C). The annual operating cost is given by the cost of the needed hot utilities (HU, index m) and cold utilities (CU, index n) over the d_a annual operating days, with Q̇ their heat flows and c their corresponding specific costs, plus the cost of the constant electrical power demand P_el of the compressor at the specific electricity cost c_el.
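The sketch below illustrates this cost bookkeeping (annuity factor, Lang-factor correction, and summed operating costs). The capacity-exponent form assumed for the main plant item cost and every numerical value are illustrative assumptions, not the cost data of the paper.

```python
# Sketch of the cost model in the objective function. The annuity factor and the
# Lang-factor correction follow the text; the capacity-exponent form of the main
# plant item cost (MPIC) and all numbers are assumptions for illustration only.

def annuity_factor(i_r, n):
    """Convert total investment into equal annual payments over n years."""
    return i_r * (1.0 + i_r) ** n / ((1.0 + i_r) ** n - 1.0)

def equipment_cost(mpic, lang_factor, index_ratio):
    """Installed equipment cost from main plant item cost (factorial method)."""
    return mpic * lang_factor * index_ratio

def mpic(capacity, c0, c1, exponent):
    """Assumed cost correlation: fixed part plus capacity raised to a degression exponent."""
    return c0 + c1 * capacity ** exponent

# Illustrative HP-HRL system: compressor, condenser, evaporator, two storages.
units = [
    mpic(50.0, 20_000.0, 1_500.0, 0.85),   # compressor, 50 kW
    mpic(30.0,  5_000.0,   800.0, 0.80),   # condenser area, 30 m2
    mpic(25.0,  5_000.0,   800.0, 0.80),   # evaporator area, 25 m2
    mpic(10.0,  8_000.0, 1_200.0, 0.70),   # hot storage, 10 m3
    mpic( 5.0,  8_000.0, 1_200.0, 0.70),   # cold storage, 5 m3
]
c_inv = sum(equipment_cost(u, lang_factor=3.0, index_ratio=1.1) for u in units)
c_inv_a = c_inv * annuity_factor(i_r=0.07, n=12)

# Operating cost: utilities over d_a operating days plus compressor electricity
# (24 h/day operation assumed, all prices illustrative).
d_a, p_el, c_el = 330, 45.0, 0.12           # days/y, kW, NZD/kWh
utilities = [(120.0, 0.05), (60.0, 0.02)]   # (kWh/day, NZD/kWh) for HU and CU
c_op_a = d_a * (sum(q * c for q, c in utilities) + 24.0 * p_el * c_el)

print(f"TAC ~ {c_inv_a + c_op_a:,.0f} NZD/y")
```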
Formulation of heat transfer
The area of a HEX, which is the driving force of its costs, is defined from its duty, the overall heat transfer coefficient, and the logarithmic mean temperature difference, where the subscript s represents either a hot (i) or a cold (j) process stream. Using Chen's approximation [35], the logarithmic mean temperature difference (LMTD) in a HEX is estimated by

LMTD ≈ (ΔT_1 · ΔT_2 · (ΔT_1 + ΔT_2)/2)^(1/3).

In contrast to the exact formulation, Chen's approximation has the advantage that the singularity of the LMTD arising if ΔT_1 = ΔT_2 is avoided. The resulting LMTD is slightly underestimated, and thus the HEX area and its costs are slightly overestimated. For all HEXs between hot process streams and the HRL (s = i), the temperature differences are given by the supply and target temperatures T_i,l,S and T_i,l,T of the respective hot process stream and the hot and cold layer temperatures T_C,h and T_C,c of the cold storage (C). For all HEXs between cold process streams and the HRL (s = j), the temperature differences are given by

ΔT_2 = T_H,c − T_j,l,S   ∀j, ∀l,

where T_H,h and T_H,c represent the hot and cold layer temperatures of the hot storage (H). The overall heat transfer coefficient is defined from the film heat transfer coefficients of the two sides. To guarantee feasible heat transfer below the pinch (see Fig. 4), the supply as well as the target temperatures of the hot process streams have to be higher than the associated storage layer temperatures of the cold storage. Feasible heat transfer above the pinch is ensured by requiring the supply and target temperatures of the cold process streams to be lower than the associated layer temperatures of the hot TES. Heat transfer between the HRLs and the HP is guaranteed by requiring the evaporation temperature level to always be lower than the temperatures of the cold storage, and the condensation temperature level to always be higher than the temperatures of the hot storage.
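As a small worked example of this heat-transfer formulation, the sketch below evaluates Chen's approximation of the LMTD and the resulting HEX area; the duty, overall heat transfer coefficient and temperatures are illustrative.

```python
# Chen's approximation of the log-mean temperature difference and the resulting
# HEX area. Temperature pairings follow the description above; numbers are illustrative.

def lmtd_chen(dt1, dt2):
    """Chen's approximation: avoids the singularity of the exact LMTD at dt1 == dt2."""
    return (dt1 * dt2 * (dt1 + dt2) / 2.0) ** (1.0 / 3.0)

def hex_area(q_kw, u_kw_m2k, dt1, dt2):
    """Area of a counter-current HEX from duty, overall U and terminal differences."""
    return q_kw / (u_kw_m2k * lmtd_chen(dt1, dt2))

# Hot process stream (68 -> 12 degC) charging the cold-storage HRL (5 -> 60 degC):
dt1 = 68.0 - 60.0     # supply end of the hot stream vs. hot storage layer
dt2 = 12.0 - 5.0      # target end of the hot stream vs. cold storage layer
print(f"LMTD ~ {lmtd_chen(dt1, dt2):.2f} K, "
      f"area ~ {hex_area(150.0, 0.8, dt1, dt2):.1f} m2")
```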
Formulation of heat recovery loop
The needed storage volume, which is the driving force of its costs, is given by the mass inventory M of the related temperature layer in the storage and the density ρ_HRL of the storage medium. The total mass inventory is constant over time and therefore only needs to be defined for the first TS (l = 1). A mass balance is applied for both HRLs, and it is ensured that the mass inventory is never negative. The mass which has to be transferred in each TS to cover the heating and cooling demand is determined by the transferred heat and the specific heat capacity c_p,HRL of the storage medium. Further, the mass transferred through the evaporator and the condenser is given analogously.
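The sketch below shows the corresponding storage bookkeeping: the mass moved between the hot and cold layers in each TS follows from the transferred heat and c_p,HRL, and the tank volume from the peak inventory and ρ_HRL. Duties, durations and layer temperatures are illustrative; in the actual NLP a constraint keeps the inventories non-negative.

```python
# Sketch of the HRL storage bookkeeping: mass moved between the hot and cold layer
# in each time slice, and the tank volume implied by the peak inventory.
# Property values are for water; duties and temperatures are illustrative.

CP_HRL = 4.186 / 3600.0    # kWh/(kg K), specific heat of water
RHO_HRL = 1000.0           # kg/m3

def mass_transferred(q_kw, dt_h, t_hot, t_cold):
    """Mass (kg) shifted between storage layers to move q_kw for dt_h hours."""
    return q_kw * dt_h / (CP_HRL * (t_hot - t_cold))

# Hot storage layers at 60/40 degC; three time slices with different net duties.
net_duty = [(+30.0, 6.0), (-20.0, 10.0), (-5.0, 8.0)]   # (kW charged(+)/discharged(-), h)
inventory, peak = 0.0, 0.0
for q, dt in net_duty:
    inventory += mass_transferred(q, dt, 60.0, 40.0)
    # (in the NLP, a constraint keeps the inventory non-negative; here it is only tracked)
    peak = max(peak, inventory)

print(f"peak hot-layer inventory ~ {peak:.0f} kg, "
      f"tank volume ~ {peak / RHO_HRL:.1f} m3")
```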
Formulation of heat pump cycle
The resulting heat flow absorbed by the evaporator of the HP is determined by fulfilling the energy balance of the cold HRL, in which Q̇_n represents the rebalancing CU that is needed if the temperature lift ΔT_lift is increased in the optimization and the absorbed heat flow Q̇_a is thus smaller than the cooling demand of the process. The heat flow emitted by the condenser is determined by fulfilling the energy balance of the hot HRL, in which Q̇_m is the rebalancing HU that is needed if the temperature lift is decreased in the optimization and the emitted heat flow Q̇_e is thus smaller than the heating demand of the process. The HP cycle is approximated using the 2nd-law efficiency, COP = ζ · T_co/(T_co − T_ev), as in Eq. (7). With the energy balance of the HP, the resulting electrical power demand of the HP is obtained, which is the driving force of the compressor and engine costs.
Step 7 – calculation of greenhouse gas emissions
Using the resulting electrical power and utility demand, the annual greenhouse gas emissions are determined.
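A minimal sketch of this emission accounting is given below; the demands and emission factors are placeholders standing in for the optimization results and Table 2.

```python
# Annual greenhouse gas emissions from the optimized utility and electricity demand.
# Emission factors and demands are illustrative placeholders (cf. Table 2).

d_a = 330                       # operating days per year
demand_kwh_per_day = {"electricity": 1_080.0, "natural_gas_HU": 400.0, "cooling_CU": 250.0}
emission_factor = {"electricity": 0.10, "natural_gas_HU": 0.22, "cooling_CU": 0.01}  # kgCO2e/kWh

annual_tCO2e = sum(d_a * demand_kwh_per_day[u] * emission_factor[u]
                   for u in demand_kwh_per_day) / 1000.0
print(f"annual GHG emissions ~ {annual_tCO2e:.1f} t CO2e/y")
```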
3. Case study: butter production on a dairy site
Process description
The analyzed process comprises two plants which are used for producing raw materials for butter production on a large dairy site. The first plant produces anhydrous milk fat (AMF) and the second plant is used for cream treatment (CT). Fig. 5(a) and Fig. 5(b) show the state of the art of how such plants are usually designed. Furthermore, the cleaning-in-place (CIP) system for both plants is included in the analysis (Fig. 5(c)).
In a first step, the cream is heated from 8 °C to 68 °C and concentrated in the cream concentrator to 72.00%. In the homogenizer, mechanical energy is used to produce smaller oil globules by breaking the surface tension. After this procedure, the fat content is increased by separation in the oil concentrator to 90.00%. The resulting oil is heated to 95 °C for pasteurization purposes. For refining purposes, the oil is washed with water in the polisher, with the water added at the same temperature as the oil. This procedure removes water-soluble substances such as proteins, and the oil concentration is increased to 99.00%. For final refining, the oil is fed to the deodorizer. By stripping the oil with steam, disruptive aromas, odors, free fatty acids, and other minor impurities are removed. Finally, the resulting AMF with a concentration of 99.90–99.95% is cooled to 40 °C.
For the by-product buttermilk, the fat content in the side stream of the cream concentrator is further reduced in the buttermilk separator from 0.33% to 0.24%. In a next step, the buttermilk is thermalized by heating to 80 °C. Finally, the buttermilk is cooled to 8 °C.
The side streams of the oil concentrator and the buttermilk separator are collected and fed to the beta serum separator. As a result, beta serum with a concentration of 2.00% and oil with a concentration of 41.00% are produced. The beta serum is cooled to 8 °C, and the oil is fed back to the oil concentrator. In the CT plant, cream with a fat mass fraction of 40% is pasteurized. Further, by using a spinning cone column (SCC), undesired flavors are removed from the cream. Finally, the cream is cooled to the starting temperature. Heat recovery (HR) is achieved by using an intermediate hot water loop.
After production, both plants must be cleaned. A CIP system is installed for this purpose. The advantage of such systems is that they do not require disassembly before cleaning. In order to provide the needed hot rinse water, alkaline solution, nitric acid, and cold rinse water, tanks are installed as shown in Fig. 5(c). For cleaning, the procedure described in [39] is applied; water at −5 °C to 0 °C produced in a refrigerator unit is used. In Table 1, the process stream data are given, with the process streams assigned to their respective process. In Table 2, the used utilities are listed, including their price, greenhouse gas (GHG) emissions, and the additional CO2 levy, whereby the levy for electricity is already included in its price. In Table 3, the cost factors for the used equipment are listed; the corrections using Lang factors F_E, index ratios I_PMEI, material factors f_m, and pressure factors f_p are already applied to the cost factors C_0, C_1, and C_2. As amortization parameters, an investment period of 12 years and an interest rate of 7% are given.
Integration of heat pump and heat recovery loop system
Using the Time Slice Model, five TSs per day can be identified. Although only TS2–TS4 have heating and cooling demands, TS1 and TS5 are still considered, giving the heat pump (HP)–heat recovery loop (HRL) system the possibility to continue operating and to store or release heat from the HP in both thermal energy storages (TESs). In a next step, the energy targets are determined using an overall ΔT_min of 10 K for each TS. In Fig. 6, the heating and cooling requirements and the resulting Grand Composite Curves (GCCs) of TS2 to TS4 are shown.
The overall initial condensation temperature is determined by searching for the highest possible HR with as small a temperature lift ΔT_lift as possible. Since no self-sufficient pockets are allowed to be destroyed, a shifted overall condensation temperature of T*_co,sel = 33.8 °C is selected. The corresponding optimal shifted overall evaporator temperature is T*_ev,opt = 8.3 °C. The resulting Supply & Demand Curves are shown in Fig. 7. It can be concluded that, by the integration of two storages, one with 0.68 MWh to support the evaporator of the HP and another with 1.35 MWh to support the condenser of the HP, the heat flow demand can be reduced by around two thirds.
The optimal temperatures of condensation and evaporation and the temperatures of the storage layers found by the thermo-economic optimization are shown in Fig. 8. Due to the optimization of the temperature levels, an additional rebalancing hot utility (HU) is needed to compensate for the reduced emitted heat flow of the HP caused by the lower temperature lift.
Total Site Heat Integration
To optimize the overall process, a Total Site Heat Integration (TSHI) approach using HRLs for inter-plant HR is chosen for the following reasons. By designing a heat exchanger network (HEN) for each plant separately, the plants do not depend on each other; during downtimes or maintenance of one plant, the others are able to run independently. Furthermore, because each separate HEN covers only the continuous intra-plant operation, only the HRLs have to deal with the non-continuous behavior of the process.
In the GCCs of the single plants (Fig. 9), it can be seen that due to the integration of the HP-HRL system a pocket in the AMF production plant is destroyed (see Fig. 9(a)), which would increase the cold utility (CU) demand. As a result, a low-temperature hot water (LTHW)-HRL with a temperature range of 62–40 °C is needed, which transfers heat from the AMF production plant to the CIP system. Further inter-plant HR is achieved using a high-temperature hot water (HTHW)-HRL with a temperature range of 98–70 °C, as shown in Fig. 9.
The HTHW-HRL is used to reduce the HU demand of the AMF production and the CIP system. Because their heating demand is higher than the heat supplied from the CT process, there is an additional HU need, which is supplied by a hot water boiler.
Results and discussion
To evaluate the time-wise process decoupling, the annualized costs of the HP-HRL system are plotted in Fig. 10 as a function of the daily operating hours of the HP. It is important to note that the daily energy supplied by the HP-HRL system does not change as a result of the reduction in the daily operating time, so reducing the operating hours of the HP does not influence the utility demand; therefore, utility costs are not displayed in Fig. 10. It can be seen in Fig. 10 that the annualized costs of the HP-HRL system are minimized at a daily operation duration of 24 h. The main driver is the compressor size, which increases when the same daily energy has to be provided within a reduced operation time.
In a next step, the new hybrid method is compared with the non-optimized plant, the plant optimized using the described TSHI approach, and the existing insight-based method [25]. It is important to mention that the thermo-economic optimization only influences the HP-HRL system, and thus no large improvement in the overall total annual costs (TAC) is expected. However, by optimizing the storage temperatures as well as the condenser and evaporator temperatures, realistic approach temperatures in the condenser, ΔT_co,apr, and evaporator, ΔT_ev,apr, are obtained, with a reduced temperature lift ΔT_lift of the HP, as shown in Table 4.
For the economic evaluation, the methods are compared with the non-optimized process on the dairy site (see the flowchart in Fig. 5) and with the TSHI approach. The optimized HENs are designed using the software SuperTarget™ and are provided in the Appendix. To ensure a common basis for comparison, the optimized HEN is modified as little as possible to integrate the HP-HRL system. The resulting utility demand, the corresponding GHG emissions, and the costs for each method are shown in Table 5. Further, in Table 6 the resulting investment costs are compared.
It can be seen that the TSHI approach alone already results in a reduction of operating costs by 61% and of GHG emissions by 52%. By using the insight-based method for the integration of the HP-HRL system, operating costs and GHG emissions are reduced further, by 66% and 65%. With the hybrid method, operating costs are reduced slightly more, by 68%. In contrast, GHG emissions are only reduced by 63%. The higher GHG emissions are caused by the utility rebalancing needed due to the smaller temperature lift of the HP. Since GHG emissions are not an objective in the optimization, there is no trade-off between the temperature lift and the needed rebalancing utility. Alongside the reduction of operating costs, investment costs are increased, due to the usual trade-off between them (see Table 6). By using the hybrid instead of the insight-based method, the investment costs for the HP-HRL system are reduced by 29%. Due to smaller temperature differences in the heat exchangers (HEXs) connected to the HP-HRL system, the costs of the different HEN designs are increased. Further, due to the utility rebalancing, a larger water boiler is needed. The TAC (rounded values) are reduced using the TSHI approach from 598,000 NZD/y to 430,000 NZD/y. By the use of the insight-based method, the TAC can be further reduced to 413,000 NZD/y, and with the hybrid method slightly more, to 412,000 NZD/y.
The resulting operating costs depend strongly on the natural gas and electricity prices. In order to analyze their influence, a sensitivity analysis is performed to show for which natural gas and electricity prices it is most beneficial to optimize the process and to integrate the HP-HRL system. In Fig. 11, the results are shown as regions for each option in which the corresponding TAC are the lowest compared to the other designs. An uncertainty of 30% in the investment costs is considered (dotted and dashed lines). It can be seen that the non-optimized process is not competitive for realistic natural gas and electricity prices.
The actual price range is clearly in the region where it is most beneficial to optimize the process as well as to integrate the HP-HRL system. Future price trends are predicted to show a larger price increase for natural gas than for electricity. Therefore, the integration of a HP-HRL system tends to become even more beneficial in the future.
The goal of not being expensive in terms of computing time and resources is achieved by solving the NLP formulation within an average computing time of 0.303 s over 30 runs of the optimization solver Ipopt (Interior Point Optimizer) 3.12.4 [41] on an Intel Core i7-7600U processor with 16 GB RAM.
By its application to the dairy site case study, the method has proven its practicality. The manual HEN design and the decision whether or not a process stream should be connected to the HP-HRL system are left to the experienced engineer, which increases the flexibility in the application of the method. Further, optimal results regarding the reduced search space are obtained with short computation times.

Fig. 9. GCCs of the plants including the heat flows of the HP-HRL system, the LTHW-HRL, and the HTHW-HRL: (a) AMF production plant (TS3 and TS4), (b) CT plant (TS3 to TS4), and (c) CIP system (TS2 to TS4).

Fig. 10. Annualized costs for the HP-HRL system excluding operating costs for HU and CU.

Table 4. Comparison of temperature differences (K) between the insight-based and the hybrid method.
In the method, overall condensation and evaporation temperatures are chosen for all TSs, giving a total time-wise decoupling of the HP. Therefore, it is possible for the HP to operate continuously without changing the operating conditions of the compressor, and the compressor can operate at its design point with the highest efficiency. For the analyzed case study, which has a change in the pinch temperature over time of 12 K, this approach works well. However, if the change of the pinch temperature over time increases, the presented method will lead to a larger temperature lift for the HP, which reduces its efficiency. Therefore, the current approach does not apply well to processes with extensive changes in the pinch temperature over time.
Conclusions
Applying the hybrid insight-based and mathematical programming approach to a non-continuous dairy site showed that the time-wise process decoupling of the heat pump (HP) is successful in terms of greenhouse gas (GHG) emission and total annual cost (TAC) reduction. When comparing the results from a global perspective, the ratio of the electricity price to the natural gas price is a good indicator.
The actual ratio in New Zealand is c_el/c_NG = 1.7. With a ratio of 1.3 in Switzerland [8], the integration of a HP-HRL system into a dairy process tends to be even more beneficial there. In contrast, in many other industrial nations such as Germany, a large proportion of industry does not pay a renewable energy levy, which has led to a ratio between 2 and 3 [42], making the integration of a HP-HRL system into a dairy process less beneficial.
By using the hybrid method instead of the insight-based method, the TAC are only slightly further reduced and the GHG emissions even increase. Nevertheless, in contrast to the insight-based method, the hybrid method achieves realistic approach temperatures in the condenser and evaporator. The increased GHG emissions are caused by the utility rebalancing in the HP-HRL system. In some cases, it is possible to integrate the needed rebalancing duty back into the process for further utility reduction; this is not possible for the analyzed case study.
By extending the insight-based approach with the nonlinear programming formulation, high flexibility in the application of the method is preserved and the execution time of the method is only slightly increased. In terms of project costs, both the insight-based and the sophisticated mathematical programming methods require engineers with experience in their field, either insight-based process integration (e.g. Pinch Analysis) or mathematical programming techniques. In contrast to the expensive mathematical programming methods, the hybrid method does not require large computation power, which is beneficial in terms of project costs.

Fig. 11. Sensitivity analysis of the TAC with respect to natural gas and electricity prices, including an uncertainty of ±30% for investment costs [40]. Predicted utility costs are estimated from the predicted cost increase of wholesale prices [36].
The method needs to be developed further to provide an alternative for processes with a large change in the pinch temperature, in order to prevent an excessive temperature lift for the HP. A HP that changes its operating conditions and characteristics would entail the need for a more detailed model; retaining short computation times for this kind of model is a challenge.
Further work, which is already in progress, addresses the operability of the HP-HRL system. A control approach needs to be developed to assure the demanded condensation and evaporation temperatures.
Fig. A.14. Optimized HEN of the CIP system without HP-HRL system.
Fig. A.15. Optimized HEN of the AMF production plant with integrated HP-HRL system.
Fig. A.16. Optimized HEN of the CT plant with integrated HP-HRL system.
Table 1. Process requirements.
Table 2. Utility data.
Table 5. Comparison of utility demand of the different optimization approaches.
Table 6. Comparison of investment costs (NZD) for the different optimization approaches.
|
v3-fos-license
|
2021-10-18T13:44:53.916Z
|
2021-10-18T00:00:00.000
|
239013339
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10334-021-00965-6.pdf",
"pdf_hash": "a6a54d2f65deeae0864b7d94b066e7971a6f2e3f",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44242",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "a6a54d2f65deeae0864b7d94b066e7971a6f2e3f",
"year": 2021
}
|
pes2o/s2orc
|
Reproducibility of MRI-based white matter tract estimation using multi-fiber probabilistic tractography: effect of user-defined parameters and regions
Objective There is a pressing need to assess user-dependent reproducibility of multi-fibre probabilistic tractography in order to encourage clinical implementation of these advanced and relevant approaches. The goal of this study was to evaluate both intrinsic and inter-user reproducibility of corticospinal tract estimation. Materials and methods Six clinical datasets including motor functional and diffusion MRI were used. Three users performed an independent tractography analysis following identical instructions. Dice indices were calculated to quantify the reproducibility of seed region, fMRI-based end region, and streamline maps. Results The inter-user reproducibility ranged 41–93%, 29–94%, and 50–92%, for seed regions, end regions, and streamline maps, respectively. Differences in streamline maps correlated with differences in seed and end regions. Good inter-user agreement in seed and end regions, yielded inter-user reproducibility close to the intrinsic reproducibility (92–97%) and in most cases higher than 80%. Discussion Uncertainties related to user-dependent decisions and the probabilistic nature of the analysis should be considered when interpreting probabilistic tractography data. The standardization of the methods used to define seed and end regions is a necessary step to improve the accuracy and robustness of multi-fiber probabilistic tractography in a clinical setting. Clinical users should choose a feasible compromise between reproducibility and analysis duration.
Introduction
The use of magnetic resonance imaging (MRI) for preoperative assessment and guidance during surgical procedures is becoming increasingly widespread in modern clinical settings. In the context of surgical interventions close to eloquent areas of the brain, functional magnetic resonance imaging (fMRI) combined with tractography are particularly valuable [1]. fMRI detects the areas of the brain activated during specific tasks, while tractography exploits the effect of tissue microstructure on the diffusion of water molecules to depict white matter fibers [2]. Tract estimation can be performed using different types of mathematical models, and with deterministic or probabilistic approaches [3,4]. The classic diffusion tensor model [5] considers a unique diffusion direction for each voxel, and therefore makes the crude approximation that all fibers within a voxel are oriented in the same direction. Multi-fiber approaches can account for the presence of multiple fiber orientations within each voxel and thus yield a more accurate estimate of white matter tracts, especially in regions of complex fiber architecture [6][7][8]. Furthermore, probabilistic algorithms are able to account for the inherent uncertainty in the estimate of fiber orientation and provide superior sensitivity for reconstructing fiber bundles [9][10][11]. Neurosurgeons would benefit from more sensitive reconstructions to conservatively estimate the tract extent and proximity to pathology, as well as the degree of tract infiltration and involvement. This would serve as additional information to locate areas of risk that need attention or additional probing during surgery.
Commonly adopted presurgical planning systems almost exclusively implement diffusion tensor based deterministic tractography [9,12] arguably because this is a more straightforward approach to the analysis, particularly for a clinical user.
Advanced preoperative evaluations with more sophisticated tractographic reconstructions are resource intensive and require collaboration between different clinical roles and expertise. At our neuroimaging centre, the complex image processing is performed by a medical physicist, and can take up to a few hours per case. The results are then reviewed with a radiologist, who reports the images, and the preoperative findings are finally discussed with the neurosurgeon before planning the surgery (and ideally are reviewed postoperatively). These logistics put time constraints on the process, with often a timescale of less than a week between image acquisition and surgery.
The robustness of the tractographic evaluation is a vital aspect in this process. Currently, there is a lack of published data on the user-dependent reproducibility of clinically applied probabilistic tractography, and a pressing need to assess this aspect in order to encourage clinical implementation of these advanced and relevant techniques. Probabilistic approaches involve both intrinsic run-to-run variability, due to the statistical nature of the analysis and of the results, and multiple user-dependent decisions which can influence the final streamline distribution. The tractography seed and inclusion regions can be manually defined using anatomical landmarks, and in some cases can be informed by fMRI data [13,14]. The use of fMRI data for the definition of seed and inclusion regions has been shown to increase the accuracy of tractography analysis [15,16] and to allow separation of different tract components such as the hand and foot fibers of the corticospinal tract [1]. However, the variability of fMRI data is an additional factor to consider for the reproducibility of the tractography analysis.
In this work we evaluate both the intrinsic (run/re-run) and the inter-user reproducibility of corticospinal tract (CST) estimation using multi-fiber probabilistic tractography. We consider several factors influencing the reproducibility: the number of streamlines, the streamline density threshold used to determine the final streamline map, and the definition of the seed region and fMRI-based end regions.
Subjects and MRI sequence protocol
Retrospective analysis of patient examinations was carried out with the approval of the institutional Clinical Audit Committee. Six clinical datasets acquired before tumor (N = 3) or epilepsy (N = 3) surgery close to the motor cortex were employed. Images were acquired at 1.5 T on a Siemens Magnetom Aera scanner (Siemens AG, Erlangen, Germany) using a 20-channel head/neck receive coil. The MRI sequence protocol consisted of a 3D T1-weighted MPRAGE for anatomy (TE/TR = 3.02/2200 ms, voxel size = (1 mm)³, FA = 8°, parallel imaging acceleration GRAPPA = 2), a gradient echo EPI sequence for fMRI (TE/TR = 40/3000 ms, voxel size = 2.5 × 2.5 × 3 mm³), and a spin echo EPI sequence for diffusion tractography (TE/TR = 86/9500 ms, voxel size = (2.5 mm)³, 6 baseline images at b = 0 s/mm² and 64 diffusion directions at b = 1500 s/mm²). fMRI data were acquired for 6 cycles of alternating rest and activation periods of 30 s each, during the following motor tasks: finger tapping, foot rocking and lip pouting.
Data analysis
The data were first visually checked for motion and Gibbs ringing artefacts, which were found to be limited. The analysis workflow made use of publicly available, advanced software packages and included recommended options and standards. fMRI data were processed using an in-house developed batch processing pipeline based on SPM12 (Wellcome Trust Centre for Neuroimaging, University College London, UK). Image pre-processing of fMRI data consisted of small motion correction (rigid body spatial transformation and least square algorithm, SPM12), non-linear co-registration with the anatomical volume (mutual information, SPM12) and isotropic Gaussian kernel smoothing (8 mm full width at half maximum). Diffusion data were reconstructed using constrained spherical deconvolution [17] (CSD) and probabilistic tractography in MRtrix3 [18] (version 0.3.14, http://www.mrtrix.org/). A single b-value (single-shell) response function of single-fiber white matter was computed [19] and a second-order integration over fiber orientation distributions (iFOD2) streamline generation algorithm was employed [20].
Three users (medical physicists) with different experience in MR image processing and tractography analysis (user A: in training, less than a year for both; user B: 9 and 8 years, respectively; user C: 10 and 2 years, respectively) performed a blind and independent data analysis with the following instructions:

1) For each motor fMRI task, apply a threshold to the activation map, in order to spatially isolate the area corresponding to the highest activation in the relevant cortical area and convert it into an activation mask (Fig. 1a).
2) Combine the masks from all motor tasks to form a single mask encompassing the motor activation area.
3) On an axial slice of the fractional anisotropy map, manually draw the CST seed region on the posterior limb of the internal capsule (PLIC) in the hemisphere of interest, including only voxels with predominant diffusion in a superior-inferior direction (Fig. 1b). Additionally, define an exclusion region by positioning a sagittal plane through the inter-hemispheric fissure (midline), to avoid streamlines crossing over to the opposite hemisphere.
4) Generate streamlines using only the seed region (unrestricted streamlines) and using both the seed region and the fMRI-based activation mask as end region (restricted streamlines) [21] (Fig. 1c).
The algorithm generated streamlines from the seed region until 100,000 streamlines had been selected considering both inclusion and midline exclusion regions. User-dependent inputs which can affect the final streamline distribution included the definition of the seed and end regions. The spatial extent of the end regions is affected by both the choice of threshold and the selection of the activated area in the fMRI maps. Figure 1d shows the region and streamline results obtained by two different users for the same patient.

Fig. 1. Workflow for generation of streamlines using fMRI and diffusion MRI data (a-c) and subsequent inter-user comparison (d).
Reproducibility assessment
Streamline density maps (tractograms representing the number of streamlines per voxel) were generated using the streamlines produced by the probabilistic tractography analysis. To assess the reproducibility of probabilistic tractography at a set streamline density, the streamline density maps were thresholded and binarized by applying a density threshold [22]. Pairs of binarized maps (α and β) were then compared using the Dice index (DI) [23], DI = 2|α ∩ β| / (|α| + |β|). Initially, the streamline density threshold was set to 2 × 10⁻⁴ (i.e. 20 out of 100,000 selected streamlines for a voxel size of 1 mm³), matching the value used clinically at our institution.
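For illustration, a minimal Python sketch of this thresholding and Dice-index comparison is given below; the array sizes and the synthetic density maps are placeholders, not real tractography data.

```python
# Minimal sketch of the Dice-index comparison of two thresholded streamline
# density maps. Array shapes and the density threshold are illustrative.

import numpy as np

def dice_index(map_a, map_b, threshold=2e-4):
    """Binarize two streamline density maps at `threshold` and return their Dice index."""
    a = map_a >= threshold
    b = map_b >= threshold
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two synthetic density maps (streamlines per voxel / total streamlines):
rng = np.random.default_rng(0)
run1 = rng.random((64, 64, 40)) * 1e-3
run2 = run1 + rng.normal(0.0, 5e-5, run1.shape)   # a slightly perturbed re-run
print(f"DI = {dice_index(run1, run2):.3f}")
```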
For the intrinsic reproducibility resulting from the probabilistic nature of the tract estimation (run/re-run), four analysis runs with identical parameters and specified regions were performed for each patient-user combination (Fig. 2a). Results were then compared across the four runs, resulting in a total of six DIs, which were averaged to yield a mean DI for each patient-user combination.
To evaluate the inter-user reproducibility, the binarized streamline density masks obtained by the three different users were compared pair-wise for each patient (Fig. 2b). DI analysis was also applied to the seed region and fMRI-based activation masks, in order to evaluate the influence of these components separately. Pearson's correlation coefficients (PCCs) were calculated to investigate the correlation between different sets of DIs. Multiple linear correlation was also performed using the seed and end regions' DIs as predictors for the DIs of restricted streamlines in order to determine the combined effect of seed and end regions. Finally, run/re-run reproducibility analysis was repeated for all patients and users varying both the streamline density thresholds (range 1–40 × 10⁻⁴) and the number of selected streamlines (range 10,000–250,000), to assess the dependence of the intrinsic reproducibility on these parameters as well as the duration of the analysis in different conditions. Intrinsic reproducibility DIs were also simulated using downsampled data generated by randomly extracting subsets of streamlines from a 1,000,000-streamline dataset in a representative case (patient 1, user A).
All correlations were calculated in MATLAB Version R2018a (The MathWorks Inc, Natick, Massachusetts) and the significance level was set to p < 0.05.
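The pairwise comparison and correlation steps can be sketched as follows (in Python rather than the MATLAB used in the study); the binary masks and density maps are synthetic, so the printed values are meaningless and only the structure of the computation is illustrated.

```python
# Sketch of the inter-user comparison: pairwise Dice indices for three users and
# the correlation between region agreement and streamline-map agreement.
# All input data are synthetic placeholders.

from itertools import combinations
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
users = ["A", "B", "C"]
seed_regions = {u: rng.random((64, 64)) > 0.6 for u in users}       # binary seed masks
streamlines = {u: rng.random((64, 64, 40)) * 1e-3 for u in users}   # density maps

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seed_dis, tract_dis = [], []
for u, v in combinations(users, 2):
    seed_dis.append(dice(seed_regions[u], seed_regions[v]))
    tract_dis.append(dice(streamlines[u] >= 2e-4, streamlines[v] >= 2e-4))

r, p = pearsonr(seed_dis, tract_dis)
print(f"pairwise seed DIs: {np.round(seed_dis, 2)}, PCC = {r:.2f} (p = {p:.2f})")
```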
Results
Intrinsic reproducibility (run/re-run, 100,000 streamlines, 2 × 10⁻⁴ threshold)

DIs obtained for each patient-user combination ranged 0.92-0.94 and 0.92-0.97 for the unrestricted and restricted streamline maps, respectively (Fig. 3). Intrinsic reproducibility was generally higher for restricted streamline maps than for unrestricted ones, probably due to the fact that additional constraints limit the number of possible pathways.
Inter-user reproducibility (100,000 streamlines, 2 × 10⁻⁴ threshold)

DIs obtained for each user pair across all patients are shown in Fig. 4 for seed region, fMRI-based end region, unrestricted and restricted streamline maps. DIs ranged 0.41-0.93 for the seed region, 0.29-0.94 for the fMRI-based end region, 0.58-0.92 for the unrestricted streamline maps, and 0.50-0.92 for the restricted streamline maps. Figure 4 shows that the DIs had a spread of values across patients, and this was dependent on the user pair. Comparisons between users B and C have DIs with higher value and smaller variation, indicating greater overlap and consistency. Larger variations in seed and end region DIs (Fig. 4a and b) corresponded to larger variations in streamline DIs (Fig. 4c and d). This suggests that smaller differences in user-defined regions lead to improved reproducibility.
Unrestricted streamline maps showed a high correlation with seed regions (PCC = 0.90). For restricted streamline maps, there was a significant correlation for both seed and end regions individually (PCCs were 0.73 and 0.57, respectively) and the correlation found with the multiple linear regression model, which considered both seed and end regions, was also significant (PCC = 0.87). This verifies that the inter-user reproducibility of restricted streamline maps has a dependence on the definition of seed and end regions. Regarding the seed regions, only the two-dimensional overlap between the regions defined by different users was considered. It was found that the seed regions were defined in different slices in four out of six patients, a factor which might contribute to reduced overlap. The correlation between DIs and the difference in seed region position (defined as the number of slices separating the two regions) was investigated, and a significant negative correlation for seed regions, unrestricted, and restricted streamline maps was found (PCCs were −0.73, −0.75, and −0.72, respectively). This indicates that a larger separation between seed region slices yielded a lower inter-user reproducibility.
Intrinsic reproducibility as a function of number of streamlines and density threshold
The DI curves in Fig. 5a were averaged over all patients and users and show the dependence of DIs on the streamline density threshold for different numbers of selected streamlines. In all curves, DIs increased for threshold values between 1 and 10 × 10⁻⁴, while at higher thresholds they either reached a plateau or showed a slow decrease for values higher than 30 × 10⁻⁴ (blue and yellow curves in Fig. 5a). In all conditions, restricted streamline maps yielded higher DIs, as also found in Fig. 3. For low thresholds and numbers of streamlines, DIs can reach suboptimal values (< 0.75). Increasing the number of streamlines increased analysis time and reproducibility, and decreased differences across users and patients (represented by the standard deviation). The relationship between number of streamlines, reproducibility, and analysis duration is shown in Fig. 5b and c using the simulated data. Figure 5b and c illustrate the fact that, for a chosen threshold, users need to find a clinically feasible compromise between reproducibility and analysis time.
Discussion
Accurate and reproducible tract estimation obtained from advanced image processing such as tractography is crucial for presurgical planning. In this work we investigate the factors influencing the reproducibility of white matter tractograms generated using multi-fiber probabilistic tractography. In particular, we highlight that this analysis requires two considerations: 1) Different runs of the same analysis with identical seed and end regions will produce similar, but not identical, streamline density maps. The degree of similarity between the produced maps is affected by the chosen minimum number of streamlines selected.
2) The appearance of the streamline density map that a clinical user (e.g. neuroradiologist, neurosurgeon) sees, and particularly its spatial extent, depends on the lowest density displayed (density threshold). The reproducibility of the maps thus has a dependence on the streamline density threshold. Although higher streamline densities indicate higher confidence in the presence of a fiber bundle and are more reproducible (Fig. 5), in a clinical setting the map provided to the neurosurgeon should have conservatively larger margins (i.e. it should contain voxels at lower density). In CST evaluations at our institution we set this threshold to 2 × 10⁻⁴. This provides a good compromise, removing false positive streamlines whilst still minimizing any false negative pathways [12], which can have serious clinical implications [13].
The intrinsic reproducibility of probabilistic tractography is purely due to the statistical nature of the analysis, and represents the upper limit of reproducibility. Although increasing the number of selected streamlines progressively reduces statistical variability [24], users should find a good compromise between the duration of the processing and the achievable reproducibility (Fig. 5). For the values adopted in this analysis for the CST (100,000 streamlines, 2 × 10⁻⁴ threshold), the intrinsic reproducibility is > 92%, but it can be as low as 75% for 10,000 streamlines. The analysis with 100,000 streamlines requires 20 min per case when including both restricted and unrestricted streamlines (iMac, Processor 3.3 GHz Intel Core i5, RAM 32 GB 1867 MHz DDR3), which is a sensible trade-off when results need to be both accurate and prompt. For interactive modifications of the analysis, 10,000 streamlines (2 min, 75%-90% reproducibility) can facilitate a dynamic discussion whilst still maintaining acceptable streamline map reproducibility. Such discussions would not be possible for 1,000,000 streamlines (3 h, > 97% reproducibility), particularly since imaging is often undertaken just days prior to surgery.
Although increasing the number of streamlines (keeping all other conditions constant) also generally increases inter-user reproducibility, this is significantly affected by the definition of seed and end regions (Fig. 4), with values as low as 58% for unrestricted streamline maps and 50% for restricted ones. However, where users have good agreement (users B and C), the inter-user reproducibility is higher than 80% and close to the intrinsic reproducibility. Users B and C are more experienced in tractography processing and this might have led to more coherent choices, highlighting the importance of standardizing the region definitions.
In our analysis, the comparison between unrestricted and restricted streamline maps shows the effects of further inclusion criteria on intrinsic and inter-user reproducibility. We have considered both unrestricted and restricted streamlines as they provide complementary information in clinical evaluations: while restricted streamlines are able to differentiate hand, foot, and lips CST branches, unrestricted streamlines are less specific but very sensitive for investigation of potential infiltration by the tumor. However, the unrestricted streamlines include all possible streamlines originating from the seed region, and, therefore, an expert user is needed to disregard potentially spurious streamlines, especially in clinical evaluations.
In this paper we show that several factors affect the final streamline map; however, uncertainty remains regarding which generated tractogram represents the ground truth, as we did not compare results to a gold standard. Complete validation via subcortical mapping [9] or comparison to cytoarchitectonic maps [12] was outside the scope of this research.
Previous studies have looked into the inter-user reproducibility of tractography, noting that user performance is an important limiting factor. However, most studies assessed reproducibility in relation to apparent diffusion coefficient, fractional anisotropy, mean diffusivity, turning angle threshold, and streamline or voxel count [25][26][27][28][29]. More recently, a study looked at optimizing the number of selected streamlines through mathematical models aiming to reduce the variability of probabilistic tractography, but did not consider the influence of user-dependent choices [30]. Another recent work looked at whole brain streamline reproducibility between users by comparing binarized streamline volumes, but using deterministic and atlas-driven approaches [31]. Automated pipelines for generation of white matter fibers have been proposed [32,33]. These approaches based on automatic segmentation do not require user input, and therefore do not suffer from intra- or inter-user variability. However, they mostly rely on image registration and predetermined atlases, and are therefore strongly affected by abnormal anatomy (e.g. the presence of a large lesion) [34]. As a result, they are generally applied to cases where structural alteration due to pathology is absent or minimal [24]. Furthermore, such approaches require tools and resources that are normally not available in a clinical setting. Another recent study assessed the reproducibility of bundle segmentation and found a large variability across and within protocols, highlighting the lack of standardization in this type of analysis [10].
Our paper considers the practical choices that a clinical user makes to reconstruct the CST in a typical setting, and focuses on assessing how these choices influence the variability of the visual information presented for presurgical planning. Our results demonstrate that this variability can be quite substantial, but also that it can be minimized with appropriate choices.
Limitations of this study are the small number of clinical datasets for which the analysis from three users was available, and the evaluation of the CST only. Nevertheless, this pilot dataset is sufficient to illustrate how streamline map reproducibility is affected by a range of parameters in a representative number of situations. Furthermore, as we have shown for the intrinsic reproducibility, it is possible to extend the analysis to any streamline density threshold or number of streamlines. However, for clarity, in this paper we have only reported the inter-user reproducibility values for the parameters we are currently using in our clinical setting [12]. No pre-processing steps (denoising, eddy current distortion correction, Gibbs ringing artefact reduction) were applied to the data presented here, as they were not available in the software packages used at the time of analysis. Users are encouraged to apply these steps to improve the quality of the data. Finally, when correlating the Dice indices, we did not correct for multiple comparisons, as this is not trivial in this setting. Although the single tractograms/regions are used more than once in the analysis, each Dice index quantifies the overlap between two of them, and is therefore unique.
fMRI-based end regions substantially contribute to the final streamline distribution, and their spatial extent is affected by arbitrary user choices, like the statistical activation threshold and the identification of the activated areas. Future work should aim at standardizing, or, ideally, automating these choices, as this would significantly improve the reproducibility of the restricted streamline maps. Furthermore, we plan to extend this work to larger cohorts and other tracts, as well as evaluate the impact of reproducibility on surgery-related decisions and use intraoperative findings for validation.
Conclusions
In this work, we assessed the inter-user and intrinsic reproducibility of white matter tract estimation using multi-fiber probabilistic tractography. This work emphasizes that interuser differences in seed and end regions should be minimized to improve the reproducibility of the estimation of unrestricted and restricted streamline maps. However, despite the influence of these factors, it was shown that in most cases streamline map reproducibility was higher than 60% and it was possible to reach optimal reproducibility (70-90%) between users for good agreement of seed and end regions. Furthermore, this paper, through representative examples, offers guidance towards reaching a feasible compromise between duration of analysis and achievable reproducibility in a clinical setting.
This study demonstrates that the uncertainties related to the user-dependent choices (threshold for fMRI activation mask and position of seed region), the streamline density threshold chosen to visualize the streamline maps, and the probabilistic nature of the analysis should be considered when interpreting probabilistic tractography data. The standardization of the methods to define the seed region (particularly the slice chosen) and the fMRI end regions is a necessary step to improve the robustness of the visual information provided by multi-fiber probabilistic tractography for presurgical planning in the clinical routine.

Funding This work was carried out at, and supported by, the Department of Neuroradiology at King's College Hospital NHS Foundation Trust. Enrico De Vita is supported by the Wellcome EPSRC Centre for Medical Engineering (WT 203148/Z/16/Z). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
Conflict of interest
The authors have no relevant conflicts of interest to disclose.
Ethics approval Retrospective analysis of patient examinations was approved by the institutional Clinical Audit Committee.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2017-11-18T10:08:11.099Z
|
2017-10-01T00:00:00.000
|
3773858
|
{
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005786&type=printable",
"pdf_hash": "63150f5fa5e65b3d22f6ddc18d4b2dc725b1aa4f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44244",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"sha1": "63150f5fa5e65b3d22f6ddc18d4b2dc725b1aa4f",
"year": 2017
}
|
pes2o/s2orc
|
Machine learning to design integral membrane channelrhodopsins for efficient eukaryotic expression and plasma membrane localization
There is growing interest in studying and engineering integral membrane proteins (MPs) that play key roles in sensing and regulating cellular response to diverse external signals. A MP must be expressed, correctly inserted and folded in a lipid bilayer, and trafficked to the proper cellular location in order to function. The sequence and structural determinants of these processes are complex and highly constrained. Here we describe a predictive, machine-learning approach that captures this complexity to facilitate successful MP engineering and design. Machine learning on carefully-chosen training sequences made by structure-guided SCHEMA recombination has enabled us to accurately predict the rare sequences in a diverse library of channelrhodopsins (ChRs) that express and localize to the plasma membrane of mammalian cells. These light-gated channel proteins of microbial origin are of interest for neuroscience applications, where expression and localization to the plasma membrane is a prerequisite for function. We trained Gaussian process (GP) classification and regression models with expression and localization data from 218 ChR chimeras chosen from a 118,098-variant library designed by SCHEMA recombination of three parent ChRs. We use these GP models to identify ChRs that express and localize well and show that our models can elucidate sequence and structure elements important for these processes. We also used the predictive models to convert a naturally occurring ChR incapable of mammalian localization into one that localizes well.
Introduction
As crucial components of regulatory and transport pathways, integral membrane proteins (MPs) are important pharmaceutical and engineering targets [1]. To be functional, MPs must be expressed and localized through a series of elaborate sub-cellular processes that include cotranslational insertion, rigorous quality control, and multi-step trafficking to arrive at the correct topology in the correct sub-cellular location [2][3][4]. With such a complex mechanism for production, it is not surprising that MP engineering has been hampered by poor expression, stability, and localization in heterologous systems [5][6][7]. To overcome these limitations, protein engineers need a tool to predict how changes in sequence affect MP expression and localization. An accurate predictor would enable us to design and produce MP variants that express and localize correctly, a necessary first step in engineering MP function. A useful predictor would be sensitive to subtle changes in sequence that can lead to drastic changes in expression and localization. Our goal here was to develop data-driven models that predict the likelihood of a MP's expression and plasma membrane localization using the amino acid sequence as the primary input.
For this study, we focus on channelrhodopsins (ChRs), light-gated ion channels that assume a seven transmembrane helix topology with a light-sensitive retinal chromophore bound in an internal pocket. This scaffold is conserved in both microbial rhodopsins (light-driven ion pumps, channels, and light sensors-type I rhodopsins) and animal rhodopsins (light-sensing G-protein coupled receptors-type II rhodopsins) [8]. Found in photosynthetic algae, ChRs function as light sensors in phototaxic and photophobic responses [9,10]. On photon absorption, ChRs undergo a multi-step photo-cycle that allows a flux of ions across the membrane and down the electrochemical gradient [11]. When ChRs are expressed transgenically in neurons, their light-dependent activity can stimulate action potentials, allowing cell-specific control over neuronal activity [12,13] and extensive applications in neuroscience [14]. The functional limitations of available ChRs have spurred efforts to engineer or discover novel ChRs [11]. The utility of a ChR, however, depends on its ability to express and localize to the plasma membrane in eukaryotic cells of interest, and changes to the amino acid sequence frequently abrogate localization [5]. A predictor for ChRs that express and localize would be of great value as a pre-screen for function.
The sequence and structural determinants for membrane localization have been a subject of much scientific investigation [15][16][17] and have provided some understanding of the MP sequence elements important for localization, such as signal peptide sequence, positive charge at the membrane-cytoplasm interface (the "positive-inside" rule [18]), and increased hydrophobicity in the transmembrane domains. However, these rules are of limited use to a protein engineer: there are too many amino acid sequences that follow these rules but still fail to localize to the plasma membrane (see Results). MP sequence changes that influence expression and localization are highly context-dependent: what eliminates localization in one sequence context has no effect in another, and subtle amino acid changes can have dramatic effects [5,16,19]. In short, sequence determinants of expression and localization are not captured by simple rules.
Accurate atomistic physics-based models relating a sequence to its level of expression and plasma membrane localization currently do not exist, in large measure due to the complexity of the process. Statistical models offer a powerful alternative. Statistical models are useful for predicting the outcomes of complex processes because they do not require prior knowledge of the specific biological mechanisms involved. That being said, statistical models can also be constructed to exploit prior knowledge, such as MP structural information. Statistical models can be trained using empirical data (in this case expression or localization values) collected from known sequences. During training, the model infers relationships between input (sequence) and output (expression or localization) that are then used to predict the properties of unmeasured sequence variants. The process of using empirical data to train and select statistical models is referred to as machine learning.
Machine learning has been applied to predicting various protein properties, including solubility [20,21], trafficking to the periplasm [22], crystallization propensity [23], and function [24]. Generally, these models are trained using large data sets composed of literature data from varied sources with little to no standardization of the experimental conditions, and trained using many protein classes (i.e. proteins with various folds and functions), because their aim is to identify sequence elements across all proteins that contribute to the property of interest. This generalist approach, however, is not useful for identifying subtle sequence features (i.e. amino acids or amino acid interactions) that condition expression and localization for a specific class of related sequences, the ChRs in this case. We focused our model building on ChRs, with training data collected from a range of ChR sequences under standardized conditions. We applied Gaussian process (GP) classification and regression [25] to build models that predict ChR expression and localization directly from these data.
In our previous work, GP models successfully predicted thermal stability, substrate binding affinity, and kinetics for several soluble enzymes [26]. Here, we asked whether GP modeling could accurately predict mammalian expression and localization for heterologous integral membrane ChRs and how much experimental data would be required. For a statistical model to make accurate predictions on a wide range of ChR sequences, it must be trained with a diverse set of ChR sequences [25]. We chose to generate a training set using chimeras produced by SCHEMA recombination, which was previously demonstrated to be useful for producing large sets (libraries) of diverse, functional chimeric sequences from homologous parent proteins [27]. We synthesized and measured expression and localization for only a small subset (0.18%) of sequences from the ChR recombination library. Here we use these data to train GP classification and regression models to predict the expression and localization properties of diverse, untested ChR sequences. We first made predictions on sequences within a large library of chimeric ChRs; we then expanded the predictions to sequences outside that set.
The ChR training set
The design and characterization of the chimeric ChR sequences used to train our models have been published [5]; we will only briefly describe these results. Two separate, ten-block libraries were designed by recombining three parental ChRs (CsChrimsonR (CsChrimR) [28], C1C2 [29], and CheRiff [30]) with 45-55% amino acid sequence identity and a range of expression, localization, and functional properties (S1 Fig) [5]. Each chimeric ChR variant in these libraries is composed of blocks of sequence from the parental ChRs. These libraries were designed using the SCHEMA algorithm, which defines sequence blocks for recombination that minimize the library-average disruption of tertiary protein structure [31,32]. One library swaps contiguous elements of primary structure (contiguous library), and the second swaps elements that are contiguous in the tertiary structure but not necessarily in the sequence (non-contiguous library [33]). The two libraries have similar, but not identical, element boundaries (S1A Fig) and were constructed in order to test whether one design approach was superior to the other (they gave similar results). These designs generate 118,098 possible chimeras (2 × 3^10), which we will refer to as the recombination library throughout this paper. Each of these chimeras has a full N-terminal signal peptide from one of the three ChR parents.
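As a quick sanity check on the library size quoted above, the arithmetic can be reproduced in a few lines of Python (the block and parent counts come from the text; everything else is illustrative):

```python
# Two block designs (contiguous and non-contiguous), each splitting the ChRs
# into 10 blocks, with 3 possible parents contributing each block.
n_designs = 2
n_blocks = 10
n_parents = 3

library_size = n_designs * n_parents ** n_blocks
print(library_size)  # 118098, i.e. 2 x 3^10 possible chimeras
```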
Two hundred and eighteen chimeras from the recombination library were chosen as a training set, including all the chimeras with single-block swaps (chimeras consisting of 9 blocks of one parent and a single block from one of the other two parents) and multi-block-swap chimera sequences designed to maximize mutual information between the training set and the remainder of the chimeric library. Here, the 'information' a chimera has to offer is how its sequence, relative to all previously tested sequences, changes ChR expression and localization. By maximizing mutual information, we select chimera sequences that provide the most information about the whole library by reducing the uncertainty (Shannon entropy) of prediction for the remainder of the library, as described in [34,35]. The 112 single-block-swap chimeras in the training set have an average of 15 mutations from the most closely related parent, while the 103 multi-block-swap chimeras in the training set have an average of 73 mutations from the most closely related parent (Table 1). While the multi-block-swap chimeras provide the most sequence diversity to learn from, they are the least likely to express and localize given their high mutation levels. The single-block-swap chimeras offer less information to learn from due to their sequence redundancies with other chimeras in the training set, but are more likely to express and localize.
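The exact mutual-information criterion of refs [34,35] is not reproduced here. As a rough, hypothetical sketch of the same idea, the snippet below greedily adds the candidate chimera whose Gaussian-process predictive variance, conditioned on the chimeras already chosen, is largest; for Gaussian predictions this variance tracks the remaining uncertainty (entropy) and, conveniently, depends only on the kernel matrix, not on any measurements. The kernel matrix K_pool and seed indices are placeholders.

```python
import numpy as np

def greedy_max_variance_design(K_pool, seed_idx, n_select, noise=1e-6):
    """Greedy training-set design on a pool x pool kernel (covariance) matrix.

    At each step, add the candidate whose GP posterior variance, given the
    chimeras already selected, is largest. This is a simplified, entropy-style
    stand-in for the mutual-information criterion described in the text.
    """
    chosen = list(seed_idx)  # e.g. indices of the single-block-swap chimeras
    pool = [i for i in range(K_pool.shape[0]) if i not in chosen]
    for _ in range(n_select):
        Kcc = K_pool[np.ix_(chosen, chosen)] + noise * np.eye(len(chosen))
        Kcc_inv = np.linalg.inv(Kcc)
        variances = []
        for j in pool:
            k_j = K_pool[chosen, j]
            variances.append(K_pool[j, j] - k_j @ Kcc_inv @ k_j)
        best = pool[int(np.argmax(variances))]
        chosen.append(best)
        pool.remove(best)
    return chosen
```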
Genes for these sequences were synthesized and expressed in human embryonic kidney (HEK) cells, and their expression and membrane localization properties were measured (S1B Fig) [5]. The expression levels were monitored through a fluorescent protein (mKate) fused to the C-termini of the ChRs. Plasma-membrane localization was measured using the SpyTag/ SpyCatcher labeling method, which exclusively labels ChR protein that has its N terminus exposed on the extracellular surface of the cell [36]. The training set sequences displayed a wide range of expression and localization properties. While the majority of the training set sequences express, only 33% of the single-block-swap chimeras localize well, and an even smaller fraction (12%) of the multi-block-swap chimeras localize well, emphasizing the importance of having a predictive model for membrane localization. First we explored whether ChR chimera properties could be predicted based on basic biological properties, specifically, signal peptide sequence and hydrophobicity in the transmembrane (TM) domains. Each chimera in the library has one of the three parental signal peptides. Although the signal peptide sequence does affect expression and localization (S2A Fig), chimeras with any parental signal peptide can have high or low expression and localization. Thus, the identity of the signal peptide alone is insufficient for accurate predictions of the ChR chimera properties. We then calculated the level of hydrophobicity within the 7-TM domains of each chimera. With very weak correlation between increasing hydrophobicity and measured expression and localization (S2B Fig), hydrophobicity alone is also insufficient for accurate prediction of ChR chimera properties. These models do not accurately account for the observed levels of expression or localization (S1 Fig). Therefore, we need more expressive models to predict expression and localization from the amino acid sequences of these MPs.
Using GP models to learn about ChRs
Our overall strategy for developing predictive machine-learning models is illustrated in Fig 1. The goal is to use a set of ChR sequences and their expression and localization measurements to train GP regression and classification models that describe how ChR properties depend on sequence and predict the behavior of untested ChRs. GP models infer predictive values from training examples by assuming that similar inputs (ChR sequence variants) will have similar outputs (expression or localization). We quantify the relatedness of inputs (ChR sequence variants) by comparing both sequence and structure. ChR variants with few differences are considered more similar than ChR variants with many differences. We define the sequence similarity between two chimeras by aligning them and counting the number of positions at which they are identical. For structural comparisons, a residue-residue 'contact map' was built for each ChR variant, where two residues are in contact if they have any non-hydrogen atoms within 4.5 Å. The maps were generated using a ChR parental sequence alignment and the C1C2 crystal structure, which is the only available ChR structure [29], with the assumption that ChR chimeras share the overall contact architecture observed in the C1C2 crystal structure. The structural similarity for any two ChRs was quantified by aligning the contact maps and counting the number of identical contacts [26]. Using these metrics, we calculated the sequence and structural similarity between all ChRs in the training set relative to one another (218 x 218 ChR comparisons).
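A minimal sketch of these two similarity measures is given below. The contact list would be computed once from the C1C2 structure (any pair of residues with non-hydrogen atoms within 4.5 Å); the function names and input formats are illustrative and are not the authors' code.

```python
import numpy as np

def contact_pairs(heavy_atom_coords, cutoff=4.5):
    """Contacting residue pairs: any two residues with heavy atoms within `cutoff` angstroms.

    heavy_atom_coords: dict mapping residue index -> (n_atoms, 3) array of coordinates.
    """
    residues = sorted(heavy_atom_coords)
    pairs = []
    for a, i in enumerate(residues):
        for j in residues[a + 1:]:
            dists = np.linalg.norm(
                heavy_atom_coords[i][:, None, :] - heavy_atom_coords[j][None, :, :], axis=-1
            )
            if dists.min() < cutoff:
                pairs.append((i, j))
    return pairs

def sequence_similarity(seq_a, seq_b):
    """Number of aligned positions at which two chimeras carry the same amino acid."""
    return sum(a == b for a, b in zip(seq_a, seq_b))

def structural_similarity(seq_a, seq_b, pairs):
    """Number of structural contacts at which both chimeras carry identical residue pairs."""
    return sum(seq_a[i] == seq_b[i] and seq_a[j] == seq_b[j] for i, j in pairs)
```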
These similarity functions are called kernel functions and specify how the functional properties of pairs of sequences are expected to covary (they are also known as covariance functions). In other words, the kernel is a measure of similarity between sequences, and we can draw conclusions about unobserved chimeras on the basis of their similarity to sampled points [25]. The model has high confidence in predicting the properties of sequences that are similar to previously sampled sequences, and the model is less confident in predicting the properties of sequences that are distant from previously sampled sequences.
To build a GP model, we must also specify how the relatedness between sequences will affect the property of interest, in other words how sensitive the ChR properties are to changes in relatedness as defined by the sequence/structure differences between ChRs. This is defined by the form of the kernel used. We tested three different forms of sequence and structure kernels: linear kernels, squared exponential kernels, and Matérn kernels (see Methods). These different forms represent the kinds of functions we expect to observe for the protein's fitness landscape (i.e. the mapping of protein sequence to protein function). The linear kernel corresponds to a simple landscape where the effects of changes in sequence/structure are additive and there is no epistasis. The two non-linear kernels represent more rugged, complex landscapes where effects may be non-additive. Learning involves optimizing the form of the kernel and its hyperparameters (parameters that influence the form of kernel) to enable accurate predictions. The hyperparameters and the form of the kernel were optimized using the Bayesian method of maximizing the marginal likelihood of the resulting model. The marginal likelihood (i.e. how likely it is to observe the data given the model) rewards models that fit the training data well while penalizing model complexity to prevent overfitting.
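The original analysis used custom code in the SciPy ecosystem (see Methods); purely as an illustration, the same kernel-form comparison can be sketched with scikit-learn, whose GaussianProcessRegressor tunes kernel hyperparameters by maximizing the log marginal likelihood during fitting. The feature matrix X and measurements y are assumed, and the noise level (alpha) is arbitrary.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, DotProduct, Matern, RBF

CANDIDATE_KERNELS = {
    "linear": ConstantKernel() * DotProduct(),        # additive landscape, no epistasis
    "squared_exponential": ConstantKernel() * RBF(),  # smooth non-linear landscape
    "matern_5_2": ConstantKernel() * Matern(nu=2.5),  # rougher non-linear landscape
}

def select_kernel_form(X, y):
    """Fit one GP per kernel form and keep the form with the highest log marginal likelihood."""
    fits = {}
    for name, kernel in CANDIDATE_KERNELS.items():
        gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
        gp.fit(X, y)  # hyperparameters are optimized against the log marginal likelihood
        fits[name] = (gp.log_marginal_likelihood_value_, gp)
    best = max(fits, key=lambda name: fits[name][0])
    return best, fits[best][1]
```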
Once trained with empirical data, the output of the GP regression model is a predicted mean and variance, or standard deviation, for any given ChR sequence variant. The standard deviation is an indication of how confident the model is in the prediction based on the relatedness of the new input relative to the tested sequences.
We used GP models to infer links between ChR properties and ChR sequence and structure from the training data. We first built GP binary classification models. In binary classification, the outputs are class labels, i.e. 'high' or 'low' localization, and the goal is to use the training set data to predict the probability of a sequence falling into one of the two classes (Fig 1). We also built a GP regression model that makes real-valued predictions, i.e. amount of localized protein, based on the training data (Fig 1). After training these models, we verify that their predictions generalize to sequences outside of the training set. Once validated, these two models can be used in different ways. A classification model trained from localization data can be used to predict the probability of highly diverse sequences falling into the 'high' localization category (Fig 1). The classification model can only predict if a sequence has 'high' vs 'low' localization, and it cannot be used to optimize localization. The regression model, on the other hand, can be used to predict sequences with 'optimal' properties; for example, a regression model trained from localization data can predict untested sequences that will have very high levels of localization (Fig 1).
Fig 1. (1) Structure-guided SCHEMA recombination is used to select block boundaries for shuffling protein sequences to generate a sequence-diverse ChR library starting from three parent ChRs (shown in red, green, and blue). (2) A subset of the library serves as the training set. Genes for these chimeras are synthesized and cloned into a mammalian expression vector, and the transfected cells are assayed for ChR expression and localization. (3) Two different models, classification and regression, are trained using the training data and then verified. The classification model is used to explore diverse sequences predicted to have 'high' localization. The regression model is used to design ChRs with optimal localization to the plasma membrane. https://doi.org/10.1371/journal.pcbi.1005786.g001
Building GP classification models of ChR properties
The training set data (S1 Fig) were used to build a GP classification model that predicted which of the 118,098 chimeras in the recombination library would have 'high' vs 'low' expression, localization, and localization efficiency. The training set includes multi-block swaps chosen to be distant from other sequences in the training set in order to provide information on sequences throughout the recombination library. A sequence was considered 'high' if it performed at least as well as the lowest performing parent, and it was considered 'low' if it performed worse than the lowest performing parent. Because the lowest performing parent for expression and localization, CheRiff, is produced and localized in sufficient quantities for downstream functional studies, we believe this to be an appropriate threshold for 'high' vs 'low' performance. For all of the classification models (Fig 2 and S3 Fig), we used kernels based on structural relatedness. For the expression classification model, we found that a linear kernel performed best, i.e. achieved the highest marginal likelihood. This suggests that expression is best approximated by an additive model weighting each of the structural contacts. Localization and localization efficiency required a non-linear kernel for the model to be predictive. This more expressive kernel allows for non-linear relationships and epistasis and also penalizes differing structural contacts more than the linear kernel. This reflects our intuitive understanding that localization is a more demanding property to tune than expression, with stricter requirements and a non-linear underlying fitness landscape.
Most of the multi-block-swap sequences from the training set did not localize to the membrane [5]. We nonetheless want to be able to design highly mutated ChRs that localize well because these are most likely to have interesting functional properties. We therefore used the localization classification model to identify multi-block-swap chimeras from the library that had a high predicted probability (>0.4) of falling into the 'high' localizer category (Fig 2D). From the many multi-block-swap chimeras predicted to have 'high' localization, we selected a set of 16 highly diverse chimeras with an average of 69 amino acid mutations from the closest parent and called this the 'exploration' set (S4 Fig). We synthesized and tested these chimeras and found that the model had accurately predicted chimeras with good localization (Fig 2 and Fig 3): 50% of the exploration set show 'high' localization compared to only 12% of the multi-block-swap sequences from the original training set, even though they have similar levels of mutation (Table 1 and S1 Data) (chimeras in the exploration set have on average 69 ± 12 amino acid mutations from the closest parent, versus 73 ± 21 for the multi-block-swap chimeras in the training set). The classification model provides a four-fold enrichment in the number of chimeras that localize well when compared to randomly-selected chimeras with equivalent levels of mutation. This accuracy is impressive given that the exploration set was designed to be distant from any sequence the model had seen during training. The model's performance on this exploration set indicates its ability to predict the properties of sequences distant from the training set. The data from the exploration set were then used to better inform our models about highly diverse sequences that localize. To characterize the classification model's performance, we calculated the area under the receiver operating characteristic (ROC) curve (AUC). A poorly performing model would not do better than random chance, resulting in an AUC of 0.5, while a model that perfectly separates the two classes will have an AUC of 1.0. The revised models achieved an AUC of up to 0.87 for "leave-one-out" (LOO) cross-validation, indicating that there is a high probability that the classifiers will accurately separate 'high' and 'low' performing sequences for the properties measured. The AUC is 0.83 for localization, 0.77 for localization efficiency and 0.87 for expression for LOO cross-validation predictions (S5 Fig).
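The LOO cross-validated AUC values reported here can be estimated with a loop of the following form; this is an illustrative scikit-learn sketch (the paper's models were custom GP implementations), with X as the binary feature matrix and y as 1/0 labels for 'high'/'low' performance.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

def loo_cross_validated_auc(X, y, kernel=None):
    """Leave-one-out AUC for a 'high' (1) vs 'low' (0) GP classifier."""
    kernel = kernel or 1.0 * Matern(nu=2.5)
    probs = np.empty(len(y), dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = GaussianProcessClassifier(kernel=kernel)
        clf.fit(X[train_idx], y[train_idx])
        probs[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    return roc_auc_score(y, probs)
```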
To further test the models, we then built a verification set of eleven chimeras, designed using the localization model. This verification set was composed of four chimeras predicted to be highly likely to localize, six chimeras predicted to be very unlikely to localize, and one chimera with a moderate predicted probability of localizing (S4 Table 1 and S1 Data). The verification sets consist exclusively of chimeras with 'high' measured expression, which is consistent with the model's predictions (Fig 2B). The model perfectly classifies the eleven chimeras as either 'high' or 'low' for each property (expression, localization, or localization efficiency) as shown in plots of predicted vs measured properties (Fig 2B and 2E and S3B Fig) and by perfect separation in ROC curves i.e. AUC = 1.0 (S5 Fig). These models are powerful tools that can confidently predict whether a chimera will have 'high' or 'low' expression (Fig 2C), localization (Fig 2F), and localization efficiency (S3C Fig). Of the 118,098 chimeras in the recombination library, 6,631 (5.6%) are predicted to have a probability > 0.5 of 'high' localization, whereas the vast majority of chimeras (99%) are predicted to have a probability > 0.5 of 'high' expression.
Building a regression model for ChR localization
The classification model predicts the probability that a sequence falls into the 'high' localizer category, but does not give a quantitative prediction as to how well it localizes. Our next goal was to design chimera sequences with optimal localization. Localization is considered optimal if it is at or above the level of CsChrimR, the best localizing parent, which is more than adequate for in vivo applications using ChR functionality to control neuronal activity [28]. A regression model for ChR plasma membrane localization is required to predict sequences that have optimal levels of localization. We used the localization data from the training and exploration sets to train a GP regression model (Fig 4A). The diversity of sequences in the training data allows the model to generalize well to the remainder of the recombination library. For this regression model, we do not use all of the features from the combined sequence and structure information; instead, we used L1 linear regression to select a subset of these features. The L1 linear regression identifies the sequence and structural features that most strongly influence ChR localization. Using this subset of features instead of all of the features improved the quality of the predictions (as determined by cross-validation). This indicates that not all of the residues and residue-residue contacts have a large influence on localization of ChR. We then used a kernel based on these chosen features (specific contacts and residues) for GP regression. The regression model for localization showed strong predictive ability as indicated by the strong correlation between predicted and measured localization for LOO cross-validation (correlation coefficient, R > 0.76) (Fig 4A). This was further verified by the strong correlation between predicted and measured values for the previously-discussed verification set (R > 0.9) (Fig 4A). These cross-validation results suggest that the regression model can be used to predict chimeras with optimal localization. We used the localization regression model to predict ChR chimeras with optimal localization using the Lower Confidence Bound (LCB) algorithm, in which the predicted mean minus the predicted standard deviation (LB1) is maximized [37]. The LCB algorithm maximally exploits the information learned from the training set by finding sequences the model is most certain will be good localizers. The regression model was used to predict the localization level and standard deviation for all chimeras in the library, and from this the LB1 was calculated for all chimeras (Fig 4B). We selected four chimeras whose LB1 predictions for localization were ranked in the top 0.1% of the library (S4 Fig). These were constructed and tested (Fig 3 and S6 Fig and S1 Data). Measurements showed that they all localize as well as or better than CsChrimR (Fig 3 and Fig 4A and Table 1). Cell population distributions of the optimal set show properties similar to the CsChrimR parent, with one chimera showing a clear shift in the peak of the distribution towards higher levels of localization (S7 Fig). These four sequences differ from CsChrimR at 30 to 50 amino acids (S4 Fig).
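A sketch of the LB1 ranking step is shown below, assuming a fitted regression model gp that returns a predicted mean and standard deviation (as scikit-learn's GaussianProcessRegressor does); the top-fraction cut-off mirrors the 0.1% threshold used above.

```python
import numpy as np

def rank_by_lower_confidence_bound(gp, X_library, top_fraction=0.001):
    """Rank library chimeras by LB1 = predicted mean - predicted standard deviation.

    Maximizing LB1 is a purely exploitative choice: it favors chimeras the model
    is confident will localize well. Returns the indices of the top fraction.
    """
    mean, std = gp.predict(X_library, return_std=True)
    lb1 = mean - std
    n_top = max(1, int(top_fraction * len(lb1)))
    order = np.argsort(lb1)[::-1]  # best LB1 first
    return order[:n_top], lb1
```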
We were interested in how predictive the GP localization models could be with fewer training examples. To assess the predictive ability of the GP models as a function of training set size, we sampled random sets of training sequences from the dataset, trained models on these random sets, then evaluated the model's performance on a selected test set (S8 Fig). As few as 100 training examples are sufficient for accurate predictions for both the localization regression and classification models. This analysis shows that the models would have been predictive with even fewer training examples than we chose to use.
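The training-set-size analysis can be emulated along the following lines: repeatedly draw random training subsets of increasing size, fit the model, and score it on a fixed held-out test set. This is an illustrative sketch only; the subset sizes, the kernel, and the use of a correlation metric for the regression case are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def learning_curve(X_train, y_train, X_test, y_test,
                   sizes=(25, 50, 100, 150, 200), n_repeats=10, seed=0):
    """Mean and spread of test-set correlation as a function of training-set size."""
    rng = np.random.default_rng(seed)
    results = {}
    for n in sizes:
        correlations = []
        for _ in range(n_repeats):
            idx = rng.choice(len(y_train), size=n, replace=False)
            gp = GaussianProcessRegressor(kernel=1.0 * Matern(nu=2.5),
                                          alpha=1e-2, normalize_y=True)
            gp.fit(X_train[idx], y_train[idx])
            correlations.append(np.corrcoef(gp.predict(X_test), y_test)[0, 1])
        results[n] = (np.mean(correlations), np.std(correlations))
    return results
```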
Sequence and structure features that facilitate prediction of ChR expression and localization
In developing the GP regression model for localization, we used L1-regularized linear regression to identify a limited set of sequence and structural features that strongly influence ChR localization (Fig 4). These features include both inter-residue contacts and individual residues and offer insight into the structural determinants of ChR localization. To better gauge the relative importance of these features, L2-regularized linear regression was used to calculate the positive and negative feature weights, which are proportional to each feature's inferred contribution to localization. While not as predictive as the GP regression model because it cannot account for higher-order interactions between features, this linear model has the advantage of being interpretable.
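The two-stage interpretation scheme (L1 for feature selection, then L2-regularized weights on the surviving features) can be sketched as below; LassoCV and BayesianRidge are stand-ins chosen for illustration, with the regularization level picked by leave-one-out-style cross-validation as described in the Methods.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LassoCV

def weight_selected_features(X, y):
    """L1 regression selects features; Bayesian ridge regression then assigns signed weights.

    X: binary sequence/contact feature matrix; y: measured localization.
    Returns the indices of the retained features and their L2 (ridge) weights.
    """
    lasso = LassoCV(cv=len(y)).fit(X, y)        # cv=len(y) gives leave-one-out folds
    keep = np.flatnonzero(lasso.coef_ != 0.0)   # features with non-zero L1 weights
    ridge = BayesianRidge().fit(X[:, keep], y)  # default scikit-learn settings
    return keep, ridge.coef_
```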
When mapped onto the C1C2 structure, these features highlight parts of the ChR sequence and structural contacts that are important for ChR localization to the plasma membrane (Fig 5). Both beneficial and deleterious features are distributed throughout the protein, with no single feature dictating localization properties (Fig 5). Clusters of heavily weighted positive contacts suggest that structurally proximal CsChrimR residue pairs are important in the N-terminal domain (NTD), between the NTD and TM4, between TM1 and TM7, and between TM3 and TM7. CsChrimR residues at the extracellular side of TM5 also appear to aid localization, although they are weighted less than CheRiff residues in the same area. Beneficial CheRiff contacts and residues are found in the C-terminal domain (CTD), the interface between the CTD and TM5-6, and in TM1. C1C2 residues at the extracellular side of TM6 are also positively weighted for localization, as are C1C2 contacts between the CTD and TM3-4 loop. From the negatively weighted contacts, it is clear that total localization is harmed when CheRiff contributes to the NTD or the intracellular half of TM4 and when CsChrimR contributes to the CTD. Interestingly, positive contacts were formed between TM6 from C1C2 and TM7 from CheRiff, but when the contributions were reversed (TM6 from CheRiff, TM7 from C1C2) or if CsChrimR contributed TM6, strong negative weights were observed. Not surprisingly, the sequence and structure of optimal localizers predicted by GP regression (Fig 4) largely agree with the L2 weights (S9 Fig).
Using this strategy for model interpretation (L1 regression for feature selection followed by L2 regression), we can also weight the contributions of residues and contacts for ChR expression (S10 Fig and S11 Fig). There is some overlap between the heavily weighted features for ChR expression and the features for localization, which is expected because more protein expressed means more protein available for localization. For example, both expression and localization models seem to prefer the NTD from CsChrimR and the extracellular half of TM6 from C1C2, and both disfavor the NTD and the intracellular half of TM4 from CheRiff. While the heavily weighted expression features are limited to these isolated sequence regions, localization features are distributed throughout the protein. Moreover, the majority of heavily weighted features identified for expression are residues rather than contacts. This is in contrast to those weighted features identified for localization, which include heavily weighted residues and structural contacts. This suggests that sequence is more important in determining expression properties, which is consistent with the largely sequence-dependent mechanisms associated with successful translation and insertion into the ER membrane. In contrast, both sequence and specific structural contacts contribute significantly to whether a ChR will localize to the plasma membrane. Our results demonstrate that the model can 'learn' the features that contribute to localization from the data and make accurate predictions on that property.
Using the GP regression model to engineer novel sequences that localize
We next tested the ChR localization regression model for its ability to predict plasma-membrane localization for ChR sequences outside the recombination library. For this, we chose a natural ChR variant, CbChR1, that expresses in HEK cells and neurons but does not localize to the plasma membrane and thus is non-functional [28]. CbChR1 is distant from the three parental sequences, with 60% identity to CsChrimR and 40% identity to CheRiff and C1C2. We optimized CbChR1 by introducing minor amino acid changes predicted by the localization regression model to be beneficial for membrane localization. To enable measurement of CbChR1 localization with the SpyTag-based labeling method, we substituted the N-terminus of CbChR1 with the CsChrimR N-terminus containing the SpyTag sequence downstream of the signal peptide to make the chimera CsCbChR1 [36]. This block swap did not change the membrane localization properties of CbChR1 (Fig 6C). Using the regression model, we predicted localization levels for all the possible single-block swaps from the three library parents (CsChrimR, C1C2 and CheRiff) into CsCbChR1 and selected the four chimeras with the highest Upper Confidence Bound (UCB). These chimeras have between 4 and 21 mutations when compared with CsCbChR1. Unlike the LCB algorithm, which seeks to find the safest optimal choices, the UCB algorithm balances exploration and exploitation by maximizing the sum of the predicted mean and standard deviation.
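The only change relative to the LB1 ranking sketched earlier is the sign of the standard-deviation term; a minimal illustration, again assuming a fitted gp with a scikit-learn-style predict method:

```python
import numpy as np

def rank_by_upper_confidence_bound(gp, X_candidates, n_pick=4):
    """UCB = predicted mean + predicted standard deviation.

    Adding (rather than subtracting) the standard deviation trades some safety
    for exploration of less certain, potentially better candidates.
    """
    mean, std = gp.predict(X_candidates, return_std=True)
    ucb = mean + std
    return np.argsort(ucb)[::-1][:n_pick]
```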
The selected chimeras were assayed for expression, localization, and localization efficiency (S1 Data). One of the four sequences did not express; the other three chimeras expressed and had higher localization levels than CsCbChR1 (Fig 6B). Two of the three had localization properties similar to the CheRiff parent (Fig 6B). Images of the two best localizing chimeras illustrate the enhancement in localization when compared with CbChR1 and CsCbChR1 (Fig 6C and S12 Fig). This improvement in localization was achieved through single-block swaps from CsChrimR (17 and 21 amino acid mutations) (Fig 6A). These results suggest that this regression model can accurately predict minor sequence changes that will improve the membrane localization of natural ChRs.
Discussion
The ability to differentiate the functional properties of closely related sequences is extremely powerful for protein design and engineering. This is of particular interest for protein types that have proven to be more recalcitrant to traditional protein design methods, e.g. MPs. We show here that integral membrane protein expression and plasma membrane localization can be predicted for novel, homologous sequences using moderate-throughput data collection and advanced statistical modeling. We have used the models in four ways: 1) to accurately predict which diverse, chimeric ChRs are likely to express and localize at least as well as a moderately performing native ChR; 2) to design ChR chimeras with optimized membrane localization that matched or exceeded the performance of a very well-localizing ChR (CsChrimR); 3) to identify the structural interactions (contacts) and sequence elements most important for predicting ChR localization; and 4) to identify limited sequence changes that transform a native ChR from a non-localizer to a localizer.
Whereas 99% of the chimeras in the recombination library are predicted to express in HEK cells, only 5.6% are predicted to localize to the membrane at levels equal to or above the lowest parent (CheRiff). This result shows that expression is robust to recombination-based sequence alterations, whereas correct plasma-membrane localization is much more sensitive. The model enables accurate selection of the rare, localization-capable, proteins from the nearly 120,000 possible chimeric library variants. In future work we will show that this diverse set of several thousand variants predicted to localize serves as a highly enriched source of functional ChRs with novel properties.
Although statistical models generalize poorly as one attempts to make predictions on sequences distant from the sequences used in model training, we show that it is possible to train a model that accurately distinguishes between closely related proteins. The tradeoff between making accurate predictions on subtle sequence changes vs generalized predictions for significantly different sequences is one we made intentionally in order to achieve accurate predictions for an important and interesting class of proteins. Accurate statistical models, like the ones described in this paper, could aid in building more expressive physics-based models.
This work details the steps in building machine-learning models and highlights their power in predicting desirable protein properties that arise from the intersection of multiple cellular processes. Combining recombination-based library design with statistical modeling methods, we have scanned a highly functional portion of protein sequence space by training on only 218 sequences. Model development through iterative training, exploration, and verification has yielded a tool that not only predicts optimally performing chimeric proteins, but can also be applied to improve related ChR proteins outside the library. As large-scale gene synthesis and DNA sequencing become more affordable, machine-learning methods such as those described here will become ever more powerful tools for protein engineering offering an alternative to high-throughput assay systems.
Materials and methods
The design, construction, and characterization of recombination library chimeras is described in Bedbrook et al. [5]. Briefly, HEK 293T cells were transfected with purified ChR variant DNA using Fugene6 reagent according to the manufacturer's recommendations. Cells were given 48 hours to express before expression and localization were measured. To assay localization level, transfected cells were subjected to the SpyCatcher-GFP labeling assay, as described in Bedbrook et al. [36]. Transfected HEK cells were then imaged for mKate and GFP fluorescence using a Leica DMI 6000 microscope (for cell populations) or a Zeiss LSM 780 confocal microscope (for single cells: S12 Fig). Images were processed using custom image processing scripts for expression (mean mKate fluorescence intensity) and localization (mean GFP fluorescence intensity). All chimeras were assayed under identical conditions.
For each chimera, net hydrophobicity was calculated by summing the hydrophobicity of all residues in the TM domains. The C1C2 crystal structure was used to identify residues within TM domains (S2B Fig), and the Kyte & Doolittle amino acid hydropathicity scale [38] was used to score residue hydrophobicity.
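The hydrophobicity score is simply a sum of Kyte & Doolittle hydropathy values over the transmembrane positions; a small sketch follows (the TM position list would come from the C1C2 structure, as stated above).

```python
# Kyte & Doolittle hydropathy values for the 20 standard amino acids.
KYTE_DOOLITTLE = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
    "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5,
}

def net_tm_hydrophobicity(sequence, tm_positions):
    """Sum the hydropathy of the residues at the transmembrane (TM) positions."""
    return sum(KYTE_DOOLITTLE[sequence[i]] for i in tm_positions)
```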
GP modeling
Both the GP regression and classification modeling methods applied in this paper are based on work detailed in [26]. Romero et al. applied GP models to predict protein functions and also defined protein distance using a contact map. We have expanded on this previous work. Regression and classification were performed using open-source packages in the SciPy ecosystem [39][40][41]. Below are specifics of the GP regression and classification methods used in this paper. The hyperparameters and the form of the kernel were optimized using the Bayesian method of maximizing the marginal likelihood of the resulting model.

GP regression. In regression, the problem is to infer the value of an unknown function f(x) at a novel point $x_*$ given observations $\mathbf{y}$ at inputs $X$. Assuming that the observations are subject to independent identically distributed Gaussian noise with variance $\sigma_n^2$, the posterior distribution of $f_* = f(x_*)$ for Gaussian process regression is Gaussian with mean

$$\bar{f}_* = \mathbf{k}_*^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{y}$$

and variance

$$\mathbb{V}[f_*] = k(x_*, x_*) - \mathbf{k}_*^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{k}_*,$$

where

1. $K$ is the symmetric, square covariance matrix for the training set, where $K_{ij} = k(x_i, x_j)$ for $x_i$ and $x_j$ in the training set.

2. $\mathbf{k}_*$ is the vector of covariances between the novel input and each input in the training set, where $k_{*i} = k(x_*, x_i)$.
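These posterior formulas translate directly into a few lines of linear algebra; the sketch below is a generic implementation of the equations above, not the authors' code.

```python
import numpy as np

def gp_posterior(K, k_star, k_star_star, y, sigma_n):
    """Posterior mean and variance of f* at a novel input x*.

    K: training covariance matrix, k_star: covariances between x* and each
    training input, k_star_star: k(x*, x*), y: observations, sigma_n: standard
    deviation of the i.i.d. Gaussian observation noise.
    """
    A = K + sigma_n ** 2 * np.eye(len(y))
    mean = k_star @ np.linalg.solve(A, y)
    variance = k_star_star - k_star @ np.linalg.solve(A, k_star)
    return mean, variance
```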
We found that results could be improved by first performing feature selection with L1-regularized linear regression and then only training the GP model on features with non-zero weights in the L1 regression. The hyperparameters in the kernel functions, the noise hyperparameter $\sigma_n$, and the regularization hyperparameter were determined by maximizing the log marginal likelihood

$$\log p(\mathbf{y} \mid X) = -\tfrac{1}{2}\, \mathbf{y}^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{y} - \tfrac{1}{2} \log \lvert K + \sigma_n^2 I \rvert - \tfrac{n}{2} \log 2\pi,$$

where n is the dimensionality of the inputs.

GP classification. In binary classification, instead of continuous outputs $\mathbf{y}$, the outputs are class labels $y_i \in \{+1, -1\}$, and the goal is to use the training data to make probabilistic predictions $\pi(x_*) = p(y_* = +1 \mid x_*)$. Unfortunately, the posterior distribution for classification is analytically intractable. We use Laplace's method to approximate the posterior distribution. There is no noise hyperparameter in the classification case. Hyperparameters in the kernels are also found by maximizing the marginal likelihood.
GP kernels for modeling proteins. Gaussian process regression and classification models require kernel functions that measure the similarity between protein sequences. A protein sequence s of length l is defined by the amino acid present at each location. This information can be encoded as a binary feature vector $x_{se}$ that indicates the presence or absence of each amino acid at each position. The protein's structure can be represented as a residue-residue contact map. The contact map can be encoded as a binary feature vector $x_{st}$ that indicates the presence or absence of each possible contacting pair. The sequence and structure feature vectors can also be concatenated to form a sequence-structure feature vector.
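A possible encoding of these feature vectors is sketched below; the exact feature construction (in particular, how contact identities are enumerated) is an assumption made for illustration.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sequence_features(seq):
    """Binary vector x_se: presence/absence of each amino acid at each aligned position."""
    x = np.zeros((len(seq), len(AMINO_ACIDS)), dtype=np.int8)
    for pos, aa in enumerate(seq):
        x[pos, AMINO_ACIDS.index(aa)] = 1
    return x.ravel()

def structure_features(seq, pairs, pair_identities):
    """Binary vector x_st: presence/absence of each possible contacting residue pair.

    pairs: contacting positions from the C1C2 structure; pair_identities: for each
    contact, the residue-pair combinations observed among the parents (hypothetical format).
    """
    bits = []
    for (i, j), combos in zip(pairs, pair_identities):
        bits.extend(int((seq[i], seq[j]) == combo) for combo in combos)
    return np.array(bits, dtype=np.int8)

def sequence_structure_features(seq, pairs, pair_identities):
    """Concatenated sequence-structure feature vector used by the kernels."""
    return np.concatenate([sequence_features(seq),
                           structure_features(seq, pairs, pair_identities)])
```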
We considered three types of kernel functions $k(s_i, s_j)$: linear kernels, squared exponential kernels, and Matérn kernels. The linear kernel is defined as

$$k(s_i, s_j) = \sigma_p^2 \, x_i^{\top} x_j,$$

where $\sigma_p$ is a hyperparameter that determines the prior variance of the fitness landscape. The squared exponential kernel is defined as

$$k(s_i, s_j) = \sigma_p^2 \exp\!\left(-\frac{|x_i - x_j|_2^2}{2 l^2}\right),$$

where $l$ and $\sigma_p$ are also hyperparameters and $|\cdot|_2$ is the L2 norm. Finally, the Matérn kernel with $\nu = 5/2$ is defined as

$$k(s_i, s_j) = \left(1 + \frac{\sqrt{5}\,|x_i - x_j|_2}{l} + \frac{5\,|x_i - x_j|_2^2}{3 l^2}\right) \exp\!\left(-\frac{\sqrt{5}\,|x_i - x_j|_2}{l}\right),$$

where $l$ is once again a hyperparameter.

L1 regression feature identification and weighting. To identify those contacts in the ChR structure most important in determining chimera function (here, localization), we used L1 regression. Given the nature of our library design and the limited set of chimeras tested, there are certain residues and contacts that covary within our training set. The effects of these covarying residues and contacts cannot be isolated from one another using this data set and therefore must be weighted together for their overall contribution to ChR function. By using the concatenated sequence and structure binary feature vector for the training set, we were able to identify residues and contacts that covary. Each individual set of covarying residues and contacts was combined into a single feature. L1 linear regression was then used to weight features as either zero or non-zero in their contribution to ChR function. The level of regularization was chosen by LOO cross-validation. We then performed Bayesian ridge linear regression on features with non-zero L1 regression weights using the default settings in scikit-learn [42]. The Bayesian ridge linear regression weights were plotted onto the C1C2 structure to highlight positive and negative contributions to ChR localization (Fig 5) and ChR expression (S11 Fig).

Supporting information

S1 Data. Localization and expression characterization of ChR chimeras predicted by the models. Measured localization and expression properties for each chimera tested and the associated chimera_name, set, number of mutations, chimera_block_ID, and sequence. Chimera names and chimera_block_ID begin with either 'c' or 'n' to indicate the contiguous or non-contiguous library. The following 10 digits in the chimera_block_ID indicate, in block order, the parent that contributes each of the 10 blocks ('0': CheRiff, '1': C1C2, '2': CsChrimR). For the contiguous library, blocks in the chimera_block_ID are listed from N- to C-termini; for the non-contiguous library the block order is arbitrary. The set for which the chimera was generated is listed, as is the number of mutations (m) from the closest parent. Sequences list only the ChR open reading frame; the C-terminal trafficking and mKate2.5 sequences have been removed. The table shows mean properties (mKate_mean, GFP_mean, and intensity_ratio_mean) and their standard deviations (mKate_std, GFP_std, and intensity_ratio_std). ND: not detected, below the limit of detection for our assay.

[Supplementary figure caption fragment: block identity of chimeras from each set, ranked by localization performance with the best-ranking chimera at the top; 'high' and 'low' indicate a high vs low predicted probability of localization; each row represents a chimera, and the three colors represent blocks from the three parents (red: CsChrimR, green: C1C2, blue: CheRiff).]
[Supplementary figure caption fragments: for each chimera, the number of mutations from the nearest parent and from the nearest previously tested library chimera are shown. The heavily weighted localization features (Fig 5) are displayed on the C1C2 crystal structure colored by the block design of two chimeras from the optimization set, (A) n1_7 and (B) n4_7; features can be residues (spheres) or contacts (sticks), colored by contributing parent (red: CsChrimR, green: C1C2, blue: CheRiff; gray: conserved positions), with sticks connecting the beta carbons of contacting residues (alpha carbon for glycine) and sphere size and stick thickness proportional to the parameter weights. A further caption, for the expression-feature figure (S11 Fig), repeats the same display conventions, including the red-above-green-above-blue color priority for features present in two parents, single- versus multi-color contacts for residues from the same or different parents, and labeling of the NTD, CTD, and TM1-7. (TIF)]
S12 Fig. Localization of engineered CbChR1 variant chimera 3c. Representative cell confocal images of mKate expression and GFP-labeled localization of CsCbChR1 compared with the top-performing CsCbChR1 single-block-swap chimera (chimera 3c) and the top-performing parent (CsChrimR). CsCbChR1 shows weak expression and no localization, while chimera 3c expresses well and clearly localizes to the plasma membrane, as does CsChrimR. Gain was adjusted in CsCbChR1 images to show any low signal. Scale bar: 10 μm. (TIF)
|
v3-fos-license
|
2023-04-01T15:17:54.175Z
|
2023-03-29T00:00:00.000
|
257862921
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6694/15/7/2036/pdf?version=1680084517",
"pdf_hash": "7c2f36a4cfc4361dac091b956327405b0480924e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44245",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "f128c077ea800c1f10b6894c8976e60d36f074f5",
"year": 2023
}
|
pes2o/s2orc
|
Sensitization of Resistant Cells with a BET Bromodomain Inhibitor in a Cell Culture Model of Deep Intrinsic Resistance in Breast Cancer
Simple Summary
Cell culture models of cancer typically favor proliferative but therapy-sensitive cells because body-like selection pressures are absent. To address this limitation, we previously described a function-based selection strategy to model deep intrinsic resistance in cultures of triple-negative breast cancer cells. Our methods were designed to reveal therapy-resistant, adaptable cancer cells that opportunistically switch between quiescence and proliferation. To determine the validity of this approach in identifying noncytotoxic drugs that could inhibit highly resistant breast cancer cells, we used our novel cell culture model to evaluate a well-known BET bromodomain inhibitor, JQ1, which modulates the cancer epigenome. JQ1 has been found to inhibit resistant cancer cells in several cancer types, including breast cancer. Low-dose JQ1 inhibited the growth of highly adaptable/resistant breast cancer cells in our cell culture model. Our results support the validity of a cell culture-based approach for modeling cancer.
Abstract
We treated highly metabolically adaptable (SUM149-MA) triple-negative inflammatory breast cancer cells and their control parental SUM149-Luc cell line with JQ1 for long periods to determine its efficacy at inhibiting therapy-resistant cells. After 20 days of treatment with 1–2 µM of JQ1, which killed the majority of cells in the parental cell line, a large number of SUM149-MA cells survived, consistent with their pan-resistant nature. Interestingly, though, the JQ1 treatment sensitized resistant cancer cells in both the SUM149-MA and SUM149-Luc cell lines to subsequent treatment with doxorubicin and paclitaxel. To measure JQ1-mediated sensitization of resistant cancer cells, we first eradicated approximately 99% of relatively chemotherapy-sensitive cancer cells in culture dishes by long treatments with doxorubicin or paclitaxel, and then analyzed the remaining resistant cells for survival and growth into colonies. In addition, combination, rather than sequential, treatment with JQ1 and doxorubicin was also effective in overcoming resistance. Notably, Western blotting showed that JQ1-treated cancer cells had significantly lower levels of PD-L1 protein than did untreated cells, indicating that JQ1 treatment may reduce tumor-mediated immune suppression and improve the response to immunotherapy targeting PD-L1. Finally, JQ1 treatment with a low 62.5 nM dose sensitized another resistant cell line, FC-IBC02-MA, to treatment with doxorubicin and paclitaxel.
Introduction
Many breast cancers, such as inflammatory breast cancer (IBC) and triple-negative breast cancer (TNBC), do not adequately respond to currently offered therapies. TNBC lacks the targeted therapy options available for hormone receptor-positive and HER2-positive breast cancers, so treatment relies on cytotoxic chemotherapy. A promising development for TNBC (and possibly for IBC) is the availability of immune checkpoint therapies such as those targeting PD-L1, although the response rate with such therapies is low and the duration of response is short [1]. Breast cancer patients who do not achieve a pathological complete response to neoadjuvant therapy carry minimal residual disease (MRD), which is often treatment-resistant, and are more likely to relapse [2][3][4]. To improve patient outcomes, we are investigating strategies that could decrease the probability of disease recurrence.
A major cause of therapy resistance, leading to an early relapse in triple-negative inflammatory breast cancer (TN-IBC), is deep intrinsic resistance in cancer cells. To meet the goal of advancing noncytotoxic broadly active therapeutic agents to overcome deep intrinsic resistance in a timely manner (prior to relapse), we have developed a cell culture-based approach for modeling deep intrinsic resistance. In this regard, with cancer progression being an evolution-like process [5][6][7][8][9], we select highly adaptable rare cells (0.01% subpopulation in a TN-IBC cell line) that are fit to survive in the body [10,11]. In the absence of glutamine, 0.01% of cells survive in quiescence for several weeks and then proliferate indefinitely. In support of the validity of this cell culture-based approach for modeling deep intrinsic resistance, we have found that IBC-derived TNBC cell lines (SUM149 and FC-IBC02) are a better source of highly adaptable cancer cells than non-IBC-derived TNBC cell lines (i.e., MDA-231 and its metastatic variants cultured from bone metastases in nude mice). This is evident from our observations that adaptable cells derived from IBC cell lines proliferate indefinitely in a glutamine-deficient medium, while adaptable cells derived from non-IBC cell lines fail to do so [10,11]. This result in cell culture mirrors the higher adaptability (and therapy resistance) of IBC than non-IBC in vivo.
We refer to the adaptable cancer cells selected in this manner as metabolically adaptable (MA) cells. SUM149-MA cells can also survive other metabolic challenges, such as a lack of glucose or oxygen, better than the parental cell line [10,11]. They are resistant to the chemotherapeutic drugs paclitaxel and doxorubicin and to several targeted therapies that inhibit cell proliferation [12]. Importantly, SUM149-MA cells are highly tumorigenic in nude mice, efficiently metastasizing to multiple organs such as the skin, lungs, and brain [11]. Molecular characterization has shown that they have activation of pathways that promote epithelial-to-mesenchymal transition (EMT) (e.g., high ZEB1, high SNAIL1, low GRHL2) and numerous changes affecting the epigenome (e.g., low TET2), alternative RNA splicing (e.g., low ESRP1, ESRP2), and RNA base modifications (e.g., high FTO) [12][13][14][15]. These characteristics suggest that body-like selection pressures select highly adaptable cancer cells that may drive therapy resistance. In the context of our cell culture model, genetic changes present in MA cells may cooperate with alterations in the epigenome, transcriptome, and proteome to provide different selection advantages over time and to confer deep intrinsic resistance.
Although SUM149-MA cells exhibit some useful features of poor-prognosis MRD, the problems inherent to cell culture still exist. The selection pressures imposed by the body occur one after another, a sequence that is not feasible to emulate in cell culture. Once a given challenge has been overcome in cell culture, proliferative cells, irrespective of their inherent adaptability, will have an advantage. To address this issue at a practical level, we rely on another evolution-related concept that is well-accepted in cancer research: the inverse correlation between cell proliferation and cell adaptability in biological systems. This concept also fits well with the cancer stem cell concept, wherein a small subpopulation of progenitor-like cancer cells (residing mostly in quiescence) drives the disease. Therefore, when we assay the efficacy of therapeutic drugs in cell culture, we assume that highly proliferative cancer cells are the first to be eliminated. Thus, by eliminating the vast majority of less-resistant cancer cells, we can reveal the resistant cells, which survive by switching to quiescence (a measurable phenotype and a characteristic of poor-prognosis MRD).
To provide a proof of concept for the usefulness of our cell culture model for evaluating cancer therapies, in the present study we evaluated a well-known BET bromodomain inhibitor, JQ1, which has been shown to favorably modulate the cancer epigenome to overcome therapy resistance in many leukemias and solid cancers [16][17][18][19][20][21]. Preclinical studies suggest that JQ1 may overcome resistance to other therapies in estrogen receptor-positive and triple-negative breast cancers [22,23]. The differentiation-state plasticity that often drives therapy resistance can be targeted by JQ1, which prevents changes to open chromatin architecture in basal-like breast cancer [24]. Briefly, the BET family proteins (BRD2, BRD3, BRD4, and BRDT) bind to chromatin via an interaction between their bromodomain motifs and acetylated lysine residues on histone tails and direct the assembly of macromolecular complexes involved in DNA replication, DNA damage repair, chromatin remodeling, and transcription. JQ1, a thienotriazolodiazepine, competitively binds to acetyl-lysine recognition motifs (bromodomains) on BET proteins, thus displacing them from chromatin [18]. Interestingly, JQ1 treatment may also enhance antitumor immunity, which involves changes in the expression of PD-L1 and PD-1 in tumor cells and immune cells [25,26]. Thus, the purpose of this study was to use our novel cell culture model of resistant MRD to assess the efficacy of JQ1 against metabolically adaptable TN-IBC cells and its immune-modulatory effects in these cells via PD-L1.
Cell Lines and Drugs
The resistant TN-IBC cell line used in this study was SUM149-MA, which was derived from a firefly luciferase-transfected SUM149 cell line (SUM149-Luc) [27]. We obtained the SUM149 cell line as a gift from Stephen Ethier (then at Barbara Ann Karmanos Cancer Institute, Detroit, MI, USA). The generation and characterization of these cell lines and their culture conditions were previously described [11,27]. We chose to use a luciferase-transfected cell line because it allowed us to image tumor growth and metastases from xenografts in nude mice in previous studies [11,27]. The resistant TN-IBC cell line FC-IBC02-MA, which was derived from the FC-IBC02 cell line [28], was also used. The generation and characterization of the FC-IBC02-MA cell line were previously described [10,14]. Since the initial selection of MA cell lines was performed using a glutamine-deficient medium, dialyzed fetal bovine serum (FBS) was used instead of regular FBS to further reduce the glutamine levels in the culture medium. Therefore, for consistency, all experiments for the present study were performed using media containing dialyzed FBS, even when the medium was supplemented with glutamine (as was carried out in parental cell lines and MA cell lines after the initial selection in the absence of glutamine).
We purchased JQ1 from APExBIO (Houston, TX, USA) and paclitaxel and doxorubicin from Sigma-Aldrich (St. Louis, MO, USA). We dissolved JQ1, paclitaxel, and doxorubicin in dimethyl sulfoxide (DMSO). We added equal volumes of DMSO without drugs to the cultures of the control cells. The DMSO volume was equal to or less than 0.04% of the volume of the culture medium.
JQ1 Treatment of Resistant Cancer Cells
SUM149-MA and SUM149-Luc cells were treated with JQ1 at 1-2 µM for 20 days. The effects of these treatments on cell growth and morphology were frequently monitored under a microscope. We observed substantially higher growth inhibition and cell killing in the parental SUM149-Luc cell line than in resistant SUM149-MA cells. Owing to the abnormal morphologies of cells after this treatment, it was difficult to clearly distinguish the dead cells from the live cells. Therefore, to obtain a better idea about the survival and growth ability of cells remaining after the treatment, we switched the JQ1-treated cell cultures to a regular drug-free medium and allowed the cells to recover and grow for 5-6 days. At the end of this recovery period, the cultured cells were stained with crystal violet and photographed or scanned.
Assays of Relative Resistance of Cells to Paclitaxel and Doxorubicin
To determine whether treatment with JQ1 affected the sensitivity of the resistant SUM149-MA and SUM149-Luc cells to paclitaxel and doxorubicin, we treated SUM149-MA and SUM149-Luc cells with 1 µM of JQ1 or DMSO for 6 days, allowed the cells to recover for 9 days, and then passaged them. The cells were then treated for 5-6 days with predetermined concentrations of paclitaxel (5 nM) or doxorubicin (25 nM), which killed 99% of the proliferating cells (confirmed by microscopic examination). We then removed the chemotherapeutic drugs by changing the medium and let the surviving cells form colonies for 12-13 days. We stained the colonies with crystal violet and counted them. In addition to the sequential treatments with JQ1 followed by chemotherapeutic drugs, we also analyzed the effect of a combination therapy involving JQ1 plus doxorubicin compared to doxorubicin alone.
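To make the readout of this colony-formation assay concrete, the short Python sketch below shows how relative resistance and the fold-sensitization produced by JQ1 pretreatment could be summarized from colony counts. It is an illustrative calculation only; the counts and the assumption of equal cell numbers plated per dish are hypothetical and are not part of the published protocol.

```python
# Illustrative only: summarizing a colony-formation assay with hypothetical counts.
# Assumes the same number of cells was plated in every dish.

def surviving_fraction(colonies: float, cells_plated: float) -> float:
    """Fraction of plated cells that survived drug treatment and formed colonies."""
    return colonies / cells_plated

def fold_sensitization(colonies_no_pretreat: float, colonies_jq1_pretreat: float) -> float:
    """How many-fold fewer drug-resistant colonies arise after JQ1 pretreatment."""
    return colonies_no_pretreat / colonies_jq1_pretreat

cells_plated = 1e5          # hypothetical plating density
colonies_control = 250      # hypothetical: no JQ1 pretreatment
colonies_pretreated = 6     # hypothetical: JQ1 pretreatment before chemotherapy

print(surviving_fraction(colonies_control, cells_plated))         # e.g. 0.0025
print(fold_sensitization(colonies_control, colonies_pretreated))  # ~42-fold sensitization
```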
Western Blotting
We performed Western blotting to detect protein bands as enhanced chemiluminescence signals on X-ray films, as previously described [29]. We used anti-PD-L1 (catalog number 13684; Cell Signaling Technology, Danvers, MA, USA) and anti-HSP90 (catalog number 4875; Cell Signaling Technology) antibodies for protein detection. After the detection of PD-L1, the Western blot membranes were re-probed to detect HSP90, which served as an internal control for the normalization of protein loading. Each Western blot was performed at least twice.
JQ1 Resistance of SUM149-MA Cells
Evaluation of any drug in our cell culture model began by determining the appropriate dose range. Since epigenetic modulators are typically not cytotoxic drugs and require time to influence the phenotype, in pilot experiments we evaluated a broad range of doses to identify a dose that was noncytotoxic but influenced the phenotype (cell morphology, cell growth, cell death) over time, as observed by monitoring under a microscope. To compare the resistance to JQ1 of the SUM149-MA cells with that of parental SUM149-Luc cells, we treated cultures of both cell lines with different concentrations of JQ1 and monitored them under a microscope. We changed the drug medium as needed to remove floating dead cells from the culture dishes. Typically, when most of the cells were killed and about 1% of the cells were still attached to the dishes in the parental cell line, we shifted the cells to fresh medium without JQ1 and let the resistant cells recover and grow for a few days so that we could evaluate the relative resistance of these cells to JQ1 in both cultures.
In these experiments comparing JQ1 resistance in SUM149-MA and SUM149-Luc cells, we consistently observed considerably more surviving cells in JQ1-treated SUM149-MA cultures than in SUM149-Luc control cultures. Microscopic images of cells treated with 1 µM of JQ1 for 20 days are shown (Supplementary Figure S1). Figure 1 shows representative images of crystal violet-stained SUM149-Luc and SUM149-MA cells treated with 1 µM or 2 µM of JQ1 for 20 days, followed by 5 or 6 days of recovery in a drug-free medium. Crystal violet staining showed that SUM149-Luc cells treated with 1 µM of JQ1 had only about 10% of the cell mass of SUM149-MA cells with the same treatment. The proportion of surviving SUM149-Luc cells compared to SUM149-MA cells further decreased to about 1% with a JQ1 dose of 2 µM. This result is similar to the results obtained for most agents, including the chemotherapeutic drugs doxorubicin and paclitaxel, demonstrating the highly adaptable and resistant nature of SUM149-MA cells [10][11][12].
Figure 1.
Resistance of SUM149-MA cells to treatment with JQ1. SUM149-MA and parental SUM149-Luc cells were treated in parallel with 1 or 2 µM of JQ1 for 20 days (treatment killed most of the cells in the parental cell line) and then allowed to recover and grow in a drug-free medium for 5 or 6 days (as indicated) before being stained with crystal violet. Passage numbers: SUM149-Luc, 6 passages in a medium with dialyzed FBS and glutamine; SUM149-MA, 7 passages in a glutamine-free medium followed by 6 passages in a medium with dialyzed FBS and glutamine.
Sensitization of SUM149-MA Cells to Chemotherapeutic Drugs by Treatment with JQ1
Our main objective was to identify potential therapeutic agents that are noncytotoxic-and therefore suitable for early intervention at the MRD stage-and inhibit resistant cancer cells. Therefore, we investigated whether JQ1 treatment modifies resistant cancer cells toward a nonresistant phenotype. Specifically, we asked whether JQ1 treatment could render resistant cells sensitive to chemotherapeutic drugs, even without direct cell killing. To answer this question, we pretreated SUM149-MA cells with JQ1, allowed them to recover and grow in the absence of JQ1, and then compared their sensitivity to treatment with doxorubicin or paclitaxel with that of control MA cells not pretreated with JQ1. Figure 2 shows the relative resistance of SUM149-MA cells pretreated with 1 µM of JQ1 for 6 days, allowed to recover and grow for 9 days, and treated with 5 nM of paclitaxel or 25 nM of doxorubicin for 6 days (microscopic examination indicated that these treatments eradicated 99% of the cells in all dishes), followed by the recovery and growth of resistant cells into colonies for 12 days. We observed more than 250 colonies of different sizes among the control SUM149-MA cells not pretreated with JQ1. In contrast, SUM149-MA cells pretreated with JQ1 formed markedly fewer colonies after treatment with either paclitaxel or doxorubicin, and we observed only 5-6 colonies per dish (Figure 2). These findings demonstrated that treatment with JQ1, a known modulator of the epigenetic state, rendered the resistant cancer cells sensitive to commonly used chemotherapeutic agents.
Sensitization of SUM149-Luc Cells to Chemotherapeutic Drugs by Treatment with JQ1
Although SUM149-MA cells are highly adaptable and resistant, we recognize that the parental cell line SUM149 is one of the most resistant TNBC cell lines. Therefore, we also aimed to determine whether treatment with JQ1 sensitizes resistant cancer cells in the parental SUM149 cell line to chemotherapeutic drugs. To that end, we pretreated SUM149-Luc cells with JQ1 and then tested their sensitivity to paclitaxel and doxorubicin. Specifically, we pretreated the cells with 1 µM of JQ1 for 6 days, allowed them to recover and grow for 9 days, and then treated them with 5 nM of paclitaxel or 25 nM of doxorubicin (5 days of treatment followed by 13 days of recovery for both drugs). Based on the microscopic examination after treatment with chemotherapeutic drugs (prior to recovery), we observed that the treatments eradicated more than 99% of the cells, leaving behind only a few morphologically abnormal cells in total. We found that JQ1-pretreated SUM149-Luc cells formed dramatically fewer colonies than did non-pretreated control cells (15 versus 200 colonies per dish), demonstrating that the pretreatment sensitized SUM149-Luc cells to both chemotherapeutic drugs ( Figure 3).
It is noteworthy that we used somewhat different drug treatment and recovery times for SUM149-MA versus SUM149-Luc cells (Figures 2 and 3). We did this to optimize experimental conditions for a particular cell line. SUM149-MA cells arose from a subpopulation representing only approximately 0.01% of the parental cell line, which survived in quiescence and was reprogrammed to proliferate indefinitely [10,11]; therefore, they behave very differently from the parental cell line. Our main purpose was to determine whether JQ1 pretreatment sensitizes resistant cells.
Combination with JQ1 Enhanced Efficacy of Chemotherapeutic Drugs against Resistant Cancer Cells
In addition to evaluating JQ1 and chemotherapeutic drugs in a sequential manner, we also wanted to determine whether JQ1 treatment would be effective when used in combination with chemotherapeutic drugs. Combination therapy is commonly used to treat advanced-stage disease when there is an urgency to overcome therapy resistance as soon as possible. We treated both parental SUM149-Luc cells and resistant SUM149-MA cells with 100 nM of doxorubicin alone or in combination with 1 µM of JQ1 for 6 days. Microscopic examination indicated that these treatments eradicated 99% of the cells in all dishes. We then allowed the remaining resistant cells to recover and form colonies in drug-free medium for 11 days and stained them with crystal violet (Figure 4). A comparison of the stained cells showed that doxorubicin-treated SUM149-MA cells formed more colonies than did the parental cell line, as expected. Moreover, the addition of JQ1 along with doxorubicin eradicated substantially more resistant cells in both cell lines than did doxorubicin alone: SUM149-MA cells yielded more than 1000 colonies after doxorubicin treatment, and doxorubicin in combination with JQ1 reduced the number of colonies to fewer than 10 (Figure 4). Although 1 µM of JQ1 alone did not substantially inhibit the growth of SUM149-MA cells after 6 days of treatment (not shown), it sensitized the cells to doxorubicin-mediated cell killing.
Figure 4.
Parental SUM149-Luc and resistant SUM149-MA cells were treated with 100 nM of doxorubicin alone or in combination with 1 µM of JQ1 for 6 days and then allowed to recover in a drug-free medium for 11 days before being stained with crystal violet. A comparison of the number of colonies showed that the combination of JQ1 and doxorubicin eradicated more cancer cells in both cell lines than did doxorubicin alone. We also treated SUM149-MA cells with 5 nM of paclitaxel or 5 nM of paclitaxel plus 1 µM of JQ1, as indicated (right), for 6 days, and then allowed them to recover in a drug-free medium for 7 days before crystal violet staining. Passage numbers: passage 6 for both cell lines.
We similarly evaluated paclitaxel plus JQ1 combination therapy on resistant SUM149-MA cells. We treated SUM149-MA cells with 5 nM of paclitaxel alone or in combination with 1 µM of JQ1 for 6 days, and then allowed the remaining resistant cells to recover and grow in a drug-free medium for 7 days (Figure 4). These results suggest that the combination of paclitaxel and JQ1 caused more inhibition in SUM149-MA cells than paclitaxel alone. However, the inhibition was modest (approximately 50%) as compared to the experiment where cells were first treated with JQ1 and then with paclitaxel (compare Figures 2 and 4). The likely explanation is that the cytotoxic effects of paclitaxel created an unfavorable environment for JQ1 to optimally reprogram the epigenome in SUM149-MA cells, which may be needed for sensitization.
Decreased PD-L1 Expression in Cancer Cells upon Treatment with JQ1
There is significant clinical interest in immune checkpoint therapy targeting PD-L1 in TNBC. PD-L1 protein on the surface of tumor cells contributes to immune suppression, which promotes tumor progression. It has been shown that JQ1 treatment reduces the expression of PD-L1 in both cancer cells and immune cells (dendritic cells and macrophages), thereby inhibiting cancer progression [25,26]. Here, we investigated whether treatment with JQ1 decreased PD-L1 expression in SUM149-MA cells as part of the reprogramming of the epigenome toward a therapy-sensitive state in TNBC. Western blot analyses showed that 20-day treatment of SUM149-MA cells with 1 µM of JQ1 (followed by a 9-day recovery period) substantially decreased the level of PD-L1 protein (to about 60% that of untreated cells based on a comparison of band intensities; Figure 5, right panel, lane 3). A similar treatment of parental SUM149-Luc cells with JQ1 also substantially decreased the PD-L1 protein levels (to about 50% that of untreated cells; Figure 5, left panel). Since SUM149-MA cells were not as strongly affected by JQ1 treatment as the parental cell line was, we were also able to analyze the PD-L1 level immediately after JQ1 treatment, which showed a similar decrease as observed after a nine-day recovery (compare lanes 2 and 3, Figure 5, right panel). These results suggest that in addition to modulating resistant TN-IBC cells toward a therapy-sensitive state, JQ1 treatment reduced the expression of the immune-suppressive protein PD-L1.
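The percentage estimates above come from comparing band intensities after normalization to the HSP90 loading control. A minimal sketch of that normalization is shown below; the densitometry values are hypothetical placeholders, not the measured intensities from the blots.

```python
# Hypothetical densitometry values (arbitrary units) for illustration only;
# the actual comparison was made from band intensities on the Western blot films.
pdl1_signal  = {"untreated": 1000.0, "jq1_treated": 620.0}
hsp90_signal = {"untreated": 980.0,  "jq1_treated": 1010.0}  # loading control

def normalized_pdl1(sample: str) -> float:
    # Normalize PD-L1 to HSP90 to correct for unequal protein loading.
    return pdl1_signal[sample] / hsp90_signal[sample]

relative = normalized_pdl1("jq1_treated") / normalized_pdl1("untreated")
print(f"PD-L1 after JQ1 is ~{relative:.0%} of the untreated level")  # ~60% in this example
```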
Sensitization of FC-IBC02-MA Cells to Chemotherapeutic Drugs by Treatment with JQ1
To test whether low-dose JQ1 would affect resistant cells modeled from other TN-IBC cell lines, we used highly chemo-resistant FC-IBC02-MA cells that had been selected in a similar manner as SUM149-MA cells [10,14]. Evaluating a range of drug concentrations, we first observed that FC-IBC02 and FC-IBC02-MA cells were significantly more sensitive to JQ1 treatment than SUM149 and SUM149-MA cells. We chose a relatively low dose (62.5 nM) of JQ1 to investigate whether it would sensitize resistant cells to chemotherapeutic drugs. This dose appeared to be relatively non-cytotoxic since a treatment for six days caused only a modest growth inhibition (less than 50%) and JQ1-treated cells quickly recovered to healthy proliferating cells upon shifting to the medium without JQ1 (Supplementary Figure S4). We found that pretreatment with 62.5 nM of JQ1 sensitized FC-IBC02-MA cells to the chemotherapeutic drugs paclitaxel and doxorubicin. Compared to the control cells, JQ1-pretreated cells yielded a dramatically lower number of cells/colonies (10-20% that of the control) after treatment with chemotherapeutic drugs (Figure 6).
Discussion
A common concern about any in vitro system, including our cell culture model, is that it is devoid of the tumor microenvironment present in the body, particularly the immune system. In this era of immunotherapy, we consider this issue very important and have approached it in several ways. First, we considered the mechanisms that render immune checkpoint blockade ineffective in cancers where such therapies do not work. There is overwhelming evidence in the literature demonstrating that cancer cells with deep intrinsic resistance, such as that modeled in our system, have several adaptive traits, such as high EMT, that enable them to evade immunity during progression and at the time immune checkpoint therapies are typically offered [30,31]. A recent study has shown that quiescent cancer cells play an important role in resisting T-cell attacks in TNBC [32]. Therefore, a good way to improve the response to immune therapies would be through therapeutic targeting of deep intrinsic resistance, and a cell culture approach such as ours could possibly help in this task.
Second, an ideal way to mitigate the limitation of the missing immune system in a cell culture model is to complement this approach by limiting the evaluation of potential therapeutic agents to those that have desirable characteristics in animal models and humans. For example, there is a tremendous amount of information available from the clinical use of therapies in autoimmune diseases and from past clinical trials in cancer that may point to noncytotoxic compounds that do not harm the immune system and/or modulate it toward a healthier state. If such agents inhibit deep intrinsic resistance in our model, this information, considered in the context of all relevant information from animal models and humans, may justify clinical trials involving combinations of immune checkpoint blockade and other agents for halting/delaying relapse in high-risk breast cancers such as IBC and TNBC. As a good example of this strategy, we have recently reported that the ribonucleoside analog 6-mercaptopurine, which is extensively used for treating autoimmune diseases, inhibited resistant breast cancer cells in our model system [14,15]. Specifically, in choosing JQ1 for this study, we also considered positive results from various studies in animal models and clinical trials showing its efficacy as a single agent and in combination with various therapeutic agents [16][17][18][19][20][21][22][23][24][25][26]. In summary, we believe it is important to use the unique strengths of in vitro and in vivo approaches in a complementary manner to improve drug discovery.
Another important issue is how to interpret the data obtained for JQ1 in our model in a clinical context, i.e., its potential relevance at the MRD stage prior to relapse. Our results are promising because JQ1 treatment sensitized resistant cancer cells to chemotherapeutic drugs. This type of sensitization observed in vitro implies similar sensitization to chemotherapeutic drugs in vivo, which is supported by the results obtained with JQ1 evaluation in animal models of different cancers [16][17][18][22][23]. Further, the cancer cells that are sensitive to chemotherapeutic drugs may also be susceptible to elimination by other selection pressures in the body, such as those imposed by the immune system. Another promising aspect is that we did not see an emergence of resistant clones after several weeks of treatment with JQ1 in cell culture. For agents such as JQ1 to be considered for clinical evaluation, a more rigorous assessment may involve evaluating even lower doses over even longer periods in cell culture to determine whether similar results are obtained.
What does the finding that JQ1 treatment reduced PD-L1 expression in cultured cancer cells mean in a clinical context? A lower PD-L1 level in a tumor implies lower immune suppression, which is desirable. We recognize that the relationship between tumors and the immune system varies according to the stage of the disease: the immune system generally functions well when the tumor is at an early stage, but immunity is suppressed in advanced-stage disease. Perhaps this is why it has not been possible to optimize anti-PD-L1 immune checkpoint blockade by aligning it with PD-L1 expression in tumors [1,33]. At this time, PD-L1 antibody therapy is offered to many patients with TNBC, irrespective of their tumor PD-L1 status. Future studies should address whether a combination of JQ1 and immune checkpoint blockade would inhibit metastasis from SUM149-MA-like cancer cells in an immunocompetent mouse model.
Regarding the mechanism of JQ1's effect on the PD-L1 protein level, there is evidence that JQ1 inhibits BRD4 binding on the PD-L1 promoter, thus inhibiting its expression [25,26]. However, since we have not investigated the mechanism(s) of JQ1 action in our system, there are several possibilities. Since there is a high degree of heterogeneity among MA cells, JQ1's effect could be mediated through epigenetic changes in all cells or some specific subpopulations of cells. It is also possible that JQ1 does not directly affect the PD-L1 level, but instead a long 20-day treatment causes a shift in subpopulations with high versus low PD-L1 levels (i.e., the cells surviving this treatment have a pre-existing low PD-L1 level). Our goal was not to investigate the mechanisms of JQ1 action, but to investigate through function-based approaches whether JQ1 has the potential to overcome deep intrinsic resistance in our model. We believe MA cells can serve as a good model for investigating the mechanisms of therapy response versus resistance upon treatment with JQ1 and other clinical-grade bromodomain inhibitors in the context of adaptable cancer cell states in the future.
Conclusions
Our results obtained in a cell culture model of deep intrinsic resistance, featuring highly adaptable TNBC cells that opportunistically switch between quiescence and proliferation, suggest that the BET bromodomain inhibitor JQ1 may be useful for sensitizing resistant breast cancer cells at the MRD stage in patients with breast cancer at high risk of recurrence. Our results are consistent with the results obtained with JQ1 in other model systems of therapy resistance, representing several different cancers, including breast cancer. These results support the usefulness of modeling a realistic resistance phenotype in cancer by incorporating body-like selection pressures in cell culture, which can be used for evaluating noncytotoxic agents that may be suitable for use before the disease advances to metastasis.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers15072036/s1, Figure S1: Resistance of SUM149-MA cells to treatment with JQ1. Figure S2: Uncropped Western blots used for Figure 5. Figure S3: An additional PD-L1 Western blot similar to the one shown in Figure 5. Figure
Imaging mRNA Expression in Live Cells via PNA·DNA Strand Displacement-Activated Probes
Probes for monitoring mRNA expression in vivo are of great interest for the study of biological and biomedical problems, but progress has been hampered by poor signal to noise and the lack of effective means for delivering the probes into live cells. Herein we report a PNA·DNA strand displacement-activated fluorescent probe that can image the expression of iNOS (inducible nitric oxide synthase) mRNA, a marker of inflammation. The probe consists of a fluorescein-labeled antisense PNA annealed to a shorter DABCYLplus-labeled DNA which quenches the fluorescence, but when the quencher strand is displaced by the target mRNA the fluorescence is restored. DNA was used for the quencher strand to facilitate electrostatic binding of the otherwise neutral PNA strand to a cationic shell crosslinked knedel-like (cSCK) nanoparticle, which can deliver the PNA·DNA duplex probe into cells with less toxicity and greater efficiency than other transfection agents. RAW 264.7 mouse macrophage cells transfected with the iNOS PNA·DNA probe via the cSCK showed a 16 to 54-fold increase in average fluorescence per cell upon iNOS stimulation. The increase was 4 to 7-fold higher than that for a non-complementary probe, thereby validating the ability of a PNA·DNA strand displacement-activated probe to image mRNA expression in vivo.
Introduction
There has been great interest in developing real-time fluorescent imaging agents for mRNA expression in vivo that are based on antisense oligodeoxynucleotides and analogs [1][2][3]. There are two main problems in getting such systems to work well. The first is to deliver the agents efficiently into the cytoplasm, and the second is to minimize background signal from unbound probe. The main problem with getting nucleic acids and analogs into the cytoplasm is that they are membrane impermeable, thereby requiring the use of a physical, chemical, or biochemical device or agent [4]. Many mRNA-imaging studies have used microinjection, electroporation, or pore-forming agents such as streptolysin O (SLO), but such agents would be unsuitable for in vivo work. Others have made use of cell-penetrating peptides or transfection agents, but these often result in endocytosis and trapping of the probe in endosomes, which reduces the amount of probe in the cytoplasm and can lead to nonspecific background signal. To reduce the background signal from unbound probe, probes have been designed to emit fluorescence only in the presence of target mRNA by a variety of strategies. Among these are molecular beacons, binary and dual FRET probes, strand-displacement probes, quenched autoligating probes, FIT-probes, and nucleic-acid-triggered probe activation [5,6].
One general approach to activatable probes makes use of a fluorophore-quencher pair, typified by the molecular beacon strategy [5,7]. Molecular beacons consist of a fluorescent molecule and a quencher that are conjugated to both ends of an antisense nucleic acid sequence which may or may not have a short complementary stem. When free in solution, the fluorophore component is quenched by either FRET, in which case the energy of the excited fluorophore is transferred to a quencher by a through-space mechanism [8]; or by "contact quenching," in which a fluorophore and a quencher are close enough that they can form a nonfluorescent complex [9]. Upon binding to the target RNA, the fluorophore and quencher are physically held apart by duplex formation, and fluorescence is restored. While this is an elegant system, it suffers from background fluorescence due to nonspecific binding events that lead to separation of the fluorophore and the quencher. A bimolecular version of a molecular beacon, often referred to as a strand-displacement probe, makes use of an antisense oligonucleotide conjugated to a fluorescent probe that is annealed to a shorter complementary oligonucleotide conjugated to a quencher [10][11][12] (Figure 1(a)). In this system, the duplex region is much longer and much more stable than the generally short duplex stem used in molecular beacons. Despite the high stability, rapid strand exchange can take place because the short section of single strand on the probe strand can hybridize to the target RNA and facilitate the thermodynamically favorable displacement of the quencher strand through branch migration. The rate of strand displacement depends on the length of this single-stranded "toehold," while the extent of reaction depends on the difference in length between the fluorescent and quenching probes [13]. The larger the difference, the longer the unpaired toehold, the faster the target mRNA displaces the shorter strand, and the more complete the displacement.
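The phrase "thermodynamically favorable displacement" can be made quantitative with a simple two-state estimate: if the probe·target duplex is more stable than the probe·quencher duplex by a free-energy difference ΔΔG, the exchange equilibrium constant is roughly K = exp(ΔΔG/RT), and the equilibrium fraction of quencher strands displaced (starting from equal concentrations of duplex probe and target) is √K/(1 + √K). The Python sketch below illustrates this back-of-the-envelope model; it is not a calculation taken from the cited studies, and the ΔΔG values are arbitrary examples.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def fraction_displaced(ddG_kcal_per_mol: float, temp_K: float = 310.0) -> float:
    """Equilibrium fraction of quencher strands displaced for
    probe-quencher + target <-> probe-target + quencher,
    starting with equal concentrations of probe-quencher duplex and free target.
    ddG_kcal_per_mol > 0 means the probe-target duplex is the more stable one."""
    K = math.exp(ddG_kcal_per_mol / (R * temp_K))  # exchange equilibrium constant
    s = math.sqrt(K)                               # from K = x^2 / (c - x)^2
    return s / (1.0 + s)

for ddG in (0.0, 2.0, 5.0):  # arbitrary stability differences in kcal/mol
    print(f"ddG = {ddG} kcal/mol -> {fraction_displaced(ddG):.3f} displaced")
```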
While there are numerous studies using molecular beacons for imaging of gene expression in vivo, there have only been a few reports of the use of strand-displacement probes. Hnatowich and coworkers constructed a probe from a 25-mer phosphorodiamidate morpholino (MORF) oligomer conjugated to a Cy5.5 and a complementary 18-mer cDNA conjugated to a BHQ3 quencher. They showed that this probe could image a complementary biotinylated 25-mer MORF oligomer immobilized on streptavidin polystyrene microspheres that were intramuscularly implanted into a mouse [14]. The same group also utilized a probe consisting of a 25-mer phosphorothioate DNA bearing Cy5.5 and a 10-mer complementary ODN with the BHQ3 quencher to image the KB-G2 tumor in mice which overexpresses the multi-drug-resistant mdr1 mRNA [15]. In another approach, Mirkin and coworkers developed "nanoflares" in which antisense ODNs to a target mRNA are conjugated to a gold nanoparticle and then hybridized to a shorter strand of complementary DNA bearing Cy5 which is quenched by the gold nanoparticle. When taken up by cells containing the target mRNA, the Cy5-bearing ODN becomes displaced, resulting in fluorescence activation [16]. In their design, however, the fluorescent reporter becomes displaced by the mRNA, making it unable to report on the location of the mRNA within the cell. DNA-based probes also suffer from premature intracellular degradation, which generates a high background signal.
All previous studies of strand displacement-activated probes have made use of either DNA, phosphorothioate, or phosphorodiamidate morpholino oligomers, and none have made use of PNA. PNAs have a number of properties that make them ideal for strand-displacement probe technology. They are very resistant to chemical and enzymatic degradation, bind with higher affinity to RNA than DNA, and are able to invade regions of RNA with secondary structure [17,18]. They also do not activate RNase H degradation of the target RNA and protect a complementary ODN from degradation. We have also shown that PNA·ODN duplexes can be efficiently delivered into cells by cationic-shell-crosslinked nanoparticles (cSCKs) (Figure 1(b)) through favorable electrostatic interactions, and remain highly bioactive [19,20]. The cSCKs are also much less cytotoxic and more efficient than the commonly used Lipofectamine.
To determine whether or not PNA·ODN hybrids delivered by a cSCK can be used as strand-displacement-activated fluorescent probes to monitor gene expression within living cells, we used iNOS as a model target system. iNOS is an important biomarker for inflammation and is greatly upregulated in response to environmental stimuli such as gamma interferon (γ-IFN) or lipopolysaccharide (LPS) [21,22]. We have also previously determined a number of antisense accessible sites on iNOS mRNA that could be used as target sites by a modified reverse transcriptase random oligonucleotide library PCR method [23]. Herein we show that PNA·ODN-strand-displacement-activated fluorescence probes can be used to monitor iNOS mRNA expression in living cells by confocal microscopy following delivery by cationic shell crosslinked knedel-like nanoparticles.
PNA-Fluorescein Synthesis and Purification.
A 23-mer PNA probe antisense to the bases starting at position 480 of iNOS mRNA (FAM-iNOS-PNA) and a control probe of the same length targeting the HeLa pLuc 705 splice correction site (FAM-pLuc-PNA) were synthesized on an Expedite 8909 DNA/PNA synthesizer. After removal of the Fmoc-protecting group at the amino end of the PNA, the resin was dried with nitrogen gas and was shaken overnight with 200 μL of 0.02 M FAM-NHS ester (2 eq) in DMSO, together with 2 eq DIPEA, at room temperature. The resin was then washed sequentially with DMF and DCM and dried under nitrogen. The PNA was then cleaved from the support with 250 μL of a TFA/m-cresol (4:1) mixture for 2-4 h. The cleavage mixture was separated from the support and the PNA precipitated by adding 1 mL cold diethyl ether and centrifuging for 10 min. The product was dried on a hot block at 55 °C and dissolved in water containing 0.1% TFA. The FAM-PNAs were purified by HPLC and characterized by MALDI mass spectrometry (see Supplementary Material available online at doi:10.1155/2012/962652), UV, and fluorescence spectroscopy. The overall yield for FAM-PNAs was about 5%.
DNA-DABCYL Synthesis and Purification.
Regular and 3′-end-modified ODNs were purchased from IDT Inc. and purified by HPLC. The 17-mer DNAs modified with an amino linker at the 3′-end (50 nmol) were shaken overnight with 10 eq of DABCYLplus-NHS ester in 10 mM Na2CO3/NaHCO3 buffer (adjusted to pH 8.5 with hydrochloric acid). The products were purified by gel electrophoresis on a 20% polyacrylamide gel. Bands containing the desired product were eluted with 0.5 M ammonium acetate, 10 mM MgCl2, 1 mM EDTA, and 0.1% SDS, precipitated with 3 volumes of ethanol, cooled to −20 °C for 30 min, and collected after centrifugation for 30 min. The DNA-DABCYLs were characterized by MALDI mass spectrometry and UV spectroscopy.
In Vitro mRNA Transcription.
The pCMV-SPORT6 vector containing the iNOS cDNA was purchased from American Type Culture Collection (ATCC, Manassas, VA). LB medium was inoculated with E. coli containing the vector and grown at 37 °C for 18 h, after which the plasmid was isolated from the E. coli by using a HiPure Plasmid Maxiprep kit (Invitrogen). The plasmid was then digested by XhoI (Promega) to form linear DNA, which was purified by phenol extraction and ethanol precipitation and was characterized by electrophoresis on a 1% agarose gel stained with ethidium bromide. The linear DNA was then transcribed into iNOS mRNA using the RiboMAX SP6 large-scale RNA transcription kit (Promega) following the manufacturer's protocol. The integrity of the iNOS mRNA was verified on a 1% w/v agarose gel. All aqueous solutions used in this process were prepared with diethylpyrocarbonate (DEPC)-treated water, and the mRNA was stored at −80 °C in water with 2 μL (80 U) of RNaseOUT recombinant RNase inhibitor (Invitrogen).
Quantitative RT-PCR to Quantify iNOS mRNA Copy Numbers in RAW 264.7 Cells.
RAW 264.7 cells were seeded on 10 mm Petri dish plates (Corning Inc., Lowell, MA) and grown until 70% confluence. Selected plates were then treated with 1 μg/mL LPS and 300 ng/mL γ-IFN for 18, 6, and 0 h (without LPS and γ-IFN), respectively. Cells in each plate were counted with a hemocytometer and spun down in a centrifuge. Total RNA from each sample was extracted with the TRIzol reagent (Invitrogen, CA) following the manufacturer's protocol and quantified by measuring UV absorbance at 260 nm. After treatment with Turbo DNase (RNase free), 0.5 μg of each total RNA sample was reverse-transcribed into cDNA using SuperScript II reverse transcriptase (Invitrogen), following the manufacturer's procedure. Briefly, 0.5 μg of each total RNA sample was mixed with 300 ng random primers and 1 μL dNTP mix (10 mM each) to make a solution of 12 μL. The mixture was incubated at 65 °C for 5 min and quickly chilled on ice. Then 4 μL of 5× first-strand buffer, 2 μL of 0.1 M DTT, and 1 μL of RNaseOUT were added, and the mixture was incubated at 25 °C for 2 min. Then 1 μL of the SuperScript II RT was added to the mixture, incubated at 25 °C for 10 min and then at 42 °C for another 50 min. The reaction was inactivated at 70 °C for 15 min, and the cDNA product was diluted 2500-fold for the RT-PCR reaction. To generate the cDNA standard, 0.5 μg of the mRNA prepared previously was reverse transcribed into cDNA using the same kit with exactly the same procedure. The resulting cDNA product was serially diluted by a factor of ten. The cDNAs and standards were then mixed with Power SYBR Green RT-PCR master mix (Invitrogen) and the RT-PCR was performed on a StepOnePlus real-time PCR system with the following profile: 1 cycle of 50 °C for 2 min, 95 °C for 15 min, then 40 cycles of 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 45 s. The primers used to amplify iNOS cDNA were d(TGGTGGTGACAAGCACATTT) and d(AAGGCCAAACACAGCATACC), and for the GAPDH cDNA, the primers were d(TGGAGAAACCTGCCAAGTATG) and d(GTTGAAGTCGCAGGAGACAAC). Each well contained 25 μL of reaction mixture including 2.5 μL forward primer, 2.5 μL reverse primer, 2.5 μL double-distilled water, 5 μL cDNA template, and 12.5 μL Power SYBR Green RT-PCR master mix. The threshold cycle (Ct) was automatically set by the machine. The standard-curve method was used to determine the absolute copy number of the iNOS mRNA in cells. The comparative Ct (ΔΔCt) method was used to calculate the relative
Design and Synthesis of the Strand-Displacement Probes.
The strand-displacement probes were designed to have a longer antisense PNA conjugated to the fluorophore and a shorter sense DNA conjugated to the quencher to ensure that the fluorophore-bearing PNA would both kinetically and thermodynamically favor hybridization to the target mRNA (Figure 2). We chose to image iNOS mRNA because it is a biomarker for inflammation that is dramatically elevated upon treatment of cells or tissue with γ-interferon and LPS (lipopolysaccharide). The PNA sequence used for the construction of the fluorescent probe was selected from a number of PNAs that we had previously demonstrated to bind to in vitro transcribed and endogenous iNOS mRNA, and to suppress iNOS expression in vivo [23]. The antisense-accessible sites on the iNOS mRNA were identified by an RT-ROL (reverse transcriptase-random oligonucleotide library) method that we had improved upon [25]. Transfection of selected PNA·ODN duplexes with Lipofectamine confirmed the ability of these PNAs to inhibit gene expression. In vitro binding assays with in vitro transcribed mRNA confirmed that a number of these sites bound both antisense ODNs and PNAs with high affinity [26]. From these, we chose the 23-mer PNA480 sequence that targets nucleotides 473-494 on iNOS mRNA. The specificity of the antisense iNOS 23-mer sequence was assessed by BLAST (basic local alignment search tool), which revealed that the next best mRNA targets, the nucleoredoxin-like protein 1-like and myosin VA (Myo5a) mRNAs, were complementary to only 14 bases of the 23-mer. The length of the quenching strand was therefore chosen to be 17 nucleotides so that the PNA·ODN duplex would be less stable than the targeted PNA·mRNA duplex, and more stable than the nontarget PNA-RNA duplexes. This length would also leave a 6-nucleotide toehold for binding to the mRNA target and initiating strand displacement by branch migration.
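The length arithmetic behind this design can be written out explicitly. The sketch below simply restates the numbers given in the text (23-mer PNA, 17-mer quencher strand, 14-nt worst-case off-target complementarity) as a design-rule check; it is an illustration, not a tool used by the authors.

```python
# Probe-design sanity check using the lengths stated in the text.
PNA_LEN = 23         # fluorescein-labeled antisense PNA
QUENCHER_LEN = 17    # DABCYLplus-labeled sense DNA quencher strand
OFF_TARGET_MAX = 14  # longest complementarity to a non-target mRNA found by BLAST

toehold = PNA_LEN - QUENCHER_LEN  # single-stranded PNA left free to bind the target
assert toehold == 6               # 6-nt toehold initiates branch migration

# Stability ordering implied by the design:
#   off-target duplex (<=14 bp) < PNA-DNA quencher duplex (17 bp) < PNA-mRNA duplex (23 bp),
# so non-target mRNAs should not displace the quencher, while the target mRNA should.
assert OFF_TARGET_MAX < QUENCHER_LEN < PNA_LEN
print(f"toehold length = {toehold} nt")
```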
We chose fluorescein as the fluorophore and DABCYLplus as the quencher on the complementary strand, as this is a common fluorophore/quencher combination [27,28]. DABCYLplus is a more soluble version of DABCYL and, though its structure is proprietary, appears to involve the addition of an ethylene sulfonate chain, as deduced from its molecular weight. Since it is known that a G opposite to fluorescein can also quench up to 90% of its fluorescence [29], we designed the PNA probe to have a C at the amino end (equivalent to the 5′ end of DNA) that is complementary to a G at the 3′-end of the quencher DNA strand to enhance the quenching efficiency. Because there is an A in the target iNOS mRNA at this position, we did not expect any quenching from the target mRNA. As a control, we synthesized a 23-mer PNA that is antisense to an mRNA splice correction site in the pLuc 705 HeLa cell line, which we have previously used to demonstrate the ability of cSCKs to deliver PNA·DNA hybrids into this cell line. BLAST analysis indicated that there are no mRNA sequences greater than 13 nt in mice that could activate this probe. The probes were prepared by automated solid phase
Fluorescence Activation by Complementary DNA.
The PNA·DNA strand-displacement probes were first tested with a 21-mer ODN identical to the mRNA target sequence (iNOS-DNA) (Figure 3(a)). This sequence was truncated at the 3′-end to avoid introducing complementary Gs that might have quenched some of the fluorescence emission. The resulting fluorescence activation is shown in Figure 3(b). When the strand-displacement reaction with the iNOS probe and iNOS-DNA was followed as a function of time, about 80% of the maximal fluorescence was achieved in less than 10 min (Figure 4). The fluorescence recovery could be best fit to a biexponential, where the major component (about 75%) occurred with a rate constant of about 0.02 s−1 while the slower component had a rate constant of about 0.001 s−1. The origin of the slower phase is not understood at the moment. The results clearly show that the strand-displacement probe is able to effectively detect a complementary nucleic acid target in solution.
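The biexponential description used here corresponds to the model F(t) = F0 + A1(1 − e^(−k1·t)) + A2(1 − e^(−k2·t)). The sketch below shows how such a fit could be performed; the time and fluorescence arrays are synthetic placeholders generated from the reported rate constants (about 0.02 s−1 for roughly 75% of the amplitude and about 0.001 s−1 for the remainder), not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, f0, a1, k1, a2, k2):
    # Two-phase fluorescence recovery after strand displacement.
    return f0 + a1 * (1 - np.exp(-k1 * t)) + a2 * (1 - np.exp(-k2 * t))

# Synthetic stand-in data built from the rate constants reported in the text.
t = np.linspace(0, 1800, 200)  # 30 min time course, seconds
f = biexponential(t, 0.05, 0.75, 0.02, 0.25, 0.001)
f_noisy = f + np.random.default_rng(0).normal(0.0, 0.01, t.size)

p0 = (0.0, 0.5, 0.01, 0.5, 0.0005)  # rough initial guesses for the fit
popt, _ = curve_fit(biexponential, t, f_noisy, p0=p0, maxfev=10000)
print(dict(zip(("f0", "a1", "k1", "a2", "k2"), np.round(popt, 4))))
```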
Fluorescence Activation by In Vitro Transcribed mRNA.
Unlike the 21-mer iNOS-DNA target, in vitro transcribed iNOS mRNA is about 4000 nucleotides long and adopts a complicated folded structure. Studies in our lab have previously shown that the iNOS mRNA is accessible to the iNOS-PNA used for the iNOS probe [23], and that the carboxy-terminal 18-mer section, TGAAATCCGATGTGGCCT, has a high binding affinity (86 ± 26 pM) for annealed in vitro transcribed iNOS mRNA [26]. Also, siRNA knockdown and PNA antisense inhibition of iNOS expression suggested that the 480 site was also accessible in vivo [23]. The mRNA was transcribed from a cDNA clone in vitro and characterized by agarose gel electrophoresis (see Supplementary Material). To demonstrate that the in vitro transcribed iNOS mRNA has the correct sequence and could displace the quencher strand without interference from its folded structure, the iNOS probe was heated together with varying concentrations of the mRNA to 65 °C for 1 min to unfold the mRNA and then cooled to 37 °C for 15 min. With 0.5 to 1 equivalents of iNOS mRNA, there was about 50% recovery of fluorescence, and at 2 equivalents, about 70%, demonstrating that the target mRNA sequence was indeed present and accessible after heating (Figure 5(a)). When the same procedure was carried out with the pLuc strand-displacement probe, no increase in fluorescence was observed, again showing the specificity of the strand-displacement reaction (Figure 5(b)).
We then investigated the ability of the probe to be activated by the full-length iNOS mRNA transcript at 37 °C. Initial studies with directly transcribed mRNA at 37 °C were not very reproducible, so the samples were annealed first to ensure that the results would be reproducible and could be correlated with independent PNA-binding measurements that were also carried out on annealed mRNA. Thus, the mRNA was first heated to 65 °C for 1.5 min and then annealed at 37 °C for 15 min in 10 mM Tris buffer. The iNOS probe was similarly annealed at a high concentration (1 μM) and then diluted 20-fold into the mRNA solution. The fluorescence of the mixtures was monitored as a function of time and iNOS mRNA concentration at 37 °C (Figure 6). When the concentration of iNOS mRNA increased from 25 nM to 250 nM, corresponding to 0.5 to 5 times the concentration of the probe, an unexpected rapid jump in fluorescence was observed, followed by an increase in the fluorescence intensity of the mixture. The pLuc probe with two equivalents of mRNA also showed a rapid jump in fluorescence, but there was no further increase in fluorescence with time, suggesting that the jump in fluorescence was due to some experimental artifact. We have not been able to establish the origin of the initial jump in fluorescence with the addition of the mRNA, and it was not observed in the DNA experiment. The portion of the curve following the initial rapid rise could be fit to the same type of biexponential curve as in the DNA experiment, with two approximately equal phases with rate constants of about 0.006 s−1 and 0.0005 s−1. The maximum increase in fluorescence following the rapid jump with a 10-fold excess of iNOS mRNA was only about 33% of that observed for a sample in which the strand-displacement probe was heated and cooled with the mRNA. The lower amount of fluorescence may be due to the tertiary structure of the mRNA at 37 °C, which could reduce the binding affinity, and/or to the presence of multiple folded mRNAs, some of which are more kinetically accessible than others. Such folded structures, as well as protein binding, could affect the accessibility of an antisense probe in vivo.
Copy Number of iNOS mRNA in Cells.
mRNAs are usually expressed at very low levels inside cells, ranging from tens to thousands of copies per cell [30]. The low copy number of mRNAs can be a problem for in vivo mRNA imaging because the signal generated will be very low and hard to distinguish from background noise. So far, antisense imaging by fluorescently labeled probes is still limited to relatively abundant transcripts [2]. Normally, the expression level of iNOS is very low, but it becomes greatly stimulated by LPS and γ-IFN, making it a good system for testing and validating antisense imaging probes. To the best of our knowledge, the actual copy number of iNOS mRNA inside cells before or after stimulation has not been reported. To determine the copy numbers for iNOS mRNA, we performed quantitative RT-PCR on nonstimulated RAW 264.7 cells and cells stimulated with LPS/γ-IFN for 6 and 18 h. We chose RAW 264.7 cells for these studies because this is a mouse macrophage cell line which is well known to elevate iNOS expression in response to LPS/γ-IFN [31]. Furthermore, the cells primarily responsible for iNOS induction in acute lung injury (ALI) are alveolar macrophages, and we plan to ultimately extend our studies to mouse models of ALI [32].
The in vitro transcribed iNOS mRNA was used to generate a standard curve, and the housekeeping gene glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as an internal control to determine the relative increase of iNOS mRNA (see Supplementary Material). Using the standard curve, the copy number for unstimulated cells was estimated to be 760 per cell, but rose 70-fold to about 53,000 after 6 h of stimulation, and 100-fold to 76,000 after eighteen hours. The ΔΔCt method using GAPDH as an internal reference also showed a 96-fold increase for the iNOS mRNA after 18 h of stimulation, confirming the results obtained from the standard-curve method. The large change in copy number and high mRNA level after stimulation make iNOS mRNA an ideal target for development and validation of antisense imaging agents.
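As a compact illustration of the two quantification schemes used here, the Python sketch below reads absolute copy number off a standard curve of Ct versus log10(copies) and computes relative expression as 2^(−ΔΔCt) after normalizing iNOS to GAPDH. All Ct values in the sketch are hypothetical placeholders, not the measured ones, and 100% PCR efficiency is assumed.

```python
import numpy as np

# --- Standard-curve method (absolute quantification) ---
# Hypothetical dilution series of the in vitro transcribed standard.
std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])        # known template copies
std_ct     = np.array([31.2, 27.9, 24.5, 21.1, 17.8])   # hypothetical measured Ct values

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)  # Ct = slope*log10(N) + intercept

def copies_from_ct(ct: float) -> float:
    """Interpolate the copy number of an unknown sample from its Ct."""
    return 10 ** ((ct - intercept) / slope)

# --- Comparative Ct (delta-delta Ct) method (relative quantification) ---
def fold_change(ct_inos_stim, ct_gapdh_stim, ct_inos_ctrl, ct_gapdh_ctrl):
    d_stim = ct_inos_stim - ct_gapdh_stim   # normalize iNOS to GAPDH, stimulated
    d_ctrl = ct_inos_ctrl - ct_gapdh_ctrl   # normalize iNOS to GAPDH, unstimulated
    return 2.0 ** (-(d_stim - d_ctrl))      # assumes ~100% amplification efficiency

print(round(copies_from_ct(26.0)))                 # copies in a hypothetical unknown sample
print(round(fold_change(20.0, 16.0, 27.0, 16.4)))  # ~97-fold in this made-up example
```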
Imaging of iNOS mRNA Expression in Living Cells.
Intracellular delivery of nucleic acids has always been a major obstacle for in vivo antisense imaging due to their membrane impermeability. We have found that PNAs can be efficiently delivered into cells by hybridizing the PNA with negatively charged DNA and then forming an electrostatic complex with a cSCK (cationic-shell-crosslinked knedel-like nanoparticle) [19,20,24]. In addition to being able to form the electrostatic complex with the PNA·DNA duplex, the positively charged shell of the cSCK nanoparticle also facilitates entry into cells via endocytosis, and escape of the PNA·DNA duplex from the endosome by the proton sponge effect. Figure 7 shows the results of confocal imaging of live RAW cells following transfection with optimized concentrations of both the probes and cSCK nanoparticles. For cells treated with LPS/γ-IFN and the iNOS probe, there was bright fluorescence inside the cytoplasm, indicating hybridization of the probes to the mRNA. For cells not treated with LPS/γ-IFN, and for cells treated with LPS/γ-IFN but with the pLuc probe, there was much less observable fluorescence. Quantification of the fluorescence shows that there was a 16.6 ± 7.9-fold increase in the average fluorescence of the iNOS probes per cell for cells that were stimulated with LPS/γ-IFN relative to the cells that were not stimulated, which is consistent with the expected difference in iNOS mRNA expression level.
The average fluorescence per cell for the stimulated cells treated with the pLuc probe, however, showed a 4.1 ± 2.3-fold increase in fluorescence compared with that for the iNOS probe in unstimulated cells. One possible explanation is that LPS/γ-IFN treatment might have caused increased internalization of the probes, which would lead to an increase in background fluorescence compared with unstimulated cells. Figure 7 shows that stimulated cells are about two times larger in diameter than unstimulated cells, which could explain the increase in background signal. LPS/γ-IFN stimulation may also increase the degradation rate of the probes within the cells, which could increase the background signal. The same experiment was repeated one month later with similar, if not better, results (see Supplementary Material). In the second experiment, a 56 ± 24-fold increase in average fluorescence per cell was observed for the iNOS probe upon stimulation, while an 8 ± 4.2-fold increase was observed for the pLuc probe. The difference in fluorescence per cell between the iNOS and pLuc probes in the stimulated cells was also greater in the second experiment (7-fold) than in the first (4-fold). This second set of results, together with results from an initial experiment preceding the first, indicates that the findings are reproducible but that there may be experiment-to-experiment variability.
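One simple way to express these probe-versus-control contrasts is as a ratio of mean per-cell intensities, with an uncertainty propagated from the per-cell spread; whether this matches the exact statistic behind the quoted fold changes is an assumption. The per-cell intensity arrays below are random stand-ins, not values extracted from the images.

import numpy as np

def fold_change(signal, reference):
    """Ratio of mean per-cell intensities with first-order error propagation."""
    a, b = np.mean(signal), np.mean(reference)
    sa = np.std(signal, ddof=1) / np.sqrt(len(signal))
    sb = np.std(reference, ddof=1) / np.sqrt(len(reference))
    ratio = a / b
    uncertainty = ratio * np.sqrt((sa / a) ** 2 + (sb / b) ** 2)
    return ratio, uncertainty

rng = np.random.default_rng(1)
stim_inos = rng.normal(16.6, 5.0, 60)    # hypothetical per-cell intensities, stimulated + iNOS probe
unstim_inos = rng.normal(1.0, 0.3, 60)   # hypothetical per-cell intensities, unstimulated + iNOS probe
ratio, err = fold_change(stim_inos, unstim_inos)
print(f"fold change ~ {ratio:.1f} +/- {err:.1f}")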
There are many other factors that could contribute to the lower-than-expected difference in fluorescence from the probes between the stimulated and unstimulated cells, such as a difference in accessibility of the targeted mRNA in stimulated and nonstimulated cells due to different protein interactions and ribosomal activity. There is also a possibility that the change in iNOS mRNA expression level determined by RT-PCR does not properly reflect the change in expression level in the presence of the nanoparticle in the cytoplasm, where the probes appear to be. We saw no fluorescence in the nucleus, suggesting either that the probes do not enter the nucleus or that the mRNA is inaccessible there. The former explanation is more likely, as in unpublished experiments similar but unquenched probes did not appear to enter the nucleus. Since it has recently been reported that there can be differences in the level of a particular gene transcript in the cytoplasm and the nucleus [33], it is possible that the increase in cytoplasmic iNOS expression measured by the displacement probes is less than what is being measured by RT-PCR for the whole cell.
Conclusion
We have shown that the strand-displacement-activated PNA probes function in vitro and can be efficiently delivered by cSCK nanoparticles to image iNOS mRNA in living cells. The iNOS probes showed a 17- to 56-fold increase in average fluorescence signal per cell upon stimulation of the cells, but the signal was only 4- to 7-fold greater than that seen for the noncomplementary pLuc probe. The observed increase in iNOS probe fluorescence intensity relative to unstimulated cells is much less than the roughly 100-fold increase expected from RT-PCR, which may be due to off-target activation of the nontargeted probe, and/or activation of the nontargeted probe resulting from degradation of the quencher strand. The difference could also be due to differences in mRNA expression detected by the strand displacement probes in the cytoplasm compared with that detected by RT-PCR in the whole cell. Nonetheless, this class of PNA-based strand-displacement probes combined with cSCK nanoparticle delivery looks promising for live-cell mRNA imaging, and merits further study and optimization. In the future, the quencher strand could be made more stable through the use of nondegradable nucleic acid analogs, and the probes shifted farther to the red for in vivo studies.
|
v3-fos-license
|
2019-03-08T14:05:39.475Z
|
2019-02-25T00:00:00.000
|
73414079
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/path.5249",
"pdf_hash": "81a07bbdbcc085797627634e6a233bfeb02a5a9f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44249",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "81a07bbdbcc085797627634e6a233bfeb02a5a9f",
"year": 2019
}
|
pes2o/s2orc
|
Plasma Epstein–Barr virus DNA as an archetypal circulating tumour DNA marker
Abstract Analysis of circulating tumour DNA (ctDNA), as one type of ‘liquid biopsy’, has recently attracted great attention. Researchers are exploring many potential applications of liquid biopsy across different types of cancer. In particular, it is of biological interest and clinical relevance to study the molecular characteristics of ctDNA. For such purposes, plasma Epstein–Barr virus (EBV) DNA from patients with nasopharyngeal carcinoma (NPC) would provide a good model to understand the biological properties and clinical applications of ctDNA in general. The strong association between EBV and NPC in endemic regions has made plasma EBV DNA a robust biomarker for this cancer. There are many clinical utilities of plasma EBV DNA analysis in NPC diagnostics. Its role in prognostication and surveillance of recurrence is well established. Plasma EBV DNA has also been validated for screening NPC in a recent large-scale prospective study. Indeed, plasma EBV DNA could be regarded as an archetypal ctDNA marker. In this review, we discuss the biological properties of plasma EBV DNA from NPC samples and also the clinical applications of plasma EBV DNA analysis in the management of NPC. Of note, the recently reported size analysis of plasma EBV DNA in patients with NPC has highlighted size as an important analytical parameter of ctDNA and demonstrated clinical value in improving the diagnostic performance of an EBV DNA-based NPC screening test. Such insights into ctDNA analysis (including size profiling) may help realise the full potential of ctDNA in cancer diagnostics for other types of cancer. © 2019 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland.
Introduction
Circulating tumour DNA (ctDNA) analysis has demonstrated great promise in cancer diagnostics [1][2][3][4]. The term 'liquid biopsy' has been used to emphasise its ability to non-invasively obtain cancer-associated genomic and other information [5]. However, the current clinical utility of ctDNA analysis is limited by our incomplete understanding of ctDNA biology. The need to search for molecular markers (both genetic and epigenetic) has hindered ready detection of ctDNA in many cancer types. For example, in many cancer types there are no readily usable mutational hotspots where ctDNA markers could be developed [6,7]. In contrast, circulating Epstein-Barr virus (EBV) DNA in patients with the EBV-associated cancers [8] could serve as a good model to understand the biological properties of ctDNA in general. The tumoural origin of circulating EBV DNA thus provides one type of ctDNA that could be readily detected using many different molecular strategies.
Plasma EBV DNA has been widely studied in nasopharyngeal carcinoma (NPC), as one of the EBV-associated malignancies. NPC demonstrates distinct geographical and ethnic patterns [9] and is endemic in the southern parts of China and Southeast Asia. Virtually all NPC in endemic regions belong to the undifferentiated subtype in which virtually every cancer cell harbours the EBV genome. EBV DNA released from cancer cells into the plasma has been proven to be a robust biomarker of NPC [10]. Circulating EBV DNA has been shown to be of clinical value in prognostication [11], surveillance of recurrence [12,13] and screening [14] for NPC. In fact, plasma EBV DNA could be regarded as an archetypal example of ctDNA utility. Here, we review the molecular characteristics of plasma EBV DNA ( Figure 1) and its clinical applications in NPC. The knowledge that could be gained through plasma EBV DNA analysis in NPC might illuminate the wider clinical utility of ctDNA analysis, including in cancers that do not have a viral aetiology.
Origin of plasma EBV DNA from cancer cells in patients with EBV-associated malignancies
Our group demonstrated the presence of high concentrations of EBV DNA in the plasma of NPC patients by using real-time qPCR [10]. Plasma EBV DNA was shown to be a highly sensitive and specific biomarker for NPC. The prototype assay, which targeted the BamHI-W repeat region of the EBV genome, could detect EBV DNA in the plasma of over 95% of NPC patients and approximately 5% of the healthy controls from the study cohort. The findings were subsequently confirmed by other groups [15][16][17][18][19]. EBV DNA was also shown to be present in the plasma of patients with other EBV-associated neoplastic disorders, including Hodgkin's lymphoma, Burkitt's lymphoma, natural killer T cell lymphoma, post-transplant lymphoproliferative disorder [20,21] and EBV-positive gastric carcinoma [22]. Although it was estimated that 95% of the population in the world have an asymptomatic lifelong EBV infection [8], only 5% have detectable levels of EBV DNA in plasma. The virus remains latent in the B lymphocyte pool [8] with little cell turnover. Our proposed working model is that the minute amounts of EBV DNA, if any, released from cell death would not be sufficient to be detected in the circulation. This conceptual framework is built on the background of the rapid in vivo clearance of circulating EBV DNA. The situation changes, however, in viral reactivation, when much more EBV DNA would be released into the circulation [23]. In contrast, there is a much higher cell turnover rate in cancers, e.g. up to 200 000 cancer cells/day in NPC [14], which would release sufficient cell-free EBV DNA into the circulation to be detected.
With the discovery of circulating EBV DNA in patients with these virus-associated cancers, researchers then asked a fundamental question about the origin of EBV DNA. Theoretically, EBV DNA in the circulation may be released from cancer cells during the process of apoptosis/necrosis [24][25][26][27] or generated from viral replication. We have attempted to study the molecular characteristics of circulating EBV DNA in order to infer its origin [28]. In one study, EBV DNA was measured before and after DNase I treatment of plasma samples from NPC and lymphoma patients by PCR analysis. Although plasma EBV DNA was originally present in all NPC and lymphoma patients tested, it could no longer be detected after DNase I treatment of their plasma samples. Ultracentrifugation of plasma samples showed that circulating EBV DNA from cancer patients existed in the supernatants but not in the pellet portion. As a control, a spike-in experiment using EBV particles from the virus-infected cell line (B95-8) showed the opposite findings. Taken together, these findings suggest that circulating EBV DNA molecules in cancer patients exist as naked DNA fragments, which are susceptible to DNase digestion and could not be pelleted down, instead of intact virions as a result of viral replication. Lin et al [15] provided another strand of evidence by showing consistent genotypes of EBV DNA extracted from paired plasma and tumour samples of NPC patients.
Positive correlation between quantitative level of plasma EBV DNA and tumour burden
One desirable characteristic of an ideal cancer biomarker is the ability to reflect the tumour burden. It has been well demonstrated that the pretreatment quantitative level of plasma EBV DNA correlated with the NPC tumour stage using qPCR analysis [10,17,29,30]. Patients with advanced-stage NPC had higher pretreatment plasma EBV DNA concentrations than those with early-stage diseases. In addition to the correlation with tumour stage, plasma EBV DNA level also demonstrated a positive linear relationship with the total tumour (both the primary tumour and regional nodes) volume quantified by volumetric analysis on MRI [31]. This correlation was also illustrated in a mouse model with a positive linear relationship between plasma EBV DNA concentration and the weight of NPC tumour xenografts [32]. All of these findings support a positive quantitative relationship between plasma EBV DNA level and tumour load. These results formed the foundation for clinical studies to investigate plasma EBV DNA as a marker for prognostication and surveillance of recurrence in NPC, which will be discussed below.
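The positive linear relationship summarised above is the kind of association that can be checked with an ordinary least-squares fit of plasma EBV DNA concentration against tumour volume. The paired values in this sketch are placeholders for illustration, not data from the cited studies.

import numpy as np
from scipy.stats import linregress

# Hypothetical paired measurements: MRI-derived total tumour volume (cm^3)
# and pretreatment plasma EBV DNA concentration (copies/ml).
volume_cm3 = np.array([5.0, 12.0, 20.0, 35.0, 50.0, 80.0, 120.0])
ebv_copies_per_ml = np.array([400, 950, 1600, 2800, 4100, 6400, 9700])

fit = linregress(volume_cm3, ebv_copies_per_ml)
print(f"slope ~ {fit.slope:.0f} copies/ml per cm^3, r ~ {fit.rvalue:.2f}, p ~ {fit.pvalue:.1g}")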
Rapid kinetics of plasma EBV DNA
There are two main factors that govern the levels of EBV DNA in the circulation, namely the release of viral DNA from cancer cells and the in vivo clearance of EBV DNA. The release of EBV DNA into the circulation is in turn determined by the cancer cell population and also the cell turnover rate. To study the in vivo clearance dynamics, serial analysis of plasma EBV DNA levels in NPC patients during and after the surgical treatment procedure provides a good model for analysis, as curative surgical treatment is generally performed with an intention to eradicate all tumour cells within a short period of time (in terms of hours). With such a study design involving surgical candidates with locoregional recurrent diseases, we have shown that plasma EBV DNA was cleared at a rate that followed the first-order kinetics model of decay; the median half-life was 139 min [33]. This number is in the same order as the half-life of fetal DNA clearance in maternal plasma reported in the delivery model [34]. Given the rapid elimination of EBV DNA in the circulation, measuring plasma EBV DNA concentrations thus provides an almost real-time readout of tumour burden and would also be useful for monitoring recurrence.
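Under first-order kinetics, log-transformed plasma EBV DNA concentrations fall linearly with time, so the elimination rate constant and half-life can be read off a log-linear fit of serial post-treatment samples. The time points and concentrations below are invented for illustration; only the 139 min median half-life quoted above comes from the cited study.

import numpy as np

# Hypothetical serial plasma EBV DNA concentrations (copies/ml) after surgical tumour resection.
t_min = np.array([0, 60, 120, 240, 480, 720])
conc = np.array([12000, 8900, 6600, 3700, 1100, 340])

# First-order decay: ln C(t) = ln C0 - k * t, so k is minus the slope of the log-linear fit.
slope, ln_c0 = np.polyfit(t_min, np.log(conc), 1)
k = -slope
half_life = np.log(2) / k
print(f"elimination rate constant k ~ {k:.4f} per min")
print(f"half-life ~ {half_life:.0f} min")   # reported median in NPC patients was 139 min [33]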
Size profile of plasma EBV DNA
We have previously analysed the sizes of plasma EBV DNA molecules in NPC and lymphoma patients [28]. In that study, we used multiple PCR assays with different amplicon sizes to show that the majority of plasma EBV DNA from NPC patients were short DNA fragments (shorter than 181 bp). This is in concordance with the previous findings that ctDNA exists as fragmented DNA molecules [24]. The fragmentation process of circulating DNA in general is non-random and governs its size profiles. Through the use of next-generation paired-end sequencing, the size of each sequenced DNA molecule can be deduced by the start and end coordinates of a sequence read. Thus, it allows a high-resolution size profiling of plasma DNA down to a single nucleotide level. The size profile of plasma DNA is one biological parameter that has been increasingly studied in a number of physiological and pathological conditions. In a pregnancy model, we have previously demonstrated that maternally derived DNA and fetally derived DNA in maternal circulation exhibit different size profiles [35]. Maternal DNA peaks at a size of 166 bp, which has been postulated to represent the nucleosomal core and 10-bp linkers at both ends. Fetal DNA, in contrast, peaks at a size of 143 bp, which corresponds to the size of the nucleosome core only. Such difference has been exploited to develop a size-based diagnostic approach for the detection of fetal chromosomal aneuploidy in non-invasive prenatal testing [36]. Similarly, in a bone marrow transplantation model, it was shown that non-haematopoietically derived DNA (with recipient-specific SNP) is shorter than haematopoietically derived DNA (with donor-specific SNP) [37]. All of these findings suggest that plasma DNA molecules of different origins have different size profiles.
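In practice, the fragment length of each sequenced plasma DNA molecule is the outer distance between the two read ends of a pair, which aligners report as the template length. The minimal sketch below builds a size histogram with pysam and is an illustration only; the file name, filtering thresholds and size cap are assumptions rather than the pipeline used in the cited studies.

import pysam
from collections import Counter

def size_profile(bam_path, max_size=600, min_mapq=30):
    """Histogram of plasma DNA fragment sizes from properly paired alignments."""
    sizes = Counter()
    # Assumes a coordinate-sorted, indexed BAM of reads aligned to the genome of interest.
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch():
            if (read.is_proper_pair and not read.is_secondary and not read.is_duplicate
                    and read.mapping_quality >= min_mapq and read.template_length > 0):
                # template_length is the outer fragment size; keeping > 0 counts each pair once.
                if read.template_length <= max_size:
                    sizes[read.template_length] += 1
    return sizes

hist = size_profile("plasma_ebv_alignments.bam")   # hypothetical file name
if hist:
    print("modal fragment size:", max(hist, key=hist.get), "bp")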
Recently, we performed size profiling of plasma EBV DNA from NPC patients using a sequencing-based analysis [38]. Plasma EBV DNA from NPC patients was shown to exhibit a nucleosomal size pattern, with a peak size at around 150 bp. It was shorter than plasma autosomal DNA, which was predominantly non-tumour-derived DNA. These results might suggest that ctDNA in general is shorter than non-tumour-derived DNA ( Figure 2). We have also studied the size characteristics of ctDNA in patients with hepatocellular carcinoma [39]. The model of hepatocellular carcinoma was chosen because copy number aberrations are frequently found in this cancer. By comparing the plasma DNA from amplified regions (enriched with ctDNA) and deleted regions (enriched with non-tumour-derived DNA), we reached the same conclusion as for plasma EBV DNA in NPC, that the ctDNA is in general shorter than non-tumour-derived DNA. The finding was also confirmed in mouse models with different types of human tumour xenograft (hepatocellular carcinoma and glioblastoma multiforme) [40]. This observation underscores the importance of preserving short plasma DNA molecules during the laboratory procedures of sample preparation for ctDNA analysis. In a more specific example, the size analysis was used to improve the diagnostic performance of a plasma EBV DNA-based NPC screening test [38], which will be elaborated below.
Transrenal excretion
Simultaneous analysis of EBV DNA in the plasma and urine of NPC patients allows the transrenal excretion of plasma DNA in general to be explored [41]. Previously it was not clearly known how much transrenal excretion contributes to the in vivo elimination of circulating DNA. There were inconclusive results from studying the concentration of Y-chromosome fragments in the urine of pregnant women carrying male fetuses [42][43][44], perhaps due to the low concentrations of Y-chromosome DNA molecules in the maternal plasma. In studies involving patients with urological cancers, the tumour-derived DNA in their urine is from both direct release into the urinary system and transrenal excretion [45,46]. In contrast, circulating EBV DNA is exclusively released from the tumour cells in NPC patients. Any EBV DNA detected in the urine samples of NPC patients is theoretically derived through transrenal excretion only. Our group has analysed the concentrations of EBV DNA in the plasma and urine of 74 NPC patients using two real-time PCR assays with shorter (59 bp) and longer (76 bp) amplicon sizes [41]. All patients (99%) except one had detectable EBV DNA in their plasma samples using the shorter-amplicon assay. Urinary EBV DNA was detectable in 56% of patients using the PCR assay with the shorter amplicon size and 28% using the assay with the longer amplicon size. The group of patients with detectable urinary EBV DNA had a significantly higher median concentration of plasma EBV DNA than the group with undetectable urinary EBV DNA. It was calculated that only a small fraction of plasma EBV DNA was excreted transrenally, with only 0.0026% of the renal clearance of creatinine. The very low transrenal excretion of ctDNA (or circulating DNA in general) could potentially be explained by the anatomy of kidney glomeruli (pore size), the possible binding with other circulating proteins (e.g. nucleosome) and also the negatively charged nature of DNA [47,48]. The pore size of the glomerular barrier is about 30 Å [49], which is smaller than the nucleosome-DNA complex [47]. The glomerular basement membrane is negatively charged, which may expel the also negatively charged DNA molecules. These findings of low transrenal excretion might be generalised to other circulating DNA.
For prognostication
The quantitative correlation between plasma EBV DNA concentrations and tumour burden suggests its potential as a prognostic marker. Given the rapid in vivo elimination of plasma EBV DNA in the circulation and therefore the almost real-time reflection of tumour load, researchers have explored the prognostic implications of plasma EBV DNA concentrations measured at different timepoints with reference to the standard fractionated radiotherapy regime. Pre-, mid- and post-treatment plasma EBV DNA levels have been investigated.
Although pretreatment plasma EBV DNA concentration correlates with the stage of NPC, it was shown that both pretreatment level and cancer stage were independent prognostic factors for overall survival on multivariate analyses [11,15,19,50]. Our group demonstrated that pretreatment plasma EBV DNA level predicted the risk of local recurrence and distant metastasis within 1 year [11]. The prognostic effect has been separately analysed in patients with early-and advanced-stage NPC. Among patients with advanced-stage NPC (stage III and IV), Lin et al [15] reported an inferior overall survival and relapse-free survival for patients with a higher pretreatment plasma EBV DNA concentration (with a cut-off at 1500 copies/ml). For early-stage NPC patients (stage I and II), we have shown that high pretreatment levels (at a cut-off of 4000 copies/ml) strongly predicted unfavourable survival (overall, disease-specific and distant metastasis-free survival) in the long-term follow-up [50]. Early-stage (stage I or II) cancer patients with high EBV DNA levels had an overall survival similar to that of stage III disease, whereas those with low levels had survival similar to that of stage I disease [50]. These findings suggest that the pretreatment EBV DNA level provides additional prognostic information and might reveal tumour biology and aggressiveness independent of the tumour stage. It has been proposed to incorporate the quantitative level of pretreatment plasma EBV DNA into the current staging system of NPC [51]. As the first step, there is ongoing international collaborative effort to harmonise the PCR-based assay for quantitation of EBV DNA [52].
The standard treatment for primary NPC is a fractionated course of radiotherapy. Serial analysis of plasma EBV DNA during the course of radiotherapy could reflect the change in the tumour cell population and might therefore imply the treatment response. As a proof-of-concept study [53], we measured the plasma EBV DNA concentrations of primary NPC patients serially during radiotherapy. There was an initial rise in the concentration of EBV DNA during the first treatment week, which was postulated to be due to its release by treatment-associated cancer cell death. The rise was followed by a gradual drop in concentration over the treatment course. The median half-life among the NPC patients studied was 3.8 days.
To further explore the potential, we prospectively analysed the plasma EBV DNA concentrations at the midpoint of the treatment course (4 weeks after initiation of radiotherapy) for 107 NPC patients receiving radiotherapy/chemoradiotherapy [54]. The study was based on the postulate that patients with a faster plasma EBV DNA clearance and therefore undetectable mid-treatment level would have a more radiosensitive tumour. Mid-treatment plasma EBV DNA was shown to be the only independent prognostic factor (not even pretreatment EBV DNA) for distant failure, disease-free survival and overall survival. Patients with detectable mid-treatment EBV DNA represented an at-risk group with an unfavourable response to treatment. It suggests the prognostic value of studying the in vivo dynamic of plasma EBV DNA and ctDNA in general [55]. The observation could pave the way for future interventional studies with either escalation or de-escalation of treatment regime based on the plasma EBV DNA molecular response.
Post-treatment analysis of plasma EBV DNA aimed to reflect the residual tumour load after completion of treatment [56][57][58]. In our study involving 170 NPC patients of different stages, post-treatment EBV DNA level (at 1 month after the completion of treatment) was shown to allow the prediction of recurrence and progression-free survival [56]. Lin et al [15] showed that NPC patients with any detectable levels of post-treatment plasma EBV DNA (at 1 week after the completion of radiotherapy) had a worse overall and relapse-free survival. As post-treatment plasma EBV DNA level predicts the risk of future relapse, Chan et al [59] conducted a prospective, multicentre, randomised controlled trial to evaluate the use of adjuvant chemotherapy in the high-risk group of patients with a detectable level of post-treatment plasma EBV DNA (at 6-8 weeks after the completion of radiotherapy). The post-treatment level was again shown to predict worse prognosis. However, the use of adjuvant chemotherapy did not improve the relapse-free survival among patients with detectable post-treatment plasma EBV DNA. The authors postulated that the lack of response may be attributed to the use of the same chemotherapeutic agent (cisplatin) in the adjuvant setting as in the primary treatment regimen. Patients with detectable post-treatment levels might already represent a group with treatment resistance and therefore no benefits of adjuvant chemotherapy were observed.
Similarly, for EBV-associated haematological malignancies, plasma EBV DNA was also shown to be a significant prognostic marker. Positive mid-or post-treatment EBV DNA levels predict inferior survival in Hodgkin's lymphoma and natural killer T cell lymphoma [20,[60][61][62].
The ability to prognosticate using pre-, mid- and post-treatment plasma EBV DNA levels reflects the broader potential of quantifying ctDNA at different treatment timepoints, a concept that might also apply to ctDNA analysis for other types of cancer. To illustrate the concept, we demonstrated the clearance of tumour-associated copy number aberrations in the postoperative plasma samples of four patients with hepatocellular carcinoma receiving surgical treatment [63]. Tie et al [64] analysed the postoperative ctDNA levels in patients with stage II colon cancer by targeted sequencing. Postoperative ctDNA status was found to independently predict the risk of recurrence, in addition to T stage, on multivariate analysis. These findings echo those of the plasma EBV DNA studies in showing that quantitative analysis of ctDNA can provide additional information for risk stratification.
For surveillance of recurrence
Another main clinical application of plasma EBV DNA analysis in the management of NPC is for surveillance of recurrence [12,13,[65][66][67]. The level is expected to drop to an undetectable level after curative treatment and tumour eradication. It is intuitive that any rise in the plasma EBV DNA concentrations in the subsequent follow-up period could potentially signify a disease relapse. In fact, it has been suggested that the rise could precede clinical symptoms by weeks to months [68]. Currently, regular plasma EBV DNA measurement every 3-6 months is used as an adjunct to endoscopy and imaging in the clinical surveillance protocol of treated NPC patients. Similarly, in other types of cancer, Tie et al [64] demonstrated the utility of ctDNA analysis for monitoring of recurrence among colon cancer patients. Concurrent ctDNA and carcinoembryonic antigen analyses showed a higher sensitivity for the detection of recurrence.
For screening
To utilise plasma EBV DNA analysis for NPC screening, one major concern is the sensitivity for detecting early/presymptomatic NPC, in which the concentration of plasma EBV DNA is low. Previous studies have reported lower sensitivities for the detection of stage I or II NPC compared with advanced-stage disease using PCR-based assays [13,17,30]. To evaluate its use for screening, we conducted a large-scale prospective study in Hong Kong, which is an endemic region [14]. More than 20 000 middle-aged men who did not have any symptoms of NPC were recruited. All recruited subjects were tested for plasma EBV DNA using qPCR analysis. Subjects who had detectable plasma EBV DNA on two consecutive tests were defined as 'screen-positive' and underwent confirmatory investigations, including nasoendoscopy and MRI. Thirty-four subjects were identified as having NPC by the screening protocol. One subject out of 19 865 screen-negative subjects was found to have stage II NPC within 1 year after the test. The screening protocol thus achieved a high sensitivity of 97.1%. Importantly, among the 34 NPC patients identified by screening, 24 (70%) had early-stage disease (stages I and II) and the remaining 10 patients had advanced-stage disease (stages III and IV). In contrast, without screening only approximately 30% of NPC patients present symptomatically at an early stage and 70% at an advanced stage, according to the local cancer registry [69]. The earlier detection was accompanied by superior progression-free survival among the NPC patients identified by screening. These findings validated the use of plasma EBV DNA analysis for screening NPC and eliminated the concern of inadequate sensitivity for such a purpose.
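The headline performance figures follow directly from the counts reported above; the short calculation below simply restates those published numbers and introduces no new data.

# Counts reported for the Hong Kong NPC screening study [14].
screen_detected = 34   # NPC cases identified through the screening protocol
interval_cases = 1     # screen-negative subject diagnosed with stage II NPC within 1 year
early_stage = 24       # stage I/II cases among those detected by screening

sensitivity = screen_detected / (screen_detected + interval_cases)
early_fraction = early_stage / screen_detected
print(f"sensitivity ~ {sensitivity:.1%}")              # ~97.1%
print(f"early-stage fraction ~ {early_fraction:.1%}")  # ~70%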
ctDNA has been actively pursued as a non-invasive screening tool for other different types of cancer [70]. Again, one major concern has been the sensitivity for identifying early-stage cancers [71,72]. It has been demonstrated that there were lower concentrations of ctDNA in early-stage cancers by detecting genetic mutations in plasma DNA [73], similar to the observations of plasma EBV DNA in NPC. In a recent case-control study involving 1005 patients of different cancer types and 812 healthy controls, Cohen et al [74] used amplicon-based sequencing for the detection of cancer-associated mutations in 16 genes. They reported sensitivities of 43 and 73% for stage I and II cancers, respectively. These results highlight the current technical challenge of ctDNA analysis used for cancer screening purpose. In fact, the qPCR-based analysis of plasma EBV DNA for screening NPC [14] might shine light on the sensitivity issue of ctDNA-based assays for other cancer types. The PCR assay used for screening NPC targets the BamHI-W repeat region of the EBV genome, with approximately 10 repeats per genome. Assuming that there are some 50 viral genomes in each NPC cell, there are thus of the order of 500 molecular targets per cancer genome that are analysed by the PCR assay. A similarly high level of sensitivity as in the detection of early NPC might be feasible if we could achieve such a number of targets in the ctDNA-based assay.
Approximately 5% of the general population harbour EBV DNA in their plasma [10,75]. In the NPC screening context, these subjects contribute to the false-positive group and would need to undergo unnecessary confirmatory procedures. However, they have indistinguishable levels of plasma EBV DNA from early-stage NPC patients by PCR-based quantitation [38]. In the prospective screening study [14], we adopted a two timepoint testing protocol of plasma EBV DNA analysis. Participants who had detectable EBV DNA in their baseline plasma samples would be retested again 4 weeks after the first test. Those who also had detectable EBV DNA in the follow-up test would be defined as 'screen-positive'. This is on the basis that NPC patients should have continuous release of EBV DNA from the cancer cells into the circulation, whereas non-NPC subjects tend to have transiently positive results [76]. This strategy was shown to substantially reduce the false-positive rate from 5.4 to 1.4%, with a positive predictive value of 11.0% in our target population. However, it will be logistically inconvenient to launch a mass screening programme based on this two timepoint protocol.
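The gain from the two-timepoint protocol can be illustrated by recomputing the positive predictive value from the reported test-positive rates. The sketch below assumes a cohort of roughly 20 000 subjects (the text states only "more than 20 000") and that essentially all true NPC cases remain positive on retesting; under those assumptions the result lands close to the reported 11.0%.

def ppv(true_positives, false_positives):
    """Positive predictive value = TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)

cohort = 20000            # assumed approximate cohort size
npc_cases = 34            # confirmed NPC cases detected by screening

fp_rate_single = 0.054    # subjects positive at baseline but without NPC
fp_rate_two_tp = 0.014    # subjects still positive on retesting but without NPC

print(f"single-timepoint PPV ~ {ppv(npc_cases, fp_rate_single * cohort):.1%}")
print(f"two-timepoint PPV    ~ {ppv(npc_cases, fp_rate_two_tp * cohort):.1%}")
# Small differences from the published 11.0% reflect rounding of the rates and cohort size.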
In our 20 000-subject screening cohort, we noticed that participants of older age were more likely to have detectable plasma EBV DNA while not having NPC [23]. Another interesting observation was that a higher proportion of participants had detectable EBV DNA but no NPC on days of blood collection with lower ambient temperatures [23]. All of these findings hinted at the presence of an immunocompromised state and viral reactivation in those non-NPC subjects with plasma EBV DNA positivity. Given the different origins of EBV DNA between NPC and non-NPC subjects, our group therefore postulated the presence of different molecular characteristics between the two groups [38]. In the results of the sequencing-based analysis, first, NPC patients were shown to have a higher proportion of EBV DNA reads in their plasma samples than non-NPC subjects (quantitative analysis). Second, NPC patients had a different size profile of plasma EBV DNA from non-NPC subjects (size-based analysis). In detail, plasma EBV DNA from NPC patients exhibited a typical nucleosomal size profile with a peak size of around 150-160 bp, whereas EBV DNA from non-NPC subjects did not. Based on the quantitative and size differences of plasma EBV DNA, we developed a second-generation sequencing-based screening test that could achieve a modelled positive predictive value of 19.6% in our test population [38]. This value is almost double that obtained from the two-timepoint PCR-based protocol. Of note, this second-generation test requires only single-timepoint testing. Our study results have illustrated that size profiling of plasma DNA could bring another dimension to the current approach to ctDNA analysis. This has also highlighted the importance of understanding the biology and molecular characteristics of plasma DNA in general, which could be translated into improvements in the diagnostic performance of ctDNA analysis.
Conclusion
Since plasma EBV DNA was recognised as a biomarker of NPC, researchers have been working extensively to validate its clinical applications in prognostication, surveillance of recurrence and screening. As a result, plasma EBV DNA is one of the most widely used ctDNA markers to date. With technological advances, especially in next-generation sequencing, ctDNA analysis demonstrates great potential in cancer diagnostics for different cancer types. However, the current bottlenecks encountered in ctDNA-based biomarker research present challenges that need to be solved, and researchers could gain insights from the plasma EBV DNA analysis model into solving them. Furthermore, plasma EBV DNA in NPC has provided a good model for studying the biology of ctDNA, as plasma EBV DNA has already been shown to share similar molecular characteristics with ctDNA in general. A better understanding of the molecular features of ctDNA would improve such analysis for cancer diagnostic purposes, as illustrated by the example of size profiling of plasma EBV DNA, which improves the specificity of screening for NPC. We envision that ctDNA could become a powerful tool in cancer diagnostics and has the potential to revolutionise cancer management.
|
v3-fos-license
|
2017-06-25T17:08:11.224Z
|
2013-12-10T00:00:00.000
|
8745293
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/1824-7288-39-77",
"pdf_hash": "d6163233d16498986e59e5a7e88bea897cd9236c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44250",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ae7fcf8c8740bed22fe311de47ef39b14ced8145",
"year": 2013
}
|
pes2o/s2orc
|
Rhinocerebral zygomycosis with pansinusitis in a 14-year-old girl with type 1 diabetes: a case report and review of the literature
Background Zygomycosis is a rare, life-threatening fungal infection affecting mostly patients with predisposing conditions such as diabetes mellitus, immunodeficiency, haemochromatosis or major trauma. Methods We describe a case of rhinocerebral zygomycosis in a girl with type 1 diabetes and review previously published cases and treatment options. Results A 14-year-old girl with type 1 diabetes mellitus presented with dental pain, facial swelling, ecchymosis and decreased visual acuity in the left eye, unresponsive to antibiotic therapy. Cultures of the sinus mucosa were positive for fungal species belonging to the Zygomycetes. She received antifungal therapy with posaconazole (POS), with very slow improvement and poor glycemic control, and ultimately became blind in the left eye. Conclusion Our report adds further awareness of rhinocerebral zygomycosis and emphasizes the need for urgent diagnosis and timely, adequate treatment of this potentially fatal fungal infection.
Background
Zygomycosis is a rare, life-threatening opportunistic fungal infection in humans that often complicates diabetes mellitus and primary and acquired immunodeficiencies characterised by defects of cell-mediated immunity. Other predisposing factors include steroid therapy, organ transplantation and cytotoxic chemotherapy [1]. The causative organism is an aerobic saprophytic fungus belonging to the order Mucorales of the class Zygomycetes. It is ubiquitous in soil, grows rapidly and constantly discharges spores into the environment [2]. Infection is usually caused by inhalation of sporangiospores or by direct contamination of skin wounds, especially burns. The lungs, nasal cavity and paranasal sinuses, gut and cutaneous tissues are therefore the most common sites of primary infection. At onset, rhinocerebral zygomycosis presents with varied clinical features such as blindness, cranial nerve palsies, eye proptosis and pain. Fungal hyphae preferentially invade the walls of blood vessels, producing thrombi and infarction. The resulting progressive ischaemia and necrosis of deep tissues, which may include muscle and fat, ultimately lead to multiorgan failure and sepsis. The central nervous system can be invaded by Zygomycetes either contiguously from adjacent paranasal sinuses (rhinocerebral zygomycosis) or haematogenously from a remote site of infection [3]. We report a case of necrotizing invasive rhinocerebral zygomycosis in a 14-year-old girl with type 1 diabetes and review previously reported cases from 1980 to 2012.
Case presentation
A 14-year-old Caucasian girl had been followed at our Department of Pediatrics for type 1 diabetes mellitus since the age of three years, with poor glycemic control despite regular insulin therapy. She presented with a 7-day history of dental pain, facial swelling (extending superiorly from the supraorbital margin and inferiorly to the angle of the mouth), and ecchymosis in the left periorbital region with decreased visual acuity and colour vision, unresponsive to antibiotic therapy with amoxicillin. Physical examination showed left-sided facial numbness, lagophthalmos with inability to completely close the left eyelid, and tongue deviation to the left. Intraorally there were carious lesions and reduced sensitivity of the upper teeth. Biochemical investigations revealed neutrophilia and raised inflammatory markers. Electromyography showed severe damage to the VII cranial nerve. Radiographic examination showed haziness of the left maxillary sinus with erosion of the lateral sinus wall. Magnetic resonance imaging of her head revealed marked mucosal thickening of the left maxillary sinus extending to the sphenoid, ethmoid and frontal sinuses, with moderate inflammatory effusion (Figure 1). Inflammatory tissue was excised from the sinuses three times through endoscopic sinus surgery for microbial eradication and histopathologic examination. The tissue biopsy showed fragments of mucosa lined by respiratory epithelium with a chronic nonspecific inflammatory infiltrate, and cultures grew fungal species belonging to the Zygomycetes. After a 20-day treatment with imipenem, teicoplanin, metronidazole and aciclovir, she was given intravenous amphotericin B, which was promptly replaced by posaconazole (POS) because of the onset of side effects such as hyperglycemia, marked hypothermia and profuse sweating. Follow-up magnetic resonance images showed progression of the disease with significant intracranial extension. Magnetic resonance angiography showed involvement of the retromandibular neurovascular axis and the left cavernous sinus, with occlusion of the ipsilateral cavernous carotid artery and increased signal in the walls of the left middle cerebral artery caused by arteritis. She continued antifungal drug therapy and clinical follow-up, showing very slow improvement, poor glycemic control and many recurrences, ultimately leading to blindness of the left eye.
Results
Zygomycosis is rare in pediatric patients, and there are few reports in the literature. We performed a MEDLINE search for articles published in the English-language literature, covering patients aged 0-18 years, from January 1980 to September 2013. The search terms used were: rhinocerebral zygomycosis or mucormycosis and case report.
Our search yielded a total of 25 articles including 28 case reports (Table 1). Cases included were those with acute zygomycosis infection in the rhinocerebral region diagnosed by histology with or without a positive culture, with hematologic malignancies or type 1 diabetes as predisposing factors. Fifteen out of 28 cases (53.6%) of rhinocerebral zygomycosis from these case series were found in patients with type 1 diabetes, twelve (42.8%) in patients with hematologic malignancies and one (3.6%) in a patient with autoimmune hepatitis.
All cases involved at least one eye. All patients underwent surgery and/or therapy with amphotericin B and other antifungal agents. Among the patients with type 1 diabetes, 25% recovered from the infection without complications, while 40% suffered complications such as eye exenteration, partial loss of vision, invasive surgery and nerve palsies. The overall mortality was 20% in cases with type 1 diabetes and 50% in cases with other predisposing factors. The mortality rate was significantly higher when the central nervous system was involved than with sinus or sino-orbital involvement only.
Discussion
The predisposing factors for zygomycosis are uncontrolled diabetes, lymphomas, leukemias, renal failure, organ transplantation, and long-term corticosteroid and immunosuppressive therapy [28]. Hyperglycemia, usually with an associated metabolic acidosis, is responsible for impaired neutrophil function, neuropathies and vascular insufficiency, leading to diminished resistance to infection and an altered tissue response. In ketoacidosis, the acid environment produced by the increase in glucose levels, together with the increase in levels of free iron ions, favours fungal growth [2].
The clinical onset of rhinocerebral zygomycosis, in our report and in most cases, is characterized by cranial nerve palsies, facial/eye swelling and blindness, and less frequently by decreased consciousness.
Mucormycosis has historically carried a very poor prognosis, although survival rates are currently thought to exceed 80%. Even with successful treatment, Mucorales can reappear during future courses of chemotherapy and neutropenia [29]. There may be factors predictive of the evolution of complications: patients who presented with "blindness" seem to have a higher prevalence of survival (p ≤ 0.04), while patients who presented with "decreased consciousness" seem to have a higher prevalence of death (p ≤ 0.04). With the advent of potent antifungal medications, a combination of surgery, medication and correction of the underlying conditions has provided better outcomes [26]. Surgery needs to be radical, aiming to remove all devitalized tissue, and has to be repeated according to disease progression. The new triazole POS has a broad antifungal spectrum against filamentous fungi. The use of POS seems to be associated with a high prevalence of survival, with or without complications (p ≤ 0.04). This is important, as standard therapy with amphotericin B often fails [23].
Conclusion
Zygomycosis may present with specific clinical symptoms in patients with predisposing conditions. "Blindness" at presentation seems to be predictive of survival, while "decreased consciousness" seems to be associated with a severe outcome.
Early diagnosis of zygomycosis and meticulous broad-spectrum antifungal therapy are necessary to avoid further spread of the infection, which may lead to high morbidity and mortality. The timely use of POS, alone or in combination with other therapies, seems to be associated with a high prevalence of survival, with or without complications.
Larger studies and populations are needed to test whether these relationships are real or merely due to chance. It is important to understand and implement the treatment options that can help to manage these patients with such a peculiar onset before culture results become available. Knowledge of the potentially devastating complications can help to prevent unfortunate consequences.
Written informed consent was obtained from the patient's parents for publication of this Case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
|
v3-fos-license
|
2024-07-24T16:15:12.058Z
|
2024-07-17T00:00:00.000
|
271393254
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2673-9461/4/3/13/pdf?version=1721196363",
"pdf_hash": "2970575964acdaf11f44d22228147a33476594f4",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44251",
"s2fieldsofstudy": [
"History"
],
"sha1": "7a657b2dca5041afd7f0ec39f803fe9aad4c8426",
"year": 2024
}
|
pes2o/s2orc
|
San Bushman Human–Lion Transformation and the “Credulity of Others”
Lion transformation, among San-Bushmen, is arguably the most dramatic and spectacular instance of animal transformation. Transformation is a central component of San curing and initiation ritual and of certain San hunting practices. Moreover, it is a recurrent theme in San mythology, art and cosmology, all of them domains of San expressive and symbolic culture that are pervaded by ontological mutability (manifested most strikingly in the therianthropes of San myth and art). Lion transformation is a phenomenon that has received much mention in the ethnographic literature on Khoisan ritual and belief, through information that is based not on first-hand but on second- or third-hand ethnographic and ethno-historical information. In the paper, I describe my own eye-witness account of what San people deemed a lion transformation by a trance dancer, which I observed in my early field work among Ghanzi (Botswana) Naro and = Au//eisi San in the 1970s. This is followed by my own musings on the actuality or reality of lion transformation, from both my own perspective and from what I understand to be the indigenous perspective. In terms of the latter, lion transformation—and animal transformation in general—is a plausible proposition. Indigenous doubt and scepticism, deriving from a rarely if ever fully conclusive witnessing of such transformations, are assuaged in a number of epistemological, cosmological and phenomenological ways. These are not available to a Western cultural outsider with a Cartesian mindset, nor to a Westernized—and perhaps also Christianized—insider, whose cosmos has become “disenchanted” through historical–colonial and contemporary–acculturational influences.
Introduction
"We know about Bushmen who turn into lions.Their ears change shape and they grow lion's fur.Their hands become identical to a lion's paws and they roar mightily.However, you can still recognize the person if you look carefully at the lion's face."Motaope, Ju/hoan healer ( [1], p. 81) "When we [healers] shake very hard, it's like slipping out of our skin so that we can become something else.As all strong healers know, it's not physically being the other animal.It's becoming the feeling of the other animal . .."G/aq'oKaqece, Ju/'hoan healer ( [2], p. 153) "When I turn into a lion, I can feel my lion hair growing and my lion-teeth forming.I'm inside that lion, no longer a person.Others to whom I appear see me just as another lion."TshaoMatze, Ju/'hoan trance dancer ( [3], p. 24) "These great healers went hunting as lions, searching for people to kill.Then someone would shoot an arrow or throw a spear into these healers who were prowling around as lions.When these great healers tried to change back into their own human skins, they usually died.When a healer changes into a lion, only other healers can still see him.To ordinary people he is invisible."Wa Na, Ju/'hoan healer ( [4], p. 227) Accounts about lion transformation in recent and not so recent ethnographies about Kalahari San are replete with equivocation (to which this paper will add its own two cents).As one might expect, given their bred-in-the-mind rationalist scepticism and Cartesian dualism, in particular, as regards the topic at hand, about the human-animal divide, anthropologists cannot quite get a handle on shape-shifting (unless one is into Dungeons and Dragons and enjoys video fantasy games).Notwithstanding our discipline's renewed commitment to cultural relativism and the indigenous perspective, in the context specifically of a "revisited" New Animism, whose ontological turn makes some of us-archaeologists included [5][6][7][8]-more receptive to the notion of a human turning into a lion, I still find it difficult to get my head around it (a task I have set for myself in a recent book project [9,10]).Unlike others ( [11], p. 13, [12], p. 263, [13], p.48 [14], pp.[30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47], I deem this experience on the part of a shaman (there is some debate amongst Khoisan researchers about the applicability of the term and category of "shaman"-a term derived from circumpolar hunting peoples-to the San Bushman healer ( [15], p. 7, [16], pp.60-62).Most of them, qua "lumpers", have opted in favour of the term because of the presence and importance in the San instance of altered states of consciousness, outer-body travel and animal transformation).Something other than a hallucination-primarily because it can occur to a person outside the hallucination-prone altered states context of trance healing (such as hunting, as seen below)-and certainly not an "insane delusion" or "a form of mental, even "monomaniacal derangement" as did the Old Animists ( [17], pp.30-31, [18], p. 
126), I nevertheless find it difficult to situate human-animal transformation within a familiar ontological, epistemological and phenomenological space.In the episteme familiar to me, you are either human or you are animal, notwithstanding Darwin's findings that we are actually both phylogenetically.While a widely accepted premise in Western cosmology, it is a kinship, however, that is too remote in time and the human being's being as Homo sapiens to allow for on-the-spot human-animal metamorphosis (unless the experience is hallucinatory, oneiric or delusional, and the sense of it mytho-magical or metaphorical).
"Their Ears Change Shape and They Grow Fur" vs. "It's Not Physically Being the Other Animal"
Anthropologists who write about this phenomenon describe it either as something actual or real-transformation into a real-life lion, fangs and claws, mane and tail-or as something virtual and unreal-a "spirit lion" or "lion of God" visible only to shamans. Both of these opposing perspectives may be presented by the same ethnographer, underscoring the above-mentioned equivocation and confusion. Thus, we read in Richard Katz's classic monograph on Ju/'hoansi trance healing that when a shaman travels at night "in the form of lions of god" (an aspect, to Katz, of the "concrete reality of healing"), such a lion is to the Ju/'hoansi a "real lion, different from normal lions, but no less real" ([4], p. 115). The epigraphs at the opening of the paper, all snippets of comments by Ju/'hoan healers to ethnographers, contain the same sort of equivocation. One of the two Ju/'hoan shamans Bradford Keeney interviewed, Motaope, described the experience of transformation as utterly real-anatomically, even aurally ("they roar mightily"). The other, G/aq'o Kaqece, stresses that it is not a case of "physically being the other animal" but, instead, of "becoming the feeling of the other animal", ratcheting down the experiential and ontological aspects of transformation, from "being" to "becoming", the former bodily, the latter mentally. The Ju/'hoan shaman Tshao Matze's description to Richard Katz and Megan Biesele of his own experience of lion transformation is indicative of deep immersion, ontologically, into the animal, both its "interiority" and "exteriority" (using Descola's terminology); indeed, he is seen as such-"just another lion"-by other people who lay eyes on him. The last aspect of realism of such a leonine shaman is dismissed by Katz's other informant, the woman healer Wa Na, who states categorically that "only other healers can still see him. To ordinary people he is invisible".
The ethnographers' equivocation and confusion are thus a reflection not only of their culture-bound epistemology but also of their San informants' own view on the matter. This, as I found among my Naro informants in the 1960s through the 1990s, spans the spectrum from dismissal of the notion of human-animal transformation, through doubt, to acceptance. Notions on the left of the spectrum-dismissal-in large part derive from the impact of acculturation, through schools and mission churches, and its "disenchanting" effect on San belief and cosmology ([10], pp. 69-104). Those on the right-acceptance-are linked to an intrinsic epistemological and phenomenological quality of San cosmology, ontological ambiguity, which renders humans both same as and other than animals (and vice versa), a feature I have recently traced through such domains of San culture as myth, art, ritual, play and hunting ([10], see also [19]). Ontological ambiguity is evident in the cited passages, such as Motaope's comment, which suggests that the transformation from human to lion may not have been quite as radical as it is described at the outset: for all his lion's ears and paws, fur and roar, the "tranceformed" shaman's human personage is still recognizable, in the lion's face. And even though, according to her assertion, the transformed shaman-lion is "invisible to ordinary people", Wa Na begins her narrative on human-lion transformation with an account of how these were-beings "go prowling around as lions", on the hunt "for people to kill". The latter defend themselves against these predators, perceived as actual lions, and turn the tables on them, throwing their spears at them, which may eventually kill them, after their change back to humans.
The ambiguity inherent in these sorts of caveats and contradictions about lion transformation may, for some San, translate into doubt about the entire phenomenon. In the course of my field work among Naro San in the Ghanzi District of western Botswana in the 1960s and 70s, I talked to some San individuals-"ordinary people" as well as some trance dancers-who were altogether dismissive of lion transformations. The matter was something they either did not believe existed or deemed possible, or would attribute, as an in-group's boundary-maintaining mechanism, to outside groups, as something-lion transformation-"other people do" (the Naro attribute the skill of leonine transformation to the = Au//eisi, as part of the general stereotype of fierceness and vindictiveness ([15], p. 18). The linguistically related G/wi hold the same = Au//eisi stereotype, especially its aspect of turning themselves into lions and preying on people ([20], p. 320), whereas the linguistically distinct !Kõ, vis à vis whom the Naro are the San out-group, attribute the same stereotype to the Naro ([21], p. 28). The San, in turn, collectively appear to be wont to stereotype their Black neighbours, again collectively, in terms of the same leonine stereotype ([22], p. 733, [23], pp. 13-15)). On the other side of the plausibility spectrum of human-lion transformation, I talked to starry-eyed were-lion believers, awed by their mystical-magical powers, as well as fearful, as these were usually put to malevolent ends by lion-shamans who "stalk the desert in search for human prey" ([24], p. 46; see also [25]). Their malfeasance usually followed the lines of Tswana patterns of sorcery, features of which have entered the ritual tool kit and arcane knowledge of some San trance dancers [26].
Most of the San informants I talked to about lion transformation were more guarded, however, neither fully dismissive nor fully accepting, but acknowledging the possibility of such a thing happening, albeit very rarely (and, as some would qualify further, among other San groups, as "it's not something we Naro people do; it's a = Au//ei thing"). A number based their information on second- or third-hand information, reported to them in urban-legend fashion, as coming from a credible source, a friend of a cousin or a cousin of a friend and such like, but never actually witnessed. Such information, transmitted through gossip and family memorates and traced to an "absolutely reliable", identifiable person, holds sufficient authority to be deemed possible in San intellectual culture, given its situatedness in an animistic, "connective" cosmology (a point to which I will return again).
As I have suggested elsewhere ([19], p. 6), most, if not all, of the ethnographic information by anthropologists on this feature of San ritual and belief also appears not to be first-hand. Descriptions of lion transformation in the anthropological literature pretty well all fall into the second- or third-hand category, based, as they are, on accounts (vivid and dramatic ones, as exemplified by the Ju/'hoan snippets provided above) related to the ethnographer by San informants, who either have had the experience themselves (trance dancers), or who have witnessed the same, or have heard it described to them by others. Even Bradford Keeney, who claims to have undergone lion transformation as a "visionary" experience (Keeney offers no description of his own experience, other than referring to it as a "visionary occurrence" and noting that "there are different means for making this transformation" ([2], p. 151, footnote)), bases his descriptions on accounts from "numerous Bushmen he has known who turned into lions" ([2], p. 151).
A Lion Transformation Observed
In my own field work at D'Kar village in the Ghanzi District of western Botswana, I heard half a dozen such second- and third-hand accounts from people about lion transformations, including one first-hand transformation account, by a ≠Au//ei trance dancer, albeit not into a lion, as his wife "doesn't want him to do so dangerous a thing", but into a non-poisonous snake (called n!am di tsoro in Naro)! There was only one occasion on which I actually witnessed what people who were present at the event, and with whom I talked about it afterwards, considered a lion transformation. It occurred at a trance dance on 30 May 1974, in the evening around a dance fire, less than half an hour into the trance dance ritual. The dancer, a man with the (fictional) name Sebetwane, was in his mid-forties. He was dark-skinned, his father having been Kgalagadi (his mother was ≠Au//ei). The performance as a whole was relatively brief, a little under an hour, and its climactic conclusion abruptly ended the trance dance, which had been scheduled to last for several hours. My account of this event, drawn verbatim from my field journal, is provided in Appendix A of this paper.
The lion transformation component of the dance unfolded over the last half hour of the performance, when the dancer's deportment changed from pre-trance play mode, in a decidedly erotic as well as scatological idiom, to almost full-trance transformation mode, into what people held to be a lion. The first indication of transformation was the dancer's scowling and evident anger, conveyed at first through grimacing, licking of his mouth and contorting of his head and neck. It escalated with the dancer lunging at the spectators, snarling and, at one point, going down on all fours. The accosted spectators were some of the women and children sitting near the fire and dance circle. He grabbed one of the children, a boy toddler, by the arm, picking him up roughly, wiggling the terrified child laterally on his shoulders and holding him against his chest, almost dropping the toddler throughout these agitated histrionics. People's reactions appeared to be shock, deep unease and fear that, in one of the spectators I talked to afterwards, bordered on terror, all the more so when the dancer, near the end of his transformative performance, incorporated into it a standard element of Kgalagadi and Tswana sorcery: tossing sand, in a frenzied, culturally stereotyped gesture that included directionally shaking his arm and hand toward some intended victim (I have elsewhere [26] described the component of witchcraft and sorcery within Ghanzi San trance dancers' repertoire of ritual healing techniques and ideas and examined the ways in which this cultural borrowing was integrated within their body of magico-religious practice).
A Lion Transformation Explained (Away?)
For all of its intensity and histrionics and the danger and dread it evoked in the attendants at the dance, its leonine transformation aspect seemed to me, a cultural outsider, rather subdued and somewhat contrived and, for all of its dread and drama, somewhat anti-climactic. The reason may be that the ritual performance did not contain full trance, thereby reducing its duration and intensity. Furthermore, it contained other choreographic and ritual elements, erotic and scatological play as well as Kgalagadi sorcery, which all diluted and distracted from the dance's central feature, the dancer's transformation into a lion.
Apart from these factors that contributed toward reducing the emotional impact, and the cultural integrity, of the human-animal transformation I witnessed, my own culturally conditioned and stubborn, hard-to-bracket-out scepticism about the "reality" of the observed human-animal transformation, into an actual lion rather than one imagined, muted this impact. The credibility regarding its reality was also undermined by what to me was a lack of "realism" in its performance, which "was not produced with any superlative art". I am here borrowing language from the explorer and ethnographer Knud Rasmussen, who observed a shamanic transformation into another animal, a bear, in another part of the world, Canada's eastern Arctic, in the 1920s, performed by an Iglulik shaman. Qua sceptical cultural outsider ("I could distinguish all through the peculiar lisps of the shaman acting ventriloquist"), Rasmussen's opinion on the matter was that for this shamanic performance to be believed and persist within Iglulik ritual and symbolic culture, the key requirement was "the credulity of others" ([27], pp. 39-40). The latter, "astonishing credulity", Rasmussen finds himself "repeatedly obliged to note" in other aspects of Iglulik shamanism, especially the messages shamans communicate and attendants at their "séances" receive and accept about the spirit world ([27], p. 43).
So, then, is human-lion transformation "for real"? Is it merely shaman showmanship, as Rasmussen suspected the bear transformation he observed to have been, dependent for the shaman's credibility on the credulity of spectators? Is it, like the "extraction trick" widespread in shamanic healing, yet another status-authenticating, "deceptive curing practice" on the part of the shaman "to give the appearance of impressive powers", as per the anthropologist William Buckner's recent take on the matter ([28], p. 95)? Another "trick that would astonish the audience" ([29], p. 55)?
While raising questions of this sort, especially in a tone of cynicism-tinged scepticism about an indigenous cultural practice, by a cultural outsider who studies the same, might be seen as an affront to some of the core analytical and ideological tenets of the discipline (objectivity, cultural relativism, and respect for the indigenous perspective and inclusion of its voices), the reason I allow myself to do so is that, as seen above, the indigenous voices themselves include such questions, along with scepticism and cynicism. My conversations with San people (including trance dancers) included mentions of this or that San trance dancer who faked the trance experience, including its ear-piercing kow-he-dile, "death shriek", and collapse. Doubts were also expressed about the genuineness, if not the very possibility, of the other two mystical components of the shamanic performance: transformation, "the lion experience", and extra-body travel. People distinguished between, and dancers performed, "small dances" and "big dances", or "play dances" as opposed to "death dances", the former for entertainment and money (oftentimes for tourists), the latter for curing and in the spirit of sharing and collective well-being and transcendence. I have described these in detail elsewhere, in an examination of the contemporary trance dance among "post-foraging" Nharo and ≠Au//ei San people, in its "disenchanting" acculturative context of Western education, Christianization, commoditization and rationalization ([30]; see also [3], pp. 80-81, 85-89).
Having worked primarily amongst acculturated "farm Bushmen" in my own field work, I lack the data to comment on whether or not, and to what extent, shamanic practice among the Kalahari San people I stayed amongst and had conversations with contained the same voices of doubt and scepticism about their practitioners. Lewis-Williams has recently suggested that to some extent this might have been the case among the /Xam of the north-western Cape, whose shamans, the !giten, were more differentiated in terms of their ritual skills and specializations than their Kalahari counterparts, as well as more elevated socially, with "power, privileges and position unavailable to anyone else", all of it deriving from their access to and transmission of ritual knowledge ([16], p. 205). (The arcane skills of some of the /Xam shamans included the power to transform into lions. This was described by /Xam story tellers to their interlocutors Wilhelm Bleek and Lucy Lloyd, in the 1870s, with reference to specific historical figures they themselves knew in their childhood or were told about by a parent or grandparent ([31], pp. 202-5, 267-72, 279-83; see also [32], p. 19). Some of the accounts about these nineteenth- and eighteenth-century /Xam shamans were strikingly similar in some details to what I observed in the Kalahari. These included such actions as growing neck hair when in the process of transformation, walking about at night on their "magical expeditions", and avoiding people on these expeditions lest the people mistake them for real lions and kill or injure them (with injuries retained when shape-shifting back into their human form in the morning). Folklorist Manuél de Prada Samper also found these notions in the folktales he collected amongst modern descendants of the /Xam in the Northern Cape ([33], texts 8 and 43).) In order "to convince people that their 'symbolic labour' could create conditions in which ordinary people's daily labour could be successful" ([16], p. 205), /Xam shamans also vied with each other, and failures, say at rainmaking, would be "challenged" by the people as, indeed, shamans "could attract people's wrath if their rainmaking failed" ([16], p. 189). All this created a social climate of invidious comparison amongst the shamans themselves and amongst the people, "who were not always appreciative of his [the shaman's] efforts" (ibid.), all of it conducive to scepticism about the competence, genuineness and legitimacy of certain shamans.
Turning again to the Eastern Arctic Inuit, the scepticism expressed by their ethnographer, Rasmussen, about their shamanic séances was evidently also shared by Inuit shamans. "Real shamans", one of them told Rasmussen, do not "jump about the floor and lisp all sorts of absurdities and lies in their so-called spirit language . . . to impress the ignorant. . . . It always appeared to me that they attached more weight to tricks that would astonish the audience. . . . These shamans never seemed trustworthy to me. . . . Real shamans do not need it." ([29], pp. 54-55; cited in [34], p. 106)
Animal Transformation in a Connective Cosmos
As I have recently suggested [9,10], the reason, in the San context, why "real shamans do not need it" is rooted in San cosmology, specifically its central component of ontological mutability. As I show there, one of its manifestations, transformation, is a central component of San curing and initiation ritual and of certain San hunting practices, and a pervasive theme in San mythology, art and cosmology, all of which are shot through with ontological mutability (manifested most strikingly in the therianthropes of San myth and art, which include human-lion as well as lion-human transformations). All this creates a world view of inter-connectedness, which the South African archaeologist Sven Ouzman [35] and the poet-novelist Antjie Krog refer to as a "connective cosmos" and an "interconnected world view" ([36], p. 184, quoted in [37], p. 187), respectively, and see as the defining characteristics of Khoisan symbolic culture (tracing the complex, mutually interactive and "not always predictable" strands of connectivity of all its domains in the context of a /Xam transformation myth, David Lewis-Williams ([38], p. 41) has recently substituted "entangled" for "connective" as the defining adjective for the San cosmos. Entanglement captures, even more than inter-connectedness, what I have elsewhere referred to as "tolerance for ambiguity" and what I see as the defining quality of San society, ethos and cosmology, at a social-structural and conceptual level ([10], pp. 226-37), and phenomenologically, in the way being, especially being-in-the-world, is experienced vis à vis the non-human animate and inanimate features of the dwelled-in landscape [10]). In a connective cosmos, "a boundary-less universe whose entangled people and animals move across time and space" ([39], p. 351), ontological boundaries between species are fluid and porous, and beings and states are not set each in their respective moulds but interact with and flow into each other.
Such a cosmology provides the ideological, epistemological and phenomenological underpinnings for a belief in human-animal transformation, which becomes, for people subscribing to this cosmology, a plausible proposition (for a consideration of this point in the context of northern European mythology and cosmology, an "animist cosmovision" with shamanic roots and undercurrents, focused on the bear and its therianthropic, humanimalian features, see Frank ([40], p. 114); this is one of a number of writings on this topic by the same scholar). It also allows for doubt, as this proposition is by no means unequivocal: humans are both the same as and other than animals; as noted by Eduardo Kohn in the context of the Runa, an Amazonian hunting people, "a constant tension . . . exists between ontological blurring and maintaining difference". This presents an intellectual and existential challenge for the Runa: "to find ways to maintain this tension without being pulled to either extreme" ([41], p. 12). One of these extremes, which Philippe Descola refers to as the "angst peculiar to animism" ([42], p. 286), is carnivory, given its conceptual and phenomenological equivalence, in this sort of humanimalian cosmology, with cannibalism ([10], pp. 28-38). For the San, one of these ways is for humans situationally to focus fully on the animal's ontological alterity and bracket out the component of identity, for instance when hunting, killing and butchering an animal and cooking and eating its flesh, when its comestible animal-otherness is at the forefront of people's minds, as a bon à manger rather than à penser [43]. Transformation is believed to be an arcane power and skill of some shamans, not all of them, as some have never acquired it, nor ever wanted to, and others are considered either unskilled or inexperienced. Yet others, including children, just play at transformation when miming a trance dance, at times in the early phase of its performance by adults. And, for both children and adults, transformation is experienced, at varying levels of intensity, in the many animal mimicry dances that make up most of San recreational pastimes ([9], pp. 203-22). For a pubescent girl, amongst most of the Kalahari San groups, one of the most intense and memorable experiences is her spiritual transformation into an eland, at menarche, while sitting in her seclusion hut, surrounded by old people whose own eland transformation is expressed, and at some level experienced, through a dance that mimics the actions of eland courtship behaviour ([9], pp. 179-87). Transformation, into eland and other antelope species, is a prominent feature also of San male initiation rites ([9], pp. 187-96). Hunters may experience a strong somatic connection with the prey animal, especially at the moment at which they kill it, after a protracted stalk that may include "running down" the animal and that, through the intense physical exertion and total exhaustion, may intensify the experience of "sympathetic" connectedness to the animal ([9], pp. 222-42; see also [19] and [12]).
These instances and intensities of ontological transformation place this experiential process on a phenomenological gradient from actual to virtual, ritual to recreational, liminal to ludic. They provide epistemic and experiential underpinnings for a belief in human-to-animal transformation, and vice versa, the theme of much of San myth and lore ([9], pp. 49-90), and are able to accommodate such doubts as people may entertain about shamming shamans. Indeed, the latter may themselves, their own shamming notwithstanding, hold fast to the notion of "the real shaman" and his, as opposed to their, ability to enter, through trance, an altered state of consciousness and, through transformation, an alternate state of being.
Conclusions: Human-Lion Transformation in a "Disconnective" Cosmology
In my conversations with San people in the 1960s and 1970s, I found that the San notion of human-lion transformation as a plausible proposition withstood even the growing and spreading element of doubt amongst some of today's San, derived from people's exposure to, and acceptance by some of, an alternate "disconnective" cosmology, through techno-economic, health, educational, social and political practices mediated by state or NGO bureaucrats or volunteers along western lines, or through Christian missionaries or evangelists. The scepticism arising from these outside-derived doubts did not, I found, obviate "what if" questions, or "yes but" or "maybe so" statements, about whether and how actual, as opposed to imagined and virtual, human-lion transformation is or can be. Such questions and statements are legitimate and pressing to people whose cosmology is connective and, with respect to its ontology component, allows for the notion of human-animal connectedness and transformation.
I do not know whether or not the San of Ghanzi, or anywhere else, still see and think about the world and umwelt, in particular its animals, in this way now, over half a century after I had these conversations with them. They were conversations set largely in the epistemological and phenomenological framework of a connective cosmology closely tied to hunting-gathering lifeways. These are gone, and the worldview underlying the lifeways that have displaced them, especially wage labour for adults and school attendance for children, is one of disconnectedness.
The social and existential context of this world view is one of poverty, unemployment, marginalization, discrimination and disease. The response of many San people over the last three to four decades has been resistance and assertion, of their political rights and their cultural identity. These activities have given rise to initiatives that rehabilitate and revitalize San ideational culture through "cultural heritage projects", such as collecting, transcribing and archiving local oral histories and languages as well as ritual trance dances, "traditional" recreational dances and songs, myths and lore. Some of these projects are very active and prolific (as well as, in some instances, "high tech": digitizing San myths and oral history narratives, or even converting the same into VR (virtual reality) games, and placing them on the worldwide web [44,45]), and engage San people from communities all over southern Africa [46].
One of them was undertaken by two NGO development workers, Willemien and Alison White, in the late 1990s as a collaborative project, with San interviewers from eight different San language groups who collected over two hundred stories and memorates, myths and tales from eighteen San story tellers from various San communities in southern Africa [47]. Published in a remarkable book titled Voices of the San in 2004, these narratives cover the full gamut of the cosmology and mythology, customs and ways of San traditional culture and, in the process of telling, recording and transcribing the same, both recall and to some extent restore salient features of the people's traditional lifeways and world view.
Included in that recall is the connective, animist notion of human-animal connectedness.I close this paper with one of its most explicit expressions, a memorate by the Ju/'hoan storyteller Tci!xo Tsaa from G!hoce village in Botswana, about his late father ( [47], p. 133): Things that I know about that were done by the healers is that they could turn themselves into lions when we did not have meat to eat.My father was a healer.During the night someone would turn and go hunting, and when he saw a kudu, he would kill it and return home during the same night.When he returned home, he would again change himself into a person, or he would enter his body.The following morning, he would then tell the people to go in that direction.He did not tell them his secret, about what he had done.While you were walking, he would say that he saw something there last night, but he was not sure if it was true.When we got there, we would find the kudu that he had killed.It was then skinned with great happiness.
The healer was never seen doing that; even if you were with him, you could not see him changing himself, you only saw the body lying there.When you picked him up, his body just collapsed, and you would have to leave him there, knowing he had changed himself and gone somewhere.People were not supposed to bother his body when he had changed into something else.
Funding:
This research received no external funding. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The rich data base, over 12,000 notebook pages of text, on the myths and oral traditions of the nineteenth-century /Xam San of the Northern Cape collected, transcribed
|
v3-fos-license
|
2017-07-06T11:05:02.651Z
|
2013-01-01T00:00:00.000
|
8666030
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/jev.v2i0.20304?needAccess=true",
"pdf_hash": "6c8a3096b288ca92848e39ee381c78f87dc32d34",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44253",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "765bbbd62b8542764b2c181fac92bde8754cdede",
"year": 2013
}
|
pes2o/s2orc
|
Emerging roles of extracellular vesicles in the adaptive response of tumour cells to microenvironmental stress
Cells are constantly subjected to various types of endogenous and exogenous stressful stimuli, which can cause serious and even permanent damage. The ability of a cell to sense and adapt to environmental alterations is thus vital to maintain tissue homeostasis during development and adult life. Here, we review some of the major phenotypic characteristics of the hostile tumour microenvironment and the emerging roles of extracellular vesicles in these events.
It has become widely accepted that aberrant cellular stress responses may underlie a variety of pathological conditions, including cancer (1,2). To overcome the harsh microenvironmental barriers, tumour cells activate stress response mechanisms, which in concert with resistance mechanisms to programmed cell death confer on them a growth advantage and drive tumour progression. In this context, extracellular vesicles (EVs) are of particular interest as they may constitute a novel, adaptive mechanism against stressful conditions of the tumour microenvironment (3,4). We will initially describe the complexity of the tumour microenvironment, and then discuss the potential role of EVs as mediators of tumour progression through adaptive effects to counteract microenvironmental stressors.
The tumour microenvironment
It is well-established that cancer cells do not exist in isolation but rather within a complex milieu, known as the tumour microenvironment. This intricate niche consists of multiple cell types immersed in an extracellular matrix (ECM), and it plays a fundamental role in tumour progression, as originally proposed by Paget's "seed and soil" hypothesis (5,6) (Fig. 1). During the course of tumour development, neoplastic cells actively recruit normal cells into their neighbourhood, which support malignant progression in multiple ways. In this context, endothelial cells and pericytes are of great importance, being responsible for the formation and function of the tumour vasculature (7); as are fibroblasts, which deposit an ECM and secrete both matrix-degrading enzymes and soluble growth factors (8); and cells of the immune system, which may provide an immunosuppressive and growth-promoting compartment (9). The three-dimensional organisation and architecture of a tumour mass are provided by the ECM, which, in contrast to normal matrix, is typically enriched in several proteins, such as type I collagen and heavily glycosylated glycoproteins, for example, proteoglycans. In addition, the tumour stroma regulates cellular signalling and acts as a reservoir of growth factors (10). The successful expansion of malignant tumours requires an active collaboration between malignant and stromal cells via heterotypic cellular interactions. Accordingly, malignant cells and subsidiary stromal cells communicate and exchange information by direct cell-to-cell contacts as well as the release of signalling molecules, such as soluble growth factors, ECM proteins (11) and the only recently appreciated EVs (3).
The driving forces of tumour microenvironmental evolution are genetic instability of malignant cells and environmental selection forces, which include endogenous, tumour-growth-induced stress stimuli, such as hypoxia, acidosis, starvation, oxidative stress, biomechanical stress and immunoediting, as well as exogenous stresses, for example, therapeutic interventions (12). Together, these factors select for tumour cells that acquire intrinsic and extrinsic properties to overcome microenvironmental threats and progress (13-23). Interestingly, the graded and local distribution of microenvironmental stresses in the tumour mass contributes to another feature of cancer, that is, intra-tumoral heterogeneity, which represents a major hurdle for successful large-scale tumour molecular profiling and treatment of cancer patients (24).
EVs in cancer
EVs provide a relatively new route of communication between cancer cells and various stromal cells infiltrating the tumour interstitium. Recently, numerous studies have shown that EVs affect several stages of tumour progression, including angiogenesis, escape from immune surveillance, ECM degradation and metastasis. As comprehensive reviews of EV contribution to tumour development are provided elsewhere (3,25,26), a brief overview is given below.
The primary association of EVs with cancer was noticed already in the late 1970s (27); since then, circulating EV levels have been reported to be increased in patients with cancer and may correlate with poor prognosis (28,29). Tumours are characterised by the secretion of various forms of EVs, which based on the mechanism of formation can be divided into exosomes and shed microvesicles (SMVs) (25). Exosomes are vesicles of size ranging from 30 to 100 nm in diameter, and are generated intracellularly as the so-called intraluminal vesicles (ILVs) within multivesicular bodies (MVBs) (30). The secretion of exosomes into the extracellular compartment results from membrane fusion of MVBs with the plasma membrane, which can be spontaneous or induced, for example, due to cell surface receptor activation (30-32). The mechanisms of assembly and sorting of exosomes are still ill-defined, but several key molecules have been shown to regulate this process, such as Rab11, Rab27, Rab35, p53, ceramide-neutral sphingomyelinase and syndecan-syntenin-Alix (33-38). Interestingly, vesicles enriched in classical markers of exosomes (CD63, CD81) have also been shown to bud from exosomal- and endosomal-protein-enriched subdomains of the plasma membrane of T- and erythroleukemia cell lines, providing further complexity to exosome biogenesis (39-41). SMVs comprise a heterogeneous population of vesicles larger than exosomes (>100 nm in diameter) that are generated by direct budding off from the plasma membrane (30). The release process seems to be controlled by calcium influx and localised cytoskeleton dynamics, and results from the outward budding of small, cytoplasmic, membrane-covered protrusions followed by detachment from the cell surface dependent on the action of ARF6 (42). Conversely to exosomes, the rate of steady-state release of SMVs is generally low, except for cancer cells that seem to release them constitutively. Regulated release of SMVs is efficiently induced upon activation of cell surface receptors with biological agonists (32). EVs are molecularly complex entities carrying lipids, soluble and transmembrane proteins, various RNA species and DNA sequences of retrotransposons (25,32,43). In addition, both exosomes and SMVs have been shown to enclose mitochondrial DNA (44,45); however, this concept still remains controversial. The actual molecular composition of EVs varies depending on the mechanism of formation as well as the type and functional state of the cell of origin; for example, exosomes isolated from malignant effusions of cancer patients contain tumour-specific antigens, including Her2/Neu from ovarian cancer ascites and Mart1 from patients with melanoma (46). By carrying bioactive molecules, and facilitating their cell-to-cell transfer, tumour-associated EVs can regulate a variety of cellular events in recipient cells that significantly impact tumour progression (47-51). So far, several mechanisms of transfer of EV-associated cargo have been described. Upon release from their cell of origin, many vesicles undergo membrane rupture, leading to pericellular release of their cargo (52). Alternatively, EVs can interact with plasma membrane receptors (47,53-56), or may reach the interior of target cells by plasma membrane fusion or through endocytosis. In these instances, the cargo molecules are released inside target cells, and thereby may interact with their signalling machinery (48-51).
Interestingly, some vesicles may migrate significant distances by diffusion, and ultimately enter biological fluids, such as cerebrospinal fluid, blood, saliva and urine (26). This enables long-range exchange of EV-mediated information, which is relevant in the context of pre-metastatic niche formation (57). In addition, the abundance of tumour EVs in biological fluids offers an interesting possibility to use them as non-invasive biomarkers in the management of cancer patients (26,28,29).
Given that different aspects of tumour progression are driven by stress-mediated adaptive mechanisms, it is tempting to postulate that tumours employ the EV machinery to cope with stressful conditions and to ultimately progress (Fig. 2). Below, we summarise and highlight recent findings related to this idea.
EVs in stress-induced tumour progression
Stress-mediated secretion and trafficking of EVs
Several reports have documented that cells vesiculate in response to different types of stresses, such as hypoxia (58-60), acidosis (61), oxidative stress (62-64), thermal stress (63,65,66), radiation (58), shear stress (67) and cytotoxic drugs (68). In a study by Levine and colleagues, a p53-regulated gene product, TSAP6, was shown to trigger exosome production in lung cancer cells undergoing γ-radiation (36). These in vitro data were lately corroborated in TSAP6 knockout mice exhibiting an impaired DNA damage-induced, p53-dependent exosome secretory pathway (69). Given that various stressors activate different signalling pathways, the existence of alternative, p53-independent mechanisms involved in stress-mediated exosome release may be anticipated. In support of this idea, a study by Trajkovic et al. showed that ceramide regulates the biogenesis and dynamics of ILVs destined for secretion as exosomes (37). Ceramide accumulation and a ceramide-mediated stress response occur as a reaction to various factors, such as lipopolysaccharide, interleukin-1β, tumour necrosis factor (TNF)-α, serum deprivation, irradiation and various cytotoxic drugs (70). Hence, it is tempting to speculate that cellular accumulation of ceramide triggers increased production and secretion of exosomes as an adaptation to stressful conditions. Further studies are needed to test this possibility. Interestingly, p53 has been shown to modulate intracellular ceramide levels through generation of superoxide (O2·−) in glioma cells (71), indicating that the p53 signalling pathway may additionally stimulate exosome release in a ceramide-dependent manner.
Emerging findings suggest that oncogenes, such as epidermal growth factor receptor (EGFR) or its mutant EGFR variant III (EGFRvIII), hypoxia-inducible factor (HIF)-1α and K-ras, may trigger the release of EVs from cancer cells (49,60,72,73). While HIF-1α has been shown to mediate hypoxia-dependent secretion of exosomes in breast cancer cells (60), the contribution of EGFR and K-ras to stress-mediated cellular vesiculation is unknown. A recent report by Wang and colleagues, showing that hypoxia activates EGFR through HIF-dependent formation of caveolae, provides a potential mechanism for HIF-mediated vesiculation in response to hypoxia (74). Further studies are required to verify whether a CAV1/EGFR signalling axis and/or other components of the HIF signalling pathway regulate EV secretion.
The tumour-promoting activities of EVs are highly dependent on efficient transfer to recipient cells, as described earlier in this review. Based on recent findings, stressful conditions of the tumour milieu may modulate some of the transfer mechanisms; for example, vascular endothelial growth factor (VEGF), interleukin-1β and FasL were shown to reside inside vesicles, and to evoke tumour-promoting activities only when liberated upon disruption of vesicle membrane integrity (52,75,76). How vesicle membrane disruption occurs in vivo is still ill-defined; however, Taraboletti et al. recently provided in vitro data showing that acidosis triggers the rupture of tumour-derived SMVs, resulting in VEGF release and enhanced endothelial cell migration (52). In addition to releasing the soluble content of EVs, low pH may activate EV-associated cargo. In support of this notion, a study by Giusti et al. suggested that ovarian cancer cells release SMVs containing the inactive proenzyme cathepsin B, that is, a cysteine proteinase that facilitates tumour invasion via ECM degradation (77). The tumour-promoting activities of SMV-associated cathepsin B may occur specifically in acidic compartments of the tumour milieu, as it becomes activated at low pH (77). Acidosis was further suggested to increase the uptake of tumour-derived exosomes through fusion with the plasma membrane of recipient cells. High rigidity and sphingomyelin/ganglioside GM3 content in exosomes released at low pH were likely responsible for the increased fusion efficiency (61). Collectively, these studies indicate that acidosis, which is a hallmark of the tumour microenvironment, plays a key role in modulating tumour-promoting activities of EVs by altering their activity and trafficking. The specificity of vesicular cross-talk may also be provided by stress-mediated modulation of receptor-ligand interactions. In support of this idea, hypoxic cancer cells were shown to release tissue factor (TF)/VIIa-bearing SMVs that stimulated protease-activated receptor type 2 (PAR-2) induced by hypoxia on target cells. EV-dependent PAR-2 activation resulted in increased secretion of the pro-angiogenic heparin-binding EGF-like growth factor (HB-EGF) from hypoxic endothelial cells (47).
[Figure 2 caption: EVs are shed from various cellular components of the tumour milieu to mediate exchange of signalling proteins and genetic material, which altogether may support tumour growth and progression. Diverse tumour microenvironmental stress conditions augment tumour-promoting activities of EVs by modulating their secretion and trafficking in the extracellular space, as well as altering their molecular content and functional activity. Upon release, EVs may also enter the circulation and mediate long-range exchange of EV-associated cargo that may support the process of pre-metastatic niche formation. In addition, circulating EVs carrying multifaceted molecular stress signatures may offer unique, non-invasive biomarkers that can be used in the management of cancer patients.]
Microenvironmental stressors modulate the molecular composition of EVs
Extensive analyses by various techniques have partially decoded the molecular composition of EVs derived from various types of cells, including tumour cells (51,78-81). Any phenotypic changes imposed by stressful conditions on a parental cell may affect the content and function of shed EVs. Accordingly, thermal and oxidative stress imposed on leukaemia/lymphoma T and B cells were shown to induce the release of exosomes enriched in Natural Killer Group 2, member D (NKG2D) ligands, such as MICA/B and ULBP1 and 2, which provided them with immunosuppressive properties (63). Further, exposure of aggressive B-cell lymphoma cells to the anti-CD20 chimeric antibody rituximab resulted in the secretion of CD20-positive exosomes, which in turn protected lymphoma cells from antibody- and complement-dependent cytolysis (82). Finally, treatment of cancer cells with cytotoxic drugs, radiation and hypoxia caused the release of EVs enriched in heat shock proteins (HSPs) (68), anti-apoptotic survivin (83) and pro-coagulant TF (47), respectively. Along with the use of robust proteomic approaches, it has become evident that stress-mediated modifications of EV content are more profound than primarily anticipated. In accordance with this, hypoxic epidermoid carcinoma cells were shown to secrete numerous proteins that have the potential to modulate the tumour microenvironment, and that were partly enriched in exosomes (84). However, the significance of exosomes in mediating hypoxia-dependent angiogenesis and tumour development remains to be determined. Profound differences in the EV-associated protein cargo were observed between EVs derived from primary (SW480) and metastatic (SW620) colorectal cancer cells. Accordingly, SW480-derived EVs contained proteins involved in cell adhesion, whereas SW620 EV-enriched proteins were associated with cancer progression, invasion, metastasis and multidrug resistance (79). Since highly metastatic cell phenotypes result from the cumulative effects of various stress conditions imposed on tumour cells, these data provide indirect proof that cellular stress modulates EV-associated cargo and potentially EV function. Although it was not pursued whether EVs from primary and metastatic cancer cells serve different functional goals in the metastatic process, the data provide important information for future studies addressing these questions.
Lipids constitute yet another type of EV-associated cargo that may undergo changes in response to cellular stress responses. Membrane biophysical analyses provided in a study by Parolini et al. showed that exosomes from melanoma cells cultured under acidic conditions were characterised by increased membrane rigidity as compared to vesicles from control cells (61). These changes were related to altered lipid composition, as acidic exosomes were enriched in ganglioside GM3 and sphingomyelin. As a consequence of these lipid modifications, acidic exosomes fused more efficiently with the plasma membrane of recipient cells, resulting in enhanced transfer of various signalling molecules (61).
As with lipids, few studies have addressed stress-mediated changes in the RNA content of tumour-associated EVs. So far, it has been shown that breast cancer cells exposed to hypoxia secrete exosomes with increased levels of miR-210, pointing at the potential for qualitative differences between normoxic and hypoxic exosomes (60). Further, a more robust miRNA analysis performed in a study by Hergenreider et al. demonstrated that endothelial cells treated with shear stress secrete exosomes enriched in miR-143 and miR-145. Interestingly, these changes in miRNA contents provided exosomes with atheroprotective properties (85). Since the discovery of mRNAs as molecular cargo of EVs (80), there has, to our knowledge, been no comprehensive analysis to elucidate how stress conditions modulate the mRNA content of tumour-associated EVs. However, some indirect conclusions can be drawn from studies investigating the molecular composition of EVs derived from non-malignant cells. Accordingly, oxidative stress in mast cells was shown to induce massive changes in the exosome-associated mRNA content (62). Similarly, endothelial cells secreted exosomes enriched in mRNAs and proteins specific for stress conditions imposed on donor cells (86). Overall, these results suggest that environmental stress conditions evoke alterations in the protein, lipid and RNA content of EVs. Consequently, tumour-associated vesicles acquire new biological functions in the tumour microenvironment; whether these changes promote tumour development remains to be conclusively shown.
Given that stress conditions of the tumour milieu mediate tumour progression, EV-associated molecular stress signatures may offer a great opportunity for the development of prognostic and predictive biomarkers in the management of cancer patients. In support of this concept, an increasing number of studies suggest that circulating EVs are constantly released from the tumour to reflect the dynamic nature of cancer and are accessible for repeated isolation from body fluids (28,29,51,87-89).
EVs as conveyors of stress-mediated tumour progression
According to general wisdom, hypoxic tumour cells secrete a plethora of soluble factors, for example, VEGF-A, into the extracellular space, which collectively activate endothelial cells, and thus induce hypoxia-driven tumour angiogenesis. We recently showed that hypoxic glioma cells release TF/VIIa-bearing EVs that specifically trigger upregulated PAR-2 on hypoxic vascular endothelial cells (47). Similarly, hypoxia-mediated induction of TF-bearing SMVs was observed in ovarian cancer cells, indicating that tumour EVs are vehicles of TF-dependent tumour progression through clotting-dependent and independent mechanisms in the hypoxic tumour niche (90,91). As described above, hypoxic breast cancer cells were shown to secrete exosomes enriched in miR-210 (60). Given that miR-210 is a well-established target of HIF signalling and plays important roles in the regulation of cell growth, angiogenesis and apoptosis, exosome-mediated transfer of miR-210 within the tumour milieu may contribute to hypoxia-driven tumour progression (92).
Stromal cells similarly to cancer cells may respond to stress-related conditions within the tumour microenvironment by secretion of EVs, that is, mesenchymal stem cells stimulated by hypoxia were shown to release microvesicles with angiogenic effects (59). Further, both biomechanical forces and oxidative stress were shown to trigger the secretion of pro-coagulant SMVs from platelets (67,93). Cancer-associated hypercoagulability and its tumour-promoting activities may thus be triggered by EVs secreted from cancer cells as well as auxiliary cells in response to phenotypic characteristics of the tumour microenvironment. Finally, oxidative stress seems to significantly enhance the release of exosome-associated HSP70 from arterial endothelial cells, which in turn activates monocyte adhesion to the endothelium. Hence, it may be speculated that stress-induced secretion of HSP70-bearing exosomes from tumour-associated endothelial cells could stimulate monocyte recruitment to tumours (64).
Accumulating data suggest a link between stress conditions of the tumour milieu and immunological tolerance of tumours (94). However, the mechanisms underlying this process are still ill-defined. Cancer cells may employ vesiculation as a strategy to efficiently blunt immune surveillance mechanisms, and survive in this hostile environment (32). Interestingly, a recent study by Hedlund et al. suggests that oxidative stress imposed on tumour cells triggers the release of NKG2DL-expressing tumour exosomes, which mediate tumour escape from cytotoxic immune attack (63). Moreover, tumour cells may evade complement-induced lysis by SMV-mediated shedding of terminal components of complement from the plasma membrane (95). This mechanism, called ''complement resistance'', may provide protection of tumour cells from antibody-mediated immune attack. Exosomes from various cancer cells were shown to expose Fas ligand (FasL, CD95L) of the death receptor Fas (CD95), which induces T-cell apoptosis and attenuates the function of adaptive immune cells (96,97). Tumour-associated EVs may also promote the function of regulatory T (T Reg ) cells (98,99), impair natural cytotoxic responses mediated by natural killer cells (63,100), down-regulate dendritic cell differentiation from monocytes (101) and instead turn these cells into myeloid immunosuppressive cells (101,102). Finally, cancer cells can fuse with EVs derived from non-cancer cells, for example, platelets, thereby receiving lipids and transmembrane proteins allowing escape from immune system attack (103).
Another interesting area of research is related to the potential role of EVs in tumour resistance to various anti-cancer therapies, such as chemotherapy, immunotherapy and radiation. Recent findings suggest that chemoresistance may result from expulsion of therapeutic drugs from tumour cells through EVs. In support of this concept, cancer cells treated with doxorubicin accumulated the drug in, and released it through, SMVs (104). Convincing evidence comes from a study by Safaei and colleagues, showing that exosomes released from cisplatin-resistant cells contained 2.6-fold more platinum than exosomes released from cisplatin-sensitive cells (105). Exosomes may also function to neutralise antibody-based drugs; HER2-overexpressing breast carcinoma cell lines were shown to secrete exosomes enriched in full-length HER2 protein, resulting in sequestration of the HER2 antibody trastuzumab. As a result, the antiproliferative activity of trastuzumab in cancer cells was abolished (106). A similar evasion mechanism was observed for B-cell lymphoma cells treated with the CD20 antibody rituximab (82). EVs may also provide tumour cells with radiation resistance; Khan et al. showed that cervical carcinoma cells subjected to a sublethal dose of proton irradiation secreted exosomes enriched in the anti-apoptotic protein survivin (83). These data are in support of the concept that the EV pathway is involved in cancer cell self-protection under stressful conditions.
Intercellular communication of stress via EVs
It is now well established that directly irradiated cells elicit a plethora of biological effects in neighbouring cells. This so-called radiation-induced bystander effect (RIBE) manifests in various ways, including genomic instability, a variety of damage-inducible stress responses and apoptosis (107). Interestingly, this cross-talk may be protective, and non-irradiated cells can acquire properties that prepare them for possible future stresses. In support of this concept, it has been shown that human glioblastoma cells with a functional TP53 gene exhibited increased radioresistance when co-incubated with irradiated cells of the same line transfected with a mutated TP53 gene, or when incubated with the conditioned medium from irradiated cells. The protective effects in bystander cells exposed to the subsequent challenge were explained by nitric oxide-mediated accumulation of HSP72 and p53 protein (108). Based on a recent article by Dickey et al., the cellular machinery required to induce the RIBE might also be used to transmit signals to neighbouring cells following exposure to other forms of stress, both exogenous and endogenous (109). So far, the mechanisms eliciting transfer of bystander signals involve direct cell-to-cell contact mediated by gap junctions, and indirect communication by means of soluble factors released into the extracellular space (110). In this context, EVs harbouring stress-derived molecular cargo may provide a new route of intercellular communication involved in the stress-mediated bystander effect. In support of this concept, a recent study by Wang et al. demonstrated that treatment of platelets with oxidised low-density lipoproteins resulted in secretion of microvesicles, which amplified oxidative stress in recipient platelets and evoked pro-coagulant effects (93). In addition, it has been shown that mast cells exposed to oxidative stress may release exosomes with the capacity to communicate a protective signal and to induce tolerance to oxidative stress in recipient cells (62). Hence, various types of stressors may induce EV-mediated preconditioning that prepares various cells of the tumour milieu to survive and recover from subsequent severe, otherwise lethal circumstances. This EV-mediated preconditioning effect may play important roles in tumour progression by providing resistance to various forms of stress.
Conclusions and future directions
EVs provide an attractive signalling organelle for the demonstration of impressive functional effects in various biological systems; however, due to the complexity and heterogeneity of EV composition, deciphering the exact mechanisms behind functional data poses a great challenge. Future research is clearly warranted to understand how hypoxia and other microenvironmental stressors affect EV trafficking in the tumour microenvironment and how stress-mediated changes of recipient cells modulate their responsiveness to EVs. Such studies should significantly advance our general understanding of tumour biology and provide novel therapeutic strategies in the fight against cancer.
Conflict of interest and funding
The work has been supported by grants from the Swedish Cancer Fund; the Swedish Research Council; the Swedish Society of Medicine; the Physiographic Society, Lund; the Gunnar Nilsson and Kamprad Foundations; the Lund University Hospital donation funds; and the Governmental funding of clinical research within the national health services (ALF).
|
v3-fos-license
|
2022-02-12T16:25:26.919Z
|
2022-01-01T00:00:00.000
|
246765570
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2022/05/shsconf_eccw2022_01004.pdf",
"pdf_hash": "40f53d8e388889d29495415300809a4091397434",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44254",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"sha1": "4c1006756caa25e4a9651112be8fcd3a00f44f52",
"year": 2022
}
|
pes2o/s2orc
|
Digital Transformation in HR
The onset of the fourth industrial revolution and the shortage of human capital put pressure on the development of companies' own staff in terms of improving their qualifications. In the last four years, new methods have therefore been introduced in the management of staff development. The paper builds on the available literature on the theory and digital transformation of talent management and recruiting and puts it into the context of requirements arising from the principles of the fourth industrial revolution. Using matrix analysis, it examines the degree to which the available implementation options comply with these requirements. The analysis leads to the conclusion that the requirements are best met by the method of exactly describing partial tasks in the working process, including their critical features, and subsequently assigning them to workers according to the degree of compliance with their skills by means of combinatorics. This method thus also enables the management of worker development through the assignment of appropriate tasks.
Introduction
With the onset of the Fourth Industrial Revolution, generally known as Industry 4.0, new requirements have emerged which literally force production companies to reconsider their current approaches to processes so that they are able to survive in an environment of ever-increasing competition, whose success is based on automation tools, AI and marketing using social networks. These new challenges concern both production and supporting processes. In the area of human resources management, the changes are reflected mainly in a changed approach to staff development. At a time when the unemployment rate is extremely low, the labour market does not offer enough qualified workers. Companies thus need to obtain them by training low-qualified or unqualified human capital using various growth and training programmes. In the last three years, there has been a lack of even unqualified workers. They are often replaced by labour from economically less developed countries, which usually have a high unemployment rate.
The focus of HR managers' efforts to improve the qualification of human resources must therefore shift dynamically given the current situation in the human capital market. Today we can thus see a shift from recruiting experts to working with existing human capital. Companies are aware of the fact that their potential success is generated mainly by appropriately allocated and qualified staff; there is thus a literal renaissance of skills monitoring, the so-called talent management. In the long run, these transformation processes are driven by two main aspects: demographic and technological [1]. The demographic aspect refers mainly to the increasing average life expectancy. Combined with the ever-increasing onset of automation technologies, it will bring an increase in unqualified labour in the labour market. In the USA, the world's strongest economy, unqualified labour accounts for 47 % of all working people. About 60 % of working Americans perform tasks of which up to 30 % are subject to automation even now [1]. It is thus clear that organizations will have to succeed in the monitoring, management and development of employee skills. They must be able to allocate a sufficient number of qualified workers to management positions and fill key medium-skilled positions with their own workers on the basis of their transformation from less qualified positions, while unqualified labour will be gradually reduced as a result of the Fourth Industrial Revolution.
A question thus arises of how to cope with this emerging pressure and work effectively with the available human capital: how to monitor the skills and competencies of employees, how to measure and develop them, and how to properly interpret the effects of growth or decline in skills. The specific contribution of this paper is the effort to find a comprehensive system that would facilitate and enable the mapping and development of employee skills using modern database system tools.
Literature review
The 21st century is a period of innovative technologies and digitization. The transformation of HRM by means of digital tools puts external pressure on changes in human behaviour, skills and competencies in organizations [2]. Technological development enables the streamlining of production processes and brings changes in human labour, which might threaten employee wellbeing and challenge their existing skills and knowledge [3]. Claus [1] addressed the need for introducing talent management due to the unfavourable development in the human capital market. Applying scientific knowledge from other disciplines closely related to HR, the author identified the need to introduce talent management in order to remain competitive in the context of the fourth industrial revolution. Mothe and Nguyen-Thi [4] analysed the relationship between the age diversity of employees and the degree of technological innovation in a company. On the basis of in-depth interviews with top managers, Whysall, Owtram and Brittain [5] conclude that the fast pace of technological change has created a gap between employee skills and the dynamically changing requirements of their roles. This is confirmed by Dahlbom, Siikanen, Sajasalo and Jervenpää [6], who use qualitative interviews to describe the need to interconnect data analysts and HR specialists, who usually do not have analytical skills. The application of HRA (human resources analysis) should also have a positive impact on reducing the subjectivity of employee evaluation, which may result in an increased willingness to optimize work performance, as confirmed by Sharma and Sharma [7] through an analysis of the available professional literature in the fields of HR and HRA. Zhou, Liu, Chang and Wang [8] use a questionnaire survey implemented on a sample of 211 (mostly manufacturing) companies operating in China, the world's largest labour market, to describe the relationship between economic advance, the digitization of HRM (human resources management) and improving company performance. The analysis of the obtained data indicates the existence of a positive correlation between a company's performance and an advanced digitized HRM system. Agarwal and Maurya [9] examine the relationship between the perceived quality and power of a corporate brand and the degree of development of talent management in the organization. Using a questionnaire survey and subsequent regression and correlation analysis, they confirm the existence of a strong dependence between the degree of development of talent management and a positive perception of the organization by the public. Jones, Hutcheson and Camba [10] argue that the current Covid-19 pandemic can positively contribute to the development and digitization of systems (not only) in the field of HR. Companies were often forced to start investing in the digitization of their processes, mainly because of work from home.
It follows from the above findings that, in the context of the Fourth Industrial Revolution, organizations are called upon to integrate HRA, talent management and skills management into their everyday activities. However, only a few publications have addressed this integration. Nunes, Pinto and Sousa [11,12] propose a development framework for employee technical skills based on a structured training system, whose effectiveness is then verified by monitoring the OEE (overall equipment effectiveness) indicator. Nunes, Pinto and Sousa [11,12] also present a method of improving the competencies and skills of production employees through continuous improvement of production process quality; applying the QC Story method (a process of controlling and improving quality in production), followed by an analysis of key indicators and a questionnaire survey, they conclude that a focus on quality improvement leads, in the long run, both to better key process indicators and to higher qualification, i.e. better employee skills and knowledge. Anh and Lee [13] propose using the knowledge and skills of current employees as a model profile of a suitable job applicant: mapping and analysing the skills of excellent employees yields a model profile of an ideal potential job seeker, and analytical AI tools are used to build the model and verify a candidate's compliance with it. The system exploits specific practical knowledge to recruit employees whose integration into the work process will be seamless; however, it provides no further development of such employees. Stadnicka, Arkhipov, Battaïa and Chandima [14] apply a model of knowledge and skills to the optimization of the aircraft maintenance process, a process critical in terms of quality and safety. By analysing the steps involved in repairing a transport aircraft and assigning to each step the specific technical and safety skills and knowledge it requires, they provide a comprehensive overview of the requirements placed on a worker suitable for a given critical task. To assign a task to a specific employee, the so-called Hall's marriage theorem is used, i.e. a combinatorial method matching the task requirements with the skills of individual workers (a minimal sketch of such a matching is given below). Kataoka et al. [15] focus on skills management as a tool to improve the skills required of students during practical training. Their model is implemented in three steps: first, the expected skills are defined across all activities of the production process; second, verification methods and evaluation criteria are selected; and finally the model is embedded in the syllabus of a specific training course. The training framework takes the form of experiential learning: the course is repeated cyclically and its outputs are analysed until the individual reaches the required skill level.
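The assignment step described in [14] can be illustrated with a small bipartite-matching sketch. The worker names, skills and task requirements below are hypothetical, and the augmenting-path routine is a generic maximum-matching algorithm rather than the authors' exact implementation; it merely shows how tasks can be matched to workers whose skill sets cover the task requirements (Hall's condition holds exactly when every task can be matched).

```python
# Illustrative only: hypothetical tasks, workers and skills.
# A task can be assigned to a worker whose skill set covers all of the
# task's required skills; a maximum bipartite matching is then found
# with the classic augmenting-path method.

TASKS = {
    "riveting_check": {"riveting", "safety_cert"},
    "hydraulics_test": {"hydraulics", "safety_cert"},
    "cabin_inspection": {"inspection"},
}

WORKERS = {
    "worker_A": {"riveting", "safety_cert", "inspection"},
    "worker_B": {"hydraulics", "safety_cert"},
    "worker_C": {"inspection"},
}

def eligible(task_req, worker_skills):
    """A worker is eligible if they hold every required skill."""
    return task_req <= worker_skills

def try_assign(task, visited, match):
    """Augmenting-path step: try to (re)assign `task` to some worker."""
    for worker, skills in WORKERS.items():
        if worker in visited or not eligible(TASKS[task], skills):
            continue
        visited.add(worker)
        # Worker is free, or their current task can be moved elsewhere.
        if worker not in match or try_assign(match[worker], visited, match):
            match[worker] = task
            return True
    return False

def assign_all():
    match = {}          # worker -> task
    unassigned = []
    for task in TASKS:
        if not try_assign(task, set(), match):
            unassigned.append(task)
    return match, unassigned

if __name__ == "__main__":
    match, unassigned = assign_all()
    for worker, task in match.items():
        print(f"{task} -> {worker}")
    # Hall's condition fails exactly when some task stays unassigned.
    print("unassigned:", unassigned or "none")
```

In this toy example the inspection task is reassigned from worker_A to worker_C so that worker_A remains free for the riveting check, which only worker_A is qualified to perform.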
The available publications mostly address a specific, narrow field of application for a system of mapping and developing employee skills. A comprehensive system applicable across processes and industries has not been described.
Methodology
A study of the available professional publications suggests a global need to monitor employee skills and knowledge so that they can be managed and developed, mainly in the case of low-skilled or unskilled positions. A matrix analysis is used to compare four selected systems of employee skills monitoring and their applicability to a manufacturing company. The selection criteria reflect the needs of the coming Fourth Industrial Revolution and the digital age. For the purpose of analysing the available professional publications, the following research question was formulated: Which of the available methods best meets the requirements of the coming Fourth Industrial Revolution?
The input alternatives evaluated in the matrix analysis are: the system of Nunes, Pinto and Sousa [11,12], i.e. the application of a training framework with subsequent verification of its effects on the production process by monitoring the OEE indicator; the system of Stadnicka, Arkhipov, Battaïa and Chandima [14], i.e. the analysis of partial process tasks and the subsequent assignment of a suitable worker using Hall's marriage theorem; the method of Anh and Lee [13], i.e. the creation of an ideal job seeker's profile by analysing the skills of excellent workers; and the second method of Nunes, Pinto and Sousa [11,12], i.e. employee development through the QC Story method, that is, continuous improvement of production process quality.
The available alternatives are compared against the following criteria:
a) multidisciplinary applicability - the method is applicable across all production and non-production processes regardless of the field in which the organization operates
b) forced costs - implementation of the method does not incur additional costs, in particular the cost of acquiring the software in which the method is implemented
c) need for external IT experts (implementation and maintenance autonomy) - implementation does not require hiring IT experts; the organization can manage the implementation itself and thereby develop its own IT skills
d) permanent sustainability - the methodology is repeatedly applicable and reflects current developments in IT and in the automation of production processes in the context of Industry 4.0
e) focus on personal growth - the method supports the development and management of an individual's talent and can be used as motivation linked to the corporate remuneration system
f) comprehensibility at all organizational levels - all employees in the organization understand the basis of the methodology, what is being monitored, and for what purpose
g) reduced evaluation subjectivity - the methodology is transparent enough not to diminish employee trust and personal engagement
The strength of the relationship between a variant and an evaluation criterion is expressed at four levels (strong, medium, weak, and no relationship), where no relationship represents minimal or no match between the solution variant and the required criterion, while a strong relationship represents a complete match. For the criteria forced costs and need for external IT experts, the scale is reversed: the higher the costs or the need for experts, the weaker the relationship and the lower the degree of match with the criterion. The strength of the relationship is then converted into a point scale (0, 5, 10, 15 points), and summing the points identifies the most suitable method, i.e. the one with the highest degree of match across the evaluation criteria (a minimal illustration of this scoring is given below). Table 1 shows the matrix analysis, i.e. the strength of the relationship between the selection criteria on the horizontal axis and the solution variants on the vertical axis; each strength was assigned on the scale from no relationship to strong relationship on the basis of the knowledge obtained from the respective papers. Table 2 shows the corresponding numerical values (0-15 points), with the overall sum of points in the last column. The reviewed methods agree on the criterion of permanent sustainability, where three of them show a strong relationship; by contrast, there is a weak or no relationship for the criterion of reduced evaluation subjectivity. It can thus be concluded that the methods are applicable in the context of the Fourth Industrial Revolution, but their transparency needs to be developed so as to reduce the subjectivity of employee evaluation, especially if the evaluation is linked to the remuneration system. The criterion of forced costs also yields similar results across all implementation schemes, which can be interpreted as a need for high initial implementation costs, mainly due to dynamically changing requirements combined with a lack of HR staff skills, as confirmed by Dahlbom, Siikanen, Sajasalo and Jervenpää [6]. Particularly high implementation costs can be expected for implementation scheme No. 3 by Anh and Lee [13], which compares a candidate's compliance with the profile of an ideal worker using analytical AI tools; autonomous implementation and administration by the organization cannot be assumed there. By contrast, the second implementation scheme shows the best results when compared with the requirements of the Fourth Industrial Revolution, as it achieves at least a medium-strength relationship for five of the seven criteria. The total sum of points therefore indicates method No. 2, the system developed by Stadnicka, Arkhipov, Battaïa and Chandima [14], based on the analysis of individual partial tasks and their subsequent assignment to individual workers according to their skills and knowledge, as the method that best meets the requirements arising from the coming Fourth Industrial Revolution. The system is applicable across industries and processes, and its implementation and management do not require the participation of IT experts.
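As an illustration of the scoring described above, the sketch below converts the four relationship levels into the 0-15 point scale and sums them per variant. The variant names and the example ratings are hypothetical placeholders, not the values reported in Tables 1 and 2.

```python
# Illustrative scoring of a relationship-strength matrix.
# Ratings below are placeholders, not the values of Tables 1 and 2.

POINTS = {"none": 0, "weak": 5, "medium": 10, "strong": 15}

CRITERIA = [
    "multidisciplinary applicability", "forced costs",
    "need for external IT experts", "permanent sustainability",
    "focus on personal growth", "comprehensibility", "reduced subjectivity",
]

# variant -> rating per criterion (same order as CRITERIA)
RATINGS = {
    "variant_1_training_framework": ["medium", "weak", "medium", "strong", "weak", "weak", "none"],
    "variant_2_task_assignment":    ["strong", "weak", "medium", "strong", "strong", "medium", "weak"],
    "variant_3_ideal_profile":      ["medium", "none", "none", "strong", "strong", "weak", "weak"],
    "variant_4_qc_story":           ["weak", "weak", "medium", "medium", "weak", "weak", "none"],
}

def score(ratings):
    """Sum the point values of a variant's ratings across all criteria."""
    return sum(POINTS[r] for r in ratings)

totals = {variant: score(r) for variant, r in RATINGS.items()}
best = max(totals, key=totals.get)

for variant, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{variant}: {total} points")
print("most suitable variant:", best)
```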
The system is sustainable and sufficiently flexible, it focuses on the development of individuals, and its transparency helps to reduce evaluation subjectivity. Assigning specific tasks to employees on the basis of a detailed analysis of the task requirements and of employee skills also appears to be the most suitable available solution, since it allows employee development to be managed by assigning tasks of varying difficulty and criticality. Its weakness lies in overall transparency, and hence in building trust: the exact assignment of a task to a suitable worker requires a combinatorial method, the so-called Hall's marriage theorem, which is not easy for employees to follow.
Results
The last place in the comparison is occupied by the systems of Nunes, Pinto and Sousa [11,12]. These systems show the lowest degree of compliance, especially with the criteria of focus on the individual and overall transparency. Development is mapped by monitoring indicators that are not directly related to employee development; employee development is a secondary effect of improving production quality or increasing the utilization of machinery. The system is thus not fully transparent for employees, and the question arises of what share individual employees have in improving the indicators. It is therefore not a suitable method of skills mapping with a potential link to the organization's remuneration system.
The system developed by Anh and Lee [13] shows the highest possible degree of compliance with the criterion of focus on the individual, since it creates the profile of an ideal employee from actual knowledge gained in practice and then seeks a candidate with the highest degree of compliance with that profile. Its main disadvantages are the very complex creation of the profile, which requires the participation of external IT experts as well as high additional software acquisition costs, and its single-purpose use in the recruitment stage only.
Conclusion
The objective of the paper was to determine how to meet the requirements of the HR system transformation, especially in the field of employee development, in the context of the Fourth Industrial Revolution and the coming digital transformation. For this purpose, the available professional texts on the topic were analysed. From these texts, four specific implementation schemes were selected and assessed against seven criteria reflecting the basic ideas of the Fourth Industrial Revolution and digital transformation, namely multidisciplinary applicability, minimal costs, a reduced need for external experts, permanent sustainability of the system, focus on the growth of individuals, transparency of the whole system, and reduced subjectivity in evaluating employees. The strength of the relationship between each implementation scheme and each evaluation criterion was classified into four levels (strong, medium, weak, and no relationship), which were assigned points on a scale of 0-15. A matrix analysis of the strength of the relationships between the HR-related requirements of the Fourth Industrial Revolution and the available solutions was then performed. Summing the points shows that these requirements are best met by implementation scheme No. 2, by Stadnicka, Arkhipov, Battaïa and Chandima [14]: a system based on exact knowledge of the partial steps of a task, including the definition of critical features, and the subsequent assignment of tasks to individual operators using a combinatorial method, the so-called Hall's marriage theorem. However, this way of assigning a task to a specific employee may pose pitfalls when implemented in a small company, owing to its complexity and the lower transparency of the system. After the exact analysis of the partial steps of the working process, the question remains of how to find a suitable worker in a form that the employees themselves can understand. There is therefore room for further research aimed at improving the transparency of the system across the organizational levels of the company in order to increase its trustworthiness.
the HIV-1 genome
Although two strand transfer events are indispensable for the synthesis of double-stranded DNA and the establishment of HIV-1 infection, the molecular basis of these phenomena is still unclear. The first obligatory template switching event occurs at the very beginning of the virus replication cycle and involves two copies of the 97-nucleotide-long R region, located one at each end of the HIV-1 genome (HIV-1 R). One can therefore expect the molecular mechanism of this process to resemble the mechanism of homologous recombination operating in RNA viruses. To verify this hypothesis, we attempted to assess the recombination activity of HIV-1 R. To this end, we tested in vitro how effectively it induces template switching by HIV-1 RT in comparison with another well-characterized sequence supporting frequent homologous crossovers in an unrelated virus (the R region derived from Brome mosaic virus, BMV R). We also examined if the RNA sequences neighboring HIV-1 R influence its recombination activity. Finally, we tested if HIV-1 R could cause the BMV polymerase complex to switch between RNA templates in vivo. Overall, our results reveal a relatively low recombination activity of HIV-1 R compared with BMV R. This observation suggests that different factors modulate the efficiency of the first obligatory strand transfer in HIV-1 and of homology-driven recombination in RNA viruses.
INTRODUCTION
In retroviruses, template switching events are inseparable elements of the replication cycle (Holland et al., 1992). At least two strand transfers are required to convert the HIV-1 (family Retroviridae, subfamily Orthoretrovirinae, genus Lentivirus, species Human immunodeficiency virus 1) single-stranded genomic RNA into functional double-stranded DNA that is subsequently integrated into the host genome as proviral DNA. Template switching events are also indispensable elements of RNA recombination, which commonly occurs in all types of RNA viruses and retroviruses and is considered one of the main driving forces of their rapid evolution (Chare & Holmes, 2006; Urbanowicz et al., 2005; Figlerowicz et al., 2003; Borja et al., 1999; Worobey & Holmes, 1999; Nagy & Simon, 1997; Greene & Allison, 1996, 1994; Lai, 1992; Strauss & Strauss, 1988). There are several lines of evidence that template switching by the viral polymerase (a copy-choice mechanism) is responsible for the creation of the numerous recombinant forms of HIV-1 circulating worldwide and for the existence of HIV-1 quasi-species (Smyth et al., 2012; Fisher et al., 2010; Powell et al., 2010). The copy-choice model of RNA recombination assumes that two factors are of special importance: (i) the template switching capacity of the viral polymerase and (ii) the presence of recombinationally active sequences within the donor and acceptor templates. Accordingly, it has been demonstrated that one can affect the crossover frequency as well as the location of recombinant junction sites by introducing specific mutations either in the viral polymerase (Alejska et al., 2001; Figlerowicz & Bujarski, 1998; Figlerowicz et al., 1998, 1997; Cornelissen et al., 1997) or in the donor and acceptor RNA molecules (Alejska et al., 2005; Nagy & Bujarski, 1998, 1997). So far only a few studies comparing the recombination activity of different viruses have been carried out (Chare & Holmes, 2006; Alejska et al., 2005; Shapka & Nagy, 2004; Cheng & Nagy, 2003). Consequently, the question whether there are any general rules controlling RNA recombination remains open.
In this paper, we focused on the first obligatory strand transfer involving the two copies of the HIV-1 R region placed one each at the 5′ and 3′ ends of the viral genomic RNA (Fig. 1A). After reverse transcription, the R regions become part of the 5′ and 3′ long terminal repeats (5′ and 3′ LTRs), which enable the integration of the viral double-stranded DNA (dsDNA) into the host genome. After the integration, the 5′ LTR serves as a promoter binding transcription factors and other regulatory proteins indispensable for the expression of HIV-1 genes. Under natural conditions, the first obligatory strand transfer occurs just at the beginning of the HIV-1 replication cycle. This process is initiated when the virus-encoded reverse transcriptase (RT) starts minus-strand DNA synthesis using host tRNA-Lys3 as a primer. The primer binds to the PBS (Fig. 1A) located about 180 nucleotides from the 5′ end of the RNA genome, and RT undertakes the synthesis of the so-called minus-strand strong-stop DNA (Marquet et al., 1995). After reaching the 5′ end of the genome, RT most probably dissociates from the template and the single-stranded DNA (ssDNA) is transferred to the 3′ end of the RNA genome (Muchiri et al., 2011; Berkhout et al., 2001). Upon the strand transfer, RT continues the synthesis until the full-length minus DNA strand is completed. The first strand transfer, also called the minus-strand transfer, involves hybridization between the ssDNA and the 5′- as well as the 3′-terminal R region (Berkhout et al., 2001) (Fig. 1A). In addition to the local homology, the other factors considered necessary for the first strand transfer are the DNA polymerase and RNase H activities of HIV-1 RT. Owing to the RNase H activity of HIV-1 RT, the copied RNA template is cleaved and removed from the nascent DNA. As a result, the unpaired ssDNA fragment is available for hybridization with the acceptor template (the 3′ R region). Another important element involved in the first strand transfer is the HIV-1-encoded nucleocapsid protein (NC). NC has been shown to strongly facilitate complementary nucleic acid annealing and strand exchange by destabilizing the secondary structures present in the R region (Guo et al., 1997) (Fig. 1A, B).
Taking into account a macroscopic description of the first obligatory strand transfer, one can postulate its high similarity to the homology-driven recombination occurring in RNA viruses (Urbanowicz et al., 2005). Both processes involve homologous RNA templates and are mediated by viral polymerases. It has also been shown earlier that homologous RNA recombination strongly depends on RNA primary and secondary structures. Accordingly, several recombinationally active RNA motifs located in viral genomes have been identified (Alejska et al., 2005; Bruyere et al., 2000; Nagy & Bujarski, 1998, 1997). On the basis of these observations, we attempted here to determine to what extent the effectiveness of the first obligatory strand transfer relies on the recombination activity of HIV-1 R.
To this end, we investigated the in vitro recombination activity of HIV-1 R and assessed whether this activity is affected by the conformational dynamism of the HIV-1 leader sequence. In addition, we tested whether HIV-1 R can efficiently induce in vivo recombination when inserted into the genome of a non-related plant RNA virus (family Bromoviridae, genus Bromovirus, species Brome mosaic virus, BMV). Moreover, to assess the template switching capacity of HIV-1 RT, we analysed whether this enzyme is capable of producing recombinants in a reaction involving a well-characterized recombinationally active sequence derived from the BMV genome (BMV R). In general, the collected data suggest a relatively low recombination activity of HIV-1 R.
MATERIALS AND METHODS

Plasmids. In order to obtain plasmids containing cDNA of the donor and acceptor RNA templates to be tested in vitro, appropriate fragments of the HIV-1 or BMV genome were inserted into the pUC19 vector under the control of the T7 RNA polymerase promoter. As a result, the following plasmids were generated: pHIV-1Rd, containing a 137-nt fragment of the HIV-1 5′ UTR (97-nt HIV-1 R with a 40-nt portion of the downstream U5 region); pHIV-1Ra, containing a 116-nucleotide part of the HIV-1 3′ UTR cDNA (97-nt HIV-1 R with a 19-nt portion of the upstream U3 region); pBMVR2d, containing a BMV-derived sequence located in RNA2 between positions 2640 and 2865 (a fragment including BMV R); and pBMVR3a, containing a BMV-derived fragment of RNA3 that includes BMV R. To prepare pMatHIVR-RNA3 (containing a full-length cDNA of BMV Mat-HIVR-RNA3), plasmid pMat0-RNA3 (Alejska et al., 2005) was digested with MluI and EcoRV endonucleases and the deleted fragment was replaced with the MluI-EcoRV-digested 137-nt cDNA corresponding to the 5′ end of HIV-1 genomic RNA. The latter was amplified by PCR from the pHIV-1Rd plasmid with primers HIVD5Mlu and HIVD3Eco. The modified pMat0-RNA3 was then digested with SpeI and the deleted fragment was replaced by the SpeI-cut cDNA fragment corresponding to the 116-nucleotide part of the 3′ HIV-1 UTR cDNA, amplified by PCR from the pHIV-1Ra plasmid with primers HIVA5Spe and HIVA3Spe.
Synthesis of RNA donor and acceptor templates. All RNAs were obtained by in vitro transcription from the corresponding DNA templates. The DNA templates for transcription were synthesized by PCR involving specific primers and the above-mentioned plasmids containing selected fragments of the HIV-1 or BMV genomes.
In vitro recombination assay. The template switching capacity of HIV-1 RT was tested in primer extension reactions. They were carried out in a final volume of 40 μl, with 30 pmol of primer, 10 pmol of the donor template and different amounts of the acceptor template. In each set of experiments involving the HIV-1Rd/HIV-1Ra and BMVR2d/BMVR3a templates, the donor:acceptor molar ratio was 1:0.5, 1:1, 1:2 or 1:5. For the other two pairs of templates, BMHd/BMHa and LDId/LDIa, only the 1:5 donor:acceptor ratio was applied. As controls, reactions involving only a donor or only an acceptor template were carried out. Before each reaction, the primer and donor (primer HIV3-REC for HIV-1Rd, Lys21 for LDId and BMHd, and BMV3REC for BMVR2d) were denatured at 95°C in 0.05 M Tris/HCl (pH 8.3), 0.01 M MgCl2 and 0.075 M KCl and slowly cooled (at a cooling rate of 1.0°C/min). When the reaction mixture reached 37°C, 6 μg of HIV-1 RT was added and the mixture was incubated for a further 5 min. Finally, the acceptor, 20 U of ribonuclease inhibitor and a dNTP mixture (50 μM dATP/dGTP/dTTP mix, 25 μM dCTP, 0.3 μl [α-32P]dCTP 3000 Ci/mmol) were added. After a 45-minute incubation at 37°C, the reactions were stopped by the addition of one volume of urea loading dye (0.25% bromophenol blue and 0.25% xylene cyanol FF in 7 M urea). For every pair of RNA templates, the experiment was repeated at least three times. Products of the primer extension reactions were denatured by heating at 95°C and fast cooling on ice, separated by electrophoresis in an 8% polyacrylamide denaturing gel (for products > 100 nt) or in a 12% polyacrylamide denaturing gel (for products < 100 nt) and visualized using a Typhoon phosphorimager. The efficiency of recombination (the percentage of the transfer product in the total product of the primer extension reaction) was quantified with the ImageQuant software. Recombination products were extracted from the gel and amplified with donor-, acceptor- or recombinant-specific primers containing EcoRI and PstI restriction sites. The PCR products were digested with EcoRI and PstI and ligated into the pUC19 vector. The ligation mixture was used to transform DH5α E. coli cells and individual clones were sequenced.
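As an illustration of the efficiency calculation described above (transfer product as a percentage of the total primer extension product), the sketch below computes this ratio from band intensities. The intensity values are hypothetical placeholders; the actual quantification in the study was performed with the ImageQuant software on phosphorimager data.

```python
# Illustrative calculation of strand-transfer efficiency from band
# intensities (hypothetical, background-corrected numbers).
# Efficiency = transfer product / (full-length + transfer product) * 100.

def transfer_efficiency(full_length: float, transfer: float) -> float:
    """Percentage of transfer product in the total primer extension product."""
    total = full_length + transfer
    if total <= 0:
        raise ValueError("total band intensity must be positive")
    return 100.0 * transfer / total

# Hypothetical band intensities for the four donor:acceptor ratios.
lanes = {
    "1:0.5": (9600.0, 250.0),
    "1:1":   (9200.0, 430.0),
    "1:2":   (8800.0, 610.0),
    "1:5":   (8500.0, 740.0),   # ~8% transfer, in line with the reported maximum
}

for ratio, (f_band, t_band) in lanes.items():
    print(f"donor:acceptor {ratio}: {transfer_efficiency(f_band, t_band):.1f}% transfer")
```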
In vivo recombination assays. The recombination activity of HIV-1 R was tested in a well-established BMV-based recombination system according to a previously described procedure (Alejska et al., 2005). Briefly, BMV genomic RNAs (RNA1, RNA2 and RNA3) were obtained by in vitro transcription from EcoRI-linearized plasmids pB1TP3, pB2TP5 and pMat-HIVR-RNA3. Chenopodium quinoa plants (a local lesion host for BMV) were mechanically inoculated with mixtures containing BMV RNA1, RNA2 and Mat-HIVR-RNA3. Two weeks post-inoculation, the number of lesions developed on each inoculated leaf was counted and compared with the number of lesions that appeared after wtBMV infection. To test the recombination activity of the BMV mutant, individual local lesions were excised from the plant leaves and total RNA was extracted separately from each lesion. The isolated RNA was subjected to RT-PCR involving primer 1st and primer 2nd, specific for the 3′ portion of RNA3 (the fragment where the tested sequences were located). As controls, RT-PCR reactions involving either the parental Mat-HIVR-RNA3 transcript (positive control) or water (negative control) were carried out. The RT-PCR products were analysed by electrophoresis in a 1.5% agarose gel, cloned into the pUC19 vector and sequenced.
RESULTS

HIV-1 R recombination activity
The ability of HIV-1 R to mediate template switching by HIV-1 RT was tested in reactions involving the 137-nt HIV-1Rd donor RNA (representing the 5′ end of the HIV-1 genome between positions 1 and 137), different amounts of the 116-nt HIV-1Ra acceptor RNA (representing the 3′ end of the HIV-1 genome between positions 9516 and 9632) and the donor-specific primer HIV3REC (complementary to the 3′ end of the donor template between positions 108 and 132; Fig. 2A-C). In the individual reactions, the donor:acceptor molar ratio was changed as follows: 1:0.5, 1:1, 1:2 and 1:5. Both the donor and the acceptor shared the 97-nt HIV-1 R sequence; thus, during the primer extension reactions HIV-1 RT could either synthesize a 137-nt ssDNA complementary to the donor template (product F HIV-1 R) or switch to the acceptor within HIV-1 R and produce a 178-nt recombinant (product T HIV-1 R).
To estimate the locations of the crossovers, four single-nucleotide marker mutations were introduced into the acceptor template. They were distributed along the entire R region (Fig. 2D).

Polyacrylamide gel electrophoresis of the products formed in the primer extension reactions revealed that the amount of T HIV-1 R (the putative recombinant) grew proportionally with the increase in the acceptor concentration (the donor concentration was constant). The maximal efficiency of T HIV-1 R formation was ca. 8% of the total product (Fig. 2B, C). To better characterize the two main products of the primer extension reactions (products F HIV-1 R and T HIV-1 R), they were extracted from the gel and used as templates in PCR involving three pairs of primers: donor-specific, acceptor-specific or recombinant-specific. The PCR with F HIV-1 R generated products only for the donor-specific pair of primers, while in the PCR with T HIV-1 R only the recombinant-specific pair of primers worked. Both PCR products were cloned and 25 individual clones of each product were sequenced. This confirmed that product F HIV-1 R had a sequence identical to the donor template, and that product T HIV-1 R was a recombinant, its 5′ portion derived from the donor and its 3′ portion from the acceptor template. A detailed analysis of the recombined sequences showed that all the crossovers could be classified as precisely homologous. Sixty percent of the recombinant junction sites were placed within the 5′-terminal part of the R region, after the last marker mutation, and 25% were in the penultimate region; no crossovers were observed in the middle region, while 5% and 10% of the crossovers were located in the first and second regions, respectively (Fig. 2D).
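The way marker mutations localize a crossover can be illustrated with a short sketch: a precisely homologous recombinant clone matches the donor up to the junction and the acceptor after it, so the junction must lie between the last donor-type marker and the first acceptor-type marker. The marker positions and clone calls below are hypothetical placeholders, not the actual sequencing data.

```python
from collections import Counter

# Illustrative localization of a homologous crossover from marker mutations.
# For each sequenced recombinant clone we record, at every marker position,
# whether the base matches the donor ("D") or the acceptor ("A") template.
# The junction interval lies between the last "D" marker and the first "A".
# Marker positions and clone calls are hypothetical placeholders.

MARKER_POSITIONS = [12, 35, 58, 81]   # positions within the 97-nt R region

def crossover_interval(calls):
    """Return (start, end) of the region in which the junction must lie."""
    assert len(calls) == len(MARKER_POSITIONS)
    last_donor = 0                      # 5' boundary if all markers are acceptor-type
    first_acceptor = 97                 # 3' boundary if all markers are donor-type
    for pos, call in zip(MARKER_POSITIONS, calls):
        if call == "D":
            last_donor = pos
        elif call == "A" and first_acceptor == 97:
            first_acceptor = pos
    return last_donor, first_acceptor

clones = {
    "clone_01": ["D", "D", "D", "D"],   # junction downstream of the last marker
    "clone_02": ["D", "D", "D", "A"],
    "clone_03": ["D", "A", "A", "A"],
}

tally = Counter()
for name, calls in clones.items():
    start, end = crossover_interval(calls)
    tally[(start, end)] += 1
    print(f"{name}: junction between positions {start} and {end}")
print("distribution:", dict(tally))
```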
Conformational changes within the HIV-1 leader sequence do not affect its recombination activity
The HIV-1 leader sequence encompasses nucleotides 1-368 of the genomic RNA and contains several regulatory elements: the TAR hairpin, the polyA hairpin, the PBS, the dimerization initiation site (DIS), the splice donor (SD), the core packaging signal (Ψ) and the hairpin containing the start codon and part of the GAG open reading frame (Fig. 1A). As mentioned above, the leader sequence can exist in two alternative conformations (Huthoff & Berkhout, 2001; Fig. 1C). The first, more stable LDI is formed by extensive long-distance base pairing between polyA and DIS. The second one, BMH, is adopted in the presence of the nucleocapsid protein and consists of several hairpin motifs. The BMH conformation exposes the polyA and DIS hairpins, enabling the dimerization of HIV-1 genomic RNAs. Considering the fact that conformational changes occurring within the leader sequence can regulate HIV-1 replication (Berkhout et al., 2001; Huthoff & Berkhout, 2001), we attempted to determine whether they are also capable of affecting the recombination activity of HIV-1 R. Two primer extension reactions were carried out: one involving a pair of templates with the donor RNA in the LDI conformation, and the second with an analogous pair of templates with the donor adopting the BMH conformation. Both donors (LDId and BMHd) corresponded to nucleotides 1-368 and the acceptors (LDIa and BMHa) to the last 125 nucleotides (nucleotides 9507-9632) of HIV-1 genomic RNA. Since the wild-type leader sequence folds into the LDI structure, a few modifications were introduced into BMHd to enforce the alternative structure formation, namely: U99 was substituted by C, and U91 and C96 were deleted (Berkhout et al., 2002, 2001). Electrophoresis of the LDId and BMHd templates in a native polyacrylamide gel proved that they indeed existed in different conformations (Fig. 3A). The in vitro recombination assays were done according to the previously described procedure. In all reactions the same donor:acceptor ratio (1:5) was used. The expected length of the product of primer extension on the donor template was 202 nt (F HIV BMH or F HIV LDI), and the length of the recombination product was 235 nt (T HIV BMH or T HIV LDI).
The reaction mixtures were separated by polyacrylamide gel electrophoresis and the radioactive products were visualized and quantified with a phosphorimager (Fig. 3B, C). Each band was cut out from the gel, and the DNA was extracted, amplified and cloned. The sequencing of individual clones revealed that the bands marked as F HIV BMH and F HIV LDI contained the amplified donor templates (BMHd and LDId), while the bands marked as T HIV BMH and T HIV LDI contained recombinants. The band marked as S HIV LDI contained the so-called LDId self-priming product, whose formation has been described previously (Driscoll et al., 2000). The analysis of T HIV BMH and T HIV LDI accumulation showed that the conformation adopted by the HIV-1 leader sequence did not influence its recombination activity. However, the entire leader sequence facilitated template switching by HIV-1 RT much better than HIV-1 R alone (20% compared with 8%; Fig. 3D and Fig. 2C, respectively).
Template switching capacity of HIV-1 RT
There are two major factors affecting the efficacy of RNA-RNA recombination: the structure of the donor and acceptor templates and the template switching capacity of the viral polymerase. In order to determine which of them was responsible for the relatively low recombination frequency observed during our in vitro experiments with the HIV-1Ra/HIV-1Rd templates, we used a hybrid recombination system. It was composed of HIV-1 RT and well-characterized, recombinationally active RNA motifs identified earlier in BMV (Nagy & Bujarski, 1995). These BMV-derived motifs are located within the 3′ UTRs of BMV RNA2 and RNA3 and are called BMV R2 and BMV R3, respectively (Fig. 4A). Both BMV Rs share a 60-nt highly homologous sequence followed by a 7-nt sequence of reduced homology. Similar to HIV-1 R, the putative secondary structure adopted by BMV R2/3 contains two hairpins called G and H (Nagy & Bujarski, 1995; Fig. 4B).
The template switching capacity of HIV-1 RT was tested in vitro in primer extension reactions involving two RNA templates: a 225-nt donor RNA (BMVR2d, corresponding to nucleotides 2640-2865 of BMV RNA2) and a 214-nt acceptor RNA (BMVR3a, encompassing nucleotides 1763-1961 of BMV RNA3 plus a 16-nucleotide sequence added by PCR involving primers T7 and MB3RNA3) (Fig. 4C-E). The expected recombined product was 364 nt long. Four single-nucleotide mutations present in BMV R3, 1960 C→A, 1946 U→G, 1921 A→G and 1912 A→G (the numbers correspond to nucleotide positions within genomic RNA3), let us estimate the crossover location within the highly homologous 60-nt region (Fig. 4F).
The products of the primer extension reactions were separated in a polyacrylamide gel and analysed with a phosphorimager (Fig. 4D, E). The two main products of the primer extension reactions (products F BMV R and T BMV R) were extracted from the gel and amplified by PCR involving donor-, acceptor- and recombinant-specific pairs of primers. The PCR with F BMV R generated products only for the donor-specific pair of primers, while the PCR with T BMV R worked only if the recombinant-specific pair of primers was used. Both PCR products were cloned and sequenced. This confirmed that product F BMV R had a sequence identical to the donor template, and that product T BMV R was a recombinant, its 5′ portion derived from the donor and its 3′ portion from the acceptor template. The average yield of recombined product synthesis calculated for the reactions with the highest amount of acceptor template (donor:acceptor molar ratio 1:5) reached approximately 30% (Fig. 4D, E). An analysis of the marker mutations present within the cloned PCR products demonstrated that all the crossovers were precisely homologous. Sixty percent of the crossovers occurred at the very end of the BMV R region, within the short sequence (7 nt) showing the reduced level of homology. Twenty-seven percent and 16% of the crossovers took place in the third and fourth region, respectively (Fig. 4F).
HIV-1 R recombination activity in heterologous in vivo system
To further characterize the recombination activity of HIV-1 R, we applied the previously described BMV-based in vivo recombination system (Alejska et al., 2005) (Fig. 5A). The BMV genome consists of three single-stranded RNA molecules, RNA1, RNA2 and RNA3. Based on our earlier observations, we created a BMV RNA3 mutant, called Mat-HIVR-RNA3, containing two copies of the 97-nt HIV-1 R located within its 3′ UTR (5′HIVR and 3′HIVR). The 5′ and 3′ HIVRs were separated by a spacer sequence of about 350 nt. Additionally, marker substitutions were introduced within 5′HIV-1 R (the same as in HIV-1Ra, Fig. 2D) to allow us to map the location of recombinant junction sites. We had demonstrated earlier that analogous Mat-RNA3 mutants carrying recombinationally active sequences (located in the same positions as 5′HIVR and 3′HIVR in Mat-HIVR-RNA3) effectively supported homologous recombination in BMV (Alejska et al., 2005). As a result, RNA3 recombinants lacking one homologous region and the whole 350-nt spacer were formed. Because they replicated and accumulated better than the parental RNA3 mutants, the latter were out-competed from the infected plants.
Chenopodium quinoa plants (a local lesion host for BMV) were inoculated with wtRNA1, wtRNA2 and Mat-HIVR-RNA3 transcripts, i.e. all the viral components necessary to initiate BMV infection. After two weeks, when the symptoms of infection had developed, the lesions formed on every inoculated leaf were counted to estimate the infectivity of the BMV mutant. Forty individual local lesions were excised and total RNA was extracted separately from each of them. Then, the 3′ portion of the RNA3 progeny accumulating in the examined lesions was selectively amplified by RT-PCR involving the RNA3-specific primers 1st and 2nd, flanking the analysed sequence. The reaction products were separated in a 1.5% agarose gel and their length was determined. The formation of an approximately 800-nt or 400-500-nt DNA fragment indicated that the lesion contained parental or recombinant RNA3, respectively. Parental-type molecules were detected in only ten local lesions, four lesions contained RNA3 recombinants, while no virus was detected in the remaining 26 local lesions. The RT-PCR-amplified 3′ fragments of the RNA3 recombinants (the region where crossovers occur) were cloned and sequenced. As a result, we identified two different types of recombinants. An analysis of their sequences showed that none of them was formed according to the anticipated scenario (HIV-1 R-mediated homologous recombination, see Fig. 5B). Instead, both of them were classified as nonhomologous recombinants and contained deletions of different sizes in their 3′ UTRs. In the first type, the whole spacer sequence was deleted. In the second type, the deletion encompassed the last 120 nucleotides of the spacer sequence, 3′R and 155 nucleotides of the 3′ UTR.
DISCUSSION
HIV-1 R plays a key role in the first obligatory strand transfer during the conversion of retroviral genomic RNA into ssDNA. Consequently, HIV-1 R has been considered a homologous recombination hot-spot (Moumen et al., 2001). To verify this presumption, we constructed an experimental system allowing us to test the HIV-1 R recombination activity in separation from other factors contributing to the first obligatory strand transfer (Fig. 2A). Surprisingly, our experiments revealed a relatively low recombination activity of the 97-nt HIV-1 R (approximately 8%, Fig. 2B, C), compared with the recombination activity of the 60-nt BMV R (approximately 30%, Fig. 4D, E). HIV-1 RT-mediated recombination was more frequent (up to 20%, Fig. 3B-D) when expanded templates containing the whole HIV-1 leader were used.
We also found that the HIV-1 R sequence was not capable of supporting BMV polymerase-mediated homologous recombination in vivo. Certainly, one can question whether there is any biological meaning behind this observation. There is no doubt that the BMV-based in vivo system cannot be used to study the complex mechanisms underlying the formation of HIV-1 recombinants in human cells. This system, however, seems to be well suited for assessing the recombination activity of isolated RNA sequences. It has been designed in such a way that recombination induced by the tested sequence repairs a defective BMV RNA3 molecule (Fig. 5). The repaired RNA3 recombinants are favoured by the selective pressure: they replicate and accumulate better than the parental RNA3 and can be easily detected. Accordingly, one can expect that in the employed BMV-based system recombinants are generated provided that the tested sequences are capable of inducing template switching by the viral polymerase.
Earlier it was postulated that the structural polymorphism of the HIV-1 leader may influence the first strand transfer (Berkhout et al., 2002, 2001; Huthoff & Berkhout, 2001). However, in our in vitro experiments the frequency of recombination was not affected by the leader conformation (BMH or LDI). Under natural conditions, the LDI-BMH transformation is enforced by the chaperone activity of the viral NC protein (Huthoff & Berkhout, 2001). Recently, it has also been demonstrated that some cellular proteins may bind HIV-1 RNA/DNA and facilitate the first strand transfer (Warrilow et al., 2010). Therefore, one can conclude that several other factors, in addition to the local sequence homology and leader structure, are involved in this process. The leader region and the nascent DNA might function as a platform enabling the attachment of viral and host proteins modulating the template switch by HIV-1 RT. It seems that the first obligatory strand transfer is a much more complex process than simple RNA structure-driven RNA recombination.
The data collected so far suggest that stable secondary structures are the major factor reducing the recombination activity of homologous RNA molecules (Nagy & Bujarski, 1998, 1997, 1995). Two hairpins exist in both tested sequences, HIV-1 R and BMV R (Fig. 1C, 4B). However, the hairpins present in HIV-1 R are much more stable than those formed within BMV R. The conditions of the in vivo experiments could additionally increase the stability of the HIV-1 R secondary structure and, consequently, completely inhibit homologous recombination in planta. It was shown earlier that the insertion of a stable stem-loop structure between the G and H hairpins in BMV R negatively influenced the frequency of BMV recombination (Olsthoorn et al., 2002). A positive correlation between the efficiency of template switching by HIV-1 RT and the increasing temperature of the primer extension reaction has also been observed. Moreover, mutations destabilizing the TAR and polyA hairpins stimulated strand transfer, while it was inhibited by mutations stabilizing the polyA hairpin (Berkhout et al., 2001; Klavier & Berkhout, 1994). Congenial results were obtained when the recombination activity of two hepatitis C virus (HCV)-derived sequences (the highly structured region X and the less stable hypervariable region 1) was tested in the BMV-based recombination system described above: the HCV region X, adopting a very stable structure, did not support recombination, whereas the less structured HVR1 efficiently mediated homologous crossovers (55% of the progeny RNA was classified as recombinant) (Alejska et al., 2005). All four BMV RNA3 recombinants identified in our in vivo assays were products of nonhomologous recombination. This rare process occurs in BMV approximately 10 times less often than homologous recombination (Nagy & Bujarski, 1992). The nonhomologous recombinants were fitter than the parental Mat-HIVR-RNA3 (they probably replicated better than the parental molecules), thus the latter was outcompeted from the plant cells. The absence of the virus in more than half of the local lesions (26 out of 40) also indicated that the parental-type Mat-HIVR-RNA3 was unable to replicate effectively and accumulated poorly in the plant. Most probably, the HIV-1 R secondary structure was not only too stable to allow homologous template switching during BMV RNA replication, but it also negatively influenced the replication itself by blocking the movement of BMV RdRp.
Our in vitro assays showed that the HIV-1 RT-mediated crossovers predominantly occurred in the 5′-terminal parts of the HIV-1 R and BMV R regions (Fig. 2D and 4F). The majority of the crossovers were located just at the very end of the donor templates. Their location might indicate that HIV-1 RT most effectively cleaves the donor template when it stops after reaching the HIV-1Rd/BMVRd 5′ end. In such a situation, when other factors enhancing recombination are absent, template switching by the viral polymerase depends on the capacity of the acceptor to replace the partially digested donor. Earlier it was demonstrated that the latter process can be effectively improved by AU-rich regions, within which donor-nascent strand interactions are weaker. The 5′ portion of BMV R is a region of this type (Fig. 4F). In contrast, the 5′ portion of HIV-1 R cannot be classified as an AU-rich sequence (Fig. 2D). Consequently, the 5′ fragment of BMV R supports homologous recombination much better than the analogous fragment of HIV-1 R.
In the case of HIV-1 R, a number of crossovers were also located in the proximal part of the sequence, namely 5% in the first and 10% in the second part (Fig. 2D). In vitro assays carried out by Berkhout's group showed that approximately half of the identified crossovers occurred in the 3′ portion of HIV-1 R, before HIV-1 RT reached the 23rd nucleotide of the R region (Berkhout et al., 2001). This might be a consequence of the reverse transcriptase being stopped by the strong polyA or TAR hairpins (Fig. 1B), thus allowing extensive RNA cleavage by RNase H (Purohit et al., 2005). RNA secondary structures impede the movement of reverse transcriptase along the template and thus enhance its digestion, since the rate of cleavage is up to 10 times lower than the rate of DNA synthesis (Hanson et al., 2005; Kati et al., 1992). The generated fragments of unpaired ssDNA could then be invaded by the acceptor RNA and the synthesis would be continued on the acceptor RNA (Chen et al., 2005). In the case of the BMV R region, the G and H hairpins are much weaker (Fig. 4B). They could not efficiently slow down HIV-1 RT and enhance the digestion of the donor template by RNase H, which would have increased the possibility of hybridization between the nascent ssDNA strand and an acceptor RNA template. This may explain why crossovers were not found in the 3′ portion of the BMV R region.
Here we have provided a new piece of evidence that the recombination activity of the HIV-1 R region contributes little to the first obligatory strand transfer during HIV-1 genomic RNA reverse transcription. Doubtless, several other factors are also involved in the studied phenomenon. Accordingly, it should be considered a well-synchronized, complex process composed of the following events: enforcement of a proper leader structure by NC; RT pausing at a stable hairpin structure or at the HIV-1 R 5′ end; RNA template degradation by RNase H; RT stalling and ssDNA exposure; hybridization of the nascent ssDNA strand with the 3′ R region; DNA/RNA hybrid propagation; and template replacement. It seems that the relatively low recombination activity of HIV-1 R might in fact be necessary to prevent unwanted template switching events. The numerous factors involved in the first obligatory strand transfer can ensure an effective and strict regulation of this important step of HIV-1 replication.
Figure 1. HIV-1 leader sequence and R region. (A) Organization of the leader sequence and its location in the HIV-1 genome (the coding region is marked with a black line, noncoding regions with a grey line). (B) HIV-1 R folds into the TAR and polyA hairpins. Red arrows point out the marker substitutions in HIV-1Ra. (C) Two conformations adopted by the HIV-1 leader sequence. The more stable structure (long-distance interaction, LDI) is formed by long-distance base pairing between the polyA and dimerization signal (DIS) regions. A branched multiple hairpin structure (BMH), whose formation is assisted by the NC protein, exposes the polyA and DIS regions, enabling the formation of the RNA dimer.
Figure 2. Primer extension reaction with HIV-1 RT and HIV-1Rd/HIV-1Ra. (A) Schematic description of the HIV-1 RT-based recombination system. Both donor and acceptor templates share HIV-1 R (grey line); the arrow shows the direction of DNA synthesis. (B) Example polyacrylamide gel electrophoresis of primer extension reaction products. Reactions were performed with an increasing acceptor to donor molar ratio. Full-length products of the primer extension reactions are marked F HIV-1 R, while transfer products are marked T HIV-1 R. (C) Efficiency of synthesis of strand transfer products. Full-length products of primer extension reactions are marked F, while transfer products are marked T. The efficiency of strand transfer is shown as the percentage of the total reaction product (y axis). The donor:acceptor molar ratio was changed as follows: 1:0.5, 1:1, 1:2, and 1:5 (x axis). (D) Distribution of homologous crossovers in HIV-1 R. Marker substitutions are in bold; the distribution of crossovers is shown between the donor and acceptor sequences.
Figure 3. Primer extension reactions with HIV-1 RT and BMHd/BMHa or LDId/LDIa. (A) The formation of two different structures by the LDId and BMHd RNAs was confirmed by their electrophoretic analysis in a native polyacrylamide gel. (B, C) Example polyacrylamide gel electrophoresis of products of primer extension reactions with RNA templates adopting the BMH (B) or LDI (C) conformation. Reactions were performed with a donor to acceptor molar ratio of 1:5. Full-length products of the primer extension reactions are marked F HIV BMH or F HIV LDI, transfer products are marked T HIV BMH or T HIV LDI, and self-priming products are marked S HIV LDI. (D) Efficiency of synthesis of strand transfer products. Full-length products of primer extension reactions are marked F, while transfer products are marked T. The efficiency of strand transfer is shown as the percentage of the total reaction product (y axis). Only the 1:5 donor:acceptor ratio was applied.
Figure 4. Primer extension reaction with HIV-1 RT and BMVR2d/BMVR3a. (A) BMV genome. It consists of three RNA molecules, RNA1, RNA2 and RNA3, with tRNA-like structures at the 3′ ends and cap structures at the 5′ ends (coding regions are marked with black lines, noncoding regions with grey lines; R regions are marked as R2 and R3). (B) BMV R adopts a secondary structure containing two hairpins, G and H. Red arrows point out the marker substitutions in BMVR3a. (C) Donor and acceptor templates used in the reaction. Both donor and acceptor templates share BMV R (grey lines); the arrow shows the direction of DNA synthesis. (D) Example polyacrylamide gel electrophoresis of primer extension reaction products. Reactions were performed with an increasing acceptor to donor molar ratio. Full-length products of the primer extension reactions are marked F BMV R, while transfer products are marked T BMV R. (E) Efficiency of synthesis of strand transfer products. Full-length products of primer extension reactions are marked F, while transfer products are marked T. The efficiency of strand transfer is shown as the percentage of the total reaction product (y axis). The donor:acceptor molar ratio was changed as follows: 1:0.5, 1:1, 1:2, and 1:5 (x axis). (F) Distribution of homologous crossovers in the BMV R region (the 60-nt region of high homology is underlined). Marker mutations are in bold; the distribution of crossovers is shown between the donor and acceptor sequences.
Figure 5. Assessment of HIV-1 R recombination activity in the BMV-based in vivo system. (A) In our studies we used the previously described Mat-BMV mutant (Alejska et al., 2005). In this mutant, specific modifications were introduced only in genomic BMV RNA3; thus, the system is composed of wtRNA1, wtRNA2 and altered RNA3. Mat-RNA3 has unchanged 5′-noncoding, intergenic and coding regions. The tested homologous sequences (white boxes) are inserted into the 3′-noncoding region and separated by a spacer. As a result, the 3′-noncoding region in Mat-RNA3 is much longer than in wtRNA3. To assess the recombination activity of HIV-1 R we constructed MatHIVR-RNA3. This molecule contains 5′ and 3′ HIV-1 R as the tested sequences. (B) The anticipated scenario of HIV-1 R-mediated recombination. BMV polymerase initiates nascent RNA strand (dotted line) synthesis at the 3′ end of MatHIVR-RNA3. When the polymerase reaches the R region, it can switch to the homologous region located in the same or another Mat-RNA3 molecule. The resulting RNA3 recombinant lacks one homologous sequence and the whole spacer. Because the recombinant replicates and accumulates much better than the parental MatHIVR-RNA3, the latter is easily outcompeted.