A Novel FCS-MPC Method of Multi-Level APF Is Proposed to Improve the Power Quality in Renewable Energy Generation Connected to the Grid

Abstract: When photovoltaic, wind, energy-storage batteries, and other new forms of energy are connected to the grid, power electronic converters are needed, and there are many nonlinear devices in the grid. The characteristics of sustainable energy generation determine its variability and intermittency, which produce harmonic components. Active power filters (APFs) are commonly used in industry for harmonic compensation, so it is of great significance to control an APF quickly and effectively. The multi-objective, single-factor, multi-step finite control set model predictive control (FCS-MPC) of an APF proposed in this paper is suitable for multi-objective, multi-level converter control. The method is applied to a three-level APF structure and changes the traditional three-level FCS-MPC control method, which involves four control objectives: stable control of the DC-side voltage, compensation of the grid harmonic currents generated under nonlinear loads, balance of the capacitor voltages on the DC side, and the switching frequency. The proposed method uses the redundant switching states of the three-level structure to achieve the voltage balance of the two DC-side capacitors, which reduces the difficulty of target optimisation caused by the selection of weight factors. On top of the multi-step prediction, power feedback control is added on the DC side to increase the DC side's reaction speed, eliminate the influence of uncertainty, and realise better dynamic performance. The simulation results show that the proposed method has good reference tracking, compensates for the harmonics of the power grid, reduces the harmonic content to less than 5%, and balances the DC-side capacitor voltages.
Introduction
The application of grid-connected technology can convert solar and wind energy into electric energy to meet energy demand and support the rapid development of the power grid [1]. However, in the specific application of grid-connected technology, the general control mode is not well suited, and obvious voltage and current harmonics can easily appear in the process of grid connection. The grid connection is also affected by illumination, angle, wind speed, and other factors, which increase the harmonic pollution of the power system [2]. The integration of new energy diversifies the energy supply, but it affects the power quality of the power system to a certain extent; thus there are security threats in the operation of the power system and many hidden dangers of power accidents. In serious cases, it may even paralyse the entire power system and affect normal production and life. Necessary measures must be taken to improve the efficiency of new energy integration, such as active power filters (APFs). An APF is a device that can compensate for both harmonic and reactive power; its principle is to inject the detected harmonics in the reverse direction to achieve harmonic cancellation [3]. Since the APF was put forward, many control methods have appeared, including proportional-integral (PI) control, which is widely used in traditional industry. However, with PI control it is difficult to achieve multi-objective control, and the parameters are difficult to tune. The error feedback adjustment at the current moment introduces a certain lag, and because the industrial environment is variable, it is difficult to adjust the PI parameters in real time [4,5].
Due to the transformation between AC and DC and the intermittent renewable sources in the power grid, researchers [6] have proposed a metaheuristic-based vector-decoupled algorithm to balance the control and operation of hybrid microgrids in the presence of stochastic renewable energy sources and electric vehicle charging infrastructure. It ensures the stability of the voltage and frequency under the harsh conditions of islanded operation and high pulse demand, as well as the variability of renewable energy production, in order to improve the power quality of renewable energy. The literature [7] has proposed a hybrid fuzzy back-propagation control scheme for a unified power quality conditioner (UPQC): the reference current of the controller is controlled by the back-propagation algorithm, and the reference voltage is controlled by fuzzy logic, which effectively improves the power quality. The literature [8] has also proposed a single-phase unified power quality conditioner based on a modular multilevel matrix converter (M3C). In this topology, a DC circulating current is used to balance the instantaneous active power of each arm, preventing voltage divergence between and within the arms and realising voltage balance between the capacitors. Model predictive control (MPC) is a control method developed from practice to theory with the participation of industry, and includes dynamic matrix control (DMC), model algorithm control (MAC), and generalised predictive control (GPC). With the development of microprocessors, MPC has been widely used in power electronics and converters. Typical power electronic devices exhibit strong nonlinearity and a finite set of switching states [9,10]. Rodriguez et al. proposed finite control set model predictive control (FCS-MPC) [11], which uses a value function to select the optimal switch combination. However, the problems with traditional FCS-MPC are the heavy online calculation and the non-fixed switching frequency [12,13].
Concerning the problems in the FCS-MPC control method, many scholars in related fields have proposed different approaches [14]. Studies [15] have considered an MPC method with time-delay compensation, in which the reference current over several future prediction periods is predicted by Lagrange interpolation. Due to the irregular and rapid variation of harmonics, the predicted reference value has a large error at abrupt changes. Previous work [16] has applied FCS-MPC to an APF to compensate for the harmonic current and reactive power. In the traditional FCS-MPC of a three-level converter, weighted control over multiple objectives is necessary. In [17], voltage vectors are synthesised with different duty cycles, and a multi-objective optimisation strategy is used to find the best duty cycle to balance the capacitor voltages on the DC side; the algorithm, however, requires extensive computation. In order to simplify the selection of the multi-level weighting factors in a photovoltaic grid connection, researchers [18] proposed a search scheme dividing the traditional FCS-MPC into three steps to achieve the optimal control of each step, which is ineffective in solving the FCS-MPC delay problem. The literature [14] proposed an FCS-MPC control method using redundant switching states; however, because of the high sampling frequency and time delay of FCS-MPC, it cannot provide better compensation performance. The method proposed in this paper improves the power quality of the grid, improves the compensation performance of the APF by using a multi-level structure, avoids the multi-weight-factor selection of the multi-step FCS-MPC algorithm, and further improves the control effect by using multi-step prediction.
In this paper, a multi-step prediction based on a single weighting factor is proposed, together with a power feedforward control method that accelerates the dynamic response of the APF and stabilises the DC side. The FCS-MPC with redundant switch combinations omits the selection of weight coefficients for DC-side capacitor balancing in the three-level APF: the redundant vectors affect the capacitor voltages differently without changing the output, so the selection of the optimal switching state can be devoted entirely to following the harmonic reference current. Balancing the optimisation of many objectives through weighting factors is difficult; the more objectives there are, the longer the calculation time and the more delayed the control effect. The central problem in optimising these control objectives is the selection of the weight factors, and limiting their use reduces the computational burden of the system and improves the response speed. A simulation model of the entire system is established and compared with the multi-step, multi-weight-factor control method to verify the superiority of the single-factor control method with power feedforward in a multi-level APF [18,19].

The structure of this paper includes six parts: the second section introduces the three-phase NPC-APF model and the harmonic current detection method [19]; the third section introduces FCS-MPC; the fourth section introduces the NPC-APF under a single weight factor; the fifth section shows the performance of the multi-step FCS-MPC control method through simulation results; and the sixth section concludes.

Mathematical Model of Three-Phase Parallel APF
Figure 1 shows the structure of the NPC-type three-level APF [20]. ea, eb, and ec are the power grid voltages; iLa, iLb, and iLc are the currents on the load side of the grid; iCa, iCb, and iCc are the compensation currents of the APF; and L and R are the filter inductance and resistance, respectively. In the NPC-type APF, each phase is composed of two diodes and four IGBT (Insulated-Gate Bipolar Transistor) devices. The midpoint of the two diodes is connected to the midpoint of the DC side. The DC side is composed of capacitors C1 and C2, whose voltages need to be kept balanced.

Operating Principle of Three-Level APF
For a three-level APF, each bridge arm consists of four IGBTs, giving a total of 27 different switching-state combinations. Each bridge arm is defined by the three states P, 0, and N in Equation (1), which correspond to three output levels, where x = a, b, c denotes the three bridge arms, and Sx1, Sx2, Sx3, and Sx4 are the switching signals of the IGBTs. These 27 switching states generate 19 distinct voltage vectors. Among them, V1 to V6 are the voltage vectors each produced by two different switch states, i.e., each contains a redundant switch state.
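As a quick check of the counts above, the following sketch (illustrative, not from the paper) enumerates all 27 P/0/N switching states, maps them to phase voltages, and confirms that they collapse to 19 distinct output vectors in the α-β plane; the DC voltage value is taken from the simulation parameters.

```python
# Enumerate the 27 switching states of a three-level NPC converter and
# count the distinct output voltage vectors in the alpha-beta plane.
from itertools import product

U_DC = 800.0                                 # DC-link voltage (Table 2)
LEVELS = {"P": U_DC / 2, "0": 0.0, "N": -U_DC / 2}

def clarke(ua, ub, uc):
    """Amplitude-invariant Clarke transform to the alpha-beta plane."""
    alpha = (2 / 3) * (ua - 0.5 * ub - 0.5 * uc)
    beta = (2 / 3) * (3 ** 0.5 / 2) * (ub - uc)
    return round(alpha, 6), round(beta, 6)

states = list(product("P0N", repeat=3))      # all 27 switching states
vectors = {clarke(*(LEVELS[s] for s in st)) for st in states}

print(len(states), "switching states ->", len(vectors), "distinct vectors")
# the three zero states (P,P,P), (0,0,0), (N,N,N) and the six redundant
# small-vector pairs account for the 27 - 19 = 8 duplicates
```

The eight "missing" vectors are exactly the redundancies the paper exploits: two extra zero states and one extra state per small vector V1 to V6.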
V0 contains three sets of zero vectors: (0, 0, 0), (P, P, P), and (N, N, N). According to the switch states, the output voltage vector is obtained as Equation (2); the three bridge arms together determine the output voltage.

Mathematical Model of APF
Current FCS-MPC produces tracking errors under a sinusoidal signal, and the higher the frequency, the greater the error. Therefore, the collected reference signal is expressed in the α-β coordinate system after the prediction model output, where the APF performs the tracking-reference prediction; the signals there are convenient and intuitive. If the grid voltage is balanced, then according to Kirchhoff's current law, the APF output currents ia, ib, ic; the grid voltages ea, eb, ec; and the output voltages ua, ub, uc are converted to the α-β frame with Equation (3). In the α-β coordinate system, the mathematical model of the APF along the α and β axes is then obtained.

Sustainability 2021, 13, 4094

ip-iq Harmonic Detection Method
The harmonic detection of the APF is based on the ip-iq method of instantaneous reactive power theory, which extracts the distorted harmonic currents from the three-phase load current [21]. The principle diagram is shown in Figure 3. The load currents iLa, iLb, and iLc are transformed to the d-q coordinate system and passed through a low-pass filter (LPF) so that only the fundamental current remains; this is then inversely transformed back to the a, b, c coordinate system. Only the residual harmonic current (the load current minus the fundamental current) is sent to the APF as the command current to be compensated. A PI controller provides the DC-side reference current to keep the DC-side voltage stable at the reference setting. The rotation matrix of the transform is

  C = [  sin ωt   −cos ωt
        −cos ωt   −sin ωt ]

FCS-MPC of Three-Level APF
As shown in Figure 2, in the α-β coordinate system, Vx (x = 0, 1, . . . , 7) labels the 27 voltage vectors, while Jmin marks the voltage vector with the smallest objective function.
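The ip-iq detection chain described above can be sketched as follows. The signal parameters, the 10% fifth harmonic, and the one-period moving average standing in for the LPF are assumptions for illustration, not values from the paper.

```python
# Sketch of i_p-i_q harmonic extraction: Clarke transform, rotation to the
# synchronous frame, low-pass filtering, and reconstruction of the residue.
import numpy as np

f0, fs = 50.0, 20000.0                    # fundamental / sampling frequency
t = np.arange(0, 0.2, 1 / fs)
wt = 2 * np.pi * f0 * t

# balanced three-phase load current: 1 p.u. fundamental + 10% fifth harmonic
shifts = [0.0, -2 * np.pi / 3, 2 * np.pi / 3]
i_load = np.array([np.sin(wt + s) + 0.1 * np.sin(5 * (wt + s)) for s in shifts])

# amplitude-invariant Clarke transform to the alpha-beta frame
i_alpha = (2 / 3) * (i_load[0] - 0.5 * i_load[1] - 0.5 * i_load[2])
i_beta = (2 / 3) * (np.sqrt(3) / 2) * (i_load[1] - i_load[2])

# rotation to the p-q frame synchronised with the grid
i_p = np.sin(wt) * i_alpha - np.cos(wt) * i_beta
i_q = -np.cos(wt) * i_alpha - np.sin(wt) * i_beta

# LPF: a moving average over one fundamental period keeps only the DC part
n = int(fs / f0)
kernel = np.ones(n) / n
i_p_dc = np.convolve(i_p, kernel, mode="same")
i_q_dc = np.convolve(i_q, kernel, mode="same")

# inverse rotation (+ inverse Clarke) gives the fundamental of phase a;
# the residue is the harmonic command current handed to the APF
i_a_fund = np.sin(wt) * i_p_dc - np.cos(wt) * i_q_dc
i_h_a = i_load[0] - i_a_fund
```

Away from the filter edges, `i_h_a` recovers the injected fifth-harmonic component, which is exactly the command current the APF must reproduce in anti-phase.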
To obtain the switching-state combination that directly controls the APF output, the FCS-MPC algorithm, at each sampling instant, evaluates a cost function that combines the switching frequency, the tracking error, and other terms, and selects the optimal candidate.

APF Control Principle
The principle diagram of the APF's current predictive control is shown in Figure 4. The harmonic source is represented by a nonlinear load. io is the predicted active current component of the APF, and i*(k) is the harmonic reference current. Conventionally, the difference between the DC-side voltage and its reference is used; in this paper, power control is performed on the DC side, the voltage loop is regulated by a PI controller, and the resulting component is compensated together with the harmonics [22]. After evaluating the objective function, the optimal switch combination is applied directly to the APF so that its output compensation current tracks the harmonics at the current instant while the reference signal for the next instant is predicted.

Power Feedback Control
The principle diagram of FCS-MPC with power feedforward control is shown in Figure 5. Compared with conventional PI adjustment of the DC voltage, power feedforward control directly adjusts the power on the DC side through the PI regulator. After proportional amplification of the signal, the dynamic performance of the DC-side voltage control is improved. In the figure, GC(s) is the current closed-loop transfer function, U* is the reference value of the DC-side voltage, U is the present DC-side voltage, idc is the DC-side current, and kg is the gain coefficient.

Single-Step FCS-MPC of Traditional NPC Converter
The two capacitors C1 and C2 on the DC side are dynamically modelled by the difference Equation (6), where the capacitances of C1 and C2 are equal and denoted C, uc1 and uc2 are the capacitor voltages, and ic1 and ic2 are the capacitor currents. Substituting Equation (7) into Equation (6) yields Equation (8), a discrete equation obtained by the Euler forward difference, which predicts the capacitor voltages at the next instant.
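The Euler forward-difference prediction behind Equations (6)-(8) amounts to one update per capacitor. In this sketch the sampling period is an assumed value, while the capacitance matches Table 2.

```python
# One-step-ahead prediction of the two DC-side capacitor voltages using
# the Euler forward difference: u_c(k+1) = u_c(k) + Ts/C * i_c(k).
TS = 5e-5      # sampling period in seconds (assumed for illustration)
C = 4000e-6    # capacitance of C1 and C2 in farads (Table 2)

def predict_cap_voltages(uc1, uc2, ic1, ic2):
    """Predict both capacitor voltages at instant k+1 from values at k."""
    return uc1 + TS / C * ic1, uc2 + TS / C * ic2

# equal voltages, opposite capacitor currents -> voltages start to diverge
u1, u2 = predict_cap_voltages(400.0, 400.0, 10.0, -10.0)
print(u1, u2)  # 400.125 399.875
```

This is the quantity the cost function (and, later, the redundant-state selection) uses to keep uc1 and uc2 together.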
Among them, ic1(k) and ic2(k) are the current values determined by the switch state and the output current, as given by Equation (9), where idc(k) is the DC-side current at time k; ia(k), ib(k), and ic(k) are the three-phase output currents at time k; and G1x and G2x are determined by the inverter's switching state at the current time, as per Equation (10). A dynamic discrete model, Equation (11), is established for the APF system to predict the current output at the next moment. ipo(k + 1) is the predicted output current at the next moment; L is the filter inductance; R is the equivalent resistance; io(k) is the output current at the present moment k; and uout(k) is the voltage determined by the switch state, whose value is given by Equation (2). e(k + 1) is the grid-side voltage; it changes little between adjacent sampling points, so e(k + 1) = e(k). The objective function of Equation (12) can then be established, where, in the α-β coordinate system, ihα and ihβ are the reference currents, ipα and ipβ are the predicted outputs, and upc1 and upc2 are the predicted capacitor voltages, which are driven towards equality by the weighting factors λ1, λ2, λ3, and γi.

Multi-Step Prediction
Within a very small sampling period Ts, the term R·Ts in Equation (11) can be ignored. The grid voltage e(k + 1) and the reference current i*(k + 1) need to be estimated; they are predicted with the Lagrange interpolation method (Equation (13)), usually of second or third order. For accuracy, the third-order interpolation is used here, as shown in Equation (14). Similarly, the reference value at k + 2 can also be obtained.
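One common discrete form of the Lagrange extrapolation in Equations (13)-(14) predicts the reference from its three latest samples; the exact coefficients used in the paper are not reproduced here, but this form is standard in the FCS-MPC literature and is exact for any quadratic trajectory.

```python
# Three-sample Lagrange extrapolation of the reference current:
# x(k+1) = 3*x(k) - 3*x(k-1) + x(k-2), exact for quadratic trajectories.
def lagrange_extrapolate(x_k, x_k1, x_k2):
    """Predict x(k+1) from x(k), x(k-1), and x(k-2)."""
    return 3 * x_k - 3 * x_k1 + x_k2

# exact on a quadratic reference i*(t) = t^2 sampled at t = 2, 1, 0 -> t = 3
samples = [t * t for t in (2, 1, 0)]   # x(k), x(k-1), x(k-2)
print(lagrange_extrapolate(*samples))  # 9 == 3**2
```

The same formula applied one step later gives the k + 2 reference needed by the multi-step prediction.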
Due to the conservatism of single-step FCS-MPC, which limits the compensation performance of the APF, this paper calculates the optimal state at the next instant from the two best control states produced by the objective function at the sampling instant. The single-step prediction gives the APF output ipo(k + 1); the two best switch states Sopt1(k + 1) and Sopt2(k + 1) are obtained from the optimal solutions of the objective function and applied to Equation (11). ip(k + 1) and v(k + 1) are thus obtained, and applying the backward difference to Equation (11) yields Equation (15). In Equation (15), the two optimal states ip1(k + 1) and ip2(k + 1) produced by the single-step predictive control continue the prediction of the output.

Single Objective
There are six sets of redundant vectors in the three-level APF. Each set produces the same output voltage of the APF but affects the current at the midpoint of the DC side differently, and thus influences the voltage trend of the two capacitors differently. As shown for voltage vector V1 in Figure 6, the output voltage vector of Equation (16) is the same, but the balancing effect on the charging and discharging of the DC-side capacitors differs. At the sampling time k, the required output voltage vector is the same, according to Table 1. For example, when (P, 0, 0) is selected, the midpoint current iN is −ia; when ia > 0, uc1 decreases, and when ia < 0, uc1 increases. Therefore, when selecting V1 with ia < 0, if uc1 > uc2, then (0, N, N) is selected instead.

When uc1 = uc2, the charging behaviour of the DC-side capacitors C1 and C2 is predicted, and a suitable vector is always chosen to keep the two capacitor voltages balanced. The multi-step prediction proceeds as follows. In the first step, loop over the 19 voltage vectors to find the two switching states that minimise the objective function at this step; if a vector is redundant, select, according to the magnitudes of uc1 and uc2, the state that drives them towards equality.
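The redundant-state rule illustrated for V1 can be sketched as a small selector. The helper name is hypothetical, and treating (0, N, N) as the mirror of (P, 0, 0) is an assumption consistent with the rule stated above.

```python
# Hypothetical selector between the two redundant switch states of the
# small vector V1, chosen so the DC-side capacitor voltages converge.
def pick_redundant_state(uc1, uc2, ia):
    """Return the redundant state of V1 that drives uc1 and uc2 together.

    With (P,0,0) the midpoint current is -ia: ia > 0 makes uc1 fall and
    ia < 0 makes uc1 rise; (0,N,N) is assumed to have the opposite effect.
    """
    if uc1 > uc2:
        # need uc1 to fall: (P,0,0) if ia > 0, otherwise (0,N,N)
        return ("P", "0", "0") if ia > 0 else ("0", "N", "N")
    # need uc1 to rise: (P,0,0) if ia < 0, otherwise (0,N,N)
    return ("P", "0", "0") if ia < 0 else ("0", "N", "N")

print(pick_redundant_state(401.0, 399.0, -5.0))  # ('0', 'N', 'N')
```

This reproduces the example from the text: with ia < 0 and uc1 > uc2, the state (0, N, N) is chosen, at no cost to the output voltage vector.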
In the second step, the two switch states obtained in the first step are substituted to obtain the current at time k + 1, and the previous step is repeated: under the switch states that minimise the objective function, if a vector is redundant, the redundant state that balances the capacitor voltages is selected. The principle diagram of the algorithm is shown in Figure 7.

Simulation and Results Analysis
Assuming that the three phases are symmetrical, phase A is selected as the analysis object, and the parameters are shown in Table 2.

Table 2. Parameters of the simulation model.

  Grid voltage and frequency: 380 V, 50 Hz
  DC voltage: 800 V
  Capacitors C1, C2: 4000 µF
  Inductor: 2 mH
  Resistor: 0.01 Ω
  Load resistor: 8 Ω

Through simulation, the current of the system before compensation is shown in Figure 8a; the load changes suddenly at 0.4 s. The harmonics detected by the ip-iq method are shown in Figure 8b, while Figure 8c shows that different values of n affect the prediction of the reference values. Figure 1 shows the predictive results in two different ways. When n = 2, the amount of calculation is small but the dynamic performance is poor; when n = 3, a larger amount of calculation is required but excellent dynamic performance is achieved. For the traditional FCS-MPC method, in order to make current tracking dominant, two sets of weighting factors are used: λ1 = 0.4, λ2 = 0.4, λ3 = 0.2 and λ1 = 0.5, λ2 = 0.4, λ3 = 0.1. However, selecting them requires a great deal of experience, and the optimal weight factors cannot be guaranteed. For the method proposed in this paper, the α-axis and β-axis are weighted equally, λ1 = λ2 = 0.5, and no other weight factors need to be set.

When λ1 = 0.4, λ2 = 0.4, and λ3 = 0.2 are selected, Figure 9a shows the compensated grid current, which becomes a sine wave. Figure 9b shows the change of the DC-side voltage when the load changes; the power feedforward method has better dynamic performance. Since the DC-side control method is the same in all cases, it is only shown in Figure 9. In Figure 9c, the voltages of the DC-side capacitors C1 and C2 are rebalanced when the load changes; compared with the other two weighting-factor settings, the balance is better.
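The THD values used to compare the methods can be estimated from a compensated current waveform with a straightforward FFT-based calculation; the synthetic test current below is illustrative, not the paper's simulation output.

```python
# FFT-based THD estimate: ratio of the RMS of the harmonic components
# to the magnitude of the fundamental.
import numpy as np

def thd(signal, fs, f0, n_harmonics=48):
    """THD = sqrt(sum of squared harmonic magnitudes) / fundamental."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n
    k0 = int(round(f0 * n / fs))                  # FFT bin of the fundamental
    harmonics = spec[2 * k0 :: k0][:n_harmonics]  # bins of 2*f0, 3*f0, ...
    return np.sqrt(np.sum(harmonics ** 2)) / spec[k0]

fs, f0 = 20000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)                     # an integer number of cycles
# synthetic "compensated" current: fundamental plus a 2.13% fifth harmonic
i_comp = np.sin(2 * np.pi * f0 * t) + 0.0213 * np.sin(2 * np.pi * 5 * f0 * t)
print(round(100 * thd(i_comp, fs, f0), 2), "%")   # ~2.13 %
```

Analysing an integer number of fundamental cycles avoids spectral leakage, so the harmonic bins line up exactly with multiples of f0.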
As shown in Figure 9c,d, the THD (total harmonic distortion) equals 2.13%.

When λ1 = 0.5, λ2 = 0.4, and λ3 = 0.1, as shown in Figure 10a, the compensated grid current is worse than with λ1 = 0.4, λ2 = 0.4, and λ3 = 0.2. In Figure 10b, the voltage balance of the DC-side capacitors C1 and C2 is also worse, and the THD is higher, at 3.95%.

The compensation result of the multi-objective single-factor multi-step FCS-MPC method is shown in Figure 11a: the compensated current is a sine wave, and the effect is better than that of the multi-step multi-objective control method. As seen in Figure 11b, the voltage balance of the DC-side capacitors C1 and C2 is slightly worse than with λ1 = 0.4, λ2 = 0.4, λ3 = 0.2 and better than with λ1 = 0.5, λ2 = 0.4, λ3 = 0.1; however, the convergence is better. As shown in Figure 11c, the THD = 1.28%.
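The THD values quoted here can be reproduced from a sampled current waveform with a standard FFT-based calculation. The sketch below is generic, not the paper's tool-chain (such figures typically come from the simulation package's own FFT analysis):

```python
import numpy as np

def thd(signal, fs, f1=50.0, n_harmonics=40):
    """Total harmonic distortion of a sampled waveform:
    THD = sqrt(sum_{h=2..N} I_h^2) / I_1, read from the FFT bins
    at integer multiples of the fundamental f1."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    df = fs / n                        # frequency resolution of the FFT

    def mag(f):                        # magnitude at the bin nearest to f
        return spectrum[int(round(f / df))]

    fund = mag(f1)
    harm = np.sqrt(sum(mag(h * f1) ** 2 for h in range(2, n_harmonics + 1)))
    return harm / fund

# Example: 50 Hz fundamental with a 20% 5th harmonic -> THD = 20%.
fs = 10_000
t = np.arange(fs) / fs                 # exactly 1 s => whole number of periods
i = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)
print(f"THD = {thd(i, fs):.1%}")       # prints "THD = 20.0%"
```

Capturing a whole number of fundamental periods, as above, avoids spectral leakage; in practice the grid current would be windowed or resampled to achieve the same effect.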
The A-phase harmonic changes under the three different weighting-factor settings when the load changes at 0.4 s are shown in Figure 12. With the single-factor control method, the THD remains stable and low, better than the two multi-objective weighting-factor settings.

Conclusions

This paper presents a method of simplifying the cost function of multi-step FCS-MPC for a multi-level APF. By using the redundant switching states of the three-level structure, harmonic reference tracking is completed and the DC-side capacitor voltages are balanced, which eliminates the loss of control performance caused by the selection of weighting factors. Simplifying the cost function also reduces the amount of calculation. The simulation results show that the control method has a good control effect, reduces the harmonic content of the power grid, and meets grid standards. The method achieves a lower THD than the multi-weighting-factor method and enables the output to follow the reference current, because no optimal weight distribution is needed. The multi-level structure produces an output closer to a sine wave and lowers the harmonic content, which is the future development trend. The three-level APF single-objective multi-step FCS-MPC proposed in this paper is also suitable for AC motors, active front ends (AFE), static var generators (SVG), and so on. It can also be used in five-level or even seven-level structures. It reduces the complexity of vector selection in multi-level converters and balances the DC-side capacitor voltages.
Performance Comparison of Triangle Antenna at 60 GHz for 5G Wireless Communication Networks

In this paper, a microstrip triangle antenna with a slot for 5G wireless communication networks is proposed. The microstrip triangle antenna is designed to operate in the 60 GHz millimeter-wave frequency band and is suitable for 5G wireless communication. The chosen substrate is RogerRT5880 with a copper thickness of 0.035 mm, used to analyze its effect on the millimeter-wave performance of the design. The design and analysis are performed using CST Microwave Studio. The lowest return loss of the antenna is −24.75 dB, obtained for the triangle with slot, and the maximum gain obtained is 6.82 dB at 59.68 GHz for this antenna. Considering the gain, return loss and size, the microstrip antenna can be a suitable candidate for 5G wireless applications for short-range, high-speed communication.

Introduction

The current status of 5G technology for wireless systems is very much in the early stages of development. 5G wireless technology will probably start to come to market around 2020, with deployment following on afterwards. The development of microstrip antennas for wireless communication is a thriving research interest nowadays, and many techniques have been proposed to improve their performance. The 4G wireless communication systems have already been launched in some countries and will be in others soon. However, the problems and challenges of spectrum scarcity and power consumption persist even with the presence of 4G systems [1]. Therefore, the need for 5G wireless systems arose in order to solve these issues and meet the requirements of high data rates and mobility; research on 5th-generation wireless systems is ongoing and expected to be accomplished by 2020 [1].
Most of the technologies to be used for 5G will begin to appear in the systems used for 4G, and then, as the new 5G cellular system starts to take a more concrete form, they will be incorporated into it [5]. The main issue with 5G technology is that there is such an extremely wide variation in requirements, from superfast downloads to the small data needs of IoT, that no single system will be capable of meeting them all. Accordingly, a layered approach is likely to be adopted. That is to say, 5G is not just a mobile technology; it is ubiquitous access to high- and low-data-rate services. Since the second half of 2014, 5G technology has included all sorts of advanced features that will make it very powerful and in huge demand in the near future. From a user's point of view, the main difference between 4G and 5G is an increased data rate and lower power usage with better coverage. 5G systems may take wireless signals to a higher frequency range of 30 to 300 gigahertz (GHz), reducing the wavelength from centimeters to millimeters [2]. One of the challenges the technology may face is attenuation, since line-of-sight communication is not always possible between transmitter and receiver. The proposed antenna could provide communication for future 5G network applications with a faster data rate. Furthermore, millimeter-wave bands could relieve congestion and reduce demand for spectrum in frequency bands below 5 GHz. A microstrip antenna for wireless system applications operating at 60 GHz is proposed, offering a faster data rate and high gain. The gain is enhanced by inserting a slot into the patch, and the most optimized results are discussed in the following sections. To compute the resonant frequency of an equilateral triangle patch at the lowest-order resonant frequency fr, the side length a can be given by Equation (1); the patch is placed at a distance of h mm (the substrate thickness) away from the ground plane.
The chosen substrate material is a 0.127 mm thick RogerRT5880 dielectric board with a dielectric constant of 2.2 and a loss tangent of 0.0009. This printed circuit board (PCB) material has advantages such as low dielectric tolerance and loss and stable electrical properties against frequency, and is thus a better choice for high-frequency operation.

a = 2c / (3 fr √εr)   (1)

c = 3 × 10^8 m/s   (2)

Antenna Design and Geometry

Based on the formulas above, the theoretically calculated dimensions of the top radiating patch for 60 GHz resonance are found to be 2 mm and 0.5 mm, respectively. However, the dimensions have been adjusted and optimized to meet the requirements of the resonant frequency and other characteristics. The design parameters are obtained from several parametric studies, and suitable patch and slot sizes are selected for a high-gain, wideband 60 GHz antenna.

Simulation and Results

The simulation of the proposed antenna is performed using the Computer Simulation Technology (CST) Microwave Studio commercial software program. The simulated reflection coefficients |S11| of the proposed millimeter-wave antenna are illustrated in Fig. 4. It is apparent that the proposed antenna can cover the 60 GHz millimeter-wave band with |S11| less than −10 dB. For the triangle antenna without slot, a stable gain of 4.55 dB at 61.75 GHz with a return loss of −14.76 dB is observed; for the triangle antenna with slot, a gain of 6.82 dB at 59.68 GHz with a return loss of −24.75 dB is obtained. The simulated results demonstrate that the antenna is characterized by omnidirectional patterns.

Conclusion

In this paper, a triangle antenna for wireless applications at 60 GHz is proposed. The antenna configuration is designed and analyzed using CST Studio Suite 2016 based on the finite element method. Several parametric studies have been performed to obtain a better combination of design parameters.
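The lowest-order (TM10) resonance of an equilateral triangular patch is commonly given by fr = 2c/(3a√εr); checking this numerically for 60 GHz on εr = 2.2 yields a side length close to the 2 mm value quoted above. The snippet is illustrative, not taken from the paper:

```python
import math

C = 3e8          # speed of light, m/s (Equation 2)
EPS_R = 2.2      # dielectric constant of the RogerRT5880 substrate

def side_length(f_r, eps_r=EPS_R):
    """Side length a of an equilateral triangular patch for the lowest-order
    (TM10) resonant frequency f_r: a = 2c / (3 * f_r * sqrt(eps_r))."""
    return 2 * C / (3 * f_r * math.sqrt(eps_r))

a = side_length(60e9)
print(f"a = {a * 1e3:.2f} mm")   # prints "a = 2.25 mm", near the 2 mm design value
```

The small gap between the analytical 2.25 mm and the optimized 2 mm dimension is consistent with the parametric tuning the authors describe, since fringing fields and the slot shift the resonance of the fabricated patch.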
From the analysis, it can be concluded that inserting a slot in the antenna can influence the return loss, gain and frequency. The triangle antenna with slot has a better return loss S11 than the triangle antenna without slot. As the simulation results show, the requirements for 5G wireless communication are achieved.
12-hr shifts in nursing: Do they remove unproductive time and information loss or do they reduce education and discussion opportunities for nurses? A cross-sectional study in 12 European countries

Abstract

Aims and objectives: To examine the association between registered nurses' (referred to as "nurses" for brevity) shifts of 12 hr or more and presence of continuing educational programmes; ability to discuss patient care with other nurses; assignments that foster continuity of care; and patient care information being lost during handovers.

Background: The introduction of long shifts (i.e., shifts of 12 hr or more) remains controversial. While there are claims of efficiency, studies have shown long shifts to be associated with adverse effects on quality of care. Efficiency claims are predicated on the assumption that long shifts reduce overlaps between shifts; these overlaps are believed to be unproductive and dangerous. However, there are potentially valuable educational and communication activities that occur during these overlaps.

Design: Cross-sectional survey of 31,627 nurses within 487 hospitals in 12 European countries.

Methods: The associations were measured through generalised linear mixed models. The study methods were compliant with the STROBE checklist.

Results: When nurses worked shifts of 12 hr or more, they were less likely to report having continuing educational programmes, and time to discuss patient care with other nurses, compared to nurses working 8 hr or less. Nurses working shifts of 12 hr or more were less likely to report assignments that foster continuity of care, albeit the association was not significant. Similarly, working long shifts was associated with reports of patient care information being lost during handovers, although the association was not significant.
Conclusion: Working shifts of 12 hr or more is associated with reduced educational activities and fewer opportunities to discuss patient care, with potential negative consequences for safe and effective care.

Relevance to clinical practice: Implementation of long shifts should be questioned, as reduced opportunity to discuss care or participate in educational activities may jeopardise the quality and safety of care for patients.

The overlap between shifts is also a time that nurses have traditionally used to access both formal and informal education opportunities (Baillie & Thomas, 2019). Some early small-scale reports suggested that nurses moving to shifts of 12 hr or more had fewer opportunities to participate in continuing education programmes compared to nurses working 8-hr shifts (McGettrick & O'Neill, 2006; Reid, Todd, & Robinson, 1991). Furthermore, there is a growing number of studies suggesting that shifts of 12 hr or more are associated with adverse outcomes for both patients and nurses, with nurses reporting both lower quality of care and increased omissions in necessary care to be associated with working long shifts (Ball et al., 2017; Dall'Ora, Ball, Recio-Saucedo, & Griffiths, 2016; Griffiths et al., 2014; Stimpfel & Aiken, 2013). These findings raise the possibility that, far from reducing unproductive time which adds little value to nursing care, the move to shifts of 12 hr or more may negatively impact nurses' ability to deliver safe and effective care. The potential benefits of the reduced number of handovers resulting from implementing shifts of 12 hr or more have never been formally tested; therefore, in this study, we aimed to examine the association between nurses' shifts of 12 hr or more and time for active staff development or continuing education activities; opportunity to discuss patient care with other nurses; assignments that foster continuity of care; and important patient information being lost during handover.
KEYWORDS: 12-hr shifts, communication, continuity of patient care, education, continuing, nursing, patient handoff, shift work schedule

What does this paper contribute to the wider global clinical community?
• Reducing handovers through the introduction of shifts of 12 hr or more is not associated with enhanced continuity of patient care or reduced loss of patient information.
• Shorter 8-hr shifts, which typically offer an extended overlap between early and late shifts, are associated with a higher likelihood for nurses to have continuing educational programmes and to discuss patient care with colleagues.
• The assumption that implementing a two-shift system (shifts of 12 hr or more) removes unproductive time and decreases the risk of patient information loss is unwarranted. Shifts of 12 hr or more appear to introduce a series of unintended consequences in terms of quality of patient communication.

METHODS

This was a cross-sectional survey study using nurse-reported data from a large European survey, the RN4CAST study, which was conducted in 12 countries: Belgium, England, Switzerland, Germany, Spain, Finland, Greece, Ireland, The Netherlands, Norway, Poland and Sweden (Sermeus et al., 2011). The main aim of the RN4CAST study was to derive nurse forecasting models that consider how work environments' characteristics impact on nurse and patient outcomes. The RN4CAST study protocol was approved by either central ethical committees or local ethical committees, depending on each country's regulatory requirements. The study methods were compliant with the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) checklist (Appendix S1).
Data

Data were collected from registered nurses (referred to as "nurses" in this study for brevity) working in general hospitals within surgical, medical or mixed medical-surgical wards between 2009-2010. Data collection differed between countries: a hospital field manager administered questionnaires to nurses; a hospital field manager hosted visits of the RN4CAST team to the sampled wards, who explained the study and distributed the questionnaires; nurses received the questionnaires via e-mail; or nurses received questionnaires by mail at their home address. Nurses were asked to return responses within 3 weeks. For the detailed RN4CAST study methodology, please see Sermeus et al. (2011).

Measurements

There were 118 questions in the RN4CAST survey, combined in different sections: "About your job," which included questions around opportunities to engage in continuing education programmes and in discussions around patient care, and around continuity of care; "Quality and safety," including statements relating to patient issues; "About your most recent shift at work in this hospital," which measured length of shift and staffing levels; "About you," recording demographic variables including age, gender and education level; and "Your job," enquiring about aspects including job title, pay band and ward type.

Nurses' length of shift was captured by a question asking the number of hours worked on their most recent shift (i.e., the last shift they worked before filling in the questionnaire). To perform multilevel regression analysis, we categorised shift length into the following four groups: 8 hr or less; between 8.1-10 hr; between 10.1-11.9 hr; 12 hr or more. We created a variable to categorise day and night shifts and removed all subjects who provided invalid responses. Responses to a question enquiring about overtime on the last shift were recorded as "yes" or "no," and nurses indicated whether they were working full time or part-time at the hospital.
When nurses had indicated that a shift lasted 18 hr or more, we removed their responses from the dataset. Absolute numbers of these responses were low (n = 507, 1.3%). We chose this cut-off because reported shifts longer than 18 hr are likely invalid answers based on nurses' total weekly hours.

Four measures were drawn from the survey as study outcomes. Nurses were asked to what extent they agreed with the following statements: "there are active staff development or continuing education programmes for nurses in my current job"; "there is enough time and opportunity to discuss patient care problems with other nurses"; "there are patient care assignments that foster continuity of care (i.e., the same nurse cares for the patient from one day to the next)." These statements were rated on a 4-item Likert scale, where 1 indicated "strongly disagree" and 4 "strongly agree." For analysis, we grouped "somewhat agree" and "strongly agree" responses to reflect positive evaluations. The final question was "Important patient care information is often lost during shift changes," with responses "strongly disagree," "disagree," "neither," "agree," "strongly agree." We grouped "strongly agree" and "agree" to reflect a negative evaluation (i.e., agreeing that important information is lost).

Data analysis

We first performed descriptive analyses, where outcomes were described using frequencies and percentages by shift length category. The association between shift length and nurses' reports of information and communication activities was explored using generalised linear mixed models, first by including country, hospital and ward as random effects. We then added potential confounding variables to the models, including timing of last shift (i.e., day or night); presence of overtime; ward nurse staffing levels; hospital size; hospital technology status; hospital teaching status; full-time/part-time work; and nurses' age and gender.
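Before fitting models with many control variables like these, a standard screening step is the variance inflation factor: each control variable is regressed on the others, and VIF_j = 1/(1 − R_j²), with values above about 5 flagging problematic collinearity. The sketch below uses made-up data and plain least squares; the study itself ran its analysis in R with lme4:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on all remaining columns (plus an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Made-up example: x3 nearly duplicates x1, so x1 and x3 both get large
# VIFs, while the independent x2 stays near 1.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=500), rng.normal(size=500)
x3 = x1 + 0.1 * rng.normal(size=500)
print(vif(np.column_stack([x1, x2, x3])))
```

Note that collinearity inflates the VIF of every variable involved in the near-dependency, not just the "redundant" one, which is why the check is run over all control variables at once.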
All models included country, hospital and ward as random effects. To ensure no multicollinearity was present between the control variables, we computed the variance inflation factor (VIF), with VIF < 5 indicating no multicollinearity (Dormann et al., 2013). All statistical analysis was undertaken with RStudio version 1.1.442 (R Development Core Team, 2018) and the lme4 package (Bates, Mächler, Bolker, & Walker, 2015). We adopted the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to evaluate relative model fit, prioritising models with lower values of AIC/BIC.

RESULTS

In total, 54,140 questionnaires were distributed and 33,659 (62%) nurses in 487 hospitals responded. After removing invalid shift-work responses, our analytical sample totalled 31,627 nurses. Mean age of respondents was 38 years, and 93% were female. A detailed demographic description of the sample can be found elsewhere (Dall'Ora, Griffiths, Ball, Simon, & Aiken, 2015). Half of the nurses in Europe worked shifts of 8 hr or less (n = 15,930, 50%). Overall, 9,963 nurses (31%) had worked between 8.1-10 hr on their last shift, while shifts of between 10.1 and less than 12 hr were reported by only 1,159 nurses (4%). Overall, 4,574 nurses (14%) reported that their last shift lasted 12 hr or more. The majority of nurses worked day shifts (n = 24,627, 78%), and 8,606 nurses (27%) reported working beyond their contracted hours (i.e., overtime) on their last shift. The frequency of different shift length categories at the country level can be found elsewhere (Griffiths et al., 2014). Fifty-five per cent of nurses agreed that there were staff development or continuing education programmes offered within their work environments (n = 17,246), and 46% of the nurses agreed that there was enough time and opportunity to discuss patient care problems with other nurses (n = 14,481).
In this sample, 21% agreed that important patient care information was lost during shift changes (n = 6,452), and 57% (n = 17,987) agreed that there were assignments that foster continuity of care in their job. Table 1 reports the nurses' responses by shift length category. Long shifts were associated with decreases in the odds of reporting beneficial outcomes (Table 2). Working shifts of 12 hr or more was associated with a decrease in the odds of nurses agreeing that there were enough active staff development or continuing education programmes, when compared to working 8 hr or less (adjusted odds ratio [aOR]: 0.69; 95% confidence interval [CI]: 0.59-0.80). The odds of nurses agreeing that they had enough time and opportunity to discuss patient care problems with colleagues were reduced for nurses working shifts longer than 8 hr compared to nurses working shifts of 8 hr or less. For nurses working shifts of 12 hr or more, the odds of reporting being able to discuss patient care were decreased by 24% in comparison with nurses working 8 hr or less (aOR: 0.76; 95% CI: 0.66-0.87). Working shifts of 12 hr or more was not associated with increases in the odds of nurses reporting assignments that foster continuity of care (aOR: 0.97; 95% CI: 0.83-1.12), when compared to working 8 hr or less. Working shifts of 12 hr or more was associated with nurses' reports of important patient care information being lost during shift handovers in comparison with working shifts of 8 hr or less (odds ratio [OR]: 1.28; 95% CI: 1.11-1.47), although the association was attenuated when controlling for other shift variables, nurses' demographics and ward/hospital characteristics (aOR: 1.11; 95% CI: 0.95-1.30). All models' variance inflation factors confirmed that multicollinearity was not present at a problematic level.
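For readers unfamiliar with the odds-ratio scale used here, the unadjusted calculation for a single 2×2 table can be sketched as follows. The counts are invented for illustration and are not the RN4CAST data; the reported aORs additionally adjust for confounders and include random effects, which this sketch does not attempt.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table
        exposed:   a events, b non-events
        unexposed: c events, d non-events
    with a Wald 95% CI on the log scale:
        log(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: nurses agreeing there is time to discuss patient care,
# split by shift length (12 hr or more vs 8 hr or less).
or_, lo, hi = odds_ratio_ci(1800, 2774, 8300, 7630)   # hypothetical table
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # prints "OR = 0.60 (95% CI 0.56-0.64)"
```

An OR below 1, with a CI that excludes 1, corresponds to the "reduced odds" reading used throughout the Results; when the CI spans 1 (as for the continuity-of-care aOR of 0.97) the association is not statistically significant.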
DISCUSSION

This study is one of the first to examine the association between long shifts and aspects of nursing work, including the ability to participate in continuing education activity and discuss patient care, and aspects of quality of care, including continuity of care and information loss during handover, in Europe using a multi-country multilevel design. Shifts of 12 hr or more are common in some European countries, especially in Poland, Ireland and the United Kingdom (Griffiths et al., 2014), where they are increasingly being implemented based on assumptions of cost savings and improved quality of care (NHS Evidence, 2010). This study challenged this assumption. Drawing on a large and diverse European sample of 31,627 nurses, and controlling for a number of potential confounders, we found that nurses working long shifts were less likely to report having the opportunity to participate in continuing educational opportunities and to discuss patient care. Although nurses who worked long shifts were more likely to report that important patient information was being lost during handovers and not having assignments that fostered continuity of care, these associations were not statistically significant. Our findings confirmed those of small-scale qualitative studies, highlighting that working on long shift patterns and, therefore, losing the long overlap between shifts leads to fewer opportunities to engage in educational activities and in discussions around patient care (McGettrick & O'Neill, 2006). Contrary to reports that long shifts foster continuity of care and reduce information loss (Haller et al., 2018; NHS Evidence, 2010; Wootten, 2000), our study found no significant associations between working shifts of 12 hr or more and continuity of care and information loss.

TABLE 1. Outcomes by shift length category

There is evidence that some nurses perceive long shifts as beneficial (Stone et al., 2006).
The main reasons for nurses preferring long shifts were the ability to compress the working week into three days rather than five, thus benefitting from more days off; better work-life balance; and reduced travel costs (Harris, Sims, Parr, & Davies, 2015). Our results suggest that nurses may choose to work longer but fewer shifts, but this appears to be at the expense of continuing education programmes and the ability to engage in conversations around patient care. Nurses have indicated that participating in continuous professional development is pivotal to their job satisfaction and the quality of care they provide (Price & Reichert, 2017), outcomes that have been reported to be affected by long shifts (Dall'Ora et al., 2015; Griffiths et al., 2014). Our study shows that shifts of 12 hr or more were associated with missed opportunities to discuss patient care amongst nurses. This mirrors evidence that nurses' long shifts are associated with increased missed care (Ball et al., 2017; Griffiths et al., 2014). When nurses are experiencing competing demands during a shift, they may choose to prioritise clinical activities, including patient surveillance and treatments and procedures, at the expense of planning and discussing patient care (Griffiths et al., 2018). If nurses working shifts of 12 hr or more do not have sufficient time, energy and opportunity to participate in educational activities and to discuss patient care, the quality of care they provide may be lower. A recent study found that shifts of 12 hr or more are not associated with reduced staffing hours per patient day and staffing costs (Griffiths, Dall'Ora, Sinden, & Jones, 2019), and there is evidence that shifts of 12 hr or more are associated with increased sickness absence for nurses (Dall'Ora, ). The evidence that long shifts do not lead to a decrease in resource use, combined with our study's findings, suggests that the hypothesised beneficial effect of long shifts is not achieved.
TABLE 2. Outputs of generalised linear mixed models measuring the association between shift characteristics and outcomes of information and communication flow
DOI: 10.4038/cjsbs.v42i1.5898

The Association between Body-size and Habitat-type in Tiger Beetles (Coleoptera, Cicindelidae) of Sri Lanka

Body size is an important feature of an animal that is linked with its life history, morphology, physiology and ecology. Understanding the association between body size and habitat type of an animal, and the underlying causes for this, is important for determining their distribution. The present study examines the association between body size and habitat types of tiger beetles in Sri Lanka and the environmental correlates of these associations. Morphometric parameters of tiger beetles and environmental parameters (air temperature, solar radiation, relative humidity, wind speed and the colour, temperature, moisture content, pH and salinity of soil) in each habitat were recorded. Ten tiger beetle species were found in 37 locations, which included coastal, riverine, urban and reservoir habitats. Species with larger body and mandible sizes prefer coastal and reservoir habitats with high wind speed, low soil moisture and high soil pH, whereas species with smaller body and mandible sizes prefer riverine habitats with low wind speed, high soil moisture and low soil pH. Species with smaller body size may also prefer urban habitats, which may be due to the similarity of environmental conditions that prevail in these two habitat types. These findings will assist in predicting the distribution of tiger beetles within the various habitat types of Sri Lanka. Ceylon Journal of Science (Bio. Sci.
) 42 (1): 41-53, 2013

INTRODUCTION

Body size is a well-known feature of an animal that influences its energy requirements, potential resource exploitation and susceptibility to predation (Cohen et al., 1993; Principe, 2008). Another aspect of body size that has been widely discussed is its relationship with habitat type, habitat succession, habitat degradation level and environmental quality (Linzmeier and Ribeiro-Costa, 2011). A positive relationship between body size and home range has been identified in many taxonomic groups of animals (Pyron, 1999). Small-bodied species are expected to be more specialized for habitat type than large-bodied species, tend to occur in less-open habitats and appear less social than larger species (Pyron, 1999; Perry and Garland, 2002; Tershy, 1992). For instance, ground-dwelling monitor lizards have larger body sizes and rock-dwellers have smaller body sizes, while arboreal forms possess an intermediate body size (Collar et al., 2011). Crocodylian species that are larger in size are common in water bodies with steeply sloping banks which lack a mat cover of heavy floating grass, while smaller species occur in small streams that flow through dense tropical forests (Farlow and Pianka, 2002).
Among insects, chrysomelid beetles show a decrease in body size from areas of early successional stages to late successional stages, but coleopteran predators with large body sizes are more numerous in forest edge areas with sparse vegetation at early successional stages (Linzmeier and Ribeiro-Costa, 2011). Further, increased human-induced disturbances alter the distribution of coleopterans towards a prevalence of small-sized species in highly disturbed habitats, a hypothesis which, however, cannot be generalized (Ulrich et al., 2007). The body size of aquatic insect larvae is known to be associated with flow velocity, and large individuals are more numerous in habitats with high flow velocities (Sagnes et al., 2008). Tiger beetles (Coleoptera, Cicindelidae) are highly habitat specific (Adis et al., 1998; Cardoso and Vogler, 2005; Dangalle et al., 2012a; Knisley and Hill, 1992; Morgan et al., 2000; Satoh et al., 2006; Pearson and Cassola, 2007; Rafi et al., 2010). Each species tends to be restricted to a narrow and unique habitat such as coastal sand dunes (Morgan et al., 2000; Satoh et al., 2004; Neil and Majka, 2008; Dangalle et al., 2012a), riverine habitats (Ganeshaiah and Belavadi, 1986; Satoh et al., 2006; Dangalle et al., 2011a; Dangalle et al., 2011b), reservoirs (Dangalle et al., 2012b), forests (Adis et al., 1998), agroecosystems (French et al., 2004; Sinu et al., 2006), parks and areas with human disturbances (Bhardwaj et al., 2008; Mosley, 2009), open areas with sparse vegetation (Schiefer, 2004) and grasslands (Acorn, 2004). The association of tiger beetle species with habitat has been related to their preferences for mating and oviposition sites, food availability, seasonality, vegetation cover and the physical, chemical and climatic qualities of the habitat (Pearson et al., 2006). However, the association between the body sizes of tiger beetles and habitat type has not been studied, though body size has been estimated for many other species in various habitats. The
present study intends to (1) analyse the association of body size with habitat type in tiger beetles of Sri Lanka, and (2) determine the climatic and soil parameters of the habitat that may influence the body size-habitat type association.

Surveying and collection of tiger beetles

Ninety-four locations in Sri Lanka were surveyed for tiger beetles from May 2002 to December 2006. When beetles were encountered, a sample of three to five beetles was collected using an insect net and preserved in 70% alcohol. Permission to make collections of tiger beetles was obtained from the Department of Wildlife Conservation, Sri Lanka.

Identification of tiger beetles

Tiger beetles were identified using keys for Cicindela of the Indian subcontinent (Acciavatti and Pearson, 1989) and the descriptions of Horn (1904), Fowler (1912) and Naviaux (1984). Identifications were confirmed by comparing the specimens with type specimens available in the National Museum of Colombo, Sri Lanka and the British Natural History Museum, London, United Kingdom. Nomenclature is based upon Wiesner (1992), except for the use of Calomera instead of Lophyridia, based upon Lorenz (1998).

Measurement of morphological parameters of tiger beetles

Body lengths and mandible lengths were measured and recorded for all tiger beetle specimens. Body length was estimated by measuring the distance from the frons of the head to the elytral apex when the head was in the normal feeding position. Caudal spines on the elytral apex were disregarded. Based on the references of Acciavatti and Pearson (1989), McCairns et al.
(1997) and Zerm and Adis (2001), the body length of beetles was categorized as follows:

less than 8 mm: very small
8 to 10 mm: small
10 to 15 mm: medium
15 to 20 mm: large
more than 20 mm: very large

Mandible length was estimated by measuring the distance from the articulation point to the tip of the left mandible. Broken and worn-out mandibles were disregarded. Based on Pearson and Juliano (1993) and Satoh and Hori (2004), mandibles of beetles were categorized according to the following size groups:

less than 2 mm: small
2 to 3 mm: medium
more than 3 mm: large

Measurements of both body length and mandible length were taken using a dissecting microscope (Nikon Corporation SE, Japan) with an eyepiece graticule (Nikon, Tokyo, Japan) which was calibrated with an objective micrometer (Olympus, Japan).

Measurements of habitat variables

Habitat parameters were collected in locations where tiger beetles were recorded, during the period from 10:00 h to 15:00 h. The air temperature, solar radiation, relative humidity and wind speed were measured using a portable integrated weather station (Health Enviro-Monitor, Davis Instrument Corp., Hayward, USA). In addition, the habitat type and vegetation distribution in each location were recorded. The soil temperature (using a soil thermometer SG 680-10), soil pH (using a portable soil pH meter, Westminister No. 259), soil salinity (using a YSI model 30 hand-held salinity meter) and soil colour (by comparison with a Munsell soil colour chart, year 2000 revised edition) were estimated in each selected habitat. Soil moisture content was determined using the gravimetric method: five random soil samples were collected to a depth of 10 cm, and the difference in fresh weight upon oven drying at 107-120 °C was estimated in the laboratory.
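The body and mandible size classes described above amount to simple threshold lookups. A minimal sketch in Python (the function names are ours; thresholds are taken directly from the categories listed above, with boundary values assigned to the lower class since the source ranges overlap at their endpoints):

```python
def body_length_class(mm):
    """Map a body length in mm to the study's size class."""
    if mm < 8:
        return "very small"
    if mm <= 10:
        return "small"
    if mm <= 15:
        return "medium"
    if mm <= 20:
        return "large"
    return "very large"


def mandible_length_class(mm):
    """Map a mandible length in mm to the study's size class."""
    if mm < 2:
        return "small"
    if mm <= 3:
        return "medium"
    return "large"
```

For example, a beetle of body length 12.6 mm falls in the "medium" class, matching the reservoir-habitat range reported later in the paper.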
Statistical Analyses

The body length and mandible length of tiger beetles of different habitats, and the habitat parameters, were compared using one-way analysis of variance and Tukey's multiple comparison method in the Minitab 16.0 statistical software package. Habitat parameters of urban habitats were not included in the statistical analyses, as tiger beetles were found in only two urban locations. The correlations (Pearson's correlation coefficient) between the body length of beetles and selected habitat parameters (those which showed significant differences among habitat types) were also investigated using Minitab 16.0.

Locations and habitat types of tiger beetles

Tiger beetles were recorded from thirty-seven locations in Sri Lanka in four different habitat types, namely coastal, riverine, urban and reservoir (Figure 1; Table 1 in Appendix I). The majority of the beetles were found in coastal, riverine and reservoir habitats; tiger beetles were found in only two urban locations. A total of ten species were collected from the 37 locations, of which five were found in riverine habitats, four in reservoir habitats, three in coastal habitats and three in urban habitats. Four species were found in more than one habitat type (Tables 1 and 2 in Appendix I).

The ten coastal habitats where tiger beetles were found were characterized by broad beaches with sparse vegetation cover. Such sites were exposed to sunlight or strong wind conditions. These beaches were fringed with Ipomoea spp., Pandanus spp., Mimosa spp., grasses and coconut trees (Cocos nucifera).

The fourteen riverine habitats where tiger beetles were found contained large trees such as Artocarpus sp., Areca sp., Mangifera indica, Hevea sp. and bamboo bordering the river bank. Moreover, ferns, Colocasia sp., Mimosa sp. and tall grasses at such sites provided a thick undergrowth. Moist rocks were also found in most of the riverine habitats. Figure 1.
Sampling locations of tiger beetles (see Table 1 in Appendix I).

The urban habitats of tiger beetles included landscaped gardens with ornamental plants that provided sufficient shade as well as moisture. The reservoir habitats consisted of sparsely vegetated sandy banks covered with Mimosa sp., Desmodium sp. and grasses, with scattered Azadirachta indica and Limonia acidissima trees.

Morphological parameters of tiger beetles

According to their body lengths, the beetles of coastal and reservoir habitats were medium in size (coastal 10.475-13.625 mm; reservoir 8.1-12.6 mm). In contrast, most of the beetles in riverine and urban habitats were smaller (riverine 7.05-14.0 mm; urban 7.375-11.9 mm) (Figure 2a and Table 2 in Appendix I). Mandibles of beetles in coastal and reservoir habitats were significantly larger than those in riverine and urban habitats (p<0.01) (Figure 2b and Table 2 in Appendix I). Medium-sized mandibles were found in beetles of coastal and reservoir habitats (coastal 1.500-2.575 mm; reservoir 1.750-2.825 mm), while beetles of riverine and urban habitats had smaller mandibles (riverine 1.075-2.500 mm; urban 1.275-2.550 mm).

Habitat variables

The wind speed, soil moisture and soil pH of coastal and reservoir habitats were significantly different from those of riverine habitats (Table 1). In contrast, the air temperature, solar radiation, relative humidity, soil temperature and soil colour of coastal, riverine and reservoir habitats did not vary significantly from each other.

Wind speed was significantly higher in coastal and reservoir habitats than in riverine habitats (p<0.01) (Table 1). Median wind speed was highest for reservoir habitats (7.0) and lowest for riverine habitats (0). Coastal habitats also exhibited a high median wind speed (3.5) but had the greatest variability, with an interquartile range of 11.5. Riverine habitats demonstrated the least variability, with an interquartile range of only 0.5.
Soil moisture varied significantly among coastal, riverine and reservoir habitats (p<0.05) (Table 1). Median soil moisture was highest for riverine habitats (10.29), though these demonstrated the greatest variability (interquartile range of 22.39). Coastal and reservoir habitats had similar soil moisture levels (medians of 3.5 and 2.92, respectively) with somewhat high variability (interquartile ranges of 8.3695 and 8.0965, respectively).

Soil pH was also significantly higher in coastal and reservoir habitats than in riverine habitats (p<0.01) (Table 1). Median soil pH was highest in coastal habitats (7.5) and lowest in riverine habitats (6.2). Reservoir habitats also exhibited a high median soil pH (7.0) and had the lowest variability (interquartile range of 0.3). Riverine habitats demonstrated the highest variability, with an interquartile range of 2.2.

Soil salinity was significantly higher in coastal habitats than in riverine and reservoir habitats (p<0.01) (Table 1).

The wind speed and soil pH of urban habitats were very similar to those of riverine habitats. However, the soil moisture of urban habitats resembled that of coastal and reservoir habitats (Table 1). Further collections of tiger beetles from urban sampling locations are required to understand the relationship between the habitat variables of urban and the other studied habitat types.

Correlation between body size and habitat variables

A moderately positive correlation was observed between the wind speed of the habitats and the body length of tiger beetles at the 0.01 level of significance (Figure 3a). Similarly, mandible length correlated positively with the wind speed of the habitats (Figure 3b). The soil pH of habitats also correlated positively with the body and mandible lengths of tiger beetles at the 0.01 level of significance (Figures 3c and d).
A negative but weak correlation was observed between the soil moisture of habitats and the body length of tiger beetles (Figure 3e). Mandible length, too, showed a weak negative correlation with soil moisture at the 0.05 level of significance (Figure 3f). Thus, tiger beetles with larger body sizes can be expected in open habitats (coastal and reservoir) with high wind speed, low soil moisture and neutral to alkaline soils, while smaller species can be expected in habitats with low wind speed, high soil moisture and acidic soils. It has also been reported that tiger beetles with larger body sizes inhabit bare-ground sites such as mountain climbing paths, trails and areas used for livestock grazing in China and Korea (Hori, 1982), while species with body lengths ranging from 12.61 to 13.30 mm occupy beach habitats in Northeast Arizona (Schultz and Hadley, 1987). Cicindela hirticollis (body length = 12-15 mm) occurs in water-edge habitats of the Atlantic and Pacific coasts, major estuaries and the Great Lakes of the United States and Canada (Allen and Acciavatti, 2002; Knisley and Fenster, 2005). Smaller species (with body lengths less than 10 mm), such as Cicindela cursitans, have been recorded from bunch-grass prairies near rivers (Brust et al., 2005). Cicindela viridicollis, with a body length of 6-7 mm, inhabits grassy fields in Cuba (Schiefer, 2004).
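The correlations reported above were computed in Minitab using Pearson's coefficient; for reference, the coefficient follows directly from its definition, r = cov(x, y) / (sd(x) · sd(y)). A minimal sketch with made-up illustrative numbers (not the study's measurements):

```python
import math


def pearson_r(xs, ys):
    """Pearson's correlation coefficient: cov(x, y) / (sd(x) * sd(y))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Illustrative only: wind speed (km/h) vs. body length (mm)
wind = [0.0, 0.5, 3.5, 7.0, 11.5]
body = [7.4, 8.1, 10.5, 12.6, 13.6]
r = pearson_r(wind, body)  # strongly positive, like the trend in Figure 3a
```

A value of r near +1 indicates the moderate-to-strong positive association described for wind speed and body length; the soil moisture correlations would correspondingly yield small negative values.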
Further, tiger beetles are prey to predators such as insectivorous birds, insectivorous lizards, water scorpions, dragonflies, wasps of the family Tiphiidae, beeflies of the family Bombyliidae and robberflies of the family Asilidae (Bhargav and Uniyal, 2008; Pearson, 1990; Schultz, 1983; Sinu et al., 2006). The body size of tiger beetles is related to predation by various predator types: small tiger beetles are consumed by insectivorous lizards, spiders and robber flies, while large species are predated upon by insectivorous birds (Pearson, 1990). Robber flies are the most regular predators of tiger beetles, and an inverse correlation is found between prey size and the attack rate of robber flies (Shelly and Pearson, 1978; Choate, 2010). Robber flies are known to occur in wet habitats of forest ecosystems, and their occurrence reflects the availability of prey in the habitat (Vogler and Kelley, 1996; Cannings, 1998). As robber flies prefer tiger beetle species with small body sizes (Shelly and Pearson, 1978), and as their occurrence reflects the availability of prey in the habitat, we can assume that small tiger beetles frequent wet habitats with vegetation, such as the shaded banks of rivers.
Soil moisture is a key factor that influences oviposition site selection and habitat segregation in tiger beetles (Cornelisse and Hafernik, 2009; Ganeshaiah and Belavadi, 1986). The female tiger beetle selects sites for oviposition and burrow formation using its posterior abdominal segments, which are sensitive to the soil moisture content of the habitat. Certain tiger beetle species prefer low moisture levels while others are attracted to high moisture levels. For instance, Cicindela limbata albissima, a sand dune specialist, prefers habitats with less than 4% moisture, while Cicindela tranquebarica, which inhabits riparian and other water-edge habitats, is always attracted to high moisture levels (Romey and Knisley, 2002). Females of Cicindela cursitans and Cicindela hirticollis oviposit strictly in moist soils (Brust et al., 2005; Brust et al., 2006). Further, Cicindela denverensis and Cicindela limbalis, which often co-occur in Nebraska, are mostly geographically separated as a result of differing moisture preferences (Brust et al., 2012). Likewise, the tiger beetle species of Sri Lanka may have different soil moisture preferences: species requiring high moisture levels may occupy riverine habitats, while species requiring low moisture levels may occupy coastal and reservoir habitats. In the current study, the body size of tiger beetles was inversely but weakly correlated with the soil moisture of the habitat (Figures 3e and f), indicating the presence of large-sized tiger beetles in soils of low moisture and smaller species in soils of high moisture.
There are only a few studies that focus on the effects of soil pH on tiger beetles. Cornelisse and Hafernik (2009) report that Cicindela oregona prefers acidic soil, as it is devoid of harmful fungi and bacteria that could infect larvae. However, the present study revealed that large populations of tiger beetles were found in coastal habitats with alkaline soils and reservoir habitats with neutral soils. Fungi are known to grow more readily in alkaline sands (Cornelisse and Hafernik, 2009), so coastal habitats, which are significantly more alkaline than reservoir habitats (pH = 7.08), could be expected to be unfavourable for tiger beetles. However, as the sandy soils of coastal habitats are significantly saline, and as increased salinity negatively affects the growth and infectivity of various fungi (Cornelisse and Hafernik, 2009), the occurrence of tiger beetle species in coastal habitats despite their alkaline soils can be explained. The present study further reports that tiger beetles with large body sizes are found in habitats with neutral to alkaline soils, while small species occupy habitats with acidic soils.
Wind speed is also an important feature of tiger beetle habitats, though it has not been studied in detail. Periodic disturbances by wind prevent vegetation from encroaching upon habitats and provide the sparsely vegetated areas that are preferred by tiger beetles for foraging and oviposition (Brust et al., 2006; Warren and Buettner, 2008). At Calamus Reservoir, Nebraska, strong winds blow and trap insects such as grasshoppers, bees, wasps and other beetles within sand dunes, where they are preyed upon by the tiger beetles inhabiting these dunes (Thoma and MacRae, 2008). Such trapping of insects may be especially favourable for the larvae of tiger beetles, which are sedentary predators that feed on small arthropods captured from the mouths of their burrows (Fenster et al., 2006). Rearing experiments, both in the laboratory and the field, by Hori (1982) have shown that the size of the adult tiger beetle depends on the quantity of prey animals consumed during the larval period. Thus, larval tiger beetles that have access to a rich source of insects trapped within sand dunes by high winds may develop into adults with larger body sizes. Further, wind speed plays an important role in modifying the environment close to the ground, and higher wind speeds can have a desiccating effect through mechanical mixing of the air adjacent to the soil surface (McGinn, 2010). According to Renault and Coray (2004), as the size of an insect increases, there is a proportional decrease in its relative surface area. Hence, an insect of smaller size has a larger relative surface area compared with one of larger size, and water will be lost through evaporation at a higher rate through the body of smaller insects. Thus, a larger body size will be favoured in windy places, as was seen in the present study, where a positive correlation was evident between the body size of tiger beetles and the wind speed of habitats.
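The desiccation argument rests on a geometric scaling fact: relative surface area decreases in proportion to linear size. A crude sketch, modelling the body as a sphere purely for illustration (not a morphological claim from the study): for a sphere, SA/V = 3/r, so halving the body size doubles the relative surface area exposed to drying air.

```python
import math


def surface_to_volume_ratio(radius):
    """For a sphere: SA/V = (4*pi*r**2) / ((4/3)*pi*r**3) = 3/r."""
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume


# A body half the size has twice the relative surface area, hence a
# higher relative rate of evaporative water loss in windy conditions.
small = surface_to_volume_ratio(1.0)
large = surface_to_volume_ratio(2.0)
```

This is why the text expects larger-bodied species to tolerate the windy, desiccating conditions of coastal and reservoir habitats better than small-bodied species.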
We conclude that tiger beetles of coastal and reservoir habitats of Sri Lanka have large body sizes, while species of riverine and urban habitats are small-bodied. The wind speed, soil moisture and soil pH of the habitats may affect the body size of species by influencing prey abundance, oviposition site selection, desiccation and habitat suitability. This study is the first to examine the association of the body size of tiger beetles with habitat type and the causes for this association. However, more field studies on prey availability in different habitat types, tiger beetle-predator relationships and larval biology are needed to explain the observed patterns more realistically.

Figure 2. Average (a) body and (b) mandible lengths of tiger beetles in different habitat types of Sri Lanka.

Table 1. Average values (± standard error of mean) and the range (within brackets) of the measured habitat microclimatic parameters. Values sharing a common letter (s) within the same column are not significantly different according to Tukey's multiple comparison test. Urban locations were not included in the statistical analyses.

Figure 3. Scatterplots (with regression) showing the correlation between (a) wind speed and body length; (b) wind speed and mandible length; (c) soil pH and body length; (d) soil pH and mandible length; (e) soil moisture and body length and (f) soil moisture and mandible length of tiger beetles.

Table 2. Average body and mandible lengths of tiger beetles in different habitats of Sri Lanka, with standard error values (n = number of individuals). Different n values for body length and mandible length exist, as broken and worn-out mandibles were disregarded.
Enhancing combinatorial optimization with classical and quantum generative models

Devising an efficient exploration of the search space is one of the key challenges in the design of combinatorial optimization algorithms. Here, we introduce the Generator-Enhanced Optimization (GEO) strategy: a framework that leverages any generative model (classical, quantum, or quantum-inspired) to solve optimization problems. We focus on a quantum-inspired version of GEO relying on tensor-network Born machines, referred to hereafter as TN-GEO. To illustrate our results, we run these benchmarks in the context of the canonical cardinality-constrained portfolio optimization problem, constructing instances from the S&P 500 and several other financial stock indexes, and demonstrate how the generalization capabilities of these quantum-inspired generative models can provide real value in the context of an industrial application. We also comprehensively compare state-of-the-art algorithms and show that TN-GEO is among the best; a remarkable outcome given that the solvers used in the comparison have been fine-tuned for decades in this real-world industrial application. This is also a promising step toward a practical advantage with quantum-inspired models and, subsequently, with quantum generative models.

I. INTRODUCTION

Along with machine learning and the simulation of materials, combinatorial optimization is one of the top candidates for practical quantum advantage, that is, the moment when a quantum-assisted algorithm outperforms the best classical algorithms in the context of a real-world application with commercial or scientific value. There is an ongoing portfolio of techniques to tackle optimization problems with quantum subroutines, ranging from algorithms tailored for quantum annealers (e.g., Refs. [1, 2]) and gate-based quantum computers (e.g., Refs. [3, 4]) to quantum-inspired (QI) models based on tensor networks (e.g., Ref. [5]).
Regardless of the quantum optimization approach proposed to date, there is a need to translate the real-world problem into a polynomial unconstrained binary optimization (PUBO) expression, a task which is not necessarily straightforward and which usually results in an overhead in terms of the number of variables. Specific real-world use cases illustrating these PUBO mappings are depicted in Refs. [6] and [7]. Therefore, to achieve practical quantum advantage in the near term, it would be ideal to find a quantum optimization strategy that can work on arbitrary objective functions, bypassing the translation and overhead limitations raised here.

In our work, we offer a solution to these challenges by proposing a novel generator-enhanced optimization (GEO) framework which leverages the power of (quantum or classical) generative models. This family of solvers can scale to large problems where combinatorial problems become intractable in real-world settings. Since our optimization strategy does not rely on the details of the objective function to be minimized, it falls in the group of so-called black-box solvers. Another highlight of our approach is that it can utilize available observations obtained from previous attempts to solve the optimization problem. These initial evaluations can come from any source, from random search trials to tailored state-of-the-art (SOTA) classical or quantum optimizers for the specific problem at hand.

Our GEO strategy is based on two key ideas. First, the generative-modeling component aims to capture the correlations in the previously observed data (steps 0-3 in Fig. 1). Second, since the focus here is on a minimization task, the (quantum) generative model needs to be capable of generating new "unseen" solution candidates which have the potential to achieve a lower value of the objective function than those already "seen" and used as the training set (steps 4-6 in Fig.
1). This exploration towards unseen and valuable samples is, by definition, the fundamental concept behind generalization: the most desirable and important feature of any practical ML model. We will elaborate next on each of these components and demonstrate these two properties in the context of tensor-network-based generative models and their application to a non-deterministic polynomial-time hard (NP-hard) version of portfolio optimization in finance.

To the best of our knowledge, this is the first optimization strategy proposed to perform an efficient black-box exploration of the objective-function landscape with the help of generative models. Although other proposals leveraging generative models as a subroutine within the optimizer have appeared since the publication of our manuscript (e.g., see GFlowNets [8] and the variational neural annealing [9] algorithms), our framework is the only one capable of both handling arbitrary cost functions and swapping the generator for a quantum or quantum-inspired implementation. GEO also has the feature that the more data are available, the more information can be passed and used to train the (quantum) generator.

In this work, we highlight the different features of GEO by performing a comparison with alternative solvers, such as Bayesian optimizers and generic solvers like simulated annealing. In the case of the specific real-world large-scale application of portfolio optimization, we compare against the SOTA optimizers and show the competitiveness of our approach. These results are presented in Sec. III. Next, in Sec. II, we present the GEO approach and its range of applicability.

II. QUANTUM-ENHANCED OPTIMIZATION WITH GENERATIVE MODELS

As shown in Fig.
1, depending on the GEO specifics, we can construct an entire family of solvers whose generative-modeling core ranges from classical, QI or quantum-circuit (QC) enhanced, to hybrid quantum-classical models. These options can be realized by utilizing, for example, Boltzmann machines [10] or Generative Adversarial Networks (GAN) [11], Tensor-Network Born Machines (TNBM) [12], Quantum Circuit Born Machines (QCBM) [13] or Quantum-Circuit Associative Adversarial Networks (QC-AAN) [14], respectively, to name just a few of the many options for this probabilistic component.

QI algorithms are an interesting alternative, since they allow one to simulate larger-scale quantum systems with the help of efficient tensor-network (TN) representations. Depending on the complexity of the TN used to build the quantum generative model, one can simulate from thousands of problem variables down to a few tens, the latter being the limit of simulating a universal gate-based quantum computing model. That is, one can control the amount of quantum resources available in the quantum generative model by choosing the QI model.

Therefore, from all the quantum generative model options, we chose a QI generative model based on TNs to test and scale our GEO strategy to instances with a number of variables commensurate with those found in industrial-scale scenarios. We refer to our solver hereafter as TN-GEO. For the training of our TN-GEO models we followed the work of Han et al. [15], who proposed using Matrix Product States (MPS) to build the unsupervised generative model. The latter extends the scope of early successes of quantum-inspired models in the context of supervised ML [16-19].
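The GEO loop itself is generator-agnostic: fit a generative model to the best observations seen so far, sample new candidates, evaluate them, and merge them back into the data set. As a minimal classical stand-in for the TNBM generator, the sketch below fits an independent-Bernoulli model to the lowest-cost bitstrings; the toy cost function and all parameter values are our own illustrative choices, not the paper's:

```python
import random


def cost(bits):
    """Toy objective: Hamming distance to a hidden target bitstring."""
    target = (1, 0, 1, 1, 0, 0, 1, 0)
    return sum(b != t for b, t in zip(bits, target))


def geo_sketch(n=8, n_seed=20, iters=15, n_new=20, n_elite=10, seed=0):
    rng = random.Random(seed)
    seen = {}
    # Step 0: seed data set from any solver (here: random search)
    while len(seen) < n_seed:
        x = tuple(rng.randint(0, 1) for _ in range(n))
        seen[x] = cost(x)
    for _ in range(iters):
        # Steps 1-3: fit the "generator" to the lowest-cost observations
        elite = sorted(seen, key=seen.get)[:n_elite]
        p = [sum(x[j] for x in elite) / len(elite) for j in range(n)]
        # Steps 4-8: sample new candidates, evaluate, merge into the data set
        for _ in range(n_new):
            x = tuple(1 if rng.random() < pj else 0 for pj in p)
            if x not in seen:
                seen[x] = cost(x)
    return min(seen.values())
```

In TN-GEO the Bernoulli stand-in is replaced by an MPS Born machine, whose correlated sampling is what the paper argues gives the method its generalization power.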
In this paper we will discuss two modes of operation for our family of quantum-enhanced solvers:

• In TN-GEO as a "booster", we leverage past observations from classical (or quantum) solvers. To illustrate this mode we use observations from simulated annealing (SA) runs. Simulation details are provided in Appendix A 5.

• In TN-GEO as a stand-alone solver, all initial cost-function evaluations are decided entirely by the quantum-inspired generative model, and a random prior is constructed only to give support to the target probability distribution the MPS model is aiming to capture. Simulation details are provided in Appendix A 6.

Both of these strategies are captured in the algorithm workflow diagram in Fig. 1 and described in more detail in Appendix A.

III. RESULTS AND DISCUSSION

To illustrate the implementation of both of these settings, we tested their performance on an NP-hard version of the portfolio optimization problem with cardinality constraints. The selection of optimal investments in a specific set of assets, or portfolios, is a problem of great interest in the area of quantitative finance. This problem is of practical importance for investors, whose objective is to allocate capital optimally among assets while respecting some investment restrictions. The goal of this optimization task, introduced by Markowitz [20], is to generate a set of portfolios that offers either the highest expected return (profit) for a defined level of risk or the lowest risk for a given level of expected return. In this work, we focus on two variants of this cardinality-constrained optimization problem. The first scenario aims to choose portfolios which minimize the volatility or risk given a specific target return (more details are provided in Appendix A 1).
To compare with the reported results from the best performing SOTA algorithms, we ran TN-GEO in a second scenario where the goal is to choose the best portfolio given a fixed level of risk aversion. This is the most commonly used version of this optimization problem when it comes to comparison among SOTA solvers in the literature (more details are provided in Appendix A 2).

A. TN-GEO as a booster for any other combinatorial optimization solver

In Fig. 2 we present the experimental design and the results obtained from using TN-GEO as a booster. In these experiments we illustrate how intermediate results from simulated annealing (SA) can be used as seed data for our TN-GEO algorithm. As described in Fig. 2, there are two strategies we explored (strategies 1 and 2) to compare with our TN-GEO strategy (strategy 4). To fairly compare each strategy, we provide each with approximately the same computational wall-clock time. For strategy 2, this translates into performing additional restarts of SA with the time allotted for TN-GEO. In the case of strategy 1, where we explored different settings for SA from the start compared to those used in strategy 2, this amounts to using the same total number of cost function evaluations as those allocated to SA in strategy 2.

FIG. 1. Scheme for our Generator-Enhanced Optimization (GEO) strategy. The GEO framework leverages generative models to utilize previous samples coming from any quantum or classical solver. The trained quantum or classical generator is responsible for proposing candidate solutions which might be out of reach for conventional solvers. This seed data set (step 0) consists of observation bitstrings {x^(i)}_seed and their respective costs {σ^(i)}_seed. To give more weight to samples with low cost, the seed samples and their costs are used to construct a softmax function which serves as a surrogate to the cost function but in probabilistic domain. This softmax surrogate also serves as a prior distribution from which the training-set samples are withdrawn to train the generative model (steps 1-3). As shown in the figure between steps 1 and 2, training samples from the softmax surrogate are biased favoring those with low cost value. For the work presented here, we implemented a tensor-network (TN)-based generative model. Therefore, we refer to this quantum-inspired instantiation of GEO as TN-GEO. Other families of generative models, from classical, quantum, or hybrid quantum-classical, can be explored as expounded in the main text. The quantum-inspired generator corresponds to a tensor-network Born machine (TNBM) model which is used to capture the main features in the training data, and to propose new solution candidates which are subsequently post-selected before their costs {σ^(i)}_new are evaluated (steps 4-6). The new set is merged with the seed data set (step 7) to form an updated seed data set (step 8) which is to be used in the next iteration of the algorithm. More algorithmic details for the two TN-GEO strategies proposed here, as a booster or as a stand-alone solver, can be found in the main text and in A 5 and A 6 respectively.
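The workflow of Fig. 1 can be sketched as a plain optimization loop. The following is a minimal illustration of steps 1-8 (the function and variable names are our own, and the generative model is left abstract rather than being the MPS implementation of the paper):

```python
import numpy as np

def geo_loop(cost, seed_x, seed_cost, train_model, n_train=10_000,
             n_samples=4_000, n_iters=1):
    """Minimal sketch of the GEO iteration of Fig. 1 (steps 1-8).

    cost:        callable evaluating the objective on a bitstring
    seed_x:      list of seed bitstrings (step 0)
    seed_cost:   their objective values
    train_model: returns a trained generative model with a .sample(n) method
    """
    seed_x, seed_cost = list(seed_x), list(seed_cost)
    for _ in range(n_iters):
        # Step 1: softmax surrogate over the seed data (Boltzmann weights)
        T = np.std(seed_cost) or 1.0
        w = np.exp(-np.asarray(seed_cost) / T)
        p = w / w.sum()
        # Step 2: draw a biased training set from the surrogate
        idx = np.random.choice(len(seed_x), size=n_train, p=p)
        training_set = [seed_x[i] for i in idx]
        # Step 3: train the (quantum or classical) generative model
        model = train_model(training_set)
        # Steps 4-5: sample candidates and keep only unseen ones
        seen = set(map(tuple, seed_x))
        new_x = [x for x in model.sample(n_samples) if tuple(x) not in seen]
        # Step 6: evaluate the objective on the new candidates
        new_cost = [cost(x) for x in new_x]
        # Steps 7-8: merge into the seed set for the next iteration
        seed_x += new_x
        seed_cost += new_cost
    return seed_x, seed_cost
```

In the paper's setting, `train_model` would be the MPS-based Born machine of Appendix A 3 and `cost` the portfolio-risk evaluation of Eq. A2.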
For our experiments this number was set to 20,000 cost function evaluations for strategies 1 and 2. In strategy 4, TN-GEO was initialized with a prior consisting of the best 1,000 observations out of the first 10,000 coming from strategy 2 (see Appendix A 5 for details). To evaluate the performance enhancement obtained from the TN-GEO strategy we compute the relative TN-GEO enhancement η, which we define as

η = (C^cl_min − C^TN-GEO_min) / |C^cl_min|.

Here, C^cl_min is the lowest minimum value found by the classical strategy (e.g., strategies 1-3) while C^TN-GEO_min corresponds to the lowest value found with the quantum-enhanced approach (e.g., with TN-GEO). Therefore, positive values reflect an improvement over the classical-only approaches, while negative values indicate cases where the classical solvers outperform the quantum-enhanced proposal.

As shown in Fig. 2, we observe that TN-GEO outperforms on average both of the classical-only strategies implemented. The quantum-inspired enhancement observed here, as well as the trend towards a larger enhancement as the number of variables (assets) becomes larger, is confirmed in many other investment universes with a number of variables ranging from N = 30 to N = 100 (see Appendix B for more details). Although we show an enhancement compared to SA, similar results could be expected when other solvers are used, since our approach builds on solutions found by the solver and does not compete with it from the start of the search. Furthermore, the more data available, the better the expected performance of TN-GEO. An important highlight of TN-GEO as a booster is that these previous observations can come from a combination of solvers, as different as purely quantum, purely classical, or hybrid.
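The relative enhancement η, with positive values indicating that TN-GEO found a lower minimum than the classical-only run (the normalization by |C^cl_min| is our reading of the definition), is straightforward to compute from the two lists of observed cost values:

```python
def relative_enhancement(classical_costs, tn_geo_costs):
    """Relative TN-GEO enhancement: positive when TN-GEO finds a lower minimum."""
    c_cl = min(classical_costs)   # best value from the classical-only strategy
    c_qi = min(tn_geo_costs)      # best value from the quantum-enhanced run
    return (c_cl - c_qi) / abs(c_cl)

# Hypothetical risk values: TN-GEO reaching a lower minimum than the baseline
print(relative_enhancement([0.50, 0.42], [0.45, 0.40]))  # positive → improvement
```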
The observed performance enhancement compared with the classical-only strategies must come from a better exploration of the relevant search space, i.e., the space of those bitstring configurations x representing portfolios which could yield a low risk value for a specified expected investment return. That is the intuition behind the construction of TN-GEO. The goal of the generative model is to capture the important correlations in the previously observed data, and to use its generative capabilities to propose similar new candidates.

Generating new candidates is by no means a trivial task in ML, and it determines the usefulness and power of the model since it measures its generalization capabilities. In this setting of QI generative models, one expects that the MPS-based generative model at the core of TN-GEO is not simply memorizing the observations given as part of the training set, but that it will provide new unseen candidates. This is an idea which has been recently tested and demonstrated to some extent on synthetic data sets (see e.g., Refs. [21], [22] and [23]). In Fig. 3 we demonstrate that our quantum-inspired generative model is generalizing to new samples and that these add real value to the optimization search. To the best of our knowledge this is the first demonstration of the generalization capabilities of quantum generative models in the context of a real-world application in an industrial-scale setting, and it is one of the main findings of our paper. Note that our TN-based generative model not only produces better minima than the classical seed data, but it also generates a rich amount of samples in the low-cost spectrum. This bias is imprinted in the design of our TN-GEO, and it is the purpose of the softmax surrogate prior distribution shown in Fig.
1. This richness of new samples could be useful not only for the next iteration of the algorithm, but it may also be readily of value to the user solving the application. In some applications there is value as well in having information about the runners-up. Ultimately, the cost function is just a model of the system guiding the search, and the lowest cost does not necessarily translate to the best performance in the real-life investment strategy.

B. Generator-Enhanced Optimization as a Stand-Alone Solver

Next, we explore the performance of our TN-GEO framework as a stand-alone solver. The focus is on combinatorial problems whose cost functions are expensive to evaluate and where finding the best minimum within the least number of calls to this function is desired. In Fig. 4 we present the comparison against four different classical optimization strategies. As the first solver, we use the random solver, which corresponds to a fully random search strategy over the 2^N bitstrings of all possible portfolios, where N is the number of assets in our investment universe. As the second solver, we use the conditioned random solver, which is a more sophisticated random strategy compared to the fully random search. The conditioned random strategy uses the a priori information that the search is restricted to bitstrings containing a fixed number κ of assets. Therefore the number of combinatorial possibilities is the binomial coefficient M = (N choose κ), which is significantly less than 2^N. As expected, when this information is not used, the performance of the random solver over the entire 2^N search space is worse. The other two competing strategies considered here are SA and the Bayesian optimization library GPyOpt [24]. In both of these classical solvers, we adapted their search strategy to impose this cardinality constraint with fixed κ as well (details in Appendix A 4). This raises the bar even higher for TN-GEO, which is not using that a priori information to boost its performance [25]. As explained in Appendix A 6, we only use
this information indirectly during the construction of the artificial seed data set which initializes the algorithm (step 0, Fig. 1), but it is not a strong constraint during the construction of the QI generative model (step 3, Fig. 1). Post selection can be applied a posteriori such that only samples with the right cardinality are considered as valid candidates towards the selected set (step 5, Fig. 1).

FIG. 3 (caption, continued). In orange we represent samples coming from our quantum generative model at the core of TN-GEO. The green dashed line is positioned at the best risk value found in the seed data. This mark emphasizes all the new outstanding samples obtained with the quantum generative model, which correspond to lower portfolio risk values (better minima) than those available from the classical solver by itself. The number of outstanding samples in the case of N = 50 is equal to 31, while 349 outstanding samples were obtained from the MPS generative model in the case of N = 100.

FIG. 4. TN-GEO as a stand-alone solver: In this comparison of TN-GEO against four classical competing strategies, investment universes are constructed from subsets of the S&P 500 with a diversity in the number of assets (problem variables) ranging from N = 30 to N = 100. The goal is to minimize the risk given an expected return, which is one of the specifications in the combinatorial problem addressed here. Error bars and their 95% confidence intervals are calculated from bootstrapping over 100 independent random initializations for each solver on each problem. The main line for each solver corresponds to the bootstrapped median over these 100 repetitions, demonstrating the superior performance of TN-GEO over the classical solvers considered here. As specified in the text, with the exception of TN-GEO, the classical solvers use to their advantage the a priori information coming from the cardinality constraint imposed in the selection of valid portfolios.
In Fig. 4 we demonstrate the advantage of our TN-GEO stand-alone strategy compared to any of these widely-used solvers. In particular, it is interesting to note that the gap between TN-GEO and the other solvers seems to be larger for a larger number of variables.

The test data used by the vast majority of researchers in the literature who have addressed the problem of cardinality-constrained portfolio optimization come from the OR-Library [35], which corresponds to the weekly prices between March 1992 and September 1997 of the following indexes: Hang Seng in Hong Kong (31 assets); DAX 100 in Germany (85 assets); FTSE 100 in the United Kingdom (89 assets); S&P 100 in the United States (98 assets); and Nikkei 225 in Japan (225 assets).

Here we present the results obtained with TN-GEO and its comparison with the nine different SOTA metaheuristic algorithms mentioned above, whose results are publicly available in the literature. Table I shows the results of all algorithms and all performance metrics for each of the 5 index data sets (for more details on the evaluation metrics, see Appendix A 2). Each algorithm corresponds to a different column, with TN-GEO in the rightmost column. The values are shown in red if the TN-GEO algorithm performed better than or equally well as the other algorithms on the corresponding performance metric. The numbers in bold mean that the algorithm found the best (lowest) value across all algorithms. Of all the entries in this table, 67% are red entries, where TN-GEO either wins or draws, which is a significant percentage given that these optimizers are among the best reported in the last decades.
In Table II we show a pairwise comparison of TN-GEO against each of the SOTA optimizers. This table reports the number of times TN-GEO wins, loses, or draws compared to the results reported for the other optimizer, across all the performance metrics and for all 5 different market indexes. Note that since not all the performance metrics are reported for all the solvers and market indexes, the total number of wins, draws, or losses varies. Therefore, we report in the same table the overall percentage of wins plus draws in each case. We see that this percentage is greater than 50% in all cases. Furthermore, to statistically validate the results and provide a meaningful comparison between TN-GEO and the SOTA metaheuristic algorithms, in Table II we use the Wilcoxon signed-rank test [36], a widely used nonparametric statistical test for evaluating and comparing the performance of different algorithms across benchmarks [37]. The Wilcoxon signed-rank test tests the null hypothesis that the median of the differences between the results of the algorithms is equal to 0, i.e., that there is no significant difference between the performance of the algorithms. The null hypothesis is rejected if the significance value (p) is less than the significance level (α), which means that one of the algorithms performs better than the other. Otherwise, the hypothesis is retained.
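As an illustration of this procedure, the test can be run with SciPy's `scipy.stats.wilcoxon` on paired metric values (the numbers below are made up for illustration, not the values from Table I):

```python
from scipy.stats import wilcoxon

# Hypothetical paired performance metrics of two solvers on the same benchmarks
solver_a = [0.12, 0.31, 0.05, 0.22, 0.40, 0.18, 0.27]
solver_b = [0.15, 0.35, 0.06, 0.30, 0.42, 0.25, 0.30]

# Null hypothesis: the median of the paired differences is zero
stat, p = wilcoxon(solver_a, solver_b)
alpha = 0.05
if p < alpha:
    print(f"p = {p:.4f} < {alpha}: reject the null (performances differ)")
else:
    print(f"p = {p:.4f} >= {alpha}: retain the null (no significant difference)")
```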
As can be seen from the table, the TN-GEO algorithm significantly outperforms the GTS and PBILD methods on all performance metrics, rejecting the null hypothesis at the 0.05 significance level. On the other hand, the null hypotheses are retained at α = 0.05 for the TN-GEO algorithm versus the other remaining algorithms. Thus, in terms of performance on all metrics combined, the results show that there is no significant difference between TN-GEO and these remaining seven SOTA optimizers (IPSO, IPSO-SA, GRASP, ABCFEIT, HAAG, VNSQP, and RCABC). Overall, the results confirm the competitiveness of our quantum-inspired approach against SOTA metaheuristic algorithms. This is remarkable given that these metaheuristics have been explored and fine-tuned for decades.

IV. OUTLOOK

Compared to other quantum optimization strategies, an important feature of TN-GEO is its algorithmic flexibility. As shown here, unlike other proposals, our GEO framework can be applied to arbitrary cost functions, which opens the possibility of new applications that cannot be easily addressed by an explicit mapping to a polynomial unconstrained binary optimization (PUBO) problem. Our approach is also flexible with respect to the source of the seed samples, as they can come from any solver, possibly more efficient or even application-specific optimizers. The demonstrated generalization capabilities of the generative model that forms its core help TN-GEO build on the progress of previous experiments with other state-of-the-art solvers, and it provides new candidates that the classical optimizer may not be able to reach on its own. We are optimistic that this flexible approach will open up the broad applicability of quantum and quantum-inspired generative models to real-world combinatorial optimization.

TABLE II. Pairwise comparison of TN-GEO against each of the SOTA optimizers. The asymptotic significance is part of the Wilcoxon signed-rank test results. The null hypothesis that the performance of the two
algorithms is the same is tested at the 95% confidence level (significance level α = 0.05). Results show that TN-GEO is on par with all the SOTA algorithms and, in two cases, GTS and PBILD, it significantly outperforms them. We also report the count of TN-GEO wins, losses, and ties compared to each of the other algorithms.

Although we have limited the scope of this work to tensor-network-based quantum generative models, it would be a natural extension to consider other quantum generative models as well. For example, hybrid classical-quantum models such as quantum circuit associative adversarial networks (QC-AAN) [14] can readily be explored to harness the power of quantum generative models with so-called noisy intermediate-scale quantum (NISQ) devices [38]. In particular, the QC-AAN framework opens up the possibility of working with a larger number of variables and going beyond discrete values (e.g., variables with continuous values). Both quantum-inspired and hybrid quantum-classical algorithms can be tested in this GEO framework on even larger problem sizes of this NP-hard version of the portfolio optimization problem or any other combinatorial optimization problem. As the number of qubits in NISQ devices increases, it would be interesting to explore generative models that can utilize more quantum resources, such as Quantum Circuit Born Machines (QCBM) [13]: a general framework to model arbitrary probability distributions and perform generative modeling tasks with gate-based quantum computers.
Increasing the expressive power of the quantum-inspired core from MPS to other more complex but still efficient QI approaches, such as tree tensor networks [39], is another interesting research direction. Although we have demonstrated the relevance and scalability of our algorithm for industrial applications by increasing the performance of classical solvers on industrial-scale instances (all 500 assets in the S&P 500 market index), there is a need to explore the performance improvement that could be achieved by more complex TN representations or on other combinatorial problems.

Although the goal of GEO was to show good behavior as a general black-box algorithm without considering the specifics of the application under study, it is a worthwhile avenue to exploit the specifics of the problem formulation to improve its performance and runtime. In particular, for the portfolio optimization problem with a cardinality constraint, it is useful to incorporate this constraint as a natural MPS symmetry, thereby reducing the effective search space of feasible solutions from the size of the full universe to that of the fixed-cardinality subspace.
Finally, our thorough comparison with SOTA algorithms, which have been fine-tuned for decades on this specific application, shows that our TN-GEO strategy manages to outperform a couple of these and is on par with the other seven optimizers. This is a remarkable feat for this new approach and hints at the possibility of finding commercial value in these quantum-inspired strategies in large-scale real-world problems, such as the instances considered in this work. It also calls for more fundamental insights towards understanding when and where it would be beneficial to use this TN-GEO framework, which relies heavily on its quantum-inspired generative ML model. For example, understanding the intrinsic bias in these models, responsible for their remarkable performance, is another important milestone on the road to practical quantum advantage with quantum devices in the near future. The latter can be asserted given the tight connection of these quantum-inspired TN models to fully quantum models deployed on quantum hardware. And the question of when to go with quantum-inspired or fully quantum models is a challenging one that we are exploring in ongoing work.

for a particular asset at time t). The solution to Eq.
A1 for a given return level ρ corresponds to the optimal portfolio strategy w*, and the minimal value of the objective function σ(w) corresponds to the portfolio risk and will be denoted by σ*_ρ. Note that the optimization task in Eq. A1 has the potential outcome of investing small amounts in a large number of assets as an attempt to reduce the overall risk by "over-diversifying" the portfolio. This type of investment strategy can be challenging to implement in practice: portfolios composed of a large number of assets are difficult to manage and may incur high transaction costs. Therefore, several restrictions are usually imposed on the allocation of capital among assets, as a consequence of market rules and conditions for investment or to reflect investor profiles and preferences. For instance, constraints can be included to control the amount of desired diversification, i.e., bound limits per asset i, denoted by {l_i, u_i}, on the proportion of capital invested in individual assets or a group of assets; thus the constraint l_i < w_i < u_i could be considered. Additionally, a more realistic and common scenario is to include in the optimization task a cardinality constraint, which directly limits the number of assets to be transacted to a pre-specified number κ < N. Therefore, the number of different asset selections to be treated is the binomial coefficient M = (N choose κ). In this scenario, the problem can be formulated as a Mixed-Integer Quadratic Program (MIQP) with the addition of binary variables x_i ∈ {0, 1} per asset, for i = 1, ..., N, which are set to "1" when the i-th asset is included as part of the κ assets, or "0" if it is left out of this selected set. Therefore, valid portfolios have a number κ of 1's, as specified by the cardinality constraint. For example, for N = 4 and κ = 2, the six different valid configurations can be encoded as {0011, 0101, 0110, 1001, 1010, 1100}.
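The valid fixed-cardinality configurations above can be enumerated directly with the standard library; a short illustration reproducing the N = 4, κ = 2 example:

```python
from itertools import combinations

def valid_portfolios(n_assets, cardinality):
    """Enumerate all bitstrings of length n_assets with exactly `cardinality` ones."""
    for ones in combinations(range(n_assets), cardinality):
        bits = ['0'] * n_assets
        for i in ones:
            bits[i] = '1'
        yield ''.join(bits)

print(sorted(valid_portfolios(4, 2)))
# → ['0011', '0101', '0110', '1001', '1010', '1100']
```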
The optimization task can then be described as

min_{w,x} σ²(w), subject to r(w) = ρ, Σ_i w_i = 1, Σ_i x_i = κ, and l_i x_i ≤ w_i ≤ u_i x_i for i = 1, ..., N.   (A2)

In this reformulated problem we denote by σ*_{ρ,κ} the minimum portfolio risk outcome from Eq. A2 for a given return level ρ and cardinality κ. The optimal solution vectors w* and x* define the portfolio investment strategy. Adding the cardinality constraint and the investment bound limits transforms a simple convex optimization problem (Eq. A1) into a much harder non-convex NP-hard problem. For all the problem instance generation in this work we chose κ = N/2, and the combinatorial nature of the problem lies in the growth of the search space associated with the binary vector x, which makes it intractable to explore exhaustively for a number of assets in the few hundreds. The size of the search space here is the binomial coefficient (N choose N/2). It is important to note that, given a selection of which assets belong to the portfolio by instantiating x (say with a specific x^(i)), solving the optimization problem in Eq. A2 to find the respective investment fractions w^(i) and risk value σ^(i)_{ρ,N/2} can be achieved efficiently with conventional quadratic programming (QP) solvers. In this work we used the python module cvxopt [40] for solving this problem. Note that we exploit this fact to break this constrained portfolio optimization problem into a combinatorially intractable one (finding the best asset selection x), which we aim to solve with GEO, and a tractable subroutine which can be solved efficiently with available solvers. The set of pairs (σ^κ_ρ, ρ), dubbed the efficient frontier, is no longer convex nor continuous, in contrast with the solution to the problem in Eq. A1.
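For a fixed asset selection x, the inner problem reduces to a QP over the selected weights. The paper uses cvxopt for this; as a dependency-free sketch, the equality-constrained version (ignoring the bound constraints l_i ≤ w_i ≤ u_i for brevity, which would require a full QP solver) can be solved via its KKT linear system with NumPy:

```python
import numpy as np

def min_risk_weights(cov, mean_ret, target_ret):
    """Minimize w^T cov w subject to sum(w) = 1 and mean_ret^T w = target_ret.

    Solves the KKT linear system of the equality-constrained QP. Bound
    constraints would require a full QP solver such as cvxopt.solvers.qp.
    """
    n = len(mean_ret)
    A = np.vstack([np.ones(n), mean_ret])   # equality-constraint matrix
    b = np.array([1.0, target_ret])
    # KKT system: [2*cov  A^T; A  0] [w; lam] = [0; b]
    kkt = np.block([[2 * cov, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(kkt, rhs)
    w = sol[:n]
    return w, w @ cov @ w   # weights and portfolio risk sigma^2(w)
```

For the cardinality-constrained problem, one would restrict `cov` and `mean_ret` to the assets selected by x before calling this routine.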
Problem formulation for comparison with state-of-the-art algorithms

To carry out the comparison with state-of-the-art algorithms, in line with the formulation used there, we generalize the problem in Eq. A2, relaxing the constraint of a fixed level of portfolio return and instead directly incorporating the portfolio return into the objective function, which now encompasses two terms: the one on the left corresponding to the portfolio risk as before, and the one on the right corresponding to the portfolio return. The goal is to balance both terms such that return is maximized and risk minimized. λ is a hyperparameter, named the risk aversion, that controls whether an investor wants to give more weight to risk or to return. The new formulation reads as follows,

min_{w,x} {λ σ²(w) − (1 − λ) r(w)},

with the rest of the constraints and variable definitions as in Appendix A 1.

a. Performance Metrics

To compare the performance of the proposed GEO with the SOTA metaheuristic algorithms in the literature, the most commonly used performance metrics for the cardinality-constrained portfolio optimization problem are used. These metric formulations compute the distance between the heuristic efficient frontier and the unconstrained efficient frontier, so that the performance of the algorithms can be evaluated. Four of these performance metrics (the Mean, Median, Minimum, and Maximum in Table I) are based on the so-called Performance Deviation Errors (PDE). These PDE metrics were formulated by Chang [26], where the pair (X_l, Y_l) (l = 1, ..., ε*) represents a point on the standard efficient frontier and the pair (x_i, y_i) (i = 1, ..., ε) represents a point on the heuristic efficient frontier. Here, ε* denotes the number of points on the standard efficient frontier while ε denotes the number of points on the heuristic efficient frontier. The mean, median, minimum, and maximum of the PDE can be used to compare the performance of the algorithms.
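The risk-aversion trade-off above is a one-liner in code. A hypothetical illustration with NumPy (the two-asset data here is made up):

```python
import numpy as np

def objective(w, cov, mean_ret, lam):
    """Risk-aversion Markowitz objective: lam * sigma^2(w) - (1 - lam) * r(w)."""
    risk = w @ cov @ w        # portfolio variance sigma^2(w)
    ret = mean_ret @ w        # expected portfolio return r(w)
    return lam * risk - (1 - lam) * ret

cov = np.array([[0.04, 0.01], [0.01, 0.09]])
mean_ret = np.array([0.08, 0.12])
w = np.array([0.5, 0.5])
# lam = 1 → pure risk minimization; lam = 0 → pure return maximization
print(objective(w, cov, mean_ret, lam=1.0))  # equals the portfolio variance
```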
Later, three additional performance measures (MEUCD: Mean Euclidean Distance; VRE: Variance of Return Error; MRE: Mean Return Error) were formulated by Cura [41], where (X*_i, Y*_i) is the standard-frontier point closest to the heuristic point (x_i, y_i). Figure 5 shows a graphical representation of the indices used to calculate the performance metrics for the convenience of the reader, and the values for TN-GEO and all the other SOTA optimizers are reported in Table I.

Quantum-Inspired Generative Model in TN-GEO

The addition of a probabilistic component is inspired by the success of Bayesian Optimization (BO) techniques, which are among the most efficient solvers when the performance metric aims to find the lowest minimum possible within the least number of objective function evaluations. For example, within the family of BO solvers, GPyOpt [24] uses a Gaussian Process (GP) framework consisting of multivariate Gaussian distributions. This probabilistic framework aims to capture relationships among the previously observed data points (e.g., through tailored kernels), and it guides the decision of where to evaluate the objective function next. Although the GP framework in BO techniques is not a generative model, we explore here the powerful unsupervised machine learning framework of generative modeling in order to capture correlations from an initial set of observations and evaluations of the objective function (steps 1-4 in Fig. 1). For the implementation of the quantum-inspired generative model at the core of TN-GEO we follow the procedure proposed and implemented in Ref.
[15]. Inspired by the probabilistic interpretation of quantum physics via Born's rule, it was proposed that one can use the Born probabilities |Ψ(x)|² over the 2^N states of an N-qubit system to represent classical target probability distributions which would otherwise be obtained with generative machine learning models. Here, Ψ(x) = ⟨x|Ψ⟩, and the bitstrings x ∈ {0, 1}^N are in one-to-one correspondence with the decision variables over the investment universe with N assets in our combinatorial problem of interest. In Ref. [15] these quantum-inspired generative models were named Born machines, but we will refer to them hereafter as tensor-network Born machines (TNBM) to differentiate them from the quantum circuit Born machines (QCBM) proposal [13], which was developed independently to achieve the same purpose but by leveraging quantum wave functions from quantum circuits on NISQ devices. As explained in the main text, either quantum generative model can be adapted for the purpose of our GEO algorithm. On the grounds of computational efficiency and scalability towards problem instances with a large number of variables (on the order of hundreds or more), following Ref. [15] we implemented the quantum-inspired generative model based on Matrix Product States (MPS) to learn the target distributions. MPS is a type of TN where the tensors are arranged in a one-dimensional geometry. Despite its simple structure, an MPS can represent a large number of quantum states of interest extremely well [42]. Learning with the MPS is achieved by adjusting its parameters such that the distribution obtained via Born's rule is as close as possible to the data distribution. MPS enjoys a direct sampling method that is more efficient than that of other machine learning techniques, for instance Boltzmann machines, which require a Markov chain Monte Carlo (MCMC) process for data generation.
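To make the Born-rule construction concrete, here is a minimal NumPy sketch (our own toy code, not the implementation of Ref. [15]) that evaluates |Ψ(x)|² for a small random MPS, normalizing by exhaustive summation, which is only feasible for tiny N:

```python
import numpy as np
from itertools import product

def born_amplitude(mps, bits):
    """Contract an MPS (list of tensors of shape (left, 2, right)) on bitstring x."""
    m = mps[0][:, bits[0], :]
    for tensor, b in zip(mps[1:], bits[1:]):
        m = m @ tensor[:, b, :]
    return m[0, 0]   # <x|Psi> for open boundary conditions

rng = np.random.default_rng(7)
n, bond = 4, 3
# Random MPS with open boundary conditions (bond dimension 1 at the edges)
dims = [1] + [bond] * (n - 1) + [1]
mps = [rng.normal(size=(dims[i], 2, dims[i + 1])) for i in range(n)]

# Born probabilities |Psi(x)|^2, normalized over all 2^n bitstrings (toy-sized)
amps = {x: born_amplitude(mps, x) for x in product([0, 1], repeat=n)}
Z = sum(a ** 2 for a in amps.values())
probs = {x: a ** 2 / Z for x, a in amps.items()}
print(round(sum(probs.values()), 10))  # → 1.0
```

In practice the normalization is obtained by contracting the MPS with itself rather than by enumeration, which is what makes the approach scale to hundreds of variables.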
The key idea of the method to train the MPS, following the algorithm of Ref. [15], consists of adjusting the values of the tensors composing the MPS, as well as the bond dimensions among them, via the minimization of the negative log-likelihood function defined over the training dataset sampled from the target distribution. For more details on the implementation see Ref. [15], and for the respective code see Ref. [43].

Classical Optimizers

a. GPyOpt Solver

GPyOpt [24] is a Python open-source library for Bayesian optimization based on GPy, a Python framework for Gaussian process modelling. For the comparison exercise with TN-GEO as a stand-alone solver, these are the hyperparameters we used for the GPyOpt solver:

• Domain: to deal with the exponential growth in dimensionality, the variable space for n assets was partitioned as the Cartesian product of n 1-dimensional spaces.

• Constraints: we added two inequalities on the number of assets in a portfolio solution to represent the cardinality condition.

c. Conditioned Random Solver

This solver corresponds to the simplest and most naive approach that still uses the cardinality information of the problem. In the conditioned random solver, we generate, by construction, bitstrings which satisfy the cardinality constraint. Given the desired cardinality κ = N/2 used here, one starts from the bitstring with all zeros, x_0 = 0···0, and flips N/2 bits at random from positions containing 0's, resulting in a valid portfolio candidate x with cardinality N/2.

d. Random Solver

This solver corresponds to the simplest approach, without even using the cardinality information of the problem. In the random solver, we generate bitstrings randomly selected from the 2^N bitstrings of all possible portfolios, where N is the number of assets in our investment universe.
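The conditioned random construction described above can be sketched in a few lines (our own illustration of the flip-from-all-zeros procedure):

```python
import random

def conditioned_random_portfolio(n_assets, cardinality):
    """Random bitstring with exactly `cardinality` ones, built by flipping
    that many randomly chosen positions of the all-zeros bitstring."""
    bits = [0] * n_assets
    for i in random.sample(range(n_assets), cardinality):
        bits[i] = 1
    return bits

x = conditioned_random_portfolio(10, 5)
print(sum(x))  # → 5, i.e., the cardinality constraint holds by construction
```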
Algorithm Methodology for TN-GEO as a booster

As explained in the main text, in this case it is assumed that the cost of evaluating the objective function is not the major computational bottleneck, and consequently there are no practical limitations on the number of observations to be considered. Following the algorithmic scheme in Fig. 1, we describe next the details of each of the steps in our comparison benchmarks:

0. Build the seed data set, {x^(i)}_seed and {σ^(i)_ρ,N/2}_seed: For each problem instance defined by ρ and a random subset with N assets from the S&P 500, gather all initial available data obtained from previous optimization attempts with classical solver(s). In our case, for each problem instance we collected 10,000 observations from the SA solver. These 10,000 observations, corresponding to portfolio candidates {x^(i)}_init and their respective risk evaluations {σ^(i)_ρ,N/2}_init, were sorted, and only the first n_seed = 1,000 portfolio candidates with the lowest risks were selected as the seed data set. This seed data set is the one labeled as {x^(i)}_seed and {σ^(i)_ρ,N/2}_seed in the main text and hereafter. The idea of selecting a percentile of the original data is to provide the generative model inside GEO with samples which are the target samples to be generated. This percentile is a hyperparameter, and we set it to 10% of the initial data for our purposes.
1. Construct the softmax surrogate distribution: Using the seed data from step 0, we construct a softmax multinomial distribution with n_seed classes, one for each point in the seed data set. The probability associated with each of these classes in the multinomial is calculated as a Boltzmann weight,

p(x^(i)) = e^{−σ̃^(i)_ρ,κ} / Σ_j e^{−σ̃^(j)_ρ,κ}.

Here, σ̃^(i)_ρ,κ = σ_ρ,κ(x^(i))/T, and T is a "temperature" hyperparameter. In our simulations, T was computed as the standard deviation of the risk values of this seed data set. In Bayesian optimization methods the surrogate function tracks the landscape associated with the values of the objective function (risk values here). This softmax surrogate, constructed here by design as a multinomial distribution from the seed data observations, serves the purpose of representing the objective function landscape but in probability space. That is, it assigns higher probability to portfolio candidates with lower risk values. Since we will use this softmax surrogate to generate the training data set, this bias imprints a preference in the quantum-inspired generative model to favor low-cost configurations.

2. Sample from the softmax surrogate. We will refer to these samples as the training set, since these will be used to train the MPS-based generative model. For our experiments here we used n_train = 10,000 samples.

3. Use the n_train samples from the previous step to train the MPS generative model.

4. Obtain n_MPS samples from the generative model, which correspond to the new list of potential portfolio candidates. In our experiments, n_MPS = 4,000. For the case of 500 assets, as sampling takes noticeably longer because of the problem dimension, this value was reduced to 400 to match the time in SA.

5. Select new candidates: From the n_MPS samples, select only those which fulfill the cardinality condition and which have not been evaluated before. These new portfolio candidates {x^(i)}_new are saved for evaluation in the next step.
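Steps 1-2 amount to a temperature-scaled softmax over the seed risks; a small NumPy sketch:

```python
import numpy as np

def softmax_surrogate(risks):
    """Boltzmann/softmax weights over seed risks; T = std of the risks (step 1)."""
    risks = np.asarray(risks, dtype=float)
    T = risks.std() or 1.0                    # temperature hyperparameter
    w = np.exp(-(risks - risks.min()) / T)    # constant shift for numerical stability
    return w / w.sum()

risks = [0.30, 0.25, 0.40, 0.27]
p = softmax_surrogate(risks)
# Lower risk → higher probability; probabilities sum to 1
print(p.argmax(), round(p.sum(), 10))  # → 1 1.0
```

The shift by the minimum risk leaves the softmax probabilities unchanged (it cancels in the normalization) while avoiding overflow for large risk values.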
6. Obtain risk values for the new selected samples: Solve Eq. A2 to evaluate the objective function (portfolio risks) for each of the new candidates {x^(i)}_new. We refer to the new cost-function values by {σ^(i)_{ρ,N/2}}_new.

7. Merge the new portfolios, {x^(i)}_new, and their respective cost-function evaluations, {σ^(i)_{ρ,N/2}}_new, with the seed portfolios, {x^(i)}_seed, and their respective cost values, {σ^(i)_{ρ,N/2}}_seed, from step 0 above. This combined superset is the new initial data set.

8. Use the new initial data set from step 7 to restart the algorithm from step 1. If a desired minimum has already been found, or if no more computational resources are available, one can decide to terminate the algorithm here. In all of our benchmark results reported here, when using TN-GEO as a booster from SA intermediate results we only run the algorithm for this first cycle, and the minima reported for the TN-GEO strategy are the lowest minima obtained up to step 7 above.

Algorithm Methodology for TN-GEO as a stand-alone solver

This section presents the algorithm for the TN-GEO scheme as a stand-alone solver. In optimization problems where the objective function is inexpensive to evaluate, we can easily probe it at many points in the search for a minimum. However, if the cost-function evaluation is expensive, e.g., tuning the hyperparameters of a deep neural network, then it is important to minimize the number of evaluations drawn. This is the domain where optimization techniques with a Bayesian flavour, in which the search is conducted based on newly gathered information, are most useful, as they attempt to find the global optimum in a minimum number of steps.

The algorithmic steps for TN-GEO as a stand-alone solver follow the same logic as those of the solver used as a booster, described in Sec. A 5.
The main differences between the two algorithms lie in the construction of the initial and seed data sets in step 0, the temperature used in the softmax surrogate in step 1, and a more stringent selection criterion in step 5. Since the other steps remain the same, we focus here on the main changes to the algorithmic details provided in Sec. A 5.

0. Build the seed data set: Since evaluating the objective function could be the major bottleneck (it is assumed to be expensive), we cannot rely on cost-function evaluations to generate the seed data set. The strategy we adopted is to initialize the algorithm with samples of bitstrings which satisfy the hard constraints of the problem. In our specific example, we can easily generate n_seed random samples, D_0 = {x^(i)}_seed, which satisfy the cardinality constraint. Since all the elements in this data set hold the cardinality condition, the maximum length n_seed of D_0 is the binomial coefficient (N choose κ). In our experiments, we set the number of samples to n_seed = 2,000 for all problems considered here, up to N = 100 assets.

1. Construct the softmax surrogate distribution: Start by constructing a uniform multinomial probability distribution where each sample in D_0 has the same probability; therefore, for each point in the seed data set its probability is set to p_0 = 1/n_seed. As in TN-GEO as a booster, we will attempt to generate a softmax-like surrogate which favors samples with low cost values, but we will slowly build that information as new samples are evaluated. In this first iteration of the algorithm, we start by randomly selecting a point x^(1) from D_0, and we evaluate the value of its objective function σ^(1) (its risk value in our specific finance example). To make this point x^(1) stand out from the other unevaluated samples, we set its probability to be twice that of any of the other (unevaluated) points.

FIG. 6. Relative TN-GEO enhancement, similar to those shown in the bottom panel of Fig.
2 in the main text. For these experiments, portfolio optimization instances with a number of variables ranging from N = 30 to N = 100 were used. Each panel corresponds to a different investment universe, each a random subset of the S&P 500 market index. Note the trend toward a larger quantum-inspired enhancement as the number of variables (assets) becomes larger, with the largest enhancement obtained on instances with all the assets from the S&P 500 (N = 500), as shown in Fig. 2.

FIG. 2. TN-GEO as a booster. Top: Strategies 1-3 correspond to the current options a user might explore when solving a combinatorial optimization problem with a suite of classical optimizers such as simulated annealing (SA), parallel tempering (PT), and genetic algorithms (GA), among others. In strategy 1, the user would spend the entire computational budget with a preferred solver. In strategies 2-4, the user would inspect intermediate results and decide whether to keep trying with the same solver (strategy 2), to try a new solver or a new setting of the same solver used to obtain the intermediate results (strategy 3), or, as proposed here, to use the acquired data to train a quantum or quantum-inspired generative model within a GEO framework such as TN-GEO (strategy 4). Bottom: Results showing the relative enhancement from TN-GEO over either strategy 1 or strategy 2. Positive values indicate runs where TN-GEO outperformed the respective classical strategies (see Eq.
1). The data represent bootstrapped medians from 20 independent runs of the experiments, and error bars correspond to the 95% confidence intervals. The two instances presented here correspond to portfolio optimization instances where all the assets in the S&P 500 market index were included (N = 500), under two different cardinality constraints κ. This cardinality constraint indicates the number of assets that can be included at a time in valid portfolios, yielding a search space of size M = (N choose κ), with M ∼ 10^69 portfolio candidates for κ = 50.

FIG. 3. Generalization capabilities of our quantum-inspired generative model. The left panel corresponds to an investment universe with N = 50 assets, while the right panel corresponds to one with N = 100 assets. The blue histogram represents the number of observations or portfolios obtained from the classical solver (seed data set). In orange we represent samples coming from our quantum generative model at the core of TN-GEO. The green dashed line is positioned at the best risk value found in the seed data. This mark emphasizes all the new outstanding samples obtained with the quantum generative model, which correspond to lower portfolio risk values (better minima) than those available from the classical solver by itself. The number of outstanding samples in the case of N = 50 is equal to 31, while 349 outstanding samples were obtained from the MPS generative model in the case of N = 100.

FIG. 5. A graphical demonstration of the indices used for the performance-metrics calculation.

• Number of initial data points: 10
• Acquisition function: Expected Improvement

b. Simulated Annealing Solver

For simulated annealing (SA) we implemented a modified version from Ref.
[44]. The main change consists of adapting the update rule such that new candidates stay within the valid search space with fixed cardinality: the conventional update rule of single bit flips would change the Hamming weight of x, which translates into a portfolio with a different cardinality. The hyperparameters used are the following:
• Max temperature in thermalization: 1.0
• Min temperature in thermalization: 1e-4

c. Conditioned Random Solver

TABLE I. Detailed comparison with SOTA algorithms for each of the five index data sets and on seven different performance indicators described in Appendix A 2. Entries in red correspond to cases where TN-GEO performed better than or tied with the other algorithm. Entries in bold correspond to the best (lowest) value for each specific indicator. Columns: Data Set, Performance Indicator, GTS, IPSO, IPSO-SA, PBILD, GRASP, ABCFEIT, HAAG, VNSQP, RCABC, TN-GEO.
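A cardinality-preserving update rule of the kind described above can be sketched as follows. This is a minimal illustration of the idea, with our own function names, not the modified code of Ref. [44]: instead of flipping a single bit, one swaps a selected asset with an unselected one, which leaves the Hamming weight of x unchanged.

```python
import numpy as np

def random_portfolio(n_assets, kappa, rng=None):
    """A random bitstring with exactly `kappa` ones (a valid starting state)."""
    rng = np.random.default_rng(rng)
    x = np.zeros(n_assets, dtype=int)
    x[rng.choice(n_assets, size=kappa, replace=False)] = 1
    return x

def swap_move(x, rng=None):
    """SA neighbor move preserving cardinality: turn one 1 off and one 0 on."""
    rng = np.random.default_rng(rng)
    y = x.copy()
    ones = np.flatnonzero(y == 1)
    zeros = np.flatnonzero(y == 0)
    y[rng.choice(ones)] = 0   # deselect a random chosen asset
    y[rng.choice(zeros)] = 1  # select a random unchosen asset
    return y
```

Every candidate produced by `swap_move` automatically satisfies the cardinality constraint, so no proposals need to be rejected on that account.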
The Diagnostic Challenge of the Pediatric Brain Abscess

Pediatric brain abscess (PBA) is a rare condition that portends a high mortality rate if not recognized and treated early. The spectrum of clinical manifestations of this disease process is wide and can often be vague, making timely diagnosis in the emergency department difficult. We detail the presentation of a four-year-old male with autism and a four-day history of decreased activity after a fall, with a critical and rapidly worsening clinical course. Subsequent operative intervention revealed a diagnosis of PBA. This case highlights the clinical challenges of diagnosing altered mental status in a developmentally challenged pediatric patient.

Introduction

Pediatric brain abscess (PBA) is a rare condition defined as an encapsulated area of pyogenic organisms and pus. The annual incidence of bacterial brain abscesses in the general population has been reported to be between 0.3 and 1.3 per 100,000 population; recent case series of PBAs have estimated the annual incidence to be approximately 0.5 per 100,000 children [1]. An overwhelming majority of PBAs stem from predisposing factors such as concurrent rhinosinusitis, mastoiditis, orbital cellulitis, congenital heart or lung disease, or odontogenic infection [1]. The classic symptoms and signs of brain abscesses have been reported to be headache (69%), fever (53%), and focal neurologic deficits (48%) [2]. However, in one study, the classic triad of fever, headache, and focal neurologic deficits was present in only 20% of patients with brain abscesses [2]. One study found neck stiffness in 15% of patients [3]. Taking into account the insidious nature of the presentation, the vague symptomatology, and the rarity of the condition, it is no wonder that PBA is a particularly difficult diagnosis to arrive at. We report a case illustrating the vague presentation and the potential for rapid clinical decline of PBA.
Case Presentation

A four-year-old male with a history of autism spectrum disorder presented to the emergency department (ED) with a chief complaint of progressively worsening lower extremity weakness of four hours' duration. The mother reported that the patient had fallen from the bed at home approximately four days prior. She stated that the bed was slightly higher than three feet from the ground and that the patient hit his forehead. He had previously been evaluated at a different ED on two separate occasions, and the mother stated that he was discharged without imaging. Over the last several days, the patient started to have a constant headache, several episodes of nonbloody, nonbilious vomiting, decreased activity, and subjective fever. Initial vital signs were remarkable for a temperature of 99.7°F, a heart rate of 81 beats per minute, a respiratory rate of 20 breaths per minute, a blood pressure of 130/71 mmHg, and an oxygen saturation of 100% on room air. Physical examination was remarkable for swelling of the mid-forehead without signs of basilar skull fracture. Neurological examination was remarkable for sluggish extraocular movements, somnolence, slow speech, dysmetria, and an inability to walk. The left lower extremity was found to be weak. Computed tomography (CT) imaging of the brain without contrast was performed due to suspicion of nonaccidental trauma. It showed a subdural hematoma of 8 mm in width with 7 mm of midline shift (Figure 1); ethmoiditis was also noted. Lab work was remarkable for leukocytosis with monocytosis and left shift. Toxicology screening returned presumptively positive for marijuana. A stat call was made to transfer the patient to a facility with pediatric neurosurgical services. The Glasgow Coma Score (GCS) was 14. Vital signs were within normal limits and had remained stable throughout the patient's stay; however, a repeat temperature taken at the time of transfer showed a fever of 101.2°F.
As the patient was loaded onto the ambulance, he had a seizure. He was given lorazepam and levetiracetam for seizure control, with improvement, and was transported emergently. Upon arrival at the accepting facility, the patient's GCS deteriorated and he had another seizure. He was intubated and taken to the operating room emergently. A right-sided craniectomy was performed and purulent fluid was drained. Blood cultures grew Streptococcus intermedius. The patient was placed on long-term ampicillin and showed improvement with rehabilitation.

Discussion

PBA poses a diagnostic challenge in the ED due to its rarity, variable duration of symptoms, and nonspecific presentation. The mean duration of symptoms has been reported to be 8.3 days [2]. This patient's clinical presentation at the previous hospital visits, especially in the context of a traumatic brain injury with low PECARN risk criteria and no focal neurologic deficits at the time, was also nonspecific. Per the standard of care, this likely explains the previous two discharges from the hospital. Upon arrival at our ED, the patient's history of head trauma and a presumptively positive test for marijuana, in the context of initially normal vital signs, served to shift the differential diagnosis away from an infectious etiology and toward toxidromes, nonaccidental trauma, and toxic or metabolic encephalopathy. In fact, CT imaging was performed without contrast given that trauma was initially the first choice on our differential diagnosis. The patient's suddenly deteriorating neurological status, leukocytosis, and rising temperature clarified the etiology of his symptoms later in his visit. This initial delay in diagnosis portends a poor prognosis. Diagnostic delay is more likely to occur in the pediatric population [4]. This relates to the inability of pediatric patients to verbalize symptoms, especially in the context of a developmental disorder, as was seen in our patient.
Additionally, the classic clinical triad suggestive of brain abscess, namely fever, headache, and focal neurologic findings, is more specific than sensitive, as only 20% of affected patients exhibit all three at the time of diagnosis [2]. The case fatality rate has been reported to be approximately 10%, with 70% of patients recovering completely, especially if diagnosed expeditiously [2]. The most common neurologic sequela in patients with PBA is seizure, particularly with frontal brain abscess [5]. Poor prognostic factors for patients presenting with PBA include rapid progression of neurologic deterioration, severe mental status changes on admission, stupor or coma (60-100% mortality), and ventricular rupture (80-100% mortality) [4]. In 40-50% of PBA cases, a contiguous site, such as middle ear, mastoid, or paranasal sinus infections, or a skull discontinuity due to head trauma or neurosurgery, was the source of infection [6]. The most frequent site is the frontal lobe (secondary to frontal or ethmoidal sinusitis or dental infection), followed by the parietal and temporal lobes (acute otitis media, mastoiditis, sphenoidal sinusitis) and less frequent sites such as the cerebellum and brainstem, from an otogenic or hematogenous origin [6]. A meta-analysis that included 6,663 adult and 1,023 pediatric brain abscesses reported between 1935 and 2012, in which pus or blood cultures were performed, showed that children share a similar etiology with adults, with pediatric cultures positive for Streptococcus spp. in 36% of cases, followed by Staphylococcus spp. (18%) and gram-negative enteric bacteria (Proteus spp., Klebsiella pneumoniae, Escherichia coli, and Enterobacteriaceae) in 16% of cases. Of note, the culture growth of S. intermedius is consistent with the expected source of infection, given the presence of ethmoiditis on imaging [2]. The paranasal sinuses have been historically linked to streptococcal infections [7].
Conclusions

PBA is an exceedingly rare and dangerous condition with the potential for rapid deterioration. The disease has a myriad of presentations, of which fever, headache, and neurologic deficit, seen late in our case, are the most classic findings; they may not appear together initially. The presentation of symptoms depends on the anatomical location of the abscess, the microbiological composition, and the vector of infection. Particularly in the setting of focal weakness or lateralizing symptoms, a high index of suspicion is required to diagnose this critical disease in a timely manner.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. issued approval N/A. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Evaluation of Saving Energy by Using the Power of Plant Composites as a Thermal Insulation; A Case Study: Effect of Local Date Palm Fibers

This research focuses on assessing the potential use of local palm fiber (Zahedi) as an eco-friendly composite in order to support sustainable development. Local date palm fibers were separated into individual fibers, washed in water, and then heated in an oven at 60 °C to dry completely for the investigation of their physical properties. The developed composites were characterized at different ratios and compared in terms of physical and morphological properties. Afterwards, scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) analyses of the Zahedi date palm fibers were carried out to determine the chemical and microstructural composition of the fibers. It was found that carbon and oxygen are the only permanent and primary elements in this palm fiber. Analyzing the longitudinal direction of the fibers, the surface of the samples at the nanoscale is irregular, with many impurities, cells, and pores. The SEM analysis of the fiber diameter shows an average range of 107-135 µm. Plant-based composites were characterized at seven ratios by combining the date palm fibers with industrial cornstarch as a matrix, which makes the insulating composite fully natural. The amount of thickness swelling and water absorption observed in this study was acceptable and appropriate for a volume ratio of 1:1 in comparison with the other studied compounds. The thermal resistance of three specimens was tested by experiment. To evaluate the insulation capability, the specimens were placed in a chamber in a position to act as an obstacle to thermal transmission.
In this case, by increasing the temperature up to 80 °C on one side (inside the chamber) and recording the temperature rise on the opposite side, the thermal insulation ability of the specimens was assessed. According to the results of the practical experiments, combining date palm fiber (DPF) with cornstarch resin as a matrix showed that the composite material is applicable, safe, and useful as a thermal insulation material up to a maximum of 80 °C, which is appropriate for preventing heat transfer in buildings.

Introduction

Today, the use of heat-resistant materials, especially in buildings, has been a fundamental issue in the construction industry with the aim of saving energy. Saving energy in buildings is used as a way to reduce energy consumption, carbon emissions, and their destructive effects on our living environment (Cao, 2016). One of the factors in saving energy is attention to heat transfer, which changes the temperature through the walls by the processes of conduction and radiation (Oshabi, 2015). Currently, most wall insulation materials are made from mineral wool, expanded polystyrene, or polyurethane. Although these materials have good physical properties, including low thermal conductivity and fire resistance, they are very expensive and hazardous to environmental and human health (Morad, 2013). These substances may be the cause of environmental hazards. Small particles of fiberglass insulation can pose a health hazard and irritate the airways or skin (Occupational Safety and Health Organization, 2003). These fibers may cause coughing and sore throat (Infante, Schuman, & Dement, 1994). Most thermal insulations contain formaldehyde resin, which can affect sensitive individuals and cause asthma (US Environmental Protection Agency, 2000). Cellulose insulation with toxic chemicals, such as boric acid, has been reported to be harmful to human health (US Department of Occupational Safety and Health (OSHA), 1999).
The harmful nature and high cost of making these composites have made recent research to find alternative thermal insulation products necessary. Natural fibers, as a good alternative, have attracted the attention of scientists and researchers due to their environmentally friendly properties, which makes them known as potential reinforcers in polymer composites instead of synthetic fibers (K.I. Alzebdeh, 2017). The thermal performance of these insulating materials is comparable to ordinary mineral- and glass-fiber-based products. Extensive research on the features of plant-based thermal insulation has shown that these materials have good thermal insulation properties compared to commonly used insulation materials such as mineral wool, expanded polystyrene, and polyurethane. In the case of bio-based resources, the possibility of providing a new source of income for domestic agriculture can be an important factor in the production and use of bio-based materials (Steffensa, 2017). Nowadays, one good bio-based material comes from the date palm, a tropical tree found primarily in the Middle East. According to the UN Food & Agriculture Organization (FAO), the Persian Gulf countries account for approximately 50 percent of global date production (FAO, 2018). There are more than 3,000 species of date palms around the world, including 250 in Tunisia, 370 in Iraq, and 244 in Morocco. Among these countries, Iran has the largest number of palm species, with about 400 species (Zaid, 2002), and has the potential to use date fibers in industrial applications. The southern provinces of the country, such as Khuzestan, Sistan and Baluchestan, and Bushehr, are among the major producers of this product. Therefore, by using these fibers as a reinforcer, it is possible to produce high-value products from a cheap and low-cost material, and by doing so, significantly contribute to the environment and the agricultural industry.
In Iran, the area of date palm groves is estimated at 22 hectares, which is about 20 to 22% of the world's groves. In order to harvest quality dates, palms must be pruned every year. The average waste from pruning each palm is about 17 to 34 kg, which, considering about 27 million palm trunks in the country, produces at least 270,000 kg of waste per year, which is not currently used industrially (M. Karizaki, 2017). In Iran, palm fibers were used to make a local rope named Sis. The elastic property of palm fibers was also applied to strengthen the corners of the king dome in the historical Bam Citadel, which shows the good properties of this material against tensile force (Barsam, 2019). Previous studies have combined date palm fibers and related wastes with various matrices:
• Polyethylene matrix (Mirmehdi et al., 2014)
• Date palm wastes with a linear low-density polyethylene matrix (Alshabanat, 2019)
• Hybridized date palm and flax fibers with thermoplastic starch (Ibrahim et al., 2014)
• Date palm fibers with low-density polyethylene (LDPE) (K.I. Alzebdeh, 2017)

However, there is insufficient information on the various physics-related properties of such insulating materials. Thus, the purpose of this study is to evaluate the feasibility of using local date palm fibers, harvested from a southern region in Iran, as a plant-based composite material for the reduction of heat loss in air-conditioned zones. In addition, this paper discusses the suitability of DPF for thermal insulation by examining its thermal resistance and morphological characteristics. The need for alternative wastes is inevitable due to the rapid depletion of natural resources. The new plant-based insulating material differs from other materials in the literature in that similar materials are produced here using cornstarch as an adhesive.

Experimental Procedure

Materials and process

Date palm fibers used in this study were harvested in 2019. These materials are date palm fibers (DPF) of Zahedi (local name), collected from the province of Kerman in Iran (Figure
1). The main production of Zahedi dates is in a region called Posht Kuh in Bushehr province. This kind of palm can be transported and stored conveniently due to its dry nature. It also has a long shelf life of about two years in cold and dry places (M. Karizaki, 2017). Date palm fiber (DPF) has a natural texture and is pulled out from the top of the trunk of the palm tree in the form of a nearly rectangular mesh (30 to 50 cm in length and 20 to 30 cm in width), in several layers around the tree trunk. This mesh network can be separated into independent fibers with diameters from 0.1 to 0.8 mm, as shown in Figure 2. In this research, the fibers were separated from the surface of the palm tree trunk. Grinding of the date palm fibers was done using a cutting mill with a 20 mm mesh sieve. Various particle sizes were obtained, from 20 to 100 mm, and stored at room temperature (25 °C). For the combination with industrial cornstarch, fibers 20 mm long were finally selected (Figure 3). The grinding machine was used for the agricultural wastes, which were ground into a mixture with cornstarch. The production steps and a typical insulating composite specimen produced (25 × 25 × 8 cm in size) are presented in Figure 4. Using a fiber length of 20 mm, chosen for its high modulus of elasticity, and cornstarch, the samples were obtained in the volume ratios of 1:1, 1:2, 1:3, 2:3, 2:1, 3:1, and 3:2. To keep the insulation material natural, industrial cornstarch was applied as a matrix to bind the fibers. About 90% of cornstarch consumption goes to making paper and paperboard because of its elasticity; it is also used in the manufacture of textiles and as an adhesive in making cardboard, cartons, boxboard, insulation board, and gummed labels and tapes (Russell, 1973). Other items were used for making sample mixers and square molds without holes.

Density
Seven samples were tested and the results were presented (Figure 5, Figure 6, Table 3). Density was determined according to BS EN 323:1993 using the equation ρ = M/V, where M is the mass and V is the volume of the composite sample. The relationship between fibers and resin is presented in Figure 7. To display the graph appropriately, the density values were divided by 100.

Thermodynamic properties

The aim of a temperature test is to verify the resistance of a specimen to the environmental influence of temperature. In this study, a mini desktop temperature test chamber was the apparatus used to test temperature resistance. This series is manufactured under the (ATM) model and is designed for laboratories that suffer from a lack of space or that want to test their smallest parts, materials, and products at the lowest cost. Chambers of this model, with minimum temperatures of −40 and −70 °C and a maximum temperature of +180 °C, are available in sizes of 37, 55, and 75 liters (Figure 8). To evaluate the insulation capability, the specimens were placed in a position to act as an obstacle to thermal transmission. Then, by increasing the temperature up to 80 °C on one side (inside the chamber) and recording the temperature rise on the opposite side, the thermal insulation ability of the samples was assessed.

Water Absorption Test

Water absorption tests were carried out for each specimen, with dimensions of 40 mm × 40 mm × 50 mm, as per ASTM D 570-98 (2018). This experimental system for studying water absorption serves as a test of control over the uniformity of a product; this function is especially useful for sheet, rod, and tube forms when tested on the final product. The initial specimen weight (Wd) was measured and recorded before immersion in water. Then, the specimen weight (Wn) was measured again and noted every 24 hours for a week.
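The density, water-absorption, and thickness-swelling computations used in these tests are simple ratios; a minimal sketch follows (the function names are ours, and the formulas follow BS EN 323 and ASTM D570):

```python
def density(mass, volume):
    """BS EN 323 density: rho = M / V (e.g., g/cm^3 if M in g, V in cm^3)."""
    return mass / volume

def water_absorption_pct(w_dry, w_wet):
    """ASTM D570 water absorption: WA(%) = (Wn - Wd) / Wd * 100,
    where Wd and Wn are the weights before and after immersion."""
    return (w_wet - w_dry) / w_dry * 100.0

def thickness_swelling_pct(t0, t1):
    """Thickness swelling: TS(%) = (T1 - T0) / T0 * 100,
    where T0 and T1 are the thicknesses before and after soaking."""
    return (t1 - t0) / t0 * 100.0
```

Each 24-hour weighing or thickness reading simply feeds the corresponding pair of measurements into these helpers.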
The water absorption percentage of the composites was calculated using Equation (2):

Water absorption (%) = ((Wn − Wd) / Wd) × 100 (2)

where Wn is the specimen's weight after immersion in water and Wd is the specimen's weight before immersion in water.

2.1.6. Thickness Swelling Test

Thickness swelling of each composite sample, with dimensions of 40 mm × 40 mm × 50 mm, was assessed according to ASTM D 570-98 (2018). The initial thickness of all seven samples was recorded before placing them in distilled water. The samples were immersed for one week, and the thickness of each was recorded every 24 hours. The thickness swelling of the specimens was computed using Equation (3):

Thickness swelling (%) = ((T1 − T0) / T0) × 100 (3)

where T0 is the thickness before soaking and T1 is the thickness of the specimen after soaking.

Morphological analysis (SEM and EDS analysis)

Microscopic studies were performed using a Nova Nano SEM 450 scanning electron microscope (SEM). Energy dispersive spectroscopy (EDS) was applied to analyze the chemical composition of the palm fiber samples using the Nova Nano SEM 450 (Figure 9). The EDS process analyzes the X-rays emitted from the specimen when it is bombarded by an electron beam to specify the elemental composition. Properties of materials can be tested at a scale of 1 µm or less. When the specimen is bombarded by the electron beam, electrons are ejected from the atoms at the sample's surface. The resulting electron vacancies are filled by electrons from a higher state, and an X-ray is emitted to balance the energy difference between the two electron states. The X-ray energy is characteristic of the element from which it was emitted. The EDS X-ray detector measures the relative abundance of the emitted X-rays as a function of their energy. The detector is mainly lithium-drifted silicon.
This detector converts the radiation reaching it from the sample surface into a signal that can be viewed as images on a monitor. When an X-ray hits the detector, it generates a charge pulse proportional to the X-ray energy. The charge pulse is transformed into a voltage pulse by a charge-sensitive preamplifier. The signal is then transmitted to a multi-channel analyzer, in which the pulses are sorted by voltage. The amount of energy determined in each measurement is sent to the computer as an X-ray image to display and evaluate the data. The spectrum of X-ray energy is evaluated to determine the elemental composition of the sampled volume. Using a process known as X-ray mapping, information about the elemental composition of a sample can then be overlaid on top of the magnified image of the sample (Wiley, 2015).

Thermal conductivity

Heat transfer occurs in objects when a temperature gradient is created between two points. To create this gradient, the sample was placed in a heat-resistance machine, and by installing four thermocouples in the four corners of the sample for 140 minutes, up to a maximum temperature of 80 degrees, the heat transfer rate between the two surfaces of the sample was tested. The thermal conductivity of the samples was also determined by comparison with literature data. According to the report of Agoudjil et al., these parameters were obtained at room temperature with their corresponding statistical confidence bounds (Agoudjil, 2011).

Structure and morphology

In order to investigate the structure of the pores and their distribution, the Nova Nano SEM 450 scanning electron microscope with a working voltage of 0.30/30 kV was applied to observe the surface structure of the palm fibers. Prior to observation, a small portion of the fibers was prepared and mounted on aluminum foil with a thin layer of gold to prevent electrostatic charging during the SEM examination. The elemental composition of the date palm fiber samples was determined from the peak areas.
These figures illustrate SEM images of a Zahedi-DPF sample in the longitudinal direction of the fibers, at scales from 500 µm to 5 µm. Examining the nanostructures in the samples, it is observed that the surface of the samples is irregular, with many impurities, cells, and pores (Figure 10). The results presented in Figure 12 show the presence of oxygen and carbon atoms in significant atomic percentages; carbon is the predominant element, which confirms the presence of cellulosic carboxylic groups. Carbon and oxygen are the two major elements, at 51.40% and 44.72%, respectively. Other minor components detected in the analysis included sodium, magnesium, silicon, sulfur, and calcium. Figure 11 shows the irregular surface of the fibers, with many cells, which allows proper adhesion between the fiber and the matrix; it also reveals the porous structure of the fibers, which confirms their hydrophilic character. 4-2. Energy dispersive spectroscopy (EDS) results: The images above (Figure 12) show the spectrum obtained from the EDS X-ray analysis of a date fiber sample. The energy of each of the peaks shown in this diagram is assigned to a specific atom. The amounts of the elements in the sample, as weight and atomic percentages, are shown below the figures. In this experiment, two different points of the fibers were analyzed. The first point was on the outer part of the fiber: carbon and oxygen were detected as the main constituents, with 51.40% and 44.72% atomic percentages, respectively, and are the only permanent and essential components of palm fiber. The second point tested contained fewer compounds, but the two elements carbon and oxygen were again observed as permanent components of the fibers. 4.3. Water Absorption: Figure 13 shows the percentage of water uptake of the date palm fiber / corn starch composites.
It is clear that adding DPF to the corn starch matrix raises the water uptake values as the immersion time increases; the samples become saturated after approximately eight days. This behavior can be interpreted by the hydrophilic character of palm fiber, which is due to the existence of polar groups that form strong hydrogen bonds between the cell walls and water molecules. It can also be explained by the fact that water uptake is generally affected by the presence of pores, cracks, holes, cavities, defects, poor surface adhesion, and micro-cracks at the interface between the DPF and the corn starch matrix. Strong adhesion leaves less free space in the composites in which water molecules can be stored, and thus reduces water absorption. In contrast, greater water absorption leads to the creation of micro-cracks, which eventually produce cavities and free space inside the composites due to the swelling of the fibers. The voids and free space created contribute to the potential failure mechanism of these composites. Thickness Swelling Test: From Figure 14, it can also be observed that specimen No. 7 shows the highest thickness swelling. These observations are in line with those of other researchers, such as Edhirej et al. and Senthil Kumar et al. Significantly, research has shown that thickness swelling reduces the surface adhesion, which affects the mechanical properties. In fact, the thickness swelling and water absorption of palm fiber-reinforced composites are two of their basic pitfalls, since the presence of humidity results in the breakdown of the fiber-matrix interface or in delamination, fiber separation and, finally, the creation of micro-voids and interfacial gaps induced by moisture, which in turn affect the mechanical features. These statements have also been studied and reviewed by other researchers.
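The water absorption and thickness swelling percentages discussed above follow directly from Equations (2) and (3). The sketch below implements both; the specimen weights and thicknesses are hypothetical values for illustration, not measurements from this study.

```python
def water_absorption_pct(w_wet, w_dry):
    """Equation (2): water absorption (%) from wet and dry specimen weights."""
    return (w_wet - w_dry) / w_dry * 100.0

def thickness_swelling_pct(t_after, t_before):
    """Equation (3): thickness swelling (%) from thickness before/after soaking."""
    return (t_after - t_before) / t_before * 100.0

# Hypothetical specimen readings (grams and millimetres), for illustration only
print(water_absorption_pct(54.0, 45.0))   # -> 20.0
print(thickness_swelling_pct(52.0, 50.0)) # -> 4.0
```

Both quantities are relative changes against the dry (or pre-soak) baseline, which is why the denominators use the "before" values.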
Thermodynamic Test: Based on the results of the water absorption and thickness swelling tests, three specimens were selected for the heat resistance test. Considering the minimum, moderate, and maximum percentages of water absorption and thickness swelling, the samples examined were No. 1, No. 4 and No. 7. The results are shown in Figure 15. To provide the same conditions for comparison, each specimen was exposed to a temperature of 80.3 °C on the hot side. Among the three specimens, No. 1 reached the maximum temperature of 35.7 °C on the cold side over 142.5 minutes. For specimens No. 4 and No. 7, the maximum temperature of the cold side during the experiment was 24.4 °C and 36.6 °C, respectively, over the same time. 4-5.1 Investigation of thermal resistance on the optimal specimen: As mentioned, the goal of this experiment is to assess the feasibility of applying a mixture of palm fibers and corn starch as a plant-based composite material for the reduction of heat loss in air-conditioned zones. To prepare the desired composite, the industrial starch was mixed with a suitable percentage of date palm fibers (DPF) in a volume ratio of 1:1, and after soaking the starch particles and fibers, the mixture was kneaded well to remove the air bubbles trapped inside it. In this way, the hardness of the composite also increases. DPF and corn starch should first be dry-mixed on a clean masonry platform until the color is uniform, and then mixed by adding clean water slowly and gradually until the mixture is consistent. In order to evaluate the insulation capability, the specimen was placed in a chamber in a position to act as an obstacle to thermal transmission. By increasing the temperature up to 80 °C on one side (facing the inside of the chamber) and recording the temperature rise on the opposite side, the thermal insulation ability of the specimen was assessed. The thermal resistance test was performed by increasing the burner output both gradually and rapidly over a specified period of time.
The graph shown (Figure 16) is the result of a thermal resistance test. The vertical axis is the temperature from zero to 90 °C and the horizontal axis is the time up to 160 minutes; the red curve presents the heat transferred to the hot-side surface inside the furnace and the blue curve shows the cold-surface temperature. The wall between these two surfaces is completely vacuum-insulated to prevent heat loss. As shown in the figure, on increasing the burner output at the hot surface, the heated surface temperature first rises linearly to 40 °C in the first 8 minutes. Then, the temperature gradually increases, with a gentle slope and downward concavity, to about 60 °C at 80 minutes. While the temperature on the hot side rises to 80 °C with a steep slope and remains at the maximum temperature of 80 °C for 42 minutes, only a slight change is observed on the cold surface of the sample, which may also reflect ambient temperature fluctuations in the chamber inside the device. The maximum temperature of the cold side during the experiment is 24.4 °C. In general, this behavior means that this composite can be used as a thermal insulation material below 80 °C; it is safe and useful, and can be suitable for preventing heat transfer in construction applications. As determined from the results of the EDS analysis, carbon and oxygen are the only permanent and fundamental components in these materials, as proved by the EDS test of Zahedi-DPF. In the study by Agoudjil et al., the constituent elements of the fibers of several palm species in Morocco were examined with an SEM apparatus. In Table 4, the "Zahedi" species, with its constituent elements and the weight percentage of each element, is written in the first row of the table. The other species tested by Agoudjil et al. are listed below the Zahedi species, respectively.
To better compare these species, all the constituent elements of the species are shown in Figure 16, and the species closest to the Zahedi fibers in terms of weight percentage of the constituent elements are then identified separately (Figure 17). As shown in Figure 17, the literature data give the median values of the quantitative EDS tests for each variety and part of the date palm samples. In the graph below, it can also be seen that the amounts of carbon and oxygen, as primary elements, and of the other components in Zahedi-DPF and in the petiole of Elghers (PEG) are relatively similar. This can be attributed to the similar chemical composition of the soil and the local climate in which the plants grow. By comparing the element contents reported by Agoudjil et al. (Agoudjil, 2011) with those of this case study, the thermal conductivity of PEG (petiole of the Elghers variety) should be approximately similar to that of the Zahedi DPF. Hence, according to Table 5, the thermal conductivity of the sample studied in this work is approximately 0.072 W·m⁻¹·K⁻¹ at atmospheric pressure (Table 5). This is close to the thermal conductivity of many insulating materials, such as wood fiber insulation boards, which are used for thermal and acoustic insulation on ceilings, walls or floors. However, most board insulations are a combination of materials with different functions. Since DPF is a good material for improving the tensile and flexural strength of plasterboards, it can also be used as a reinforcement for other natural materials (Azami, 2015). Therefore, due to the similarity of its constituent elements to those of the species studied in this work (the Zahedi species), the thermal resistance of the Zahedi species can be estimated approximately. Under these conditions, it is possible to prepare and test a natural insulation composite from the desired fibers and a natural resin, assuming that these fibers have the ability to resist heat transfer.
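Given the estimated conductivity of about 0.072 W·m⁻¹·K⁻¹, the steady-state thermal resistance (R-value) of a homogeneous layer follows from R = d/λ. The sketch below computes it for a hypothetical 50 mm board; the thickness is an assumption, not a dimension reported in this study.

```python
def thermal_resistance(thickness_m, conductivity_w_mk):
    """R-value (m^2*K/W) of a homogeneous layer: R = d / lambda."""
    return thickness_m / conductivity_w_mk

LAMBDA_DPF = 0.072   # W/(m*K), the value estimated in the text
d = 0.05             # 50 mm board thickness (hypothetical)
print(round(thermal_resistance(d, LAMBDA_DPF), 3))  # -> 0.694 (m^2*K/W)
```

A higher R-value means better insulation; doubling the board thickness doubles the resistance, while halving the conductivity has the same effect.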
(Agoudjil, 2011) It can be noted that adding the fibers decreases the thermal conductivity of the materials and consequently increases the thermal resistance. This fact admits two interpretations. The first relates to the insulating properties of palm fibers, which have a low thermal conductivity: the thermal conductivity of a composite object depends on those of the inclusions that constitute it, and the lower the thermal conductivity of the inclusions, the more insulating the composite. The second is that the addition of natural fibers to reinforce building materials increases the ratio of cavities and voids in them, leading to a decline in the density of the composite and therefore in its thermal conductivity. Figure 18 shows the relationship between density and the ratio of fibers to corn starch in the different specimens. From the above, it can be concluded that natural fibers in industrial corn starch play an important role in improving the thermal conductivity. These results agree with observations reported by several researchers, such as A. Oushabi et al. and Negi et al., in their studies on the effect of natural fibers on the thermal properties of concrete, and V. Hospodarova et al. in their research (Hospodarova, 2017). It can be seen from the previous results that, as the fiber content in the specimen increases, the sample becomes lighter and the density decreases; however, the thermal resistance improves, making the plant-based composite more eco-efficient when the best proportion is applied. This is a desirable characteristic of an ideal product and encourages various applications and uses in the field. Although there is a determining relationship between thermal resistance and density, the quality of the composites in terms of mechanical properties is another substantial item that must be considered. The observations are in agreement with the results reported in the literature.
(Bederina, 2007) According to Figure 19, the density decreases as the water absorption and thickness swelling percentages increase. Also, comparing the graphs of water absorption and thickness swelling on the tenth day, the amount of thickness swelling is less than the water absorption in specimens No. 1 and No. 2. Since specimens with a lower fiber content have a higher density and consistency, their lower thickness swelling and distortion relative to the water absorption percentage is reasonable. However, on increasing the ratio of fiber to matrix and consequently decreasing the density, the percentage of distortion and thickness swelling in specimens No. 6 and No. 7 increased relative to the percentage of water absorption. Among the specimens, in No. 3, No. 4 and No. 5 the percentages of thickness swelling and water absorption are lower than in the other samples. Considering the effect of the coherence between resin and fibers on the thickness swelling and water absorption percentages, it can be stated that the consistency and coherence between the materials in these specimens are well established. Overall, to obtain an optimal insulation, increasing the fiber percentage to decrease the density of the composite is not the only item that should be considered; the coherence between the fibers and the matrix, the percentage of water absorption, the thickness swelling, the mechanical properties and the resistance against moisture are other important factors that should be taken into account. To prevent mold in buildings in hot and humid climates, the humidity and temperature of the environment should be kept low. However, the materials proposed in the present study can be used both indoors and outdoors, since they are protected with a layer of protective paint.
To continue the investigation of this subject, the following can be mentioned: working to improve the penetration of the corn starch between the fibers in order to fabricate plant-based composites with better mechanical properties; other mechanical properties, IR spectroscopy, and the other strategies that can be used for assessing the thermal conductivity and thermal diffusivity; and a comprehensive study of the effects of the composite moisture content on the mechanical features. Conclusion: The number of people suffering from chemical allergies is increasing, and natural building materials can help to reduce it. Formaldehyde and other chemicals are often used in the manufacture of construction products. The emission of these gases not only reduces the quality of indoor air but, when transferred to the open air, also causes air pollution worldwide and in the environment. Choosing natural building materials without chemicals will lead to safer indoor air quality and reduced health risks for homeowners. Some resources are renewable (such as trees and fibers) and some are so abundant that they are considered almost inexhaustible (such as rock and soil). These local natural materials require less processing or transportation, and their economic and environmental costs are low. Therefore, by adapting these materials to today's technical and engineering requirements, they can be used to provide housing for low-income people. For sustainable development, it is necessary to distribute limited local financial resources within and among the locals instead of letting them leave the area. Also, technical knowledge in various fields, including construction, should be expanded and maintained in the region. On the other hand, eco-friendly materials give a building an "indigenous" quality, or a sense of belonging to the place, which is pleasant. In this article, date palm fiber, one of the most widely available ecological materials that can be applied in Iran, was studied as a thermal insulation.
After experimental testing of the fibers, obtaining the chemical composition of their constituents by EDS, and investigation of the seven samples in terms of water saturation, swelling properties and thermal resistance, the optimal insulation sample was obtained. Considering the factors proposed in this study, the best specimen was experimentally obtained by combining equal volume ratios of date palm fibers and industrial corn starch. The results showed no change in heat transfer when using the composite, an eco-friendly material, at a maximum temperature of up to 80 °C. This result shows that date palm fibers have a positive effect on energy optimization in buildings. Figure captions: Date palm fibers approximately 20 to 30 cm wide, and separated date palm fibers with diameters of 0.1 to 0.8 mm. Figure 3: Date palm fibers in different lengths of 20 mm, 50 mm and 100 mm, from A to C respectively; according to Table 2, the 20 mm length (A) was used in preparing the seven samples due to its high modulus of elasticity. Figure 4: Requirements for making the specimens: date palm fibers, corn starch, mixer, molding container. Figure 5: Prepared samples in different proportions. Figure 7: Comparison between fiber content in the specimens and density (kg/m3)/100. Surface of the samples at scales of 5 and 2 µm. Figure 12: The result of the EDS test of Zahedi-DPF. Figure 13: Water absorption tendencies of the plant-based composite; horizontal axis: immersion time (days), vertical axis: water absorption (%). Figure 14: Thickness swelling test; horizontal axis: immersion time (days), vertical axis: thickness swelling (%). Figure 15: Results of the thermal resistance test for the selected specimens No. 1, No. 4 and No. 7. Figure 16: The result of the thermal resistance test on specimen No. 4. Figure 17: Comparison between the EDS test of Zahedi-DPF and samples from the literature. Figure 18: Relationship between density and date palm fiber content. Figure 19: Comparison of density, thickness swelling and water saturation.
Hepatotoxicity in immune checkpoint inhibitors: A pharmacovigilance study from 2014–2021 Adverse events (AEs) related to hepatotoxicity have been reported in patients treated with immune checkpoint inhibitors (ICIs). As the number of adverse events increases, it is necessary to assess the differences between immune checkpoint inhibitor regimens. The purpose of this study was to examine the relationship between ICIs and hepatotoxicity in a scientific and systematic manner. Data were obtained from the FDA Adverse Event Reporting System database (FAERS) and covered the first quarter of 2014 to the fourth quarter of 2021. Disproportionality analysis assessed the association between drugs and adverse reactions based on the reporting odds ratio (ROR) and information components (IC). 9,806 liver adverse events were reported in the FAERS database. A strong signal associated with ICIs was detected in older patients (≥65 years). Hepatic adverse events were most frequently reported with nivolumab (36.17%). Abnormal liver function, hepatitis, and autoimmune hepatitis were most frequently reported, and hepatitis and immune-mediated hepatitis signals were generated for all regimens. In clinical use, patients should be alert to these adverse effects, especially elderly patients, in whom they may be aggravated by the use of ICIs. In recent studies, ICIs have shown significant efficacy, changing the outlook for the treatment of non-small cell lung cancer (NSCLC) [5], squamous cell carcinoma [6], hepatocellular carcinoma [7], renal cell carcinoma [8] and urothelial carcinoma [9]. In phase 3 studies in unresectable hepatocellular carcinoma, atezolizumab was the first ICI to demonstrate improved overall survival and progression-free survival. In addition to ICI monotherapy, a combination of CTLA-4 and PD-1 inhibitors has been approved for the treatment of certain malignancies.
However, due to their specific mechanism of action, ICIs may disrupt the patient's normal body systems and cause adverse drug reactions (ADRs) while improving antitumor efficacy. Any organ system can be affected by immune-related adverse events (irAEs), but the most commonly affected organs are the skin, liver, gastrointestinal tract, lungs and endocrine glands [10]. The utility of ICI therapy is limited by irAEs in multiple organ systems. Some studies have shown that the probability of ICI-related hepatotoxic adverse events (AEs) is 2% and that clinically significant ICI-related hepatotoxicity is uncommon, but most hepatotoxic AEs can lead to permanent discontinuation of the drug [11]. The indications for ICIs continue to expand to different cancers and stages of disease. With this increased use, adverse events, including immune checkpoint inhibitor-related hepatotoxicity, have become an important clinical issue. FAERS is a spontaneous reporting system that includes all adverse drug reaction and medication error information collected by the US Food and Drug Administration (FDA) and provides an important basis for post-marketing safety risk monitoring and evaluation of drugs. There has been no safety analysis of ICIs-related hepatotoxicity in a large data sample. Therefore, this study provides a reference for rational clinical drug use through data mining and analysis of the safety signals of ICIs in the FAERS database, the adverse event reporting system of the FDA. Data: Data for this retrospective pharmacovigilance study were obtained from the FAERS database. FAERS collects data not only from the United States but also from other countries and regions.
A total of 36 quarterly reporting files were screened, from the first quarter of 2014 to the fourth quarter of 2021. FAERS data files contain seven types of data sets: patient demographic and administrative information (DEMO), drug/biologic information (DRUG), adverse events (REAC), patient outcomes (OUTC), reporting source (RPSR), drug therapy start and end dates (THER) and indication (INDI). As recommended by the FDA, we removed duplicate records prior to statistical analysis by selecting the most recent EVENT_DT when the CASEID was the same, and the higher PRIMARYID when both CASEID and EVENT_DT were the same. All data downloaded from the FDA website were processed with SAS 9.4 and further analyzed using R software. Targeted drugs and AEs: We used the trade names and generic names of drugs included in the National Center for Biotechnology Information (NCBI) to search the FAERS database for ICIs approved for marketing by the FDA, including CTLA-4 inhibitors (ipilimumab, tremelimumab), PD-1 inhibitors (nivolumab, cemiplimab, pembrolizumab) and PD-L1 inhibitors (atezolizumab, avelumab, durvalumab) (S1 Table). Adverse events with ICIs related to hepatotoxicity were defined as cases in the FAERS database where the treatment regimen included a drug in the ICIs class and a liver-related adverse reaction in the SOC classification occurred. AEs in the FAERS database are coded according to the preferred terms (PTs) in the Medical Dictionary for Regulatory Activities (MedDRA). According to MedDRA version 23.0, our study includes all liver and hepatobiliary diseases (MedDRA code 10019654) and all tumors of the hepatobiliary system (MedDRA code 10019811). In addition, based on the structure and variables of the FAERS database, a single adverse event report of ICIs-related hepatotoxicity was recorded as one case, even if more than one adverse event was reported by the same patient.
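The deduplication rule described above (keep the most recent EVENT_DT per CASEID, breaking ties by the higher PRIMARYID) can be sketched as follows; the field names mirror the FAERS columns, while the sample records are invented for illustration.

```python
def deduplicate(reports):
    """Keep one report per CASEID: latest EVENT_DT, then highest PRIMARYID."""
    best = {}
    for r in reports:
        cur = best.get(r["CASEID"])
        # YYYYMMDD strings compare correctly in lexicographic order
        if cur is None or (r["EVENT_DT"], r["PRIMARYID"]) > (cur["EVENT_DT"], cur["PRIMARYID"]):
            best[r["CASEID"]] = r
    return list(best.values())

# Hypothetical records for one case
reports = [
    {"CASEID": 1, "EVENT_DT": "20210101", "PRIMARYID": 10},
    {"CASEID": 1, "EVENT_DT": "20210301", "PRIMARYID": 11},  # later date wins
    {"CASEID": 1, "EVENT_DT": "20210301", "PRIMARYID": 12},  # same date: higher ID wins
]
print(deduplicate(reports))
```

Only the record with PRIMARYID 12 survives, matching the FDA-recommended rule.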
Time interval analysis of the occurrence of ICI-related hepatotoxic AEs: We assessed the time to onset of ICIs-related hepatotoxic AEs, defined as the interval between START_DT (treatment start date) and EVENT_DT (date of the adverse event). We excluded inaccurate data entries, records with missing data, and incorrectly entered reports (EVENT_DT earlier than START_DT). Reported hepatotoxic adverse events were used as the positive endpoint events of interest, with the treatment start time in each case as the starting point and the reporting time of the AE as the positive endpoint. Kaplan-Meier curves were used to calculate the median time to onset and to describe the course of AE occurrence in terms of survival curves. The Kruskal-Wallis test was used to determine whether the time to AE onset differed statistically between ICI treatment regimens. Pairwise comparisons between drugs were then performed with the Wilcoxon rank sum test. Disproportionality analysis: Disproportionality analysis is a data mining method now widely used in adverse drug reaction monitoring; it uses the number of times a drug is reported in association with an event in an adverse drug event reporting database as the basis for studying the statistical relationship between the target drug and the target event [12]. There are two main types of disproportionality methods: the frequency method and the Bayesian method; the frequency method includes the reporting odds ratio method and the proportional reporting ratio method. The advantages of the frequency method are that it is simple to calculate, easy to understand, and fast, and it requires no prior information about the model, but it is highly susceptible to singular values. When the frequencies are small, the accuracy of the algorithm is affected.
The BCPNN method takes into account not only the probability disproportionality but also the overall sample information, making it more flexible and stable than the frequency method. In data scenarios with larger sample sizes, the frequency method gives more aggressive results, while the BCPNN method is relatively conservative. Therefore, in practical applications, the two methods should be combined to evaluate pharmacovigilance signals in an integrated manner. AE reports for suspected drugs and other drugs were tabulated in a 2×2 contingency table (Table 1), with a = reports of the target drug with the target AE, b = reports of the target drug with other AEs, c = reports of other drugs with the target AE, and d = reports of other drugs with other AEs. Two data mining methods, the reporting odds ratio (ROR) [13] and the Bayesian confidence propagation neural network (BCPNN) [14] for information components (IC), were used to detect potential associations between ICIs and liver AEs. Reporting odds ratio method: The ROR is calculated as ROR = (a/b)/(c/d) = ad/(bc), and its 95% confidence interval as exp(ln(ROR) ± 1.96·√(1/a + 1/b + 1/c + 1/d)). According to Bayes' theorem, the IC is obtained as IC = log2[P(Drug, ADR)/(P(Drug)·P(ADR))]. Let P(Drug) follow a Beta distribution with prior parameters α1, α2; let P(ADR) follow a Beta distribution with prior parameters β1, β2; and let P(Drug, ADR) follow a joint Beta distribution with prior parameters γ1, γ2, with the Bayesian priors taken to be uninformative. The 95% confidence interval for IC was calculated as E(IC) ± 1.96·√V(IC), where E(IC) and V(IC) are the posterior expectation and variance of IC. Descriptive analysis: After processing, a total of 53,000,662 records were found in the FAERS database between 2014 and 2021, of which 269,090 were ICI-related adverse event reports, and 9,806 of these were related to hepatotoxicity. We summarized the clinical characteristics of the patients, described in Table 2.
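The ROR with its 95% confidence interval, and the IC point estimate, can be computed directly from the 2×2 table counts. The sketch below uses invented counts, not the study's data, and computes only the observed-frequency IC point estimate rather than the full Beta-posterior BCPNN interval.

```python
import math

def ror_with_ci(a, b, c, d):
    """ROR and 95% CI from the 2x2 table:
    a: target drug & target AE, b: target drug & other AEs,
    c: other drugs & target AE, d: other drugs & other AEs."""
    ror = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

def information_component(a, b, c, d):
    """Point estimate IC = log2(P(drug, AE) / (P(drug) * P(AE)))
    using observed frequencies (no Beta priors)."""
    n = a + b + c + d
    return math.log2((a / n) / (((a + b) / n) * ((a + c) / n)))

# Hypothetical counts for illustration only
ror, lo, hi = ror_with_ci(100, 900, 1000, 98000)
print(round(ror, 2), round(lo, 2), round(hi, 2))
print(round(information_component(100, 900, 1000, 98000), 2))
```

A signal is typically declared when the lower bound of the 95% interval (ROR025 or IC025) exceeds the null value (1 for ROR, 0 for IC), matching the ROR025/IC025 thresholds quoted in the results.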
A greater proportion of all reports of hepatotoxicity associated with ICIs were in males than in females (57.65% vs. 32.77%). By further analysis, signals were detected in both males and females (male: IC025 = 0.0006, ROR025 = 0.5721; female: IC025 = 0.3431, ROR025 = 1.6807). According to the United Nations definition, developed countries define people 65 years or older as elderly, so 65 years was used as the classification cut-off. The results showed significant differences between the age subgroups, with a smaller proportion of elderly (≥65) than non-elderly (<65) patients (36.78% vs. 44.50%), but further testing produced a significant signal (IC025 = 0.2617, ROR025 = 1.3984), which may be attributed to the effects of cancer survival and degenerative changes in the elderly organism. The most frequently reported outcomes were other serious medical events, hospitalization and death. Hospitalizations (IC025 = 0.1101, ROR025 = 1.1408) and deaths (IC025 = 0.0492, ROR025 = 1.0482) associated with liver AEs following ICI treatment were reported, indicating the potentially life-threatening nature of ICIs-related hepatotoxicity. The countries with the most reports were Japan (29.57%), followed by the USA (18.05%), France (13.15%), Germany (5.65%) and China (3.24%). Across all reports of ICIs-related hepatotoxicity, we analyzed the relationship between each class of ICIs and age separately (Table 3). Atezolizumab, ipilimumab, nivolumab and pembrolizumab generated signals in the elderly population. Spectrum of hepatotoxic AEs in immunotherapy regimens: In general, not all ICIs were associated with hepatic AEs. However, signals were detected when each drug was analyzed separately against hepatotoxic AEs. Tremelimumab had the strongest statistical association with ICIs-associated hepatotoxic AEs in the analysis of overall and individual ICIs (Table 5). Table 6 shows the time to onset of hepatotoxic AEs for each ICI.
The shortest median time to AE onset was 15 days, for avelumab (IQR 9.5-36.5), and the longest was 196 days, for cemiplimab (IQR 115-276.5). Analysis of the time interval to the occurrence of AEs: Pairwise comparison of the different drugs by the Wilcoxon rank sum test revealed that the time interval to AE onset was significantly shorter for ipilimumab than for durvalumab, nivolumab and pembrolizumab, and significantly shorter for atezolizumab than for nivolumab (Fig 4). The numbers of cases for avelumab (11 cases), cemiplimab (2 cases) and tremelimumab (7 cases) were too small to be compared. Discussion: Our study is the most systematic and comprehensive to date, based on the FAERS pharmacovigilance database, comparing the associations between ICIs and hepatic adverse reactions. We analyzed the FAERS database for adverse events associated with ICIs by measuring disproportionality and identified the characteristics of, and differences between, ICIs and their associated hepatotoxic AEs in order to update the safety information. We used two methods, the Bayesian confidence propagation neural network (BCPNN) [15] and the reporting odds ratio (ROR) [16]. The ICIs included in the study had varying launch dates, with ipilimumab the earliest, launched in 2011. However, based on clinical use, nivolumab is the most used and widely available. Our study is a pharmacovigilance study of ICI-related hepatotoxicity based on more than 50 million records over a specific time period, which makes our conclusions more reliable. Most clinical trials of ICIs have evaluated clinical effects rather than AEs, and only brief descriptions of these severe or even fatal AEs have been provided. Previous studies have shown that ICIs increase the risk of organ system toxicity, such as endocrine, hematological, otologic and vagotoxicity [3,17,18]. Hepatotoxicity has also been mentioned among the adverse effects of some drugs.
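The median times to onset with interquartile ranges quoted above can be computed from raw onset intervals with the standard library; the sketch below uses hypothetical onset times (days from treatment start to AE report), not the actual FAERS records.

```python
from statistics import median, quantiles

def onset_summary(days):
    """Median and interquartile range (Q1, Q3) of time-to-onset in days."""
    q1, q2, q3 = quantiles(sorted(days), n=4)  # default 'exclusive' method
    return median(days), (q1, q3)

# Hypothetical onset times for one drug, in days
times = [9, 12, 15, 20, 31, 40, 55]
m, iqr = onset_summary(times)
print(m, iqr)  # median 20, IQR (12.0, 40.0)
```

Median and IQR are preferred over mean and standard deviation here because time-to-onset distributions are typically right-skewed, with a few very late events.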
Compared with chemotherapy-induced AEs, the delayed onset and prolonged duration of ICI-associated AEs make timely recognition and early personalized management important [19]. Our results show that most ICIs-associated hepatotoxic AEs occur in the first few weeks after dosing, after which the likelihood decreases. However, patients may still develop AEs during subsequent treatment, even after months or years [20]. Based on the analysis of the time interval to AE onset, the median times to AE onset were similar for ipilimumab, nivolumab and pembrolizumab, with the same incidence of AEs on day 15 after dosing; however, long-term follow-up showed that the cumulative incidence of AEs reached 75% by day 63 after dosing for ipilimumab, while pembrolizumab took 106.75 days and nivolumab took the longest, at 134 days. This indicates that nivolumab has a lower risk of AE occurrence with long-term use, which may be due to the long time nivolumab has been on the market and its clearer indications and adverse effect profile. Notably, the detection of reported hepatotoxic adverse events in the elderly produced a significant signal (IC025 = 0.2617, ROR025 = 1.3984), although a smaller proportion of elderly (≥65) than non-elderly (<65) patients were reported (36.78% vs. 44.50%). The results suggest that the elderly are at high risk of developing hepatotoxic AEs following ICI drug therapy. Previous studies have shown that increased hepatotoxicity in elderly patients is associated with elevated levels of oxidative stress and inflammation in the liver [21]. In addition, hepatitis and immune-mediated hepatitis had the strongest adverse-effect signals and the widest distribution; the severity of ICI-associated immune-mediated hepatitis (IMH) varied.
The risk of ICI-induced liver injury may be influenced by the specific checkpoint molecules targeted, the dose level of the ICI, and preexisting autoimmune conditions, chronic infection or tumor cells infiltrating the liver parenchyma. When patients experience liver injury during ICI therapy, the cause of the injury should be promptly assessed and steps taken to best manage adverse events [22]. In addition, autoimmune hepatitis is another major immune-related hepatotoxic event, with a signal found for seven drugs (all except avelumab), of which ipilimumab had the strongest signal (IC025 = 4.2346, ROR025 = 23.1078). Previous studies have found that the use of ipilimumab in patients with previous autoimmune disease is more likely to cause immune-related AEs [23]. Liver metastases were detected as a signal for four drugs (atezolizumab, durvalumab, ipilimumab, nivolumab); however, it is generally believed that liver metastases are not caused by ICIs, that liver metastases are associated with poor prognosis, and that ICI therapy is less effective in patients with liver metastases. One study reported a shorter median progression-free survival time (mPFS) and a lower disease control rate (DCR) in NSCLC patients with liver metastases treated with nivolumab [24]. One possible explanation for the poor prognosis of patients with liver metastases treated with ICIs is that they have a poorer ECOG performance status (PS) score than patients without liver metastases. Less than 10% of patients with adenocarcinoma have a single liver metastasis, suggesting that multiple metastases may have occurred by the time liver metastases are diagnosed and may have contributed to the poorer score [25].
In addition, the tumor microenvironment plays an important role in liver metastases. One study showed that patients with liver metastases had lower CD8 T-cell counts at the invasive margin [26]. Considering that the liver has immunomodulatory functions to maintain local and systemic immune tolerance to auto- and foreign antigens, the association between liver metastases and CD8 T cells suggests that liver-induced peripheral tolerance may influence treatment outcome. The mechanisms by which ICIs cause hepatotoxicity are currently described in different ways. The hemi-antigen hypothesis holds that reactive metabolites bind to cellular proteins to form neoantigens, called "haptens", which are then presented on major histocompatibility complex molecules of antigen-presenting cells and activate cytotoxic T lymphocytes, B cells and natural killer cells, stimulating an immune response against hepatocytes. The haptens may also induce autoantibodies against cytochrome P450 enzymes, leading to cellular damage and death [27,28]. In addition, a more plausible explanation is that the non-specific activation of the immune system associated with ICIs may lead to side effects in many organs: in addition to attacking tumor-specific antigens, highly activated T lymphocytes also target normal tissues [29]. CD8+ cytotoxic T lymphocytes destroy tumor cells, causing them to release tumor antigens, neoantigens and autoantigens, leading to a decline in immune tolerance, which is referred to as epitope spreading [30]. By giving amodiaquine, an antimalarial drug associated with drug-induced liver injury, to PD-1 immune-tolerance-deficient mice, Metushi et al. found greater elevations in ALT and hepatic monocyte infiltration than in immune-tolerant mice, suggesting that the severity of liver injury may be associated with reduced immune tolerance [31]. Zen et al.
also found in their study that liver biopsy samples from patients with ICI-associated hepatotoxicity showed predominantly lobular hepatitis, with significant infiltration of CD3+ and CD8+ T cells in the liver parenchyma of the samples. All the above studies support a mechanism of autoimmune attack on the liver leading to injury [32]. Certain limitations remain in our study. First, although safety issues can be assessed through the FAERS database, the database is limited by the lack of detailed clinical data. Further studies are needed to relate these safety findings to practical clinical scenarios. Moreover, owing to the lack of follow-up/censoring data, it is difficult to determine the causal relationship between ICIs and adverse events in the hepatobiliary system. Finally, we compared ICIs with non-ICIs in the FAERS database using the proportional imbalance method in this article. In the future, we will compare AE signals with non-ICI drugs to strengthen this research.

Conclusion

This study comprehensively assessed the association of ICIs with hepatotoxicity in real-world practice. Overall, a significant association was detected between ICIs and liver AEs, and a relatively strong signal was detected for several ICI immunotherapy regimens. Some of the results were consistent with the previous literature. Immune-mediated hepatitis was associated with each drug and, in addition, autoimmune hepatitis was associated with most drugs (all except avelumab). In the clinical use of ICIs, patients should be monitored for the development of hepatotoxic AEs, especially elderly patients. Clinicians need to be alert to AEs associated with hepatotoxicity. The delayed onset and prolonged duration of AEs associated with ICIs make timely recognition and early individualized management important.
The Landscape of Accessible Chromatin during Yak Adipocyte Differentiation

Although significant advancement has been made in the study of adipogenesis, knowledge about how chromatin accessibility regulates yak adipogenesis is lacking. We here described genome-wide dynamic chromatin accessibility in preadipocytes and adipocytes by using the assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq), and thus revealed the unique characteristics of open chromatin during yak adipocyte differentiation. The chromatin accessibility of preadipocytes and adipocytes exhibited a similar genomic distribution, displaying a preferential location within the intergenic region, intron, and promoter. Pathway enrichment analysis identified that genes with differential chromatin accessibility were involved in adipogenic metabolism regulation pathways, such as the peroxisome proliferator-activated receptor (PPAR) signaling pathway, wingless-type MMTV integration site (Wnt) signaling pathway, and extracellular matrix-receptor (ECM–receptor) interaction. Integration of ATAC-seq and mRNA-seq revealed that highly expressed genes were associated with high levels of chromatin accessibility, especially within 1 kb upstream and downstream of the transcription start site. In addition, we identified a series of transcription factors (TFs) related to adipogenesis and created the TF regulatory network, providing the possible interactions between TFs during yak adipogenesis. This study is crucial for advancing the understanding of the transcriptional regulatory mechanisms of adipogenesis and provides valuable information for understanding the adaptation of plateau species to high-altitude environments by maintaining whole-body homeostasis through fat metabolism.

Introduction

Adipose tissue is a vital endocrine organ for energy storage and metabolism in animals, and the growth of this tissue refers to the increase in adipocyte number and size [1].
Mature adipocytes differentiate from preadipocytes under the action of a series of transcription factors (TFs), fatty acid-binding proteins, and lipid-metabolizing enzymes after growth arrest. The TF cascade occurs prior to adipocyte gene expression during adipocyte differentiation [2]. PPARγ (peroxisome proliferator-activated receptor-γ) and C/EBP (CCAAT/enhancer-binding protein) have been confirmed as pioneer TFs for inducing initial adipocyte differentiation [3]. In addition, many signaling pathways, such as insulin, glucocorticoid, BMP (bone morphogenetic protein), Wnt (wingless-type MMTV integration site) and Hedgehog signaling, have been reported to promote or inhibit the differentiation of preadipocytes into adipocytes [4][5][6][7][8]. Numerous studies have examined the regulatory role of TFs in bovine adipocyte differentiation. For example, a study of the mechanism by which TFs regulate the SIRT4 (silent information regulator protein 4) gene during bovine adipogenesis found that E2F1 (E2F transcription factor-1), C/EBPβ, and HOXA5 (homeobox A5) positively regulated the expression of SIRT4, while IRF4 (interferon regulatory factor 4), PAX4 (paired box 4), and CREB1 (cAMP responsive element-binding protein 1) suppressed its expression [9]. A previous study investigated the core promoter region of PLIN1 (perilipin 1) in bovine adipogenesis and found that E2F1, PLAG1 (pleiomorphic adenoma gene 1), C/EBPβ, and SMAD3 (SMAD family member 3) can bind to this region and activate or inhibit its transcription [10]. A similar study found that KLF15 (Krüppel-like factor 15) plays a regulatory role by binding to the core promoter region of KLF3 in bovine adipogenesis [11]. However, knowledge about genome-wide TF regulation of bovine adipocyte differentiation and the interaction network between these TFs is lacking.
The assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq) is a method that can be used to find open chromatin regions, predict TFs, and map nucleosome positions [12,13]. Chromatin in mammalian cells exists in active euchromatin and inactive heterochromatin states. When chromatin is in the active euchromatin state, open chromatin regions containing DNA regulatory elements can be bound by TFs, which then contribute to gene regulation. When chromatin is in the inactive heterochromatin state, TFs cannot bind to the transcriptionally active regions of genes, which is not conducive to gene expression [14,15]. Therefore, ATAC-seq data analysis can identify genome-wide transcription regulatory elements and transcriptionally active genomic regions in specific biological processes. The yak is a unique and rare ruminant living on the Qinghai-Tibet Plateau and in adjacent regions. As an important livestock species for animal husbandry in this area, yaks provide local residents with agricultural products such as meat, milk, and hair while adapting to environmental conditions including cold, hypoxia, high altitude, and strong ultraviolet radiation [16,17]. Because yaks graze year-round, their growth and development are restricted by the yield and quality of forage [18]. During the warm season, plentiful and high-quality forage induces weight gain and fat deposition in yaks. However, during the long cold season, yaks have insufficient nutrient intake and need to consume the fat stored in the warm season to resist severe cold and maintain normal metabolism [19,20]. This mechanism of maintaining normal biological processes and development through fat metabolism on the plateau makes the yak an ideal model for studying the adaptation of plateau species. We here used ATAC-seq to identify open chromatin regions in yak preadipocytes and adipocytes. We also applied RNA-seq to detect gene expression during yak adipocyte differentiation.
Our study objectives were to (i) compare chromatin accessibility and gene expression to identify differential chromatin accessibility and differentially expressed genes (DEGs) during yak adipocyte differentiation; (ii) explore the correlation between chromatin accessibility and gene expression; and (iii) predict TF regulatory networks during yak adipogenesis. Our results provide new insights into yak adipogenesis and further reveal the mechanisms through which open chromatin regulates gene expression during biological processes.

Landscape of Chromatin Accessibility during Yak Adipocyte Differentiation

To determine whether preadipocytes fully differentiated into adipocytes, Oil Red O staining and mRNA detection of adipogenic marker genes were performed. The results of Oil Red O staining revealed that preadipocytes were fully differentiated into mature lipid-filled adipocytes after induction with adipogenic agents for 12 days (Figures 1A,B and S1). At the same time, the expression of C/EBPα, PPARγ, FABP4 (fatty acid-binding protein 4), and SREBF1 (sterol regulatory element-binding factor-1) was significantly higher in adipocytes than in preadipocytes, which also supports this finding (Figure 1C: mRNA expression levels of C/EBPα, PPARγ, FABP4 and SREBF1 in preadipocytes and adipocytes; mean ± SEM, n = 3, * p < 0.05, ** p < 0.01). To explore epigenetic regulation during yak adipocyte differentiation, we used ATAC-seq to identify open chromatin regions in preadipocytes and adipocytes (three biological replicates per cell group). An average of 167,249,845 and 159,616,847 high-quality reads were obtained from the Pread (preadipocyte) and Ad (adipocyte) groups, respectively. The read information for each ATAC-seq library before and after filtering is shown in Table S1. After the reads were aligned to LU_Bosgru_v3.0 using Bowtie2, we selected the reads aligned to unique positions for downstream analyses.
Because Tn5 transposase preferentially inserts into open chromatin regions, most reads are short fragments spanning no or only one nucleosome; some long fragments containing multiple nucleosomes are also present. Analysis of the fragment size distribution revealed that most fragments were <100 bp long, indicating that the reads of each library were primarily located in open chromatin regions (Figure 2A). Statistical analysis of all reads within 2 kb upstream and downstream of the transcription start site (TSS) using deepTools showed that the highest density was at the TSS, suggesting that the reads were distributed according to the typical characteristics of higher eukaryotes (Figure 2B). Genome-wide peak scanning was performed using MACS to identify significantly enriched regions of open chromatin. In the Pread and Ad groups, 15,045 and 7683 peaks were found, respectively. Approximately 35% of peaks detected in the Pread group were also present in the Ad group. Similarly, approximately 69% of peaks detected in the Ad group were also present in the Pread group (Figure 2C). The genomic distribution of Pread peaks showed that 43.83% of the peaks were located within the distal intergenic region, 30.48% within introns, 16.97% within promoters (2 kb upstream of the TSS), 4.13% within the 5′ UTR, 2.99% within exons, 1.22% within 500 bp downstream of the gene, and 0.39% within the 3′ UTR. The peaks detected in the Ad group exhibited a similar genomic distribution, showing a preferential location within the distal intergenic region (48.44%), followed by the intron (26.04%), promoter (17.19%), 5′ UTR (4.10%), exon (2.59%), downstream of the gene (1.17%), and 3′ UTR (0.46%) (Figure 2D and Table S2).
Differential Chromatin Accessibility during Yak Adipocyte Differentiation

To demonstrate the role of open chromatin regions during adipocyte differentiation, we performed differential chromatin accessibility analysis between the Pread and Ad groups using DiffBind (|log2FC| > 1, p < 0.05). The results revealed 1293 differential peaks, of which 1265 were downregulated and 28 were upregulated (Figures 3A and S2A and Table S3). GO term analysis showed that these differential peaks were mainly implicated in cellular transcriptional regulatory and metabolic activities, such as binding, transcription regulator activity, and molecular transducer activity (ontology: molecular function); metabolic process, regulation of biological process, developmental process and signaling (ontology: biological process); and cell part and protein-containing complex (ontology: cellular component) (Figure 3C and Table S4). Meanwhile, KEGG enrichment analysis revealed that differential chromatin accessibility was closely correlated with adipogenic metabolism regulation pathways, including the PPAR signaling pathway, Wnt signaling pathway, regulation of the actin cytoskeleton, and ECM–receptor interaction (Figure 3B and Table S5). The genetic information on these signaling pathways is presented in Figure S2B and Table S6. These results suggested that changes in gene transcription induced by altered chromatin accessibility can regulate preadipocyte differentiation into adipocytes.

Integration of ATAC-Seq and mRNA-Seq in Both Groups

mRNA-seq was used to detect gene expression changes during adipocyte differentiation. In total, 667 DEGs (|log2FC| > 1, FDR < 0.05) were identified, of which 381 genes were downregulated and 286 upregulated (Figure S3). The mRNA-seq results were validated by qRT-PCR (Figure S4).
To explore the association between chromatin accessibility and gene expression in different cell types, we integrated ATAC-seq and mRNA-seq data in preadipocytes and adipocytes, respectively. All genes were divided into three groups (high, medium and low) based on the gene activity value (RPKM), and the gene expression levels (FPKM) of each group were counted. The results showed that the gene expression level of the high group in Pread and Ad was higher than that of the medium and low groups (Figure 4A). This indicated that the more open the chromatin was in preadipocytes and adipocytes, the higher the level of gene expression. Analogously, all genes were separated into three groups (high, medium and low) on the basis of the expression level, and the level of chromatin accessibility at different gene positions was counted. The results identified that the overall chromatin accessibility of the high group in Pread and Ad was higher than that of the medium and low groups. Interestingly, chromatin accessibility levels in the high and medium groups displayed obvious peaks near the TSS (Figure 4A). This suggested that the high level of chromatin accessibility near the TSS contributes to the high gene expression level during adipocyte differentiation. To further investigate the importance of the region near the TSS, we integrated chromatin accessibility within 1 kb upstream or downstream of the TSS and gene expression levels. We used the same method to classify chromatin accessibility and gene expression levels into three groups (high, medium and low) and analyzed their association in Pread and Ad. The results also revealed that within 1 kb upstream and downstream of the TSS, the chromatin accessibility and gene expression levels of the high group were higher than those of the medium and low groups (Figure 4B).
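The grouping procedure above can be sketched as follows. This is a minimal illustration with hypothetical per-gene values, not the authors' actual code: genes are binned into tertiles by one score (e.g. ATAC-seq RPKM) and the mean of the other score (e.g. FPKM) is compared across bins.

```python
def tertile_groups(scores):
    """Split gene ids into low/medium/high tertiles by a per-gene score
    (e.g. ATAC-seq accessibility RPKM)."""
    ranked = sorted(scores, key=scores.get)  # gene ids, ascending by score
    n = len(ranked)
    return {
        "low": ranked[: n // 3],
        "medium": ranked[n // 3 : 2 * n // 3],
        "high": ranked[2 * n // 3 :],
    }

def mean_expression(group, fpkm):
    """Mean expression (FPKM) over the genes in one tertile."""
    return sum(fpkm[g] for g in group) / len(group)

# hypothetical per-gene accessibility (RPKM) and expression (FPKM)
rpkm = {"g1": 1.0, "g2": 2.0, "g3": 8.0, "g4": 9.0, "g5": 20.0, "g6": 25.0}
fpkm = {"g1": 0.5, "g2": 1.0, "g3": 4.0, "g4": 5.0, "g5": 30.0, "g6": 40.0}
groups = tertile_groups(rpkm)
```

The study's observation corresponds to `mean_expression` being highest for the "high" accessibility tertile; the same binning applied in the other direction (by FPKM, profiling accessibility) yields the TSS-proximal signal comparison.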
The high and medium groups had the highest levels of chromatin accessibility in promoter regions close to the TSS, demonstrating the vital role of these regions in gene transcriptional regulation (Figure 4B). To show the regulation of gene expression by chromatin accessibility more intuitively, we used IGV to simultaneously display the ATAC-seq and mRNA-seq signals of the fat metabolism-related genes CDK5 (cyclin-dependent kinase 5), HMGB2 (high mobility group box 2), and PIN1 (peptidylprolyl cis/trans isomerase, NIMA-interacting 1). The ATAC-seq signal within promoter regions close to the TSS was obviously higher than that in the other gene regions (Figure 4C). The ATAC-seq signal levels of DEGs showed that the overall signal intensity of the Pread group was higher than that of the Ad group within 2 kb upstream and downstream of the TSS (Figure S5A). A further analysis of DEGs found that 33 genes had differences in chromatin accessibility (Figure S5B). This indicated that differential gene expression caused by differential chromatin accessibility is one mode of gene expression regulation during adipocyte differentiation. The interaction of distal regulatory elements and proximal promoter regions is a crucial mechanism of gene regulation. Therefore, we examined the ATAC signals of the distal (>2 kb from the TSS) and proximal (TSS ± 2 kb) peaks and the gene expression level, and found that the overall signals of the distal and proximal peaks in the Pread group were higher than those in the Ad group. The gene expression also differed between the two groups (Figure S5C).

Prediction of TF Regulatory Networks during Yak Adipocyte Differentiation

TFs are a class of protein molecules that bind to promoters and cooperate with RNA polymerase II to initiate gene transcription. We used AME from the MEME suite (E-value < 0.01) to identify enriched TFs based on conserved motifs within the differential peaks.
A total of 26 TFs were enriched in these differential peaks, among which FOS (FBJ osteosarcoma oncogene), JUNB (jun B proto-oncogene) and KLF5 (Krüppel-like factor 5) were confirmed to be related to adipogenic metabolism regulation (Figure 5A and Table S7). TFs usually act in combination with other TFs. To identify the interactions between TFs, we constructed a global landscape of the TF regulatory network during yak adipocyte differentiation. The most connected TFs were FOS, JUNB, and JUND (jun D proto-oncogene), and they could cooperate with many other TFs in the regulatory network. The expression levels of FOSL2, TEAD1 (TEA domain transcription factor 1) and RUNX2 were significantly different (FDR < 0.01 and |log2FC| > 0.8) between the Pread and Ad groups (Figure 5B and Table S8).

Discussion

To our knowledge, this study is the first to use ATAC-seq technology to detect the landscape of chromatin accessibility during yak adipocyte differentiation. This will provide a new perspective on the molecular mechanism of adipogenesis in plateau species. Transcriptional regulatory elements of genes include enhancers, upstream activators, and proximal promoters. When the chromatin region containing transcriptional regulatory elements is in an open state, TFs can bind to the region and recruit RNA polymerase to the core promoter to initiate transcription. By contrast, condensed chromatin prevents TFs from binding to regulatory elements, silencing gene expression [21]. ATAC-seq is a whole-genome epigenetic detection technology that utilizes Tn5 transposase to add adapters into open chromatin regions, which are located primarily in promoters. The most prominent advantages of ATAC-seq are its low cell input and the short time required for sample preparation and the experiment [22]. ATAC-seq is used to study chromatin accessibility regulation during embryonic development, cell differentiation, and disease occurrence [23].
Genome-wide open chromatin detected through ATAC-seq in mouse preimplantation embryos revealed that accessible chromatin regions were widely distributed within cis-regulatory sequences whose activities diminished prior to major zygotic genome activation. Further integration of cis-regulatory sequences with single-cell transcriptomes led to the identification of essential lineage-specific regulators [24]. Use of ATAC-seq to study bovine early embryo development revealed that chromatin accessibility significantly increased during major embryonic genome activation, with strong signals at the TSS and transcription end sites [25]. A study on bovine myogenic differentiation found that chromatin accessibility identified through ATAC-seq exhibited dynamic changes at different time points (0, 2, and 4 days) of differentiation, indicating that open chromatin is among the epigenetic mechanisms regulating cell differentiation [26]. Yak adipogenesis is the differentiation of preadipocytes into adipocytes; therefore, ATAC-seq can be used to identify chromatin accessibility changes during this process. Our results also suggested that open chromatin regulates yak adipogenesis, and this finding can be helpful for fat metabolism research. We noted that the number of Pread peaks (15,045) was much higher than that of Ad peaks (7683), suggesting either that ATAC-seq worked better in preadipocytes or that differentiation leads to reduced chromatin accessibility in yak adipocytes. The peaks of the Pread and Ad groups exhibited a similar genomic distribution, demonstrating a preferential location within the distal intergenic region, intron and promoter. This distribution is consistent with the chromatin accessibility features reported in bovine myogenic differentiation, suggesting the vital role of non-promoter cis-regulatory elements in the regulation of cell differentiation [26].
Although TF-binding sites (TFBS) are mainly located in the promoter region, an increasing number of studies have shown that intron regions are also rich in TFBS. A study of regulatory elements in the human coagulation factor VIII (hFVIII) gene found that 31% of TFBS were located in intron regions and had binding preferences located far from intronic splice sites [27]. Genome-wide detection of cis-regulatory elements in early human adipogenesis revealed chromatin accessibility changes in PPARγ intron regions rich in TFBS [28]. Consequently, we inferred that gene regulation by TFs binding in intron regions is also a regulatory mechanism during yak adipogenesis. Enhancer elements with sequence conservation are known to be present in intergenic regions [29]. These enhancer DNA elements are commonly recognized and bound by specific TFs to ensure transcriptional activation of target genes [30]. Meanwhile, increasing evidence suggests that enhancer-promoter interactions are also crucial for gene expression regulation [31]. Studies have shown that the majority of chromatin accessibility found during the induction of T cell activation by omni-ATAC-seq was located in intergenic enhancer regions [32]. A large fraction of the chromatin accessibility in our study was located in intergenic regions, further suggesting that enhancers may interact with TFs and promoters to regulate gene expression during yak adipocyte differentiation. The GO terms for differential chromatin accessibility were mainly related to binding, transcription regulator activity, molecular transducer activity, metabolic process, regulation of biological processes, developmental processes, signaling, cell part and protein-containing complex, further illustrating that adipocyte differentiation is a complex transcriptional regulatory activity mediated by a cascade of multiple TFs.
The KEGG pathway enrichment analysis found that the signaling pathways associated with differential chromatin accessibility were significantly related to adipogenesis, such as the PPAR signaling pathway, Wnt signaling pathway, regulation of the actin cytoskeleton, and ECM–receptor interaction. Among them, the PPAR signaling pathway, Wnt signaling pathway, and ECM–receptor interaction are classical pathways of fat metabolism. The actin cytoskeleton is a highly dynamic structure involved in the maintenance of cell morphology and structural stability [33]. Insulin metabolism is related to the participation of the actin cytoskeleton in the insulin signaling pathway [34]. Furthermore, investigation of the role of the actin cytoskeleton in adipocyte development found that the interaction of the remodeled actin cytoskeleton and insulin signaling affects adipocyte size [35]. Open chromatin regions can provide binding sites for TFs, thereby making initiation of target gene transcription possible. Therefore, integration of genome-wide chromatin accessibility and gene expression in specific biological processes can reveal the role that open chromatin plays in gene expression regulation. Chromatin accessibility has a significant correlation with gene expression during adipogenesis of human adipose-derived stem cells [28], bovine early embryo development [25], and bovine myogenic differentiation [26]. Our study results also revealed that genes expressed at high levels were associated with high levels of chromatin accessibility, especially within 1 kb upstream and downstream of the TSS. Moreover, highly expressed genes exhibited the highest levels of chromatin accessibility in promoter regions close to the TSS. These results suggested that open chromatin can regulate gene expression during yak adipogenesis.
We noted that only 33 DEGs identified by RNA-seq exhibited differences in chromatin accessibility. In the present study, differential chromatin accessibility was enriched in some crucial TFs, including FOSL2 (FOS-like antigen 2), JUND, FOS, and JUNB. FOSL2 and FOS are members of the FOS gene family. A study on the adipogenesis of deep intra-abdominal preadipocytes found that the induction of FOS protein was essential for preadipocyte differentiation into adipocytes [36]. Studies of osteoblasts from FOSL2 knockout mice in vitro found that the expression levels of adipogenic genes (C/EBPα, C/EBPβ, and PPARγ) were increased and adipocyte formation was accelerated [37]. Compared with control mice, JUNB knockout mice exhibited reduced fat mass, higher insulin sensitivity, and increased adipose triglyceride lipase and hormone-sensitive lipase levels, demonstrating the critical regulatory role of JUNB in fat metabolism [38]. JUND can bind to the promoter of PPARγ, which contributes to the expression of the FAS (fatty acid synthetase), CD36 (cluster of differentiation 36), LPL (lipoprotein lipase), and Plin5 (perilipin 5) genes related to triglyceride synthesis, uptake, hydrolysis, and storage [39]. Based on the aforementioned findings, we infer that FOSL2, JUND, FOS, and JUNB may be involved in regulating yak adipogenesis. However, complex biological processes require multiple TFs to cooperate with each other. The FOS and JUN family proteins can combine to form transcriptionally active complexes and exert regulatory functions during cell proliferation, differentiation, and embryonic development [40,41]. Studies have shown that the Fos-Jun complex can bind to the promoter region of the lipid-binding protein adipocyte P2 (aP2) to regulate 3T3-F442A adipocyte differentiation [42]. In our study, the core TF FOS interacted with JUNB and JUND in the TF regulatory network, further suggesting that this interaction occurs during yak adipocyte differentiation.
Interactions between other TFs were also observed in the regulatory network, and further investigation is required to clarify the mode of action of these TFs in adipogenesis.

Ethics Statement

All experimental procedures involved in this study were reviewed and confirmed by the Animal Administration and Ethics Committee of Lanzhou Institute of Husbandry and Pharmaceutical Sciences, Chinese Academy of Agricultural Sciences (SYXK-2014-0002).

Preadipocyte Isolation

Three healthy 3-day-old Datong yak calves from the Datong Yak Breeding Center (Datong County, Qinghai, China) were selected as experimental animals for this study. The collected subcutaneous adipose tissue samples were first rinsed with 0.9% NaCl and then with phosphate-buffered saline (PBS) containing penicillin (200 U/mL) and streptomycin (200 U/mL). The adipose tissue was then cut into approximately 1 mm³ pieces in a sterile environment. Cells were isolated through type I collagenase (1 mg/mL) digestion with constant stirring for 60 min in a 37 °C water bath. Subsequently, the digested tissue was sequentially filtered through 100 µm and 70 µm nylon mesh films, and the filtrate was centrifuged at 1400× g for 5 min. Cell pellets obtained through centrifugation were incubated with red blood cell lysis buffer (Beyotime, Shanghai, China) for 10 min. After washing with PBS, the mixture was centrifuged at 1400× g for 5 min to obtain the final cell pellets. Finally, the pellets were resuspended in DMEM/F-12 (Hyclone, Logan, UT, USA) containing 10% fetal bovine serum (Gibco, Waltham, MA, USA), inoculated into flasks, and cultured at 37 °C with 5% CO2.

Staining of Oil Red O and Quantitative Real-Time PCR

When the density of preadipocytes reached 70-80%, the cells were induced using adipogenic agents containing 3-isobutyl-1-methylxanthine (0.5 mM), dexamethasone (1 µM), and insulin (10 µg/mL) (Sigma, St. Louis, MO, USA) for 2 days.
The cells were then cultured with adipogenic agents including only insulin (10 µg/mL) until day 12. Finally, the adipocytes were washed three times with PBS, fixed with 4% formaldehyde for 1 h, and stained with Oil Red O for 30 min. The stained samples were observed through light microscopy. RNA extraction and reverse transcription of the cells were performed according to the manuals of the TriZol reagent (TransGen Biotech, Beijing, China) and the PrimeScript™ 1st Strand cDNA Synthesis Kit (TaKaRa Bio Inc., Dalian, China), respectively. Quantitative RT-PCR was conducted on the LightCycler® 96 Instrument (Roche, Basel, Switzerland) with SYBR Green dye. Primer sequences used to verify the reliability of the RNA-seq data are listed in Table S9.

ATAC-Seq Library Preparation and Sequencing

We performed ATAC-seq using preadipocytes (n = 3) and adipocytes (n = 3). Nuclear suspensions of samples were incubated in the transposition reaction mix containing Tn5 transposase at 37 °C for 30 min. When Tn5 entered the nuclei, open chromatin regions were preferentially fragmented, and adapters were simultaneously added to the ends of the fragments. Immediately following transposition, the sample was purified using the QIAGEN MinElute PCR purification kit (Tiangen Biotech, Beijing, China) [43]. The libraries were sequenced on an Illumina HiSeq™ 4000 by Gene Denovo Biotechnology Co. (Guangzhou, China).

RNA-Seq Library Preparation and Sequencing

Total RNA of preadipocytes and adipocytes was isolated using the TriZol reagent kit in accordance with the manufacturer's manual. RNA quality was verified through RNase-free agarose gel electrophoresis and evaluated on an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). Then, mRNA was enriched using oligo(dT) beads.
Subsequently, the enriched mRNA was cut into short fragments using fragmentation buffer and reverse transcribed into cDNA by utilizing the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB #7530, New England Biolabs, Ipswich, MA, USA). The purified double-stranded cDNA fragments were end repaired, and an 'A' base was added. Then, the fragments were ligated to Illumina sequencing adapters. The ligation reaction was purified using AMPure XP beads (1.0×). Ligated fragments were subjected to size selection by agarose gel electrophoresis and polymerase chain reaction (PCR) amplification. The resulting cDNA library was sequenced on an Illumina NovaSeq 6000 by Gene Denovo Biotechnology Co. (Guangzhou, China).

ATAC-Seq Data Processing and Analyses

To ensure the high quality of ATAC-seq reads, we applied three stringent filtering procedures: removing reads with adapters, reads with an excess of unknown nucleotides (N), and reads with more than 50% low-quality (Q-value ≤ 20) bases. Filtered reads of each sample were mapped to LU_Bosgru_v3.0 (Ensembl release 104) using Bowtie2 [44] (version 2.2.8; parameters: -X 2000). Reads mapped to the + and − strands were offset by +4 bp and −5 bp, respectively. Peaks were called using MACS [45] (version 2.1.2, Yong Zhang, Boston, MA, USA) with the parameters "--nomodel --shift -100 --extsize 200 -B -q 0.05". We used ChIPseeker [46] (version 1.16.1, Guangchuang Yu, Guangzhou, China) to identify peak-related genes and the distribution of peaks across different genome regions. Significantly differential peaks were identified using DiffBind [47] (version 2.8.0, Rory Stark, Cambridge, UK) with p < 0.05 and |log2FC| > 1. MEME-ChIP and AME from the MEME suite were used to identify motifs: MEME-ChIP to discover high-confidence motifs within peak regions, and AME to confirm the presence of specific known motifs.
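The strand-specific read offset described above (+4 bp on the + strand, −5 bp on the − strand) corrects for the 9-bp duplication that Tn5 creates at its insertion site. A minimal sketch of that adjustment, together with the DiffBind-style differential-peak thresholds (p < 0.05, |log2FC| > 1), is shown below; the function names and coordinates are illustrative, not the pipeline's actual code:

```python
def shift_atac_read(start, end, strand):
    """Shift a mapped ATAC-seq read onto the Tn5 insertion site.

    Tn5 transposition duplicates 9 bp at the insertion point, so reads on
    the + strand are offset by +4 bp and reads on the - strand by -5 bp.
    Coordinates are 0-based half-open, as in BED records.
    """
    if strand == "+":
        return start + 4, end + 4
    if strand == "-":
        return start - 5, end - 5
    raise ValueError(f"unknown strand: {strand!r}")


def is_differential_peak(p_value, log2_fold_change):
    """Thresholds used above for DiffBind output: p < 0.05 and |log2FC| > 1."""
    return p_value < 0.05 and abs(log2_fold_change) > 1


# A 50-bp read on each strand:
print(shift_atac_read(1000, 1050, "+"))  # (1004, 1054)
print(shift_atac_read(1000, 1050, "-"))  # (995, 1045)
print(is_differential_peak(0.01, 1.5))   # True
```

Note that requiring both the p-value and the fold-change cutoff keeps peaks that are both statistically and biologically meaningful, mirroring how the study defines significantly differential peaks.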
Gene Ontology (GO) enrichment analysis was performed to recognize the main biological functions of peak-related genes. Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was performed to identify metabolic or signal transduction pathways associated with peak-related genes. GO terms and pathways were considered significantly enriched if they met the threshold of p < 0.05.

RNA-Seq Data Processing and Analyses

To obtain high-quality reads, raw reads containing adapters or low-quality bases were filtered using fastp [48]. Clean reads were aligned to the reference genome [49] with "--rna-strandness RF" and other parameters set to their defaults. Mapped reads of each sample were assembled using StringTie (version 1.3.1, Mihaela Pertea, Maryland, USA) [50]. Meanwhile, the FPKM (fragments per kilobase of transcript per million mapped reads) value was calculated using RSEM [51] software to quantify gene abundance. DEGs between the Pread and Ad groups were identified using DESeq2 (version 1.14.1, Michael I. Love, Heidelberg, Germany) [52]. Genes with a false discovery rate (FDR) of <0.05 and a fold change of >2 were regarded as significant DEGs.

Integration of ATAC-Seq and RNA-Seq

The reads belonging to each of the ATAC-seq peaks were converted to RPKM (reads per kilobase per million mapped reads). We split all genes into three groups (high, medium, and low) by dividing their chromatin accessibility values (RPKM) into three quantile groups (Hmisc::cut2 R function [53]), and then calculated the gene expression levels (FPKM) in each group. We used the same method to divide the mRNA values (FPKM) of all genes into high, medium, and low groups and calculated RPKM at different gene positions. Integrative Genomics Viewer (IGV) (version 2.12.2, James T. Robinson, MA, USA) was used to visualize ATAC-seq and mRNA-seq signals at the same locations.
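The quantification and grouping steps above reduce to a few simple computations. The following sketch shows the RPKM formula, the DEG cutoffs (FDR < 0.05 and fold change > 2, i.e. |log2FC| > 1), and a rank-based tertile split analogous to Hmisc::cut2 with three quantile groups; it is an illustrative re-implementation, not the code used in the study (tie handling in cut2 may differ):

```python
def rpkm(read_count, feature_length_bp, total_mapped_reads):
    """Reads per kilobase of feature per million mapped reads."""
    return read_count / (feature_length_bp / 1e3) / (total_mapped_reads / 1e6)


def is_significant_deg(fdr, log2_fold_change):
    """DEG cutoffs used above: FDR < 0.05 and fold change > 2 (|log2FC| > 1)."""
    return fdr < 0.05 and abs(log2_fold_change) > 1


def tertile_groups(values):
    """Label each value low/medium/high by rank thirds,
    roughly mimicking Hmisc::cut2(x, g = 3)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    labels = [None] * n
    for rank, idx in enumerate(order):
        if rank < n / 3:
            labels[idx] = "low"
        elif rank < 2 * n / 3:
            labels[idx] = "medium"
        else:
            labels[idx] = "high"
    return labels


# 500 reads on a 2-kb peak in a library of 20 million mapped reads:
print(rpkm(500, 2000, 20_000_000))  # 12.5
print(tertile_groups([3.2, 0.1, 7.8, 0.5, 5.0, 1.9]))
```

Once genes are labeled low/medium/high by accessibility (RPKM), averaging FPKM within each label reproduces the comparison described above, and swapping the roles of FPKM and RPKM gives the reciprocal analysis.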
The TF regulatory networks were generated using the STRING database and visualized in Cytoscape (version 3.7.1, Paul Shannon, MA, USA).

Statistical Analysis

Student's t-test in SPSS (version 22, IBM, Chicago, IL, USA) was used for statistical analysis. The results are displayed as mean ± SEM, and p < 0.05 was defined as statistically significant.

Conclusions

In summary, our study described genome-wide dynamic chromatin accessibility and gene expression during adipocyte differentiation in yaks, which are an ideal model for studying the adipogenesis of plateau species. Integration of ATAC-seq and mRNA-seq revealed that genes expressed at high levels were associated with high levels of chromatin accessibility, especially within 1 kb upstream and downstream of the TSS. Additionally, we identified a series of TFs and constructed the TF regulatory network, clarifying the possible interactions between TFs during yak adipogenesis. Taken together, our study sheds light on the mechanism by which open chromatin regulates gene expression during adipogenesis and provides theoretical and material bases for research on its epigenetic roles in fat metabolism.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data were submitted to the Sequence Read Archive (SRA) database under accession number PRJNA860771.

Acknowledgments: We thank the Key Laboratory of Yak Breeding Engineering of Gansu Province for providing experimental conditions. We are grateful to Guangzhou Genedenovo Biotechnology Co., Ltd. for assistance with sequencing.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations
ATAC-seq: assay for transposase-accessible chromatin with high-throughput sequencing
DEGs: differentially expressed genes
FDR: false discovery rate
FPKM: fragments per kilobase of transcript per million mapped reads
RPKM: reads per kilobase per million mapped reads
qRT-PCR: real-time quantitative PCR
PCR: polymerase chain reaction
Genetic deficiency of Npc1 impairs postnatal development of microglia and climbing fiber synaptic pruning in the mouse cerebellum.

Abstract

Little is known about the effects of NPC1 deficiency on brain development and whether they contribute to neurodegeneration in Niemann-Pick Type C disease. Since cerebellar Purkinje cells die early and to a higher extent in NPC, here we analyzed the effect of NPC1 deficiency on microglia and climbing fiber synaptic refinement during cerebellar postnatal development using the Npc1 nmf164 mouse. Our analysis revealed that NPC1 deficiency leads to early phenotypic changes in microglia that are not associated with an innate immune response. However, the lack of NPC1 in Npc1 nmf164 mice significantly affected the early development of microglia by delaying the radial migration, increasing the proliferation, and impairing the differentiation of microglia precursor cells during postnatal development. Additionally, increased phagocytic activity of differentiating microglia was found at the end of the second postnatal week in Npc1 nmf164 mice. Moreover, significant climbing fiber (CF) synaptic refinement deficits, along with an increased engulfment of CF synaptic elements by microglia, were found in Npc1 nmf164 mice, suggesting that profound developmental defects in microglia and synaptic connectivity precede and predispose Purkinje cells to early neurodegeneration in NPC.

Introduction

Niemann-Pick Type C disease (NPC) is a recessive genetic lysosomal storage disease caused by mutations in the NPC1 or NPC2 proteins, important transporters of cholesterol from endosomes and lysosomes (Patterson, 1993). Accumulation of cholesterol inside these intracellular organelles leads to progressive neurodegeneration, dementia, and death in children.
Developmental regression, ataxia, and cognitive impairment are found among the symptomatology of NPC. Although the average age of diagnosis is 10 years, 50% of NPC patients are diagnosed before age 7 and die before 12.5 years of age. However, the average age of death is 16 years, indicating that NPC onset and severity are variable but progressive (Garver et al., 2007). In fact, the nature and severity of neurological symptoms in NPC are directly correlated with the onset of the disease. Infantile manifestations of NPC include delay in motor milestones, gait problems, clumsiness, and speech delay, while symptomatic manifestations of juvenile- and adult-onset NPC include learning deficits, ataxia, dystonia, and psychiatric symptoms (Mengel et al., 2013). Questions remain as to how the deficiency of the NPC1 protein perturbs developmental processes in the brain that could contribute to the early development of dementia and neurodegeneration. Studies in mice and cats have revealed that Purkinje cells (PCs) are particularly hypersensitive to the loss of the NPC1 protein; in NPC these cells degenerate earliest and to a greater severity than other neurons in the brain (Vanier and Millat, 2003). This early degeneration of cerebellar PCs contributes to the development of early neurological symptoms such as clumsiness, gait defects, and ataxia in both the human disease and animal models. Identifying the timeline and nature of pathological changes at the cellular and molecular level in the cerebellum caused by mutations in Npc1 is critical to understand the mechanisms underlying the early dysfunction and degeneration of PCs. Interestingly, recent studies have shown that genetic inactivation of reactive microglia in Npc1−/− mice reduces neurological impairment and increases their life span (Cougnoux et al., 2018).

Development • Accepted manuscript
We recently found that engulfment and phagocytosis of dendrites by activated microglia occur early and precede PC loss in NPC (Kavetsky et al., 2019), demonstrating that activated microglia contribute to PC degeneration in NPC. Although NPC is a childhood disorder, little is known about the effects of NPC1 deficiency on brain development. Moreover, how developmental deficits precede and contribute to neurodegeneration in NPC is not completely understood. Developmental delay in motor skill acquisition and significant reductions in synaptic and myelin proteins were reported in the postnatal Npc1 nmf164 mouse (Caporali et al., 2016). In contrast to mice with complete deletion of the Npc1 gene, Npc1 nmf164 mutant mice present a late onset and slower disease progression, with severe motor deficits and neurodegeneration becoming evident at a young adult stage (Maue et al., 2012). Since no degeneration of PCs and no severe motor deficits are found at early developmental stages, the Npc1 nmf164 mouse is an ideal model to study potential "silent" cerebellar developmental defects and behavioral changes caused by NPC1 deficiency that precede degeneration of PCs. Contrary to most common late-onset dementias, which are age-associated, the early-childhood onset of neurological manifestations in NPC and other lysosomal storage disorders (LSD) indicates the potential disruption of neurodevelopmental processes. The impact of disrupted cholesterol trafficking caused by NPC1 deficiency on the development of neural cells is unknown, leading to a poor understanding of the origin and preclinical mechanisms that lead to childhood dementia.
Understanding the mechanisms by which NPC1 deficiency affects neural cells during development will not only expand our current knowledge of brain-behavior developmental processes but also provide potential therapeutic avenues to identify and delay the progression of NPC and other childhood dementias (Ford et al., 1951; Shapiro E.G., 1994). Previous work in our laboratory showed significant changes in microglia number, morphology, and phagosome content in the cerebellum of Npc1 nmf164 mice at post-weaning age and before PC degeneration (Kavetsky et al., 2019). Interestingly, the migration, proliferation, and differentiation of cerebellar microglia occur postnatally along with the development and differentiation of cerebellar neurons. In the brain, microglia play important roles during normal development, including the clearance of apoptotic cells and the elimination of redundant synapses. Importantly, when compared to microglia from other regions of the brain, normal cerebellar microglia strongly display cellular and gene expression patterns mostly associated with cell clearance (Ayata et al., 2018), suggesting that cerebellar microglia are more phagocytic. To determine the impact of NPC1 deficiency on cerebellar microglia during postnatal development, different stages of microglia development such as migration, proliferation, and differentiation were examined at the corresponding postnatal ages. The findings of this study suggest that NPC1 deficiency not only affects the different phases of microglia development but also increases phagocytic activity in these cells, promoting and amplifying profound synaptic defects during the postnatal development of the cerebellum.

Results

Early changes in microglia are not the result of an innate immune response in the cerebellum of post-weaning Npc1 nmf164 mice
Since changes in Npc1 nmf164 microglia are evident at P30 (Kavetsky et al., 2019, Fig. 1A), we proceeded to test if genes associated with an innate immune response were upregulated in the cerebellum of P30 Npc1 nmf164 mice. A PCR array plate containing primers for ~92 genes associated with the mouse innate immune response was used. RNA from the cerebellum of P90 Npc1 nmf164 mice was used as a positive control, since at this age a severe loss of PCs is detected and the changes in microglia morphology and proliferation are more remarkable than at P30 (Fig. 1A) (Kavetsky et al., 2019). As expected, several genes, including cytokines (Lif, Tnf, Csf3, Il1a, Ccl2, and Ccl3), endothelial-inflammatory genes (Sele and Vcam1), T-cell-associated genes (Gzmb, Cd3e, Cd28, Stat4, PRF1, and Tnfrsf18), and other proinflammatory molecules (C3 and Ptgs2), were significantly increased (>5-fold) in the cerebellum of P90 Npc1 nmf164 mice when compared to WT mice (Fig. 1B). However, no significant expression changes were found in the cerebellum of P30 Npc1 nmf164 mice when compared to WT mice (Fig. 1B), suggesting that microglia changes at that early age are not the consequence of an immune or inflammatory response in Npc1 nmf164 mice. To determine if microglia changes at P30 were associated with effects of NPC1 deficiency on microglia development, a comprehensive analysis of cerebellar and microglia postnatal development was performed.

Early radial migration of microglial precursors into the developing cerebellar cortex is reduced in Npc1 nmf164 mice

In the mouse, most of the cerebellar development occurs postnatally. Therefore, at early postnatal stages such as P4, only round and ameboid IBA1+ microglial precursors were observed in the developing cerebellar medulla (CbM), while ramifying microglia were already found at this stage in the cerebral cortex (Fig. 2A).
As described by others (Ashwell, 1990; Cuadros et al., 1997; Mosser et al., 2017), in the P4 WT cerebellum microglial precursors were concentrated in the developing CbM and migrated tangentially towards the primitive cerebellar folia following tomato lectin+ blood vessels (Fig. 2B). Also, at this early postnatal stage, abundant microglial precursors were observed at the pial surface (PS) of the meninges (Fig. 2B). The radial migration of microglial precursors into the different regions of the cerebellar cortex, which include the inner granule layer (IGL), Purkinje cell layer (PCL), and external granule layer (EGL), is expected to begin at this early stage. Quantitative analysis of the density of IBA1+ microglial precursors in the CbM revealed no differences between P4 WT and Npc1 nmf164 mice (Fig. 2B and C). However, a significant reduction in the number of IBA1+ microglial precursors reaching the IGL and at the PS was found in Npc1 nmf164 mice when compared to WT mice (Fig. 2B and D). At this early postnatal stage, very few IBA1+ microglial precursors have reached the PCL, and the EGL is completely devoid of them, as reported by others (Cuadros et al., 1997; Nakayama et al., 2018). No changes in vascularization (tomato lectin+ blood vessels) were observed between P4 WT and Npc1 nmf164 mice (Fig. 2B, Fig. S1). Proliferative activity in CbM microglial precursors was detected in both P4 WT and Npc1 nmf164 mice, but a significantly higher number and percentage of these CbM KI67+/IBA1+ microglial precursors were found in Npc1 nmf164 mice (Fig. 2E-G). Meanwhile, proliferative activity in microglial precursors at the IGL, PCL, and PS was very low in P4 WT and Npc1 nmf164 mice, but a further significant reduction in proliferative microglial precursors was found in Npc1 nmf164 mice (Fig. 2H and I), mostly because of the delayed migration of these cells in the Npc1 nmf164 mice.
Overall, these results suggest that NPC1 deficiency affects the ability of microglial precursors to migrate radially while increasing their proliferation during early postnatal cerebellar development.

Increased density of precursor and maturing microglia in the developing cerebellum of P10 and P14 Npc1 nmf164 mice

Studies have shown that the proliferative activity of microglial precursors in the developing cerebellar CbM and the white matter region (WMR) of the folia starts at birth and peaks at P7 in normal mice (Ashwell, 1990; Li et al., 2019). In fact, an abundant subpopulation of microglia in the WMR was recently identified as proliferative-region-associated microglia (PAM) (Li et al., 2019). When we examined the cerebellum of P10 WT and Npc1 nmf164 mice, we found that the volume occupied by IBA1+ microglia in the WMR (axonal tracts) was significantly higher in Npc1 nmf164 mice than in WT mice (Fig. 3A-B and D). However, the percentage of proliferative microglia in this region, as assessed by KI67 immunostaining, was similar between WT and Npc1 nmf164 mice (Fig. 3B and E). Similarly, the density of IBA1+ microglia in the PCL and molecular layer (ML) was significantly increased in Npc1 nmf164 mice without significant changes in KI67 immunoreactivity when compared to WT mice (Fig. 3C and F-G). In addition, microglia in the cerebellar cortex of WT and Npc1 nmf164 mice were evidently maturing and ramifying (Fig. 3C), while the microglia in the WMR were ameboid in shape and less ramified, an indication of a more immature cell (Fig. 3B). Since no differences in KI67 immunoreactivity in the cerebellar WMR were observed between P10 WT and Npc1 nmf164 mice, our results suggest that an increased number of microglia was produced in the cerebellar WMR of Npc1 nmf164 mice prior to the P10 stage (P4-P7), as reported by others (Li et al., 2019). To test this possibility, P10 WT and Npc1 nmf164 cerebellar slices were immunostained with CLEC7A (Fig.
4), a specific marker for PAM cells, whose proliferation rate in WT mice is significantly reduced after peaking at P7 (Li et al., 2019). Interestingly, a significantly higher number of CLEC7A+ clusters was observed in Npc1 nmf164 developing WMRs (Fig. 4A and C). Also, the area of these clusters with CLEC7A+ (Fig. 4D) and IBA1+ cells (Fig. 4E), as well as the fraction of the IBA1+ area that was CLEC7A+ (Fig. 4F), were significantly larger in Npc1 nmf164 mice when compared to WT. The majority of P10 CLEC7A+ microglial precursors lacked processes and were primarily ameboid in shape in both WT and Npc1 nmf164 mice. MBP immunostaining marked the white matter tracts where CLEC7A+/IBA1+ cells were located (Fig. 4K; blood vessel staining is artifactual). We found that MBP intensity inside the CLEC7A+ clusters tended to be decreased in Npc1 nmf164 mice, but due to variability in WT mice the result was not statistically significant (Fig. 4L). At P14, the density of IBA1+ microglia was also significantly higher in the PCL/ML of Npc1 nmf164 mice when compared to WT mice (Fig. 5A-C). However, the levels of IBA1+/KI67+ cells in P14 WT and Npc1 nmf164 mice were very low in the cerebellar cortex layers (Fig. 5B and D). Overall, our results suggest that the increased proliferation of WMR microglial precursors in Npc1 nmf164 mice leads to an increased density of microglia in the cerebellar cortex region, since no increased microglia proliferative activity was detected in the cerebellar cortex at P10 and P14 (Fig. 5E).

NPC1 deficiency affects microglia differentiation and ramification

In the early days of cerebellar postnatal development, microglial precursors in the WMR were distinctly recognized by their round or ameboid shape (Fig. 2A-B).
It was also evident that as these cells migrated to the cerebellar cortex, where PC dendrites and synaptic connections are developing, they began to differentiate, ramify, and extend their processes through the ML, where neuronal synaptic connections are found (Fig. 3C). To determine if NPC1 deficiency alters microglia differentiation, a quantitative analysis of IBA1+ cell morphology was performed in P10 and P14 WT and Npc1 nmf164 mice using the "Filament Tracer" tool (Imaris). Differences in microglia volume were not detected between WT and Npc1 nmf164 mice at P10 (Fig. 6A-B); however, at this stage, the microglia total length and the number of terminal points were significantly reduced in Npc1 nmf164 mice (Fig. 6C-D). At P14, the microglia volume, total length, and terminal points were significantly lower in Npc1 nmf164 mice when compared to WT mice (Fig. 6A and E-G). These results show that microglia in Npc1 nmf164 mice are less ramified and have shorter processes, suggesting that NPC1 deficiency impairs microglia differentiation and ramification during postnatal development.

Phagocytic activity is increased in developing Npc1 nmf164 microglia

Microglia play an important role in the clearance of apoptotic cells during development (Mosser et al., 2017). While analyzing microglia morphology in P14 postnatal mice, we noticed an abundant number of maturing microglia in the ML containing phagocytic cups, especially in Npc1 nmf164 mice. Phagocytic cups are cup-shaped endocytic vacuolar structures in ramified microglia that are formed by the ingestion of particles or cells during phagocytosis (Swanson, 2008). The number of microglia with phagocytic cups was significantly higher in Npc1 nmf164 mice when compared to WT mice (Fig. 7A-C). We also found that microglia with at least two phagocytic cups, and phagocytic cups per image area, were more abundant in the ML of Npc1 nmf164 mice (Fig. 7B, D-E).
Interestingly, the number of phagocytic cups containing pyknotic bodies in the ML was higher in Npc1 nmf164 mice than in WT mice, suggesting that NPC1 deficiency increases microglial phagocytic activity in the developing cerebellum. Immunostaining of microglia with the CD68 antibody, a phagosome marker, showed that P14 WT and Npc1 nmf164 microglia were actively phagocytosing at this developmental stage, since CD68+ phagosomes were abundant in microglia from both mouse strains. Markedly, WT IBA1+ microglia at P14 had many small CD68+ phagosomes distributed through the cell body and processes (Fig. 7G), while in Npc1 nmf164 microglia the majority of the CD68+ phagosomes were accumulated in the cell body (Fig. 7G). Quantitative analysis showed that the mean volume of CD68+ phagosomes per microglia at P14 was larger in Npc1 nmf164 mice than in WT mice (Fig. 7G-H); however, no differences were found in the total volume of CD68 between WT and Npc1 nmf164 microglia at this stage (Fig. 7I). Since CLEC7A expression is reactivated specifically in actively phagocytic disease-associated microglia (DAM) (Krasemann et al., 2017), to test if P14 phagocytic cells were similar to DAM, P14 and P60 (NPC neurodegeneration stage) WT and Npc1 nmf164 cerebella were immunostained with CLEC7A. We found that CLEC7A+ microglia reappear in the ML of Npc1 nmf164 mice only during neurodegeneration at P60 (Fig. S2), and not at P14. Our results suggest that postnatal changes in NPC microglia are developmental alterations caused by NPC1 deficiency and not an immunological response. These results indicate that NPC1 deficiency alters the phagocytic activity and the distribution of phagosomes in developing cerebellar microglia.
The role of microglia during this phase of CF synaptic refinement is not completely understood; however, recent studies have shown that CF synapse elimination is impaired in mouse cerebella depleted of microglial cells (Kana et al., 2019; Nakayama et al., 2018), suggesting a key role of microglia in CF synapse elimination. When we examined P14 cerebella from WT and Npc1 nmf164 mice, we found that the volume of CF VGLUT2+ presynaptic inputs in the proximal region of CALB+ PC dendrites was significantly reduced in Npc1 nmf164 mice when compared to WT mice (Fig. 8A-B), as previously reported by others (Caporali et al., 2016). However, we also noticed differences in the distribution of VGLUT2+ inputs between WT and Npc1 nmf164 mice (Fig. 8A-B). In fact, a higher percentage of CALB+ PC somas in Npc1 nmf164 mice contained VGLUT2+ inputs, and in higher numbers (VGLUT2 puncta/soma), than in WT mice (Fig. 8C-E). Interestingly, significantly more VGLUT2+ inputs in the proximal region of CALB+ PC dendrites of P14 WT mice were contacted by IBA1+ microglia than in the Npc1 nmf164 mice (Fig. 8F-H). However, a significantly larger percentage of CALB+ PC somas in Npc1 nmf164 mice were contacted by IBA1+ microglia, suggesting a possible link between the excess of CF synaptic inputs in PC somas and the increased interaction of microglia with this region of the PC. Since previous studies in other regions of the brain have shown that microglia actively engulf and phagocytose presynaptic terminals during developmental pruning (Gunner et al., 2019; Schafer et al., 2012; Tremblay et al., 2010), and P14 cerebellar microglia were evidently phagocytic as they contained high levels of CD68+ phagosomes (Fig. 7G), we analyzed the interaction of individual microglial cells with VGLUT2+ presynaptic inputs.
To assess whether IBA1+ microglia were contacting or engulfing VGLUT2+ inputs in WT and Npc1 nmf164 mice, confocal microscopy and 3D surface rendering analysis (Imaris) were used in P14 cerebella. At this stage of postnatal development (P14, late phase of CF synapse elimination), WT microglia were actively contacting and engulfing VGLUT2+ inputs in the ML (Fig. 9A). Quantitative analysis of the total volume of VGLUT2 puncta per microglia at the PCL and ML indicated that Npc1 nmf164 microglia contacted or engulfed significantly more VGLUT2+ inputs than WT microglia at P14 (Fig. 9A-B). By examining the Z-stack sequence images of the microglial cell shown in Fig. 8A, the interactions of the IBA1+ cell processes with VGLUT2+ inputs innervating CALB+ PC dendrites can be observed in Fig. 9C. Some VGLUT2+ inputs were completely engulfed (arrows) by the IBA1+ cell processes, while others were only contacted (Fig. 9C), demonstrating that CF presynaptic inputs are contacted or engulfed by microglia during the late phase of CF refinement. In contrast, the Z-stack imaging sequence of the Npc1 nmf164 microglial cell presented in Fig. 8A clearly shows the interaction of the IBA1+ cell with a CALB+ PC soma while contacting or engulfing VGLUT2+ inputs, which were found abundantly in this region of the PC in P14 Npc1 nmf164 mice (Fig. 9C). These results suggest that NPC1 deficiency not only impairs CF synapse formation, but also alters the elimination and translocation of CF synapses, in addition to disrupting the normal interaction and synaptic pruning function of microglia during the postnatal developmental refinement of CF synapses. Overall, our results show severe impairments in cerebellar microglia and synaptic development that precede and may contribute to early behavioral deficits and neurodegeneration in NPC.
Discussion

In this study, we demonstrate that deficiency of NPC1 affects the postnatal development and function of cerebellar microglia, contributing to profound defects in developmental synaptic pruning and connectivity in the cerebellum. Specifically, we found that lack of NPC1 in mice reduced radial migration, increased proliferation, and impaired differentiation of microglial precursors during the first two postnatal weeks. Increased engulfment of pyknotic bodies and CF presynaptic elements was characteristic of Npc1 nmf164 differentiating microglia at two weeks of age. Similarly, in the human NPC disease, where the classic presentation of the disease is often found between middle and late childhood, early neurological symptoms associated with cerebellar dysfunction, such as clumsiness, gait disturbances, and eventually ataxia, are observed before the manifestation of other neurological symptoms (Patterson, 1993). These findings suggest that deficiency of NPC1 causes developmental disturbances in the cerebellum that precede neurodegeneration. We hypothesized that NPC1 deficiency severely affects the postnatal development of cerebellar microglia in Npc1 nmf164 mice. Indeed, our data demonstrated that early radial migration of microglial precursors was reduced or delayed in Npc1 nmf164 mice, since fewer microglial precursors were found at the IGL in P4 mice. NPC1 deficiency has been previously implicated in the reduced in vitro migration and invasion of CHO cells and fibroblasts from NPC patients, implicating dysfunctional recruitment and function of integrins in focal adhesions during cell migration (Hoque et al., 2015). It is possible that the intrinsic ability of microglial precursors to migrate is affected by the lack of NPC1 and the lysosomal accumulation of cholesterol. Another important finding in this study was the increased density of microglia in the developing WMR and cerebellar cortex regions of Npc1 nmf164 mice.
In the normal brain, microglia are highly proliferative during the first two postnatal weeks, particularly in the developing CbM and WMR (Li et al., 2019; Nikodemova et al., 2015). A recent study demonstrated that the density of a subset of microglial precursors named PAM peaks at P7 exclusively in the cerebellar WMR (Li et al., 2019). In our study, we found higher proliferative activity in microglial precursors at P4 in the CbM, followed by a significantly increased number of microglia in the cerebellar WMR and in the PCL/ML of P10 Npc1 nmf164 mice. Furthermore, the number of CLEC7A+ PAM in the WMR was also increased in Npc1 nmf164 mice, suggesting that NPC1 deficiency amplifies the proliferative activity of microglial precursors during highly proliferative stages. An increased number of microglia was still found at P14 and in post-weaning Npc1 nmf164 mice (Kavetsky et al., 2019), indicating that the active proliferation of microglial precursors in the WMR leads to a higher number of these cells in the cerebellar cortex. A few CLEC7A+ differentiating microglia were observed in Npc1 nmf164 mice at P10, but these cells were no longer seen at P14, indicating a possible failure of these cells to rapidly downregulate (Zhao et al., 2018). It is highly probable that NPC1 deficiency causes the pathological changes in developmental microglia through overactivation of the mTOR pathway, since increased proliferation, impaired differentiation, and increased phagocytic activity were hallmarks of postnatal Npc1 nmf164 microglia. Further studies are warranted to determine the role of the mTOR signaling pathway in NPC microglia pathology. Microglial cells play an important role in the clearance of apoptotic cells during developmental neuronal death (Ashwell, 1990; Mosser et al., 2017). However, it has also been demonstrated that microglia can induce apoptosis in the neurons they phagocytose (Mosser et al., 2017).
In this study, abundant maturing microglia containing phagocytic cups were found in the ML of both WT and Npc1 nmf164 mice at the end of the second postnatal week. It was also evident that the numbers of phagocytic cups and of phagocytic cups containing pyknotic bodies were significantly higher in Npc1 nmf164 mice than in WT mice. A high content of phagosomes in P14 microglia at the ML confirmed that at this stage of postnatal development microglial cells were engaged in phagocytic activity. Previous work in the developing rat cerebellum found that the density of phagocytic cups peaks around P17 (Perez-Pouchoulen et al., 2015), supporting our findings that microglia are highly phagocytic by the end of the second postnatal week. It is presumed that the pyknotic bodies observed at the ML are apoptotic granule precursor cells that were migrating from the ECL into the IGL during postnatal development (Wood et al., 1993). Interestingly, a reduced number of cerebellar granule cells and reductions in cerebellar lobule size at the end of postnatal development have been found in Npc1-/- mice (Nusca et al., 2014). It is possible that the increased number of phagocytic cups and engulfed pyknotic bodies in Npc1 nmf164 mice is caused by the increased number of noninflammatory microglia in the developing mutant cerebellum, which could also increase the developmental apoptotic death of cells at the ML. Indeed, an increased number of apoptotic cells was found in mice with elevated microglial phagocytic activity due to constitutive activation of the mTOR pathway in noninflammatory microglia (Zhao et al., 2018), suggesting that phagocytic microglia can induce and increase developmental neuronal apoptosis. Here we found that VGLUT2+ synaptic inputs from CFs were significantly reduced in the ML of Npc1 nmf164 mice at P14, suggesting that NPC1 deficiency affects CF synapse formation.
Our results also indicate that translocation of CF synaptic inputs from the PC soma to the proximal region of PC dendrites was impaired, since a higher number of PC somas contained VGLUT2+ inputs and a greater number of VGLUT2+ puncta per PC soma were found in Npc1 nmf164 mice. A previous study found that not only the glutamatergic CF synaptic inputs were reduced in Npc1 nmf164 mice, but also the GABAergic (basket/stellate cell) inputs, indicating that deficiency of the NPC1 protein broadly impairs synaptic connectivity in the cerebellum (Caporali et al., 2016). These synaptic defects were also associated with developmental deficits in motor skill acquisition in the Npc1 nmf164 mouse. Importantly, previous studies have demonstrated that microglia have a role in developmental activity-dependent synaptic pruning in the brain (Gunner et al., 2019; Schafer et al., 2012; Tremblay et al., 2010). In fact, microglia engulf and remove intact presynaptic elements during the process of developmental synaptic pruning (Gunner et al., 2019; Schafer et al., 2012; Tremblay et al., 2010). The role of microglia in developmental CF synapse refinement is not completely understood. However, recent studies have shown that genetic or pharmacological depletion of microglia in the cerebellum impairs the early and late stages of CF synapse elimination during postnatal development, leading to behavioral and motor deficits (Kana et al., 2019; Nakayama et al., 2018). Furthermore, it is thought that microglia facilitate developmental CF synapse elimination by promoting GABAergic inhibition of PCs (Nakayama et al., 2018). Here, we aimed to determine whether cerebellar microglia engulf CF presynaptic inputs at P14 (the late phase of CF synapse elimination) and whether NPC1 deficiency alters this microglial pruning function. Indeed, we found that at P14, microglia were contacting and engulfing VGLUT2+ inputs in the ML of WT mice.
These results are in accordance with the abundant density of CD68+ phagosomes observed in P14 microglia, indicating that microglia are highly phagocytic in the cerebellum at this postnatal age. Interestingly, at this age, the increased density of somatic VGLUT2+ inputs in Npc1 nmf164 PCs coincided with a higher percentage of PC somas contacted by microglia. Furthermore, P14 microglia contacted and engulfed more VGLUT2+ inputs in Npc1 nmf164 mice than in WT mice. It is possible that the reduced elimination and translocation of VGLUT2+ inputs in Npc1 nmf164 PC somas could be the consequence of decreased GABAergic stimulation of PCs (Caporali et al., 2016), which is also modulated by microglia (Nakayama et al., 2018). Also, it is thought that microglia preferentially engulf and remove presynaptic inputs with decreased activity (Gunner et al., 2019; Schafer et al., 2012; Tremblay et al., 2010), which could explain why a higher number of VGLUT2+ inputs are contacted or engulfed by microglia in Npc1 nmf164 mice. Disrupted presynaptic terminals in NPC can predispose neurons to early neurodegeneration, as demonstrated in a mouse model of the lysosomal storage disease mucopolysaccharidosis type IIIA, where restoration of presynaptic function delayed neurodegeneration (Sambri et al., 2017). Current work in our laboratory is investigating whether this phagocytic activity of NPC microglia affects other synaptic refinement and remodeling programs in PCs. Overall, our data demonstrate that deficiency of NPC1 affects microglia and synapse development during the postnatal development of the cerebellum, leading to behavioral deficits and predisposing PCs to neurodegeneration.

Animals
All experiments involving mice were conducted in accordance with the policies and procedures described in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and were approved by the Animal Care and Use Committees at the Rowan University School of Osteopathic Medicine.
The C57BL/6J-Npc1 nmf164/J mouse strain (Jax stock number 004817) was provided by Dr. Robert Burgess at The Jackson Laboratory. Npc1 nmf164 heterozygous mice were bred and housed in a 12/12-hour light/dark cycle to generate both WT and Npc1 nmf164 homozygous mutant mice. Both males and females were used in this study (a 2:2 ratio when four mice were used).

Mouse Perfusion and Tissue Preparation
Mice were euthanized with CO2 and transcardially perfused with 1X PBS followed by 4% paraformaldehyde. After perfusion, mice were decapitated and their brains were carefully dissected and fixed by immersion in 4% paraformaldehyde overnight. After fixation, brains were rinsed in 1X PBS, immersed in 30% sucrose/PBS solution overnight at 4°C, frozen in OCT, and cryosectioned at 25μm or 50μm (floating sections).

Immunohistochemistry
For immunostaining, brain sections on slides (25μm) or as floating sections (50μm) were rinsed once in 1X PBT (PBS + 1% Triton X-100) and incubated in primary antibodies diluted in 1X PBT + 20% normal donkey serum overnight at 4°C. After incubation with primary antibodies, sections were rinsed three times with 1X PBT for 10 min and incubated for two hours in the corresponding secondary antibodies (1:800, Jackson-ImmunoResearch or Invitrogen). Tissue was then

Microscopy image analysis
To keep consistency between samples, imaging and quantitative analyses to determine changes in the number of IBA1+ cells and IBA1+/KI-67+ cells were performed in the first four anterior cerebellar lobules (I-IV). For quantification of IBA1+ and IBA1+/KI67+ cells in the developing cerebellum, four images (1 per lobule) were taken from two cerebellar cryosections for each mouse (8 images per mouse) with an inverted Leica DMi8 fluorescent microscope. For the WMR, one image per section (two sections) was taken using the Keyence microscope. The imaged regions were randomly selected and investigators were blinded to the genotype.
Once the images were taken, a box of 250 × 350 pixels (cerebellar cortex) or 350 × 450 pixels (CbM) was used to crop the images (1-2 boxes per image), so that the area used for cell counting was consistent between images/animals and included the IGL, PCL/ML, EGL and PS in P4 mice, the PCL/ML, EGL and PS in P10 mice, and the PCL/ML, EGL and PS in P14 mice. The cropped images were counted manually using the Cell Counter plugin of the ImageJ (1.47d) software. For quantification of the total length of Lectin+ IGL capillaries, a region of 400 × 300 pixels was cropped and the Simple Neurite Tracer from ImageJ was used to trace the capillaries and obtain the length of every capillary in the image. For quantification of CLEC7A/IBA1 area and MBP intensity, two images per section were taken using the Keyence microscope. The CLEC7A/IBA1 and MBP immunostained areas were selected by thresholding and measured with the Analyze plugin of ImageJ. Investigators were blinded to the genotype of the tissue while counting cells or measuring immunostained areas. For 3D image reconstructions and analyses, three sagittal 50μm cerebellar sections were immunostained by free-floating immunohistochemistry. All images analyzed with the Bitplane Imaris software were acquired using a Nikon A1R Confocal System equipped with a Live Cell 6 Laser Line and Resonant Dual Scanner. Confocal image stacks were acquired using a 40X objective lens with a 1μm interval through a 50μm z-depth of the tissue. Three confocal images per mouse were taken from the first three lobes (1 per lobe), in the CbM (P4), in the WMR (P10), and in the cerebellar cortex (P10 and P14). To quantify microglial precursors in the WMR of P10 mice, a box of 500 × 500μm was used, and the Imaris surface rendering tool was used to calculate the volume of IBA1+ cells and the colocalization of IBA1+ and KI67+ cells inside the box.
Microglia with phagocytic cups and the number of phagocytic cups were quantified manually in 40X confocal images of the ML in P14 cerebella using the Cell Counter plugin from ImageJ. Two to three images per mouse (n=4) were used for the quantifications in confocal images. Quantitative analysis of 3D microglia morphology was performed using the Surface rendering tool for cell volume and the Filament Tracer for process volume and ramification; both tools are part of the Bitplane Imaris software. Confocal z-stack images of ~50μm were taken, and twenty IBA1+ cells (5-6 per mouse, n=4 mice) were segregated using 3D surface rendering for use with the Filament Tracer tool, which determines process length, volume, and ramification. The 3D surface rendering was also used to segregate IBA1+ microglia and to quantify CD68+ phagosomes inside microglia, or VGLUT2+ synaptic terminals contacted or engulfed by microglia, by using the "Mask all" tool, which creates a new channel of the immunostained areas that are inside the created surface (in this case the IBA1 surface) and clears all the fluorescence that does not overlap or contact the rendered surface. The sum of the CD68 or VGLUT2 volume contacting or inside the IBA1 surface was calculated by the software and used for the data analysis presented here. The quantification of VGLUT2 volume in the ML of P14 mice was performed by cropping the ML region (300μm high × 400μm wide) in 40X confocal images and creating a 3D surface rendering that was used to obtain the sum of the volume of all the VGLUT2+ inputs inside the cropped image.
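Conceptually, the Imaris "Mask all" step above amounts to intersecting one channel's voxels with a rendered surface and summing the surviving volume. A minimal sketch of that idea in NumPy, using tiny hypothetical binary volumes rather than real Imaris data (the array shapes, voxel size, and signal positions are all invented for illustration):

```python
import numpy as np

# Hypothetical binary volumes (z, y, x); True = voxel above threshold.
# In the real analysis these come from the rendered IBA1 surface and
# the VGLUT2 channel; here they are toy arrays.
iba1_surface = np.zeros((4, 4, 4), dtype=bool)
iba1_surface[1:3, 1:3, 1:3] = True          # a 2x2x2 "microglia" block

vglut2 = np.zeros((4, 4, 4), dtype=bool)
vglut2[2, 2, :] = True                      # a line of VGLUT2+ voxels

# "Mask all" analogue: keep only VGLUT2 signal inside the IBA1 surface.
masked = vglut2 & iba1_surface

voxel_volume_um3 = 0.5 * 0.5 * 1.0          # assumed x*y*z voxel size (μm³)
engulfed_volume = masked.sum() * voxel_volume_um3
print(engulfed_volume)
```

Two of the four VGLUT2+ voxels fall inside the surface, so the reported "engulfed" volume is their count times the voxel volume; Imaris performs the same bookkeeping on real surfaces and anisotropic voxels.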
To quantify the volume of VGLUT2+ inputs contacted or engulfed by microglia in the ML, the "Mask all" tool, which creates a new channel of the IBA1 immunostained areas that are in contact with or inside the created surface (in this case the VGLUT2 surface), was used; then a new surface was created for the overlapping IBA1/VGLUT2 inputs and the calculated volume sum values were collected. The percentage of CALB+ PCs with VGLUT2+ inputs and the number of VGLUT2+ inputs per cell were quantified manually in 1μm Z-sections from 40X confocal images using the Cell Counter plugin from the ImageJ (1.47d) software. Two to three images per mouse (n=4) were used for these quantifications.

Quantitative Real-Time Polymerase Chain Reaction (PCR) array
To measure gene expression changes in mouse innate immune response genes in the cerebellum of WT and Npc1 nmf164 mice, the TaqMan™ Array Mouse Immune Response real-time PCR array was used. Cerebella from P30 WT (n=4), P30 Npc1 nmf164 (n=4), and P90 Npc1 nmf164 mice (n=3) were collected after mice were perfused with 1X PBS and treated overnight in RNAlater for long-term storage. For RNA extraction, 30mg of cerebellum from each mouse was used, and total cellular RNA was extracted and purified from each individual tissue according to the TRIzol™ Plus RNA Purification Kit (ThermoFisher) manufacturer protocol; RNA concentration and purity were determined on the Qubit 2.0 Fluorometer using the RNA quantification kit (Invitrogen). RNA (1 μg) was reverse transcribed to cDNA using the High-Capacity cDNA Reverse Transcription Kit (ThermoFisher). Real-time quantitative PCR was performed using the 96-well TaqMan™ Array Mouse Immune Response (ThermoFisher) according to the manufacturer protocol, and one PCR array plate per mouse was used.
Briefly, cDNA samples were diluted appropriately, 540 μl of cDNA template was added to 540 μl of 2X real-time quantitative reaction mixture (TaqMan™ Fast Advanced Master Mix, ThermoFisher), and 10 μl of reaction mix plus cDNA was added to each well of the PCR array, containing gene-specific primers. Conditions for the real-time quantitative PCR reaction were as follows: UNG incubation at 50°C for 2 min, enzyme activation at 95°C for 20 seconds, and 40 amplification cycles of denaturing at 95°C for 3 s and annealing/extension at 60°C for 30 s, followed by acquisition of the fluorescence signal. Data analysis was based on the ΔΔCt method, with normalization of raw signal data to housekeeping genes incorporated on the TaqMan™ Array Mouse qPCR plate.

Statistical Analysis
Data were analyzed using GraphPad Prism software. Significance was calculated using unpaired t tests for comparisons between two groups. p-values are reported as provided by the GraphPad Prism software, and significance was determined at p-values less than 0.05.

F) The number of IBA1+/KI67+ cells is significantly higher in the CbM of Npc1 nmf164 mice. G) The percentage of KI67+ microglia is significantly higher in the CbM of Npc1 nmf164 mice. H) IBA1+ and KI67+ cells in the developing cerebellar cortex at P4. I) The number of IBA1+/KI67+ cells is significantly lower in the IGL of Npc1 nmf164 mice.
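The ΔΔCt normalization used for the PCR array data reduces to a short calculation; a minimal sketch with made-up Ct values (the cycle numbers below are illustrative, not values from this study):

```python
# ΔΔCt relative quantification with illustrative Ct values.
# Ct = cycle threshold; a lower Ct means more transcript.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Return 2^-ΔΔCt relative expression (treated vs. control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, control
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# Example: after normalizing to a housekeeping gene, the target crosses
# threshold 2 cycles earlier in the mutant than in the control,
# i.e. a 4-fold upregulation.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```

Identical ΔCt values in both conditions give a fold change of 1.0, which is the baseline against which up- or downregulation on the array plate is scored.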
TGFβ-induced switch from adipogenic to osteogenic differentiation of human mesenchymal stem cells: identification of drug targets for prevention of fat cell differentiation

Background
Patients suffering from osteoporosis show an increased number of adipocytes in their bone marrow, concomitant with a reduction in the pool of human mesenchymal stem cells (hMSCs) that are able to differentiate into osteoblasts, thus leading to suppressed osteogenesis.

Methods
In order to be able to interfere with this process, we have investigated in-vitro culture conditions whereby adipogenic differentiation of hMSCs is impaired and osteogenic differentiation is promoted. By means of gene expression microarray analysis, we have investigated genes which are potential targets for prevention of fat cell differentiation.

Results
Our data show that BMP2 promotes both adipogenic and osteogenic differentiation of hMSCs, while transforming growth factor beta (TGFβ) inhibits differentiation into both lineages. However, when cells are cultured under adipogenic differentiation conditions, which contain cAMP-enhancing agents such as IBMX or PGE2, TGFβ promotes osteogenic differentiation, while at the same time inhibiting adipogenic differentiation. Gene expression and immunoblot analysis indicated that IBMX-induced suppression of HDAC5 levels plays an important role in the inhibitory effect of TGFβ on osteogenic differentiation. By means of gene expression microarray analysis, we have investigated genes which are downregulated by TGFβ under adipogenic differentiation conditions and may therefore be potential targets for prevention of fat cell differentiation. We thus identified nine genes for which FDA-approved drugs are available.
Our results show that drugs directed against the nuclear hormone receptor PPARG, the metalloproteinase ADAMTS5, and the aldo-keto reductase AKR1B10 inhibit adipogenic differentiation in a dose-dependent manner, although in contrast to TGFβ they do not appear to promote osteogenic differentiation.

Conclusions
The approach chosen in this study has resulted in the identification of new targets for inhibition of fat cell differentiation, which may not only be relevant for prevention of osteoporosis, but also of obesity.

Background
Human mesenchymal stem cells (hMSCs) from bone marrow have the ability to differentiate into cells from multiple lineages, including osteoblasts and adipocytes. The commitment of hMSCs towards either the osteogenic or adipogenic lineage depends on the local availability of growth factors and hormones, which are able to activate lineage-specific transcriptional regulators [1]. Patients suffering from osteoporosis show an increased number of adipocytes in their bone marrow, concomitant with a reduction in the pool of hMSCs that are able to differentiate into osteoblasts, thus leading to suppressed osteogenesis [2,3]. It is still unclear to what extent this age-related increase in differentiation of hMSCs towards adipocytes results from intrinsic changes in the stem cells or from alterations in the microenvironment of the bone marrow [1,4]. These observations have recently stirred increasing interest in anabolic therapies for osteoporosis, whereby osteogenic differentiation of hMSCs is stimulated by preventing adipogenic differentiation [5,6]. Most information about the signaling pathways that are required for osteogenic and adipogenic differentiation of hMSCs has come from in-vitro studies, whereby cells are treated with specific combinations of growth factors and hormones [7]. Differentiation into both lineages requires treatment of monolayer cells with dexamethasone (DEX) and is enhanced by the presence of bone morphogenetic proteins (BMPs).
Osteogenic differentiation is obtained by the additional presence of ascorbate and β-glycerophosphate, whereas adipogenic differentiation requires treatment with insulin, 3-isobutyl-1-methylxanthine (IBMX), and the PPARG activator rosiglitazone. Our previous data have indicated that under these experimental conditions at least 75 % of the hMSCs differentiate into either osteoblasts or adipocytes [7]. Activation of the RUNX2 nuclear transcription factor appears to be essential for the osteogenic pathway of hMSCs, while the adipogenic pathway requires the transcriptional activity of the PPARG nuclear hormone receptor [1]. Evidence has been presented that transcriptional regulators promoting differentiation into one of these lineages actively suppress differentiation into the other lineage [8]. It has therefore been postulated that a reciprocal relationship exists between the osteogenic and adipogenic pathways, implying that impaired adipogenic differentiation of hMSCs may result in enhanced osteogenic differentiation [3]. Multiple regulators are known that affect the choice between the osteogenic and adipogenic lineage. Most notably, activation of the WNT/β-catenin pathway promotes osteogenic differentiation and inhibits adipogenic differentiation of hMSCs [9]. BMP enhances the outgrowth of both osteoblasts and adipocytes, while the related cytokine transforming growth factor beta (TGFβ) inhibits both osteogenic and adipogenic differentiation, at least when tested under these in-vitro conditions [10,11]. Moreover, Kim et al. [12] have shown that cAMP-activated protein kinases regulate the differentiation choice of hMSCs between osteogenesis and adipogenesis. In order to find drugs affecting the choice between osteogenic and adipogenic differentiation of hMSCs, we have optimized the adipogenic culture conditions in such a way that, within the same culture, a fraction of the cells differentiates into osteoblasts and another fraction into adipocytes. 
Our results show that addition of TGFβ to these cultures fully blocks adipogenic differentiation but, surprisingly, enhances osteogenic differentiation. Analysis of the individual components in the culture medium showed that the presence of the phosphodiesterase inhibitor IBMX, which stabilizes cAMP levels in the cell, converted TGFβ from an inhibitor to an enhancer of osteogenic differentiation. Based on these observations we have set up a gene expression microarray experiment to identify genes that are downregulated by TGFβ under these optimized adipogenic culture conditions. The genes identified in this way seem to play an important role in adipogenesis, since inhibitors of the corresponding proteins were found to be able to block adipogenic differentiation of hMSCs. Some of these inhibitors have already received FDA approval for the treatment of various diseases, and may therefore be good candidates for therapeutic drug repurposing in order to treat patients suffering from diseases such as obesity and osteoporosis.

Methods
Culture and differentiation of hMSCs
hMSCs harvested from normal human bone marrow were purchased from Lonza (Walkersville, MD, USA) at passage 2. Cells were expanded for no more than five passages in "mesenchymal stem cell growth medium" (MSCGM; Lonza) at 37°C in a humidified atmosphere containing 7.5 % CO2. Studies were performed with hMSCs from three different donors, encoded 5F0138, 6F4085, and 7F3458.

Alkaline phosphatase assays
To quantify alkaline phosphatase (ALP) enzymatic activity as a measure of osteogenic differentiation, hMSCs were seeded in 96-well tissue culture plates as already described, after which cells were differentiated for 7 days in osteogenic differentiation medium. ALP enzymatic activity was quantified by measuring the formation of p-nitrophenol from p-nitrophenyl phosphate (PNPP; Sigma-Aldrich), as described previously [13].
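Downstream, these ALP absorbance readings are expressed per unit of a cell-number proxy (the Neutral Red signal), which is a simple ratio of blank-corrected absorbances. A toy sketch with hypothetical plate-reader values (the function name and all numbers are illustrative, not from the study):

```python
# Normalize an ALP reading (p-nitrophenol absorbance) to a cell-number
# proxy (Neutral Red absorbance at 540 nm). All values are hypothetical
# plate-reader readings; blanks are empty-well controls.
def normalized_alp(alp_abs, alp_blank, nr_abs, nr_blank):
    """Blank-corrected ALP signal per unit of Neutral Red signal."""
    alp = alp_abs - alp_blank
    cells = nr_abs - nr_blank
    if cells <= 0:
        raise ValueError("Neutral Red signal at or below blank")
    return alp / cells

# A well with more cells but the same raw ALP reading reports a
# proportionally lower per-cell activity.
a = normalized_alp(0.80, 0.05, 0.30, 0.05)
b = normalized_alp(0.80, 0.05, 0.55, 0.05)
print(a, b)
```

This is why two wells with identical raw ALP signal can still differ in reported osteogenic activity once cell density is taken into account.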
ALP enzymatic activity was corrected for differences in cell number, as determined by a Neutral Red assay. Cells were incubated with Neutral Red dye diluted in PBS for 1 hour at 37°C. After washing with PBS, the dye was extracted from the cells using 0.05 M NaH2PO4 in 50 % EtOH, after which the absorbance was measured at 540 nm. For histochemical analysis of ALP activity, cells were seeded in 48-well tissue culture plates and differentiated for 7 days in osteogenic differentiation medium. Subsequently, cells were fixed in 3.7 % formaldehyde/PBS for 10 min at 22°C. After washing with PBS, cells were incubated for 1 hour at 37°C in a mixture of 0.1 mg/ml naphthol AS-MX phosphate (Sigma-Aldrich), 0.5 % N,N-dimethylformamide, 2 mM MgCl2, and 0.6 mg/ml Fast Blue BB salt (Sigma-Aldrich) in 0.1 M Tris-HCl, pH 8.5.

Mineralization assay
To measure calcium deposition in the extracellular matrix, hMSCs were seeded in 24-well tissue culture plates and cultured for 13 days under osteogenic or adipogenic differentiation conditions, as indicated. Cells were subsequently washed twice with PBS, after which calcium was extracted from the extracellular matrix by treatment with 150 μl of 0.5 M HCl. Calcium concentrations were measured in a colorimetric assay using o-cresolphthalein complexone as a chromogenic agent, according to the protocol provided by the manufacturer (Sigma-Aldrich).

Oil Red O staining and triglyceride assay
To quantify adipogenic differentiation, lipid droplets were stained in mature adipocytes obtained after treatment of hMSCs for 9 days in adipogenic differentiation medium. Cells were first washed twice with PBS, fixed for 30 min with 1 % formaldehyde in PBS, and then washed once with water and twice with 60 % isopropanol. Cells were then stained for 1 hour with 0.3 % w/v Oil Red O (Sigma-Aldrich) in 60 % isopropanol. Subsequently, cells were washed once with 60 % isopropanol and twice with distilled water.
For quantification of Oil Red O staining, samples were treated with 100 % isopropanol and the absorbance was measured at 530 nm. The amount of triglycerides stored in lipid droplets of mature adipocytes was quantified after treatment of hMSCs for 9 days in adipogenic differentiation medium. Cells in 96-well tissue culture plates were washed twice with PBS. Triglycerides were extracted from the lipid droplets by freezing the cells in 50 μl of a buffer containing 25 mM Tris-HCl (pH 7.5) and 1 mM EDTA, followed by addition of 40 μl tert-butanol and 10 μl methanol. Samples were heat-dried at 55°C, after which they were resuspended in Triglycerides LiquiColor® mono reagents (HUMAN GmbH, Wiesbaden, Germany). Triglycerides were quantified by measuring the absorbance at 490 nm.

RNA isolation and real-time quantitative RT-PCR
RNA was isolated as described by Piek et al. [13]. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using random hexamer primers and SuperScript™ II reverse transcriptase (Invitrogen, Carlsbad, CA, USA). Subsequently, cDNA was amplified in a quantitative real-time PCR, performed using Power SYBR Green® PCR Mastermix (Applied Biosystems, Foster City, CA, USA) on an Applied Biosystems 7500 Real-Time Fast PCR System. For each gene, PCR was carried out in duplicate and mean expression values were calculated relative to the mean expression level of the housekeeping gene RPS27A (ribosomal protein S27a). Human gene-specific PCR primers used included the following:

Immunoblotting
hMSCs were seeded at 4.0 × 10^4 cells per cm^2 in six-well plates and cultured for 24 hours in the indicated differentiation media. Cells were then lysed in 250 μl of RIPA lysis buffer per well. Then 5 μl of reducing sample buffer was added to 25 μl of lysate, heated to 95°C, and the samples were subsequently loaded onto an 8 % SDS-PAGE gel.
HDAC5 was detected on blots using a goat polyclonal anti-HDAC5 antibody (G-18, 1/200 dilution) raised against the N-terminus of human HDAC5 (sc-5250; Santa Cruz Biotechnology, Dallas, TX, USA), followed by an HRP-labeled polyclonal secondary antibody. An antibody against α-tubulin (Sigma) served as a loading control.

Gene expression microarray analysis
To identify genes that are regulated during osteogenic and adipogenic differentiation of hMSCs, a total of 54 samples (each containing 800,000 cells per 20 cm^2) were seeded in PM and grown for 24 hours. Subsequently the medium was exchanged for differentiation medium, consisting of PM with 10^-6 M DEX, 10 μg/ml insulin, 10^-7 M rosiglitazone, and 50 ng/ml BMP2 (B). In addition, either 5 ng/ml TGFβ (BT), 250 μM IBMX (BI), or 5 ng/ml TGFβ plus 250 μM IBMX (BTI) was added. Samples were incubated for 0, 1, 2, 3, or 7 days. Experiments for each group and time point were carried out as three biological replicates, while the untreated control group (time 0) consisted of six samples. RNA was isolated as already described and hybridized onto Affymetrix HG-U133 Plus 2.0 microarrays according to existing protocols [13]. Microarray data were analyzed with the R language for statistical computing using appropriate Bioconductor packages (http://bioconductor.org/) for reading, normalizing, and statistically evaluating the data, followed by annotation of the gene sets and integration of parallel data sources. Briefly, the analysis started with a careful quality assessment of the dataset using the automatic R pipeline AffymetrixQC [14], which was customized and run locally.
All 54 microarrays passed the quality control and were included in the analysis, which consisted of Robust Multi-array Average (RMA) normalization [15], followed by statistical analysis to find differentially expressed genes using Linear Models for Microarray Data (LIMMA) [16], and subsequent functional annotation and enrichment analysis using the online resource Database for Annotation, Visualization and Integrated Discovery (DAVID) [17,18]. Finally, the list of differentially expressed genes for the contrasts of interest was crossed with the information from the DrugBank database [19] in order to derive the final list of candidate genes for experimental testing. All R scripts used for this analysis are available upon request. The microarray data have been deposited in NCBI's Gene Expression Omnibus [GEO:GSE84500] (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE84500).

Statistical analysis
Student's t test was used for statistical comparisons. Numeric data are represented as mean ± standard deviation of triplicate experiments, unless stated otherwise.

Results
TGFβ induces hMSCs to switch from adipogenic to osteogenic differentiation
BMPs have been described as positive regulators of both osteogenesis and adipogenesis [8,10]. In order to study the effect of BMP2 on differentiation of hMSCs in more detail, we cultured these cells in either osteogenic differentiation medium or adipogenic differentiation medium in the absence and presence of BMP2. Figure 1a shows that addition of BMP2 has only a small stimulatory effect on adipogenic differentiation of hMSCs, as measured by triglyceride production. On the other hand, BMP2 strongly enhanced osteogenic differentiation, as indicated by increased ALP activity (Fig. 1b). The role of TGFβ in adipogenic and osteogenic differentiation of hMSCs is still unclear.
Figure 1a also shows that adding TGFβ to adipogenic differentiation medium blocks adipogenic differentiation of hMSCs in a dose-dependent manner, both in the absence and in the additional presence of BMP2. Figure 1b shows that addition of TGFβ to osteogenic differentiation medium results in a similar inhibition of osteogenic differentiation, both in agreement with previous data [10,11]. However, when hMSCs are treated with adipogenic differentiation medium (which contains a 10-fold higher concentration of DEX than osteogenic differentiation medium) in combination with BMP2, a fraction of the cells differentiate into bone cells and a fraction into fat cells within the same well, as shown in Fig. 1c by histological staining. Addition of TGFβ under these conditions resulted in a dose-dependent increase in the number of ALP-positive bone cells, with a concomitant reduction in Oil Red O-positive fat cells. These data show that TGFβ blocks bone cell differentiation under osteogenic differentiation conditions, but enhances bone cell differentiation under adipogenic differentiation conditions. It can therefore be concluded that, under the experimental conditions used in Fig. 1c, TGFβ induces a switch from adipogenic to osteogenic differentiation.

IBMX is a critical component in the TGFβ-mediated switch in cell fate

In order to investigate which component of the adipogenic differentiation medium allows osteogenic differentiation in the presence of TGFβ, we added the adipogenic differentiation medium components insulin, IBMX, and rosiglitazone successively to hMSCs grown in osteogenic differentiation medium with BMP2 and TGFβ. Figure 2a shows that the inhibition of osteogenic differentiation by TGFβ could not be prevented by insulin or rosiglitazone, but was largely overcome upon addition of IBMX.
A parallel experiment showed that omission of IBMX from adipogenic differentiation medium was sufficient to prevent the TGFβ-induced enhancement of osteogenic differentiation, while removal of insulin or rosiglitazone was without effect (Fig. 2b). IBMX is a phosphodiesterase inhibitor, which prevents degradation of cAMP. In order to show that enhanced cAMP levels play a role in the TGFβ-mediated switch in cell fate, we tested the effect of PGE2, a known activator of adenylate cyclase. Figure 2c shows that PGE2 is able to overcome TGFβ-induced inhibition of osteogenic differentiation of hMSCs in a dose-dependent manner, to a similar extent as IBMX. Osteogenic differentiation requires not only matrix maturation, as indicated by alkaline phosphatase expression, but also matrix mineralization, as indicated by calcium deposition. Figure 2d shows that under osteogenic differentiation conditions BMP is required for calcium deposition by hMSCs, while both BMP and TGFβ are required under adipogenic differentiation conditions. These data show that TGFβ is able to induce fully differentiated, mineralized osteoblasts under adipogenic differentiation conditions.

IBMX suppresses HDAC5 expression

Previous studies [20] have shown that TGFβ-mediated inhibition of osteogenic differentiation can be overcome by trichostatin A, an inhibitor of class I and II mammalian histone deacetylases (HDACs). We confirmed this observation upon incubating hMSCs in osteogenic differentiation medium with BMP2 and TGFβ (data not shown). Kang et al. [20] have presented evidence that HDAC4/5 can interact with TGFβ-activated SMAD3, resulting in a complex that represses transcription of the bone marker gene osteocalcin (BGLAP). In human adipose-derived mesenchymal stem cells, trichostatin A impaired PPARG activity, at least under osteogenic differentiation conditions [21]. Still, the role of HDACs in adipogenesis is far from clear, since HDAC inhibitors may either enhance or inhibit adipogenic differentiation [22].
We have studied the expression of HDAC5 under conditions in which TGFβ enhances osteogenic differentiation of hMSCs. Figure 3a shows that HDAC5 mRNA levels are slightly upregulated 24-48 hours after addition of osteogenic differentiation medium, particularly in the presence of BMP2 and TGFβ (OBT). A strong reduction in HDAC5 gene expression was observed, however, when IBMX was added to the medium (OBTI), thus creating conditions under which adipogenic differentiation is prevented and osteogenic differentiation is promoted. Less reduction in HDAC5 mRNA levels was observed in adipogenic differentiation medium alone. We could not detect mRNA expression of HDAC4 in these cells. Figure 3b shows that HDAC5 levels are similarly regulated at the protein level. Western blot analysis revealed the highest expression level in hMSCs treated with osteogenic differentiation medium containing BMP2 and TGFβ (OBT), whereas almost no protein was detected in the additional presence of IBMX (OBTI). These data indicate that under conditions whereby TGFβ enhances osteogenic differentiation, no HDAC5 is available to prevent expression of bone-specific genes.

Fig. 1 Effect of TGFβ on adipogenic and osteogenic differentiation of hMSCs. a Triglyceride production by hMSCs, 9 days after incubation with adipogenic differentiation medium in the absence (white shading) and presence (black shading) of 125 ng/ml BMP2 and increasing concentrations of TGFβ1. The enhancing effect of BMP2 is not significant, while the inhibitory effect of TGFβ is significant (p < 0.01) above 1 ng/ml. b Alkaline phosphatase (ALP) activity of hMSCs, 7 days after incubation with osteogenic differentiation medium in the absence (white shading) and presence (black shading) of 125 ng/ml BMP2 and increasing concentrations of TGFβ1. The enhancing effect of BMP2 is significant (p < 0.01) at all data points, while the inhibitory effect of TGFβ is significant (p < 0.01) above 1 ng/ml. c TGFβ-induced switch from adipogenic to osteogenic differentiation. ALP staining (after 7 days) and Oil Red O (ORO) staining (after 9 days) following incubation of hMSCs in adipogenic differentiation medium containing 125 ng/ml BMP2 and the indicated concentrations of TGFβ1. BMP bone morphogenetic protein, TGFβ transforming growth factor beta

Altered gene expression during TGFβ-mediated switch in cell fate

The presented data show that, upon incubation of hMSCs in adipogenic differentiation medium containing BMP2 and IBMX, a fraction of the cells differentiate into ALP-positive bone cells and a fraction into Oil Red O-positive fat cells. Subsequent addition of TGFβ reduces the number of fat cells and enhances the number of bone cells in a dose-dependent manner (see Fig. 1c). This observation implies that under these experimental conditions addition of TGFβ will stimulate the expression of osteoblast genes and reduce the expression of adipocyte genes. In order to identify genes involved in this lineage switch, we performed gene expression microarray analysis on hMSCs treated for 1, 2, 3, or 7 days with differentiation medium containing DEX, insulin, and rosiglitazone, using as supplements BMP2 and combinations of IBMX and TGFβ. Untreated cells at day 0 served as the control. In order to verify the extent of differentiation of the cells used for the microarray experiments, we used real-time PCR to measure mRNA expression levels of the osteoblast-specific alkaline phosphatase (ALPL) gene and of the adipocyte-specific adiponectin (ADIPOQ) gene, a target gene of the adipogenic master gene PPARG. This analysis was carried out 7 days after incubation with BMP2 (B), BMP2 + TGFβ (BT), BMP2 + IBMX (BI), or BMP2 + TGFβ + IBMX (BTI). Figure 4a shows that under these conditions IBMX, in combination with TGFβ, enhanced ALPL expression. No inhibitory effect of TGFβ was observed on bone cell differentiation, in agreement with the data of Fig. 2b. ADIPOQ expression was strongly enhanced upon IBMX addition, but reduced again to very low levels in the additional presence of TGFβ (Fig. 4b). These data confirm that, at the gene expression level, TGFβ induces a switch from adipocytes to osteoblasts.

Fig. 2 a Osteogenic differentiation of hMSCs in osteogenic differentiation medium supplemented with or without 125 ng/ml BMP2, 2 ng/ml TGFβ1, 10 μg/ml insulin, 500 μM IBMX, or 10⁻⁷ M rosiglitazone. ALP activity is significantly higher (p < 0.001) in medium with BMP2 and TGFβ in the presence of IBMX than in the absence of IBMX. b Osteogenic differentiation of hMSCs in adipogenic differentiation medium supplemented with or without 125 ng/ml BMP2 and 2 ng/ml TGFβ1, and following omission of 10 μg/ml insulin, 500 μM IBMX, or 10⁻⁷ M rosiglitazone. ALP activity is significantly higher (p < 0.01) in medium with all supplements than in the absence of IBMX. c Effect of PGE2, added at the indicated nanomolar concentrations, on osteogenic differentiation of hMSCs in osteogenic differentiation medium. A comparison is made with the effects of BMP2 (125 ng/ml), TGFβ (2 ng/ml), and IBMX (500 μM). Enhancement of ALP activity is significant (p < 0.05) at concentrations of 10 nM PGE2 and above. d Effect of BMP2 (125 ng/ml) and TGFβ (2 ng/ml) on total Ca²⁺ deposition (μg) in a six-well plate well (10 cm²) by hMSCs, cultured for 13 days in either osteogenic or adipogenic differentiation medium. Ca²⁺ deposition is significantly enhanced by BMP2 alone in osteogenic differentiation medium (p < 0.01) and by BMP2 + TGFβ in adipogenic differentiation medium (p < 0.05). ALP alkaline phosphatase, BMP bone morphogenetic protein, IBMX 3-isobutyl-1-methylxanthine
The data from the 54 microarray samples were normalized using RMA [15], followed by LIMMA [16] statistical analysis to identify differentially expressed genes and functional enrichment analysis using DAVID [17,18]. Of the 54,675 probes on the chip, 7755 probes appeared differentially expressed at any time point or treatment, compared with the t = 0 control, based on a q value of 10⁻⁵ and a minimum log₂-fold change of 1. Our primary interest was to identify genes that control the TGFβ-induced switch from adipogenic to osteogenic differentiation. In the current experiment, genes downregulated by TGFβ are potentially involved in adipogenic differentiation and genes upregulated by TGFβ in osteogenic differentiation. A comparison between the samples BTI and BI resulted in 2911 differentially expressed probes at any time point, of which 1176 probes (735 genes) were differentially expressed at the early time points (days 1 and 2), when cells become committed to lineage-specific differentiation. Raising the minimum log₂-fold change from 1 to 2 reduced the number of differentially expressed genes between BTI and BI to 109, of which 25 were established drug targets according to the DrugBank database (www.drugbank.ca). Visual inspection of the time course of expression of these genes identified nine genes which showed the desired dynamics, that is, modulation at early time points and higher expression in BI than in BTI, as presented in Fig. 5.

Analysis of adipogenic differentiation inhibitors

For the target genes thus identified, as presented in Table 1, we tested whether commercially available inhibitors could block the adipogenic differentiation of hMSCs under conditions similar to those used for the microarray analysis. We first confirmed the established observation that adipogenic differentiation does not occur in the absence of a PPARG agonist, such as rosiglitazone; an antagonist of this receptor was therefore not tested further.
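The two-stage probe filter described above (a q-value cutoff of 10⁻⁵ combined with a minimum absolute log₂-fold change of 1, later tightened to 2) can be sketched as follows. This is an illustrative Python stand-in with synthetic probe records, not the authors' R/LIMMA code:

```python
# Synthetic probe records standing in for LIMMA output (q = adjusted p-value,
# log2fc = log2-fold change for the contrast of interest, e.g. BTI vs BI).
probes = [
    {"id": "p1", "q": 1e-7, "log2fc": 2.4},
    {"id": "p2", "q": 1e-6, "log2fc": -1.3},
    {"id": "p3", "q": 1e-3, "log2fc": 3.0},   # fails the q-value cutoff
    {"id": "p4", "q": 1e-8, "log2fc": 0.4},   # fails the fold-change cutoff
]

def differentially_expressed(records, q_max=1e-5, min_abs_log2fc=1.0):
    """Keep probes passing both the significance and fold-change cutoffs."""
    return [r for r in records
            if r["q"] < q_max and abs(r["log2fc"]) >= min_abs_log2fc]

loose = differentially_expressed(probes)                       # |log2FC| >= 1
strict = differentially_expressed(probes, min_abs_log2fc=2.0)  # |log2FC| >= 2
print([r["id"] for r in loose])   # ['p1', 'p2']
print([r["id"] for r in strict])  # ['p1']
```

Tightening `min_abs_log2fc` from 1 to 2 mirrors the reduction from 2911 differentially expressed probes to 109 genes reported in the text, though of course on toy data here.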
We did, however, also observe that inhibitors of ADAMTS5 and AKR1B10 prevented adipogenic differentiation of hMSCs, while inhibitors of AGTR1, BDKRB2, and KCNK3 were without effect. Earlier studies in the literature have already indicated that inhibitors of metalloproteinases, including Batimastat, have a negative influence on adipogenic differentiation [23], but this is the first report of an inhibitory effect of an aldo-keto reductase inhibitor. Figure 6a shows the effect of Batimastat on adipogenic and osteogenic differentiation of hMSCs under the experimental conditions used for the microarray analysis. TGFβ and DMSO, the vehicle for Batimastat, were used as a positive and a negative control, respectively. The data show that Batimastat inhibits adipogenic differentiation in a dose-dependent manner but, in contrast to TGFβ, this inhibition does not result in a concomitant enhancement of osteogenic differentiation. A similar observation was made for the AKR1B10 inhibitors Sorbinil (not shown) and Zopolrestat. The quantitative analysis presented in Fig. 6b shows that Zopolrestat actively inhibits adipogenic differentiation of hMSCs in a dose-dependent manner, although to a lesser extent than Batimastat.

Discussion

Osteoporosis is a debilitating disease which affects tens of millions of people worldwide. Postmenopausal women are particularly at risk of developing this disease, since reduced estrogen activity enhances the bone-resorbing activity of osteoclasts. Additionally, aging in general results in a significant reduction in the number and function of bone-forming osteoblasts [24]. Treatment of patients with osteoporosis may include hormone replacement therapies and the use of bisphosphonates to inhibit the activity of osteoclasts. Moreover, vitamin D3 or parathyroid hormone can be prescribed to enhance new bone formation [25]. Recent years have seen an increasing interest in the interaction between fat and bone cells in the bone marrow.
Patients suffering from osteoporosis show an increased number of adipocytes in their bone marrow, concomitant with a reduction in the pool of hMSCs that are able to differentiate into osteoblasts [2,3]. In addition, adipokines secreted by these adipocytes may further enhance osteoclast activity [26], while an increasing fat content of the bone marrow may also impair the bone cell niche required for the functioning of hematopoietic stem cells [27]. These considerations show that there is a great need for developing drugs that prevent the differentiation of hMSCs into fat cells and may thereby enhance their differentiation into bone cells. Likewise, drugs that prevent adipogenic differentiation could also be useful in the battle against obesity. In the present study we have shown that, under proper in-vitro conditions, TGFβ is able to stimulate osteogenic differentiation and prevent adipogenic differentiation within the same culture. These culture conditions require not only the presence of DEX and BMP, but also a cAMP-enhancing stimulus such as IBMX or PGE2. Because of its strong side effects, TGFβ is not suited for in-vivo applications, but our results show that the current in-vitro system can be used for identifying genes whose expression is repressed following a TGFβ-induced switch from adipogenic to osteogenic differentiation of hMSCs. By concentrating on those genes for which FDA-approved drugs are available, we have identified nine candidate genes that could be tested for inhibition of adipogenic differentiation of hMSCs. Using this approach we have identified the nuclear hormone receptor PPARG, the metalloproteinase ADAMTS5, and the aldo-keto reductase AKR1B10 as potential drug targets for treatment of osteoporosis and obesity. TGFβ is a highly pleiotropic cytokine which plays an important role in many physiological processes, as well as in cancer [28,29].
Because of its strong inhibitory effect on the immune system, systemic treatment with TGFβ is not considered a realistic option, except for protection against autoimmune diseases [30]. Studies in laboratory animals have shown that TGFβ treatment can result in skin fibrosis and toxicity, without displaying significant antitumor effects [31,32]. Local injection of TGFβ into the knee joint has been shown to enhance cartilage integrity, leading to prevention of osteoarthritis [33].

(Fig. 6 caption) Oil Red O staining was measured at 530 nm on hMSCs treated for 9 days with adipogenic differentiation medium. TGFβ (5 ng/ml) was used as a control. Significance levels are indicated relative to untreated controls (Cont). *p < 0.05; **p < 0.01; ***p < 0.001. DMSO dimethyl sulfoxide, TGFβ transforming growth factor beta

Our present study shows that the effect of TGFβ on osteogenic differentiation of hMSCs strongly depends on the culture conditions. In osteogenic differentiation medium TGFβ inhibits bone cell differentiation, while it promotes this process in adipogenic differentiation medium. Since the local conditions in the fatty bone marrow of osteoporosis patients are not well defined, it is difficult to predict whether injection of TGFβ would result in a net enhancement or a decrease of functional osteoblasts. Furthermore, current clinical trials directed towards TGFβ are aimed at inhibiting its activity in cancer patients, since overactivation of TGFβ-induced pathways has been associated with cancer progression [28,29]. Among the genes that were downregulated at early time points following TGFβ treatment of hMSCs in adipogenic differentiation medium is PPARG. This observation is not unexpected, since ligand-induced activation of PPARG (e.g., by rosiglitazone) is known to be essential for adipogenic differentiation.
We have studied the role of PPARG in this process not by using the PPARG-specific antagonist GW9662 (see Table 1), but by omitting rosiglitazone from the culture medium, which resulted in a complete block of fat cell formation (data not shown). The role of PPARG agonists in promoting obesity is well established, yet at the same time these compounds have been shown to be clinically active as antidiabetic drugs [34]. Because of these multiple faces of PPARG, inhibition of this nuclear hormone receptor does not seem an attractive approach for the prevention of fatty bone marrow formation in osteoporosis patients. The second gene for which inhibitors were found to prevent fat cell differentiation is ADAMTS5, a disintegrin and metalloproteinase with thrombospondin motifs [35]. Its main function is the cleavage of the aggrecan core protein, in which it appears more efficient than other matrix metalloproteinases (MMPs) [36]. Recent studies on Adamts5⁻/⁻ mice, however, have indicated that ADAMTS5 may not be responsible for aggrecan proteolysis, but instead regulates glucose uptake by mediating the endocytotic trafficking of LRP1 and GLUT4 [37]. Other studies have shown that deletion of active ADAMTS5 prevents cartilage degradation in a murine model of osteoarthritis [38]. Batimastat and the related drug Marimastat were primarily developed as antineoplastic drugs. They prevent angiogenesis by blocking metalloproteinases, including MMPs and ADAM family members [39]. Both drugs performed poorly in clinical trials on metastatic breast cancer patients [40] and were therefore never marketed. Previous studies have shown that Batimastat blocks the enzymatic activity of MMP2 and MMP9, and prevents the differentiation of mouse 3T3-F442A preadipocytes into fat cells [23,41]. Our present results show that Batimastat also prevents adipogenic differentiation of hMSCs.
Long-term studies on Batimastat and Marimastat in humans are lacking, and it therefore cannot be concluded whether these drugs have real potential for the treatment of patients with osteoporosis or obesity. MMPs are not only involved in cancer progression, but also play an important role in inflammation and immunity [42]. Multiple natural compounds have been identified which seem to block MMP activity [43]. It will be interesting to test the effects of these compounds, some of which are used as food supplements, in relation to osteoporosis and obesity. In this study we have made the novel observation that inhibitors of aldo-keto reductases prevent adipogenic differentiation of hMSCs. AKR1B10, which was downregulated by TGFβ in our studies, is particularly active on lipid substrates. It plays an important role in the reduction of retinaldehyde to retinol, as well as in the lipid modification of the K-Ras oncogene. In humans, this gene is particularly expressed in the intestine, adrenal gland, and liver. Inhibition of AKR1B10 prevents the outgrowth of pancreatic carcinoma cells by modulating the Ras pathway [44,45]. Moreover, expression of AKR1B10 is considered to be a tumor marker for NSCLC [46] and liver tumors [47]. To the best of our knowledge, a role of AKR1B10 in adipogenic differentiation has not yet been studied. However, an RNA-mediated knockdown study has indicated that AKR1B10 is an important regulator of fatty acid biosynthesis in human RAO-3 breast cancer cells [48]. AKR1B10 is structurally very similar to AKR1B1, and many inhibitors of AKR1B1 also bind AKR1B10. These include the drugs Sorbinil and Zopolrestat, which were used in the present study. These and related FDA-approved drugs are used particularly for treatment of patients with diabetic polyneuropathy [49]. They do so by inhibiting the metabolism of glucose by the so-called polyol pathway, which converts glucose into sorbitol.
This reduced sugar accumulates in the cell and, as a result of osmotic stress, induces microvascular damage to the retina, kidney, and nerves. Although each of these drugs has its specific adverse effects, including rash, toxicity, and hypersensitivity reactions, at least some of them were well tolerated in clinical trials lasting more than a year [49]. However, no long-term beneficial effects of these drugs were observed in diabetes patients. More recently, AKR1B1 inhibitors have also been tested as anticancer drugs [50]. So far, no reports have appeared in the literature on the effects of aldo-keto reductase inhibitors in patients with osteoporosis or obesity. Similar to observations for MMPs, multiple naturally occurring inhibitors of aldo-keto reductases have been identified, particularly from plant tissue [51,52]. Feeding mice the most potent of these natural AKR1B inhibitors, bisdemethoxycurcumin, has been shown to reduce in particular the incidence of intestinal cancer. Interestingly, some of these natural compounds inhibit not only AKR1B members but also MMPs. In ongoing research we are testing the effects of these natural compounds on fat cell differentiation. These studies may provide a lead towards the further development of more optimized compounds which specifically prevent adipogenic differentiation in vivo. TGFβ is generally considered to be an inhibitor of bone cell differentiation, but our current results show that in the presence of cAMP-enhancing agents TGFβ is able to promote osteogenic differentiation. PGE2, which has been shown to raise cAMP levels in hMSCs [53], is known to stimulate osteogenesis upon short-term administration in vivo [54], but it is unclear whether PGE2 exerts its action by preventing TGFβ from inhibiting bone formation. Kim et al.
[12] have shown that cAMP-activated protein kinases play a central role in the choice between osteogenic and adipogenic differentiation of hMSCs, but again no correlation was made with the activity of TGFβ. Our present data are summarized in Fig. 7, which shows that PPARG, the nuclear transcription factor essential for adipogenic commitment, is inhibited by TGFβ, which consequently suppresses maturation into adiponectin- and Oil Red O-positive fat cells. On its own, TGFβ suppresses the activity of RUNX2, the nuclear transcription factor essential for osteogenic commitment, by a SMAD3/HDAC5-mediated mechanism [20], resulting in impaired maturation into ALP- and matrix mineralization-positive bone cells. However, in the presence of IBMX, which strongly represses HDAC5, TGFβ becomes an activator of RUNX2-mediated osteogenesis. Obviously, the TGFβ-mediated inhibition of adipogenesis is not HDAC5 sensitive. Using our gene expression microarray analysis, we have identified novel inhibitors of adipogenic differentiation. We thereby focused on genes which were differentially expressed on days 1 and 2 following TGFβ treatment. This time frame corresponds with the upregulation of commitment genes for adipogenic differentiation, such as PPARG, as shown in Fig. 5. Genes differentially expressed at later time points were particularly involved in fatty acid metabolism (data not shown). Figure 6a shows that under the experimental conditions tested, inhibitors of these adipogenic commitment genes did not promote osteogenic differentiation to the same extent as TGFβ. The possibility should therefore be considered that within 24-48 hours the cells have already undergone an irreversible commitment towards either adipogenic or osteogenic differentiation. In that respect it may be interesting to study whether TGFβ added at later time points can still promote cells under adipogenic differentiation conditions to become osteoblasts.
Alternatively, the possibility should be considered that under high cAMP conditions TGFβ is able to stimulate specific pathways leading to osteogenic differentiation. Our current results, showing that in the presence of cAMP-enhancing agents TGFβ is able to prevent the formation of fat cells and promote the formation of bone cells, have been obtained with commercially available mesenchymal stem cells from the bone marrow of healthy human donors. Given the observation that patients suffering from osteoporosis show an increased number of adipocytes in their bone marrow, it will be interesting to carry out similar experiments with mesenchymal stem cells from such patients. This may indicate whether this aberrant pattern of differentiation results from intrinsic changes in their stem cells or from an altered microenvironment in their bone marrow.

Fig. 7 Overview of signaling pathways activated in hMSCs by TGFβ in the absence and presence of IBMX: adipogenic differentiation (left) and osteogenic differentiation (right). For details, see text. BMP bone morphogenetic protein, HDAC histone deacetylase, IBMX 3-isobutyl-1-methylxanthine, TGFβ transforming growth factor beta

Conclusions

Our data show that in the presence of cAMP-enhancing agents TGFβ stimulates the ability of hMSCs to differentiate into bone cells, while impairing their ability to differentiate into fat cells. Under these conditions TGFβ treatment results in reduced expression of genes which contribute to adipogenic differentiation, including PPARG, ADAMTS5, and AKR1B10. Since FDA-approved drugs are available for these genes, they are potential targets for treatment of patients suffering from osteoporosis or obesity.
IMPROVING POWER EFFICIENCY OF PNEUMATIC LOGISTIC COMPLEX ACTUATORS THROUGH SELECTION OF A RATIONAL SCHEME OF THEIR CONTROL

The work addresses solving important problems that occur when using pneumatic actuators, namely energy saving and expanding the scope of their use by covering the zone of large inertial loads while constantly maintaining the actuator's operability. A rational structure of the pneumatic actuator based on a change in the structure of commutation links was determined. It ensures the following advantages over a discrete actuator:

– an optimal form of the transient response and a high braking effect in the PA, which are achieved by simultaneous pressure growth in the exhaust chamber and pressure differential in the working chamber, up to ensuring a constant negative pressure differential at which constant negative acceleration during braking takes place;
– in the braking phase, not only the transit working capacity but also the potential energy of expansion of the compressed air in the working chamber is used;
– the compressed air from the braking chamber is not irrevocably transformed into thermal energy but is returned to the feed line through the opened return valve (a recuperation mode is realized);
– the compressed air consumption for fixing the piston in the final position is significantly reduced;
– due to the minimum pressure p_k in the exhaust chamber at the initial moment of the piston motion, the nonproductive work of ejecting compressed air from the exhaust chamber is substantially reduced.

Thus, the complex nature of reducing nonproductive energy inputs creates an energy-saving effect that makes it possible to reduce energy inputs by 4–10 times in the rational scope of use of this actuator (χ < 0.2 and β < 2).
The engineering procedure for solving the basic problem of functional and cost analysis was demonstrated on a specific numerical example: comparison of lump-sum and operational costs in making a decision on the expediency of using the new solution in practice.

Introduction

When solving the problem of automating handling operations in warehouses, portal-type manipulators are increasingly being used. In implementation of translational motion in manipulators of this type, the use of rodless long-stroke pneumatic cylinders is the most rational [1]. In contrast to conventional cylinders, they provide ultimate compactness to the manipulator, have high radial rigidity, and act as guides. It is also necessary to take into account that, pertaining to lump-sum costs, this is the cheapest type of actuator, characterized by simplicity, environmental friendliness, and convenience of transmission and utilization of energy carriers. On the other hand, compressed air itself is one of the most expensive energy carriers. Therefore, when using pneumatic actuators in logistics complexes, solving the following problems is relevant:
1) energy saving and ensuring smooth, shock-free braking;
2) positioning of the actuator's work member in conditions of rather high inertial loads.
Literature review and problem statement

The progress in the field of automation of production processes features an increasing spread of pneumatic automation means. It is noted in [2] that production of compressed air in industrialized countries currently accounts for about 10 % of their total energy balance. Despite the fact that compressed air is one of the most expensive energy carriers, the energy-saving issue for this carrier remains one of the least studied. Various methods of saving compressed air are proposed in [2,3]. However, among them there is not a single method associated with selection of a rational way of braking and positioning of the working member of the pneumatic motor from the point of view of energy saving. One way to save compressed air for an entire enterprise is proposed in [4]. It involves refusal to use a single compressor and the synchronization of activation and deactivation of a group of compressors depending on consumption of compressed air in the enterprise network. However, the issue of energy efficiency of each individual consumer (pneumatic actuator) is not given attention. In contrast to previous works, the problem of improving energy efficiency of an individual pneumatic actuator is solved in [5]. This is achieved by replacing the standard four-line pneumatic distributor with two three-line distributors and selecting a rational program of their control. Shortcomings of the work include the absence of theoretical studies based on mathematical modeling. This excludes the possibility of generalization and of determining the scope of rational use of this solution. It is noted in work [6] that the problem of energy saving has necessarily been addressed in all newly created projects in the last 20 years. However, with regard to pneumatic systems, this is not so easy to do because of the complexity of the processes taking place in pneumatic actuators. Therefore, it is proposed to use mathematical modeling when solving the issue of energy saving in these
systems. However, it should be noted that the model proposed in this work is based on outdated representations, in particular on the assumption of the isothermal nature of the thermodynamic processes. At present it is well known (and has been proved) that a polytropic process with a variable polytropic index is the most appropriate for describing operation of the pneumatic actuator. Paper [7] draws attention to the fact that irrational use of an increased pressure level leads to growth of power input. It is suggested that, if necessary, the pressure supplied to the consumer should be reduced or, conversely, that use of a local device similar to a hydraulic multiplier is advisable if a higher pressure is required. However, there is no proper theoretical solution which makes it possible to choose a rational size for such devices. There is a relatively small number of publications in which the issue of energy saving is associated with the process of braking and positioning of the work member in the pneumatic actuator. Conventional methods of braking are based on the use of external or internal throttling devices, that is, methods belonging to exclusively dissipative methods of braking, when the kinetic energy of moving parts is transformed into thermal energy [8].
In addition to low energy efficiency, such methods suffer from a lack of operational flexibility, which makes it difficult to use them in present-day mechatronic systems [8, 9]. Braking methods based on a change in the structure of commutation links, in which throttling devices are not used, enable expanding the scope of application of power pneumatics towards a significant increase in inertial load. Also, with these braking methods it is possible to achieve a much more effective realization of the operational availability (exergy) of compressed air [10]. The authors of [5] drew attention to an important feature of non-dissipative braking through switching of commutation links: the possibility of using the braking energy in the form of potential energy of compressed air in the brake chamber. This energy is then recuperated into the network or used to reverse the work member. However, the material presented in [5] does not contain an analysis of the field of application or a study of the qualitative and quantitative nature of the energy losses.

the aim and objectives of the study

This work's objective was to develop and substantiate a structure of the pneumatic actuator (PA) that ensures efficient use of the working capacity of compressed air, and to determine the fields of rational use of the energy-saving PA scheme based on a qualitative and quantitative analysis of energy losses in the PA.

To achieve the objective, the following tasks were set:
- substantiate the most rational structure of commutation links for all phases of motion of the PA work member, ensuring minimization of nonproductive power inputs;
- develop an energy-saving scheme of the PA and an algorithm for its control;
- define and quantify all components of power inputs in the operation of a discrete PA;
- conduct computer simulation on the basis of the developed mathematical model to define the field of the most rational use of the energy-saving PA scheme.
materials and methods for studying effectiveness of the pneumatic actuator by selection of a rational scheme of braking and positioning of the work member

When braking and positioning PA work members with a large inertial load, braking methods based on a change in the structure of commutation links are increasingly used. The advantage of these methods is the ability to realize the most rational commutation links for all phases of motion of the PA work member. As a result, it becomes possible to provide the most favorable braking law and realize shock-free operation of the PA under a large inertial load. The most complete use of compressed air energy is also possible in this case.

When creating a pneumatic control (braking) system based on changing the structure of commutation links, two diametrically opposite approaches are possible. The first is focused on minimizing lump-sum costs, that is, it requires a minimum of apparatuses to implement sufficiently effective braking of the PA work member. The second is aimed at minimizing operating costs, that is, it provides a minimum consumption of compressed air for shock-free high-speed operation of the PA.

As a basic (elementary) structure of the PA with braking by changing the structure of commutation links, consider a PA with two three-line distributors (Fig. 1). Despite the presence of two pneumatic distributors, the set of commutation links for ensuring radical braking is small and is actually limited to only two variants (Table 1). The state of the control electromagnets is described by the Boolean variables T1 and T2 (1 for electric signal given, 0 for no electrical signal).

Along with a reliable braking effect, this scheme provides high speed and requires a minimum of devices, so it is often considered an effective PA scheme for large and medium inertial loads.
structural synthesis of an energy-saving pa scheme

The principle of synthesizing an energy-saving PA scheme consists in that the switching situations that are most rational in terms of energy saving and maximum performance must correspond to each motion phase. They are represented in table form (Table 2). The energy-saving scheme of the PA with braking by changing the structure of commutation links, ensuring implementation of all switching situations of Table 2, is shown in Fig. 2, and the map of rational control of the distribution valves of this scheme is given in Table 3. For an objective comparison of the effectiveness of these methods of PA control, a universal mathematical model has been developed in dimensionless form, with the main criteria of dynamic similarity distinguished [10].

the criteria of dynamic similarity and analysis of power inputs in the pa with braking by changing the structure of commutation links

The following are used as the main criteria of dynamic similarity: the inertance criterion b (the ratio of the inertia force of the moving parts of the PA at the basic acceleration L/t_b² to the maximum force developed by the piston, p_m·F_1); the static load parameter χ = P/(p_m·F_1); and the basic time unit t_b of filling the cylinder working volume F_1·L by air flowing at the speed of sound through an opening equal to the effective area of the intake path f_e1. The procedure of normalizing the differential equations describing the work processes in the PA is based on the use of the dimensionless time τ = t/t_b and the criteria of dynamic similarity. This procedure makes it possible to reduce the large number of design parameters in the mathematical PA model to a limited set of independent parameters that determine its dynamics.

To assess the degree of energy perfection of the PA, the concept of exergy, that is, specific operability, is used [9]; in the corresponding expression (1), the first term on the right corresponds to the potential energy of expansion, and the second term represents the specific work of pushing through, that is, the so-called transit working capacity.
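The dimensionless criteria described above can be sketched in code. A minimal illustration, assuming SI units and a nominal speed of sound; the symbol names and the sample intake area are assumptions, while the figures D = 100 mm, p_m = 0.5 MPa, P = 390 N come from the paper's later worked example:

```python
import math

def basic_time_unit(F1, L, fe1, a_sound=340.0):
    """t_b: time to fill the working volume F1*L with air moving at the
    speed of sound through an opening of effective area fe1 (assumed model)."""
    return (F1 * L) / (fe1 * a_sound)

def inertance_criterion(m, L, t_b, p_m, F1):
    """b: inertia force of the moving parts at the basic acceleration
    L/t_b**2, divided by the maximum piston force p_m*F1."""
    return (m * L / t_b**2) / (p_m * F1)

def static_load_parameter(P, p_m, F1):
    """chi = P / (p_m * F1)."""
    return P / (p_m * F1)

# Sample figures from the paper's example: D = 100 mm, p_m = 0.5 MPa, P = 390 N.
F1 = math.pi * 0.05**2          # piston area, m^2
chi = static_load_parameter(390.0, 0.5e6, F1)
print(round(chi, 2))            # ~0.1, matching the chi = 0.1 used for Fig. 3
```

Working in the (b, χ) plane rather than in physical parameters is what lets the simulation results later in the paper cover the whole region of existence of such actuators.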
Let us estimate the degree of energy perfection of PAs loaded with a static and an inertial load, with complete braking of the work member at the end of the stroke. It is advisable to make this estimate with the aid of the cycle-averaged efficiency η_cf and the dimensionless mass M̄ of compressed air consumed by the PA in one actuation [9], where τ′ is the dimensionless time of piston motion from one position to another; ξ̇ is the dimensionless piston speed; M̄ is the dimensionless quantity of compressed air consumed by the pneumatic actuator; and σ_a is the dimensionless atmospheric pressure. In the expression for M̄, ρ_m is the density of air at its parameters in the feed line; G is the mass flow rate; τ is the dimensionless time of complete PA actuation; and the consumption function depends on the ratio of pressures at the ends of the pipeline (σ_m1 is the dimensionless pressure in the commutation object of the work cylinder chamber, σ_1 is the dimensionless pressure in the work cylinder chamber).

Fig. 3 shows a comparative diagram of compressed air consumption for pneumatic actuators operating according to the schemes in Fig. 1 (solid line) and Fig. 2 (dotted line). The results were obtained for the PA at b = 5, χ = 0.1, σ_a = 0.2. The high consumption of compressed air for the scheme in Fig. 1 is a consequence of its simplicity: there are just two commutation situations for a PA having three phases of forward motion and three phases of backward motion. The energy saving effect for the scheme in Fig.
2 consists in that the most rational commutation links are realized for each motion phase. In the acceleration phase, only the transit working capacity of the incoming compressed air (1) is used. Since the initial pressure differential on the piston during fixation was small (p_k − p_a), the nonproductive work of pushing compressed air from the exhaust chamber was minimal, which also contributes to an increase in the speed of the PA. In the braking phase, the switching situation is such that the potential energy of air expansion in the working chamber starts to be used. At the same time, compressed air is recuperated from the exhaust (braking) chamber into the network after the return valve opens.

In the fixation mode, no additional compressed air is consumed from the network, and the piston is retained by the minimum pressure differential p_k − p_a.

The energy balance for the compressed air consumed by the PA in the course of one actuation can be represented by the following dependence, where E_s is the full working capacity of the compressed air consumed by the PA in the course of one actuation; E_ie is the loss of working capacity of the compressed air because of incompleteness of expansion in the working chamber of the cylinder; E_lc is the loss of working capacity due to the real process of expansion not corresponding to the ideal (isothermal) process; E_lv is the loss of working capacity in the dead volume of the cylinder; E_th is the loss due to throttling; R_1 is the external mechanical work of the compressed air; E_f is the loss associated with fixation of the work member by compressed air in the final position; and E_r is the working capacity of compressed air returned to the network as a result of recuperation.

A comparative diagram of the components of energy consumption for schemes No. 1 and No. 2 (Fig. 4) was obtained under the same conditions as the diagram in Fig. 3.
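The energy balance just listed can be expressed as simple bookkeeping. A sketch under the assumption that the recuperated capacity E_r enters the balance as a recovered (not lost) share of the consumed capacity E_s; the numerical values below are placeholders, not data from the paper:

```python
from dataclasses import dataclass

@dataclass
class AirEnergyBalance:
    # all terms in the same unit of working capacity (e.g. joules)
    R1: float    # external mechanical work of the compressed air
    E_ie: float  # loss: incomplete expansion in the working chamber
    E_lc: float  # loss: real expansion deviating from the ideal isothermal one
    E_lv: float  # loss: dead volume of the cylinder
    E_th: float  # loss: throttling
    E_f: float   # loss: fixation of the work member in the final position
    E_r: float   # capacity recuperated back into the network

    def consumed(self) -> float:
        """E_s: full working capacity consumed in one actuation."""
        return self.R1 + self.E_ie + self.E_lc + self.E_lv + self.E_th + self.E_f + self.E_r

    def loss_share(self) -> float:
        """Fraction of E_s spent on nonproductive losses."""
        losses = self.E_ie + self.E_lc + self.E_lv + self.E_th + self.E_f
        return losses / self.consumed()

# Placeholder numbers only, to illustrate the bookkeeping:
bal = AirEnergyBalance(R1=40.0, E_ie=10.0, E_lc=5.0, E_lv=5.0, E_th=20.0, E_f=5.0, E_r=15.0)
print(bal.consumed(), round(bal.loss_share(), 2))
```

Tabulating the terms this way is exactly what the comparative diagram in Fig. 4 does for schemes No. 1 and No. 2.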
determination of the rational use of the energy-saving actuator

To determine the scope of rational use of the energy-saving scheme, computer simulation of the work processes occurring in the PA has been carried out. The work processes reflect different approaches to solving the problem of braking PA work members with medium and large inertial loads. The calculations cover a rather wide area of application of the PA, represented by the space of the criteria of dynamic similarity b and χ (Fig. 5, 6). Fig. 5 shows the dependence of the dimensionless response time τ, the dimensionless braking distance ξ_t, the cycle-mean efficiency η and the relative mass of consumed compressed air on the criteria of dynamic similarity b and χ.

Analysis of the graphs in Fig. 5 shows that scheme No. 2 provides an unconditional reduction in power input in comparison with scheme No. 1 in the whole region of existence of the PA. The most significant reduction in energy input is achieved when χ = 0÷0.1; when χ = 0.15÷0.3, the decrease is significant only at a large inertial load (b = …). At small values of b (b < …), the use of the PA with an energy-saving structure becomes inexpedient because it does not result in a significant reduction in energy input. The decrease in the energy efficiency of the PA operating according to scheme No.
2 in this region is explained, firstly, by the lack of recuperation into the network because of a small braking distance and, secondly, by insufficiently complete expansion of the compressed air in the working chamber. In addition, the nature of the dominant energy inputs itself changes. In the acceleration phase, mainly the transit working capacity of the compressed air (the pushing operation) is used, when air is just a kinematic link between the compressor and the pneumatic cylinder. The main type of loss of working capacity is the throttling loss. As the b parameter decreases (with a decrease in inertance), the piston speed increases and the braking distance shortens. This increases the throttling losses and the losses connected with incompleteness of air expansion in the working chamber. The graphs reflecting the growth of these losses with a decrease in b are shown in Fig. 6. The increase in throttling losses E_th occurs according to an exponential law. For example, when χ = 0.1 and b decreases from 5 to 0.5, there is an 11-fold increase in throttling losses, which then make up 1/4 of the entire working capacity of the compressed air flow, i.e. become the main loss item. The growth of losses because of incompleteness of air expansion is also significant: it increases almost 3.5 times under the same conditions. The graphs in Fig. 5 allow both qualitative and quantitative assessment of the efficiency of the energy-saving scheme. This makes it possible to compare lump-sum and operating costs and make a substantiated decision.

Let, for example, a pneumatic cylinder with D = 100 mm and L = 400 mm at p_m = 0.5 MPa and with an attached mass overcome the built-in static load of P = 390 N, with a given effective area of the intake path f_e1. The economy of compressed air for one cycle in the transition to the energy-saving scheme is then … for the whole operation time.
If the price of 1 m³ of compressed air and the cost of the pneumatic devices are known, it is possible to compare operating and lump-sum costs. Such a comparative analysis makes it possible to decide on the expediency of using the energy-saving actuator scheme.

discussion of the results: the field of effective use of power pneumatics

In the case of conventional throttling braking, the process of braking the work member at a medium and large inertial load (b ≥ 0.5) is accompanied by an uncontrolled character of the pressure changes in the cylinder chambers and by piston motion with a developed oscillatory process. The proposed braking scheme provides an optimal form of the transient process. A high braking effect is achieved due to a simultaneous pressure growth in the exhaust chamber and a pressure drop in the working chamber, up to a constant negative pressure differential p_m − p_k at which there is a constant negative acceleration during braking. The level of this pressure differential depends on the adjustment of the pressure p_k in the reducing valve. The greater the inertial load, the more stably this differential is maintained. Owing to this fact, the field of use of power pneumatics can be expanded by almost an order of magnitude in the direction of increased inertial load (up to the value of the inertance criterion b = 5). Using such a scheme, it is possible to achieve much more efficient use of the working capacity of the compressed air due to the fact that:
- in the phase of braking (Table 2, Fig.
2), not only the transit working capacity is used but also the potential energy of expansion of the compressed air in the working chamber (1), which is completely impossible in pneumatic actuators with a conventional throttle braking scheme and full filling of the working volume;
- the braking energy, that is, the compressed air from the braking chamber, is not irrevocably transformed into thermal energy as in actuators with throttle braking; it returns through the open return valve at p_2 ≥ p_m into the feed line, i.e. the recuperation mode is realized (dotted line in Fig. 3);
- the compressed air consumption for fixing the piston in the final position at a minimum pressure differential p_k − p_a is significantly reduced, since pressure p_k is considerably smaller than the main pressure p_m;
- owing to the minimum pressure p_k in the exhaust chamber at the initial moment of piston motion, the nonproductive work of pushing the compressed air from the exhaust chamber is substantially reduced.

Such a complex character of reducing nonproductive energy inputs creates an energy-saving effect that enables a 4- to 10-fold reduction of energy inputs within the scope of rational use of this actuator (χ < 0.2 and b < 2).

A distinctive feature of the foregoing, in comparison with similar publications on this subject, is a much higher level of generalization of the results obtained. Due to the use of dynamic similarity criteria instead of physical parameters, it was possible to extend the results obtained practically to the entire region of existence of such actuators (the graphs in Fig. 5, 6), which made it possible to effectively and clearly identify the scope in which the use of a discrete actuator is rational.
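The per-cycle air savings behind this energy-saving effect can be estimated from the relative consumptions read off Fig. 5 (M̄₁ = 1.44 for scheme No. 1, M̄₂ = 0.22 for scheme No. 2). A rough sketch, assuming the base unit of air mass is the mass of feed-line-density air filling the working volume F1·L, and assuming ideal-gas density at an absolute feed pressure of about 0.6 MPa (0.5 MPa gauge) and 293 K; these thermodynamic assumptions are mine, not the paper's:

```python
import math

R_AIR = 287.0  # J/(kg*K), specific gas constant of air

def cycle_air_saving(M1, M2, D, L, p_abs, T=293.0):
    """Mass of compressed air saved per cycle when switching from the base
    scheme (relative consumption M1) to the energy-saving one (M2).
    Base unit of mass: feed-line-density air filling the working volume F1*L."""
    F1 = math.pi * (D / 2.0)**2
    rho_m = p_abs / (R_AIR * T)       # ideal-gas density in the feed line
    return (M1 - M2) * rho_m * F1 * L

# Paper's example: D = 100 mm, L = 400 mm, p_m = 0.5 MPa gauge (~0.6 MPa abs)
saved_kg = cycle_air_saving(1.44, 0.22, D=0.100, L=0.400, p_abs=0.6e6)
saved_m3_free_air = saved_kg / 1.2    # approx. free-air volume at 1.2 kg/m^3
print(round(saved_kg, 4), round(saved_m3_free_air, 4))
```

Multiplying the saving by the cycle rate and the price of compressed air gives the operating-cost side of the lump-sum versus operating-cost comparison the paper recommends.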
The concrete numerical example demonstrated the engineering procedure of using these graphs in solving the main problem of functional and cost analysis, which consists in comparing lump-sum and operating costs when deciding whether to use the new solution in practice. This approach assumes the use of the concept of economic expediency as the main criterion of solution efficiency.

Further development of the proposed solution is the transition to the so-called compression actuation mode (transition to resonant pneumatic actuators). Such units, when operating under conditions of large inertial loads, use the compressed air accumulated during braking directly for the return stroke. Such a transition enables an even more efficient utilization of the working capacity of the compressed air in the actuator.

conclusions

1. A rational structure of commutation links in a pneumatic actuator was defined, in which each motion phase corresponds to the commutation situations that are most rational from the point of view of energy saving and maximum speed.

2. Based on the most rational structure of commutation links, a pneumatic actuator scheme and an algorithm for its control were constructed. Energy saving is ensured due to the following:
- in the phase of braking (Table 2, Fig. 2), not only the transit working capacity is used but also the potential energy of expansion of the compressed air in the working chamber;
- compressed air from the brake chamber is not converted irreversibly into thermal energy but returns to the feed line;
- the compressed air consumption for fixing the piston in the final position is reduced;
- due to the minimum pressure p_k in the exhaust chamber at the initial moment of piston motion, the nonproductive work of pushing the compressed air from the exhaust chamber is substantially reduced.

3.
In the functioning of the pneumatic actuator (5), all components of the loss of working capacity of the compressed air leading to nonproductive energy inputs were determined. A comparative quantitative analysis of energy losses was made for the basic and the proposed PA schemes.

4. A procedure for determining the scope of use of an energy-saving actuator was developed, based on defining the criteria of dynamic and energy similarity. The graphs of the dependence of the operational characteristics of the PA on the criteria of dynamic similarity (Fig. 5, 6) were plotted, which make it possible to extend the results obtained to the entire field of PA use. The proposed procedure makes it possible to calculate the quantity of compressed air consumed by the PA with the basic and the optimal structure of commutation links and to compare the results obtained in percentage terms and in monetary equivalent.

Fig. 1. Basic (elementary) scheme of the pneumatic actuator (scheme No. 1): m is the mass of the moving parts of the PA; p_m is the pressure in the feed line of the PA; p_a is atmospheric pressure; T1, T2 are Boolean variables determining the states of the control electromagnets (T = 1 for current on, T = 0 for current off). Notation: the inertance criterion (dimensionless mass) b is the ratio of the inertia force of the moving parts of the PA at the basic acceleration L/t_b² to the maximum force developed by the piston, p_m·F_1 (where p_m is the feed pressure, F_1 is the piston area, L is the complete stroke of the piston, t_b is the basic time unit); the static load parameter is χ = P/(p_m·F_1); t_b is the basic time unit of filling the cylinder working volume (F_1·L) by air moving at the speed of sound.

Fig. 2. Energy-saving scheme of the pneumatic actuator (scheme No. 2).

Fig. 3. Consumption of compressed air in actuation of the PA with the basic scheme No. 1 (solid line) and the energy-saving scheme No. 2 (dotted line).

Fig. 4.
Comparative diagram of energy consumption components for schemes No. 1 and No. 2.

Fig. 5. Dependence of the main operational characteristics of the PA on the criteria of dynamic similarity b and χ: scheme No. 1 (a, b); scheme No. 2 (c, d). According to the graphs in Fig. 5, the relative mass of consumed air is M̄₁ = 1.44 for scheme No. 1 and M̄₂ = 0.22 for scheme No. 2; the basic unit of mass of air is 1 m³ of compressed air at p = 0.5 MPa.

Fig. 6. Dependence of the loss of working capacity of compressed air due to throttling (E_th) and due to incompleteness of air expansion in the working chamber (E_ie) on the b and χ criteria.

Table 1. Map of states of valve electromagnets for scheme No. 1.
Table 2. Commutation situations for each phase of the PA motion.
Table 3. Map of states of distributor electromagnets for scheme No. 2.
A study of neutron emission spectra and angular distribution of neutrons from the (p,n) reaction on some targets of heavy elements

For the design of an ADS (Accelerator Driven System), it is important to study neutron spectra and the details of nuclear reactions induced by neutrons. Furthermore, neutron energy and angular distribution data are important for a correct simulation of the propagation of particles inside a spallation target and of the geometrical distribution of the outgoing neutron flux. Many experimental results are available for thin and massive targets; additional studies of neutron spectra and neutron production were carried out to design targets for ADS with incident proton energies up to 3 GeV. In our study, the angular distribution and the neutron energy spectra are reported for the (p,n) reaction on target nuclei such as Pb, U, and W with energies from 50 MeV to 350 MeV, calculated with the JENDL-HE 2007 database. We obtain a set of data on the angular distribution and energy spectra of the produced neutrons on some heavy targets in the energy range stated above. From the results on the neutron spectra, the paper also gives recommendations for the choice of target materials and proton beam energies. From the angular distribution of neutrons generated in (p,n) reactions on the different targets at the different proton energies, solutions for arranging the reflector bars in the reactor are proposed. A comparison with published data is also made to improve the reliability of the calculations of the paper.

INTRODUCTION

The spallation reaction is caused by bombarding a target with particles having energies above a few hundred MeV. This reaction produces a great number of neutrons and is applicable to producing an intense spallation neutron source or to transmuting long-lived radioactive wastes [1,2]. The design of the target is a key issue to be investigated when designing an ADS [3], and its performance is characterized by the number of neutrons emitted by the (p,n) reaction.
This paper describes the calculation of the spatial distribution and energy spectra of produced neutrons, performed for proton beams with energies of 50 MeV to 350 MeV. Based on the JENDL-HE library [4], we obtain a set of energy-angle data on Pb, U, and W targets in the range stated above.

METHOD

We adopt the following formula for calculating the energy-angle double-differential cross section of neutrons from the (p,n) reaction:

σ(E_p, E_n, μ) = σ(E_p) · y(E_p) · f(E_p, E_n, μ),

where E_p is the incident energy (eV), E_n is the energy of the emitted product (eV), σ is the interaction cross section (barn), y is the product yield or multiplicity, f is the normalized distribution with units (eV⁻¹ unit-cosine⁻¹), and the result is the energy-angle double-differential cross section (barn/eV-sr).

Angular distribution of neutrons produced

For the proton-induced reaction, we are interested in the neutron production. We use the data of the JENDL-HE library for incident proton energies of 50, 100, 150, 200, 250, and 350 MeV. Figures 1, 2, 3 show the angular distributions of neutrons produced from the (p,n) reaction on 238U, 208Pb, and 186W calculated at energies from 50 MeV to 350 MeV. All the curves have the same behavior but different values. The angular distribution of the emitted neutrons shows dominant forward-angle emission with respect to the incident proton direction. The production cross section is the highest for the reaction induced on the lead target and the lowest for the reaction induced on …. When the incident proton energy increases, the production cross section increases, too.

Comparison with other published data

Up to now, we have not found any papers studying the angular distribution of neutrons in the energy range of 50 MeV to 350 MeV. We therefore use our model to calculate the angular distribution of neutrons at 800 MeV and compare it with the result obtained by P. K. Sarkar and Maitreyee Nandy [5]. We can see that there is a significant difference between the two models, QMD (Quantum Molecular Dynamics) and SDM (Statistical Decay Model). Fig.
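The factorized cross-section formula above can be evaluated numerically. A minimal sketch with toy stand-ins for the tabulated JENDL-HE quantities; the constant cross section, the Maxwellian evaporation shape, and the isotropic angular part are illustrative assumptions, not library data:

```python
import math

def ddx(sigma, y, f, E_p, E_n, mu):
    """Energy-angle double-differential cross section:
    sigma(E_p) * y(E_p) * f(E_p, E_n, mu), with f normalized over E_n and mu."""
    return sigma(E_p) * y(E_p) * f(E_p, E_n, mu)

# Toy stand-ins for the tabulated quantities:
sigma = lambda E_p: 1.5        # barn, constant for illustration
y     = lambda E_p: 3.0        # neutron multiplicity
T = 2.0e6                      # eV, evaporation "temperature" (assumed)

def f_E(E):
    """Maxwellian evaporation energy spectrum, normalized on [0, inf)."""
    return (E / T**2) * math.exp(-E / T)

def f(E_p, E_n, mu):
    """Energy spectrum times an isotropic angular part (1/2 per unit cosine)."""
    return f_E(E_n) * 0.5

# Check normalization of the energy part over [0, 50T] by the trapezoid rule;
# the angular part integrates to 1 over mu in [-1, 1] by construction.
n = 20000
dE = 50 * T / n
energy_integral = 0.0
for i in range(n):
    a, b = i * dE, (i + 1) * dE
    energy_integral += 0.5 * (f_E(a) + f_E(b)) * dE
norm = energy_integral * 1.0
print(round(norm, 3))          # ~1.0
```

In the real calculation, σ, y, and f are interpolated from the JENDL-HE tables instead of these analytic stand-ins; the structure of the evaluation is unchanged.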
4A shows a dominant forward-angle emission for the QMD process, while the neutrons from the SDM calculations have an isotropic angular distribution with respect to the incident proton direction. Fig. 4B shows that the curve in our result is similar to that of the QMD process. Because we do not account for several important effects, the result shows a significant difference in absolute value; we are, however, interested in the shape of the curve, and its agreement indicates that our calculation model is sound.

CONCLUSION

We are interested in the cross sections for the energy spectra and spatial distribution of neutrons obtained for incident proton energies of 50 to 350 MeV. We calculated the distribution of neutrons escaping a heavy target at angles from 0° to 180°, establishing the dominant forward-angle emission with respect to the incident proton direction and the spatial distribution of the produced neutrons needed to arrange fuel bars in an ADS. Heavy nuclei such as U, Pb, and W were chosen as spallation targets, and a rather hard neutron energy spectrum was obtained (see Figures 4, 5, 6). This is needed to optimize the fission probability of transuranic elements (TRU). Indeed, in the fast neutron flux provided by the ADS, all TRU can undergo fission, a process which eliminates them, while in the thermal neutron flux of a traditional reactor many TRU do not fission and thus accumulate as waste.

Acknowledgments: The author would like to thank the Nuclear Research Institute of Dalat for their support in finishing this work.
The Heat Shock Response Is Modulated by and Interferes with Toxic Effects of Scrapie Prion Protein and Amyloid β* Background: The heat shock response (HSR) is a stress response pathway to counteract proteotoxic effects of aberrantly folded proteins. Results: The HSR is deregulated by PrPSc and Aβ and can protect against toxic effects of PrPSc, Aβ, and a neurotoxic PrP mutant. Conclusion: The toxicity of different pathogenic proteins is mediated via similar cellular pathways. Significance: Identifying cellular pathways activated by neurotoxic proteins will help to develop therapeutic strategies. The heat shock response (HSR) is an evolutionarily conserved pathway designed to maintain proteostasis and to ameliorate toxic effects of aberrant protein folding. We have studied the modulation of the HSR by the scrapie prion protein (PrPSc) and amyloid β peptide (Aβ) and investigated whether an activated HSR or the ectopic expression of individual chaperones can interfere with PrPSc- or Aβ-induced toxicity. First, we observed different effects on the HSR under acute or chronic exposure of cells to PrPSc or Aβ. In chronically exposed cells the threshold to mount a stress response was significantly increased, evidenced by a decreased expression of Hsp72 after stress, whereas an acute exposure lowered the threshold for stress-induced expression of Hsp72. Next, we employed models of PrPSc- and Aβ-induced toxicity to demonstrate that the induction of the HSR ameliorates the toxic effects of both PrPSc and Aβ. Similarly, the ectopic expression of cytosolic Hsp72 or the extracellular chaperone clusterin protected against PrPSc- or Aβ-induced toxicity. However, toxic signaling induced by a pathogenic PrP mutant located at the plasma membrane was prevented by an activated HSR or Hsp72 but not by clusterin, indicating a distinct mode of action of this extracellular chaperone. 
Our study supports the notion that different pathological protein conformers mediate toxic effects via similar cellular pathways and emphasizes the possibility to exploit the heat shock response therapeutically.

Accumulation of misfolded and aggregated proteins is a hallmark of various neurodegenerative diseases. Prion diseases (for review, see Refs. 1-4) and Alzheimer disease (AD) (for review, see Refs.
5 and 6) are characterized by extracellular protein assemblies formed by the scrapie prion protein (PrPSc) or the amyloid β (Aβ) peptide, respectively. Whereas prion diseases and AD are clearly distinct disease entities, there appear to be commonalities concerning structural features of the pathogenic protein conformers as well as pathways implicated in their toxic effects (for review, see Refs. 7-10). The protein deposits found in AD or prion diseases are associated with intra- and extracellular heat shock proteins (Hsps) (11-13), suggesting a role of Hsps in the pathogenic process. Hsps, many of which function as molecular chaperones, comprise a class of proteins that are induced under conditions of cellular stress, when the concentrations of aggregation-prone folding intermediates increase. However, Hsps also exert fundamental functions under physiological conditions, as they are vitally engaged in protein folding, trafficking, and the regulation of signaling pathways (for review, see Ref. 14). Hsps are found in all cellular compartments and organelles. In addition, clusterin is a secreted chaperone shown to be involved in the extracellular protein quality control system (15). Up-regulation of Hsps after acute or chronic proteotoxic damage is mediated by a highly conserved pathway denoted the heat shock response (HSR). At the molecular level, different stressors are integrated through the activation of a single transcription factor, heat shock transcription factor 1 (HSF1), which binds to specific heat shock element (HSE) sequences present in the promoter regions of inducible Hsp genes (for review, see Refs. 16 and 17). An increase in Hsp levels prevents protein aggregation and facilitates correct folding of non-native proteins after cellular stress. In addition, chaperones participate in anti-apoptotic pathways (for review, see Refs. 18-20).
It is, therefore, not surprising that a deregulation of the HSR can contribute to the progression of various diseases. Consequently, the HSR represents a target for therapeutic intervention in a range of diseases (for review, see Refs. 21-26). For example, pharmacological induction of the HSR was shown to ameliorate disease progression and neuropathological alterations in mouse models of neurodegenerative diseases (27-29). Supporting a protective role of the HSR, deletion of HSF1 dramatically shortened the lifespan of scrapie-infected mice (30). We have previously studied the HSR in scrapie-infected mouse neuroblastoma (ScN2a) cells, which offer a useful model to study certain aspects of prion diseases in cultured cells. Most importantly, ScN2a cells propagate partially protease-resistant PrP Sc and infectious prions (31, 32). The stress-induced expression of Hsp72 and Hsp28 is significantly impaired in ScN2a cells, whereas their uninfected counterparts are able to mount a normal stress response (33, 34). Notably, we found that the impaired HSR in ScN2a cells is caused by an accelerated deactivation of HSF1 after stress and can be restored by the Hsp90-binding drug geldanamycin (34). In this study we characterized the impact of pathogenic protein conformers on the regulation of the HSR by making use of cell culture models of PrP Sc- and Aβ-induced toxicity. We demonstrate that PrP Sc and Aβ have different effects on the HSR depending on whether they are applied in an acute or chronic manner to cells. Moreover, activation of the HSR or ectopic expression of individual chaperones is protective against PrP Sc- and Aβ-induced cell death as well as against the toxic activity of a pathogenic PrP mutant.

EXPERIMENTAL PROCEDURES

Cell Culture, Transfection, Co-culture-Cells were cultured and transfected as described earlier (35). The human SH-SY5Y cell line (DSMZ number ACC 209) is a sub-line of bone marrow biopsy-derived SK-N-SH cells.
Stably transfected Chinese hamster ovary cells (CHO-7PA2) that express the familial AD mutation V717F in the amyloid precursor protein APP 751 and secrete Aβ were described earlier (46). Cells cultured in 3.5-cm dishes were transfected with DNA by a liposome-mediated method using LipofectAMINE Plus reagent (Invitrogen) according to the manufacturer's instructions. For co-culture experiments, SH-SY5Y cells were grown on glass coverslips. 2 h after transfection, coverslips were transferred into dishes containing a 90% confluent cell layer of either ScN2a or N2a or CHO-7PA2 or CHO cells (47, 48). After 16 or 24 h of co-culture, either apoptotic cell death or luciferase activity was analyzed (see below). For stable transfection, SH-SY5Y cells were transfected with the plasmid pCEP4 containing the coding sequence for APP695 using Transfectin (Bio-Rad) according to the manufacturer's instructions. Stably transfected cells were selected with hygromycin (250 µg/ml). The empty vector was used as control (mock-transfected).

Cell Lysis, Immunoprecipitation, and Western Blot Analysis-As described earlier (49), cells were washed twice with cold phosphate-buffered saline (PBS), scraped off the plate, and lysed in cold detergent buffer A (0.5% Triton X-100, 0.5% sodium deoxycholate in PBS). Total lysates or secreted and trichloroacetic acid (TCA)-precipitated proteins were boiled with Laemmli sample buffer and analyzed by Western blotting as described previously (50). For proteolysis experiments, lysates of ScN2a or N2a cells were digested with Proteinase K for 30 min at 37°C (final concentration 10 µg/ml). The reaction was stopped by the addition of PMSF (final concentration 2 mM), and PrP was analyzed by Western blotting using the polyclonal anti-PrP antibody A7. Aβ in conditioned medium of CHO-7PA2 cells or stably transfected SH-SY5Y cells was analyzed by immunoprecipitation with the polyclonal antibody 3552 followed by Western blotting using the monoclonal antibody 2D8.
To block Aβ generation, CHO-7PA2 cells were treated for 24 h with DAPT before immunoprecipitation. To interfere with PrP Sc-induced toxicity, transfected cells were pretreated for 1 h with the monoclonal anti-PrP antibody 3F4 (1 µg/ml) before co-culture. The antibody was also present during co-cultivation. For quantification of Hsp72, total lysates were analyzed by Western blotting using the monoclonal anti-Hsp72 antibody C92. Chemiluminescence was determined using a Fujifilm LAS-4000 ChemiDot imager and the Multi Gauge V3.0 software and normalized to β-actin. Values of CHO-7PA2 cells or SH-SY5Y cells overexpressing wild type APP were compared with either CHO cells or mock-transfected SH-SY5Y cells subjected to the same heat shock. Quantifications were based on at least three independent experiments.

Exosome Isolation-Conditioned media of ScN2a or N2a cells were centrifuged for 10 min at 3,000 × g and ultracentrifuged for 30 min at 10,000 × g and for 1 h at 100,000 × g as described earlier (55). Pellets were resuspended in cold detergent buffer A (0.5% Triton X-100, 0.5% sodium deoxycholate in PBS) and digested with Proteinase K for 30 min at 37°C (final concentration 10 µg/ml). The reaction was stopped by the addition of PMSF (final concentration 2 mM), and PrP was analyzed by Western blotting using the polyclonal anti-PrP antibody A7.

Luciferase Assays-Co-cultivated SH-SY5Y cells or SH-SY5Y cells cultured in 3.5-cm dishes were transiently transfected with the firefly luciferase reporter plasmid (HSE-luc) and subjected to the stress treatment indicated. After 8 h of incubation at 37°C, cells were lysed in Reporter Lysis Buffer (Promega). Luciferase activity was analyzed luminometrically using the luciferase assay system (Promega) and an LB96V or Mithras LB 940 luminometer (Berthold Technologies, Bad Wildbad, Germany) according to the manufacturer's instructions. The measured values were analyzed using the WinGlow software (Berthold Technologies).
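The band-intensity quantification described above (Hsp72 chemiluminescence normalized to the β-actin loading control, then expressed as fold change relative to the control cell line subjected to the same heat shock) can be sketched as follows. This is a minimal illustration of the arithmetic only; the intensity values are hypothetical placeholders, not data from this study:

```python
def normalize(signal, loading_control):
    """Normalize a chemiluminescence band intensity to its beta-actin loading control."""
    return signal / loading_control

def fold_change(sample_norm, control_norm):
    """Normalized signal of the test line relative to the control line."""
    return sample_norm / control_norm

# Hypothetical band intensities (arbitrary imager units) after the same heat shock.
hsp72_cho, actin_cho = 1200.0, 800.0    # control CHO cells
hsp72_7pa2, actin_7pa2 = 700.0, 800.0   # Abeta-secreting CHO-7PA2 cells

norm_cho = normalize(hsp72_cho, actin_cho)        # 1.5
norm_7pa2 = normalize(hsp72_7pa2, actin_7pa2)     # 0.875
relative = fold_change(norm_7pa2, norm_cho)       # < 1 indicates an impaired response
print(f"Hsp72 in CHO-7PA2 relative to CHO: {relative:.2f}")
```

In the paper this calculation is repeated over at least three independent experiments before averaging.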
Quantifications were based on at least three independent experiments.

Apoptosis Assay and Immunofluorescence-For quantification of apoptotic cell death, SH-SY5Y cells were fixed on glass coverslips with 3.7% paraformaldehyde for 20 min, washed, and permeabilized with 0.2% Triton X-100 in PBS for 10 min at room temperature. Fixed cells were incubated with an anti-active caspase-3 antibody overnight at 4°C followed by an incubation with the secondary antibody fluorescently labeled with Alexa Fluor 555 for 1 h at room temperature. Cells were then mounted onto glass slides and examined by fluorescence microscopy using a Zeiss Axioscope 2 plus microscope (Carl Zeiss). The number of cells positive for activated caspase-3 among at least 1000 transfected cells was determined in a blinded manner. All quantifications were based on at least three independent experiments. For immunofluorescence analysis of the stress-inducible Hsp72 in N2a, ScN2a, CHO, or CHO-7PA2 cells, cells were grown on glass coverslips. At day 2 (CHO/CHO-7PA2) or day 4 (N2a/ScN2a) in culture, cells were subjected to the heat shock indicated, returned to 37°C, and analyzed after an additional 8 or 16 h, respectively. After incubation, cells were fixed, permeabilized, and stained for Hsp72 using the monoclonal anti-Hsp72 antibody C92. Nuclei were stained with ToPro. Cells were examined by confocal fluorescence microscopy using a Zeiss Axiovert 200M microscope (Carl Zeiss).

Statistical Analysis-Quantifications were based on at least three independent experiments. Data are shown as the means ± S.E. Statistical analysis was performed using Student's t test. p values are as follows: *, p < 0.05; **, p < 0.005; ***, p < 0.0005.

RESULTS

The Heat Shock Response Is Impaired in Cell Lines Chronically Exposed to PrP Sc or Aβ-We previously showed that the HSR in scrapie-infected mouse neuroblastoma (ScN2a) cells, which propagate proteinase K-resistant PrP Sc and infectious prions (Fig.
1A), is significantly impaired (33, 34). The amount of Hsp72, which is expressed at high levels only after heat shock or other forms of metabolic stress (51), is greatly increased in uninfected N2a cells after a heat shock of 10 or 20 min (42 or 44°C), whereas ScN2a cells do not express Hsp72 after being subjected to the same stress conditions. This phenomenon is illustrated by Western blotting (Fig. 1C) and indirect immunofluorescence (Fig. 2A, left panel). Prompted by these results, we asked whether chronic exposure to another pathogenic protein assembly would also modulate the HSR. To experimentally address this possibility, we made use of a stably transfected Chinese hamster ovary cell line (CHO-7PA2) that expresses the familial AD mutant V717F of the human amyloid precursor protein APP 751 and secretes Aβ (46) (Fig. 1B, left panel). Importantly, secreted Aβ from CHO-7PA2 cells is neurotoxic, demonstrated by its ability to potently inhibit long term potentiation in vivo and to interfere with neuronal viability (48, 52, 53). In addition, we generated a stably transfected SH-SY5Y cell line expressing wild type human APP. Similarly to the CHO-7PA2 cells, SH-SY5Y-wtAPP cells secreted significantly increased levels of Aβ when compared with the mock-transfected control (Fig. 1B, right panel). To analyze the HSR, we subjected CHO-7PA2 and SH-SY5Y-wtAPP cells to different heat shock conditions and analyzed expression of Hsp72 after the cells had been cultivated for another 8 h at 37°C. The Western blot (Fig. 1, D and E) and immunofluorescence analysis (Fig. 2A, right panel) revealed that Aβ-overexpressing CHO-7PA2 and SH-SY5Y-wtAPP cells are able to mount a stress response; however, the amount of Hsp72 in stressed CHO-7PA2 and SH-SY5Y-wtAPP cells was lower when compared with CHO or mock-transfected SH-SY5Y cells, respectively, subjected to the same stress conditions. These differences were significant under all stress conditions tested for the SH-SY5Y cell lines (Fig.
1E), whereas after more severe stress (42 or 44°C for 20 min) Hsp72 levels were comparable in CHO and CHO-7PA2 cells (Fig. 1D). Ectopic expression of a mutant of the heat shock transcription factor 1 (ΔHSF), which contains a deletion in the regulatory domain (Δ202-316) and is constitutively active (38), induced the up-regulation of Hsp72 in both ScN2a and CHO-7PA2 cells (Fig. 2B). These findings suggest that the impaired Hsp72 expression after stress is caused by a deregulated HSF1 activation/inactivation pathway rather than by mutations in the promoter regions of stress-regulated genes (34). These results demonstrate that cells chronically exposed to Aβ or PrP Sc have a higher threshold to mount an HSR.

Acute Exposure of Cells to PrP Sc Lowers the Threshold for a Heat Shock Response-ScN2a cells had been established from a population of cells acutely infected with prions. Thus, it might well be that an impaired stress response was a selection advantage to counteract adverse effects of PrP Sc on cell viability. We, therefore, wanted to analyze possible acute effects of PrP Sc on the HSR by employing a novel cell culture assay, which is based on the co-culture of SH-SY5Y cells with N2a or ScN2a cells (47, 48). In this context it is important to note that scrapie-infected cells release PrP Sc and infectious prions into the extracellular environment (Fig. 1A, right panel) (54, 55). Our experimental set-up allows us to study the HSR in SH-SY5Y cells after transient exposure to PrP Sc present in the cell culture medium (Fig. 3A). To assess the HSR in a quantitative manner, we used a reporter gene construct (HSE-luc) expressing firefly luciferase under the control of the highly heat-inducible promoter of the human Hsp70B gene (36). After a brief heat shock, transcription of the luciferase gene is induced, and luciferase activity can be determined luminometrically (Fig. 3B).
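The readout of this reporter assay is a fold induction: luciferase activity in stressed cells relative to cells held at 37°C, averaged over at least three independent experiments and reported with the star notation defined under "Statistical Analysis". A minimal sketch of that bookkeeping, using hypothetical luminometer counts rather than data from this study:

```python
from statistics import mean, stdev

def fold_induction(treated, baseline):
    """Luciferase activity relative to the unstressed (37 degC) control."""
    return treated / baseline

def sem(values):
    """Standard error of the mean over n independent experiments."""
    return stdev(values) / len(values) ** 0.5

def significance_stars(p):
    """Star notation used in this paper's figure legends."""
    if p < 0.0005:
        return "***"
    if p < 0.005:
        return "**"
    if p < 0.05:
        return "*"
    return "n.s."

# Hypothetical raw luciferase counts from three independent experiments.
baseline_37C = 1000.0
heat_shocked = [7200.0, 8000.0, 8800.0]

folds = [fold_induction(v, baseline_37C) for v in heat_shocked]
print(f"fold induction: {mean(folds):.1f} +/- {sem(folds):.2f}")
print(significance_stars(0.003))  # "**" under the paper's thresholds
```

The thresholds in `significance_stars` are exactly those stated in the legends (p < 0.05, 0.005, 0.0005); the p value itself would come from Student's t test on the replicate fold inductions.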
First, we examined whether PrP Sc released by ScN2a cells would induce an HSR in co-cultured SH-SY5Y cells. Luciferase activities in SH-SY5Y cells co-cultured with ScN2a cells for 24 h were comparable to those in cells co-cultured with N2a cells, indicating that acute exposure to PrP Sc apparently did not induce the HSR (Fig. 3C). Next we tested whether acute exposure to PrP Sc modulates the HSR. To this end we co-cultured HSE-luc-expressing SH-SY5Y cells with ScN2a cells and then subjected them to a brief heat shock (Fig. 3D). SH-SY5Y cells co-cultured with ScN2a cells showed significantly higher luciferase activities after a heat shock than cells co-cultured with N2a cells. For example, a 20-min heat shock at 42°C led to an 8-fold induction of luciferase in SH-SY5Y cells co-cultured with N2a cells, whereas the same heat shock condition led to an 18.5-fold induction in cells pre-exposed to PrP Sc (Fig. 3D). Of note, there was no increase in cell death of co-cultured SH-SY5Y cells under the heat shock conditions applied (Fig. 3E).

FIGURE 1. Impaired heat shock response in cell lines chronically exposed to PrP Sc or Aβ. A, chronically scrapie-infected N2a cells (ScN2a) are characterized by the formation of Proteinase K (PK)-resistant scrapie prion protein (PrP). Total cell lysates and isolated exosomes prepared from N2a or ScN2a cells were treated with proteinase K or left untreated and then analyzed by Western blotting using the polyclonal anti-PrP antibody A7. B, stably transfected CHO cells (CHO-7PA2) or SH-SY5Y cells generate amyloid β (Aβ). Aβ present in conditioned medium of CHO or CHO-7PA2 cells and stably transfected SH-SY5Y cells was analyzed by immunoprecipitation with the polyclonal antibody 3552 followed by Western blotting using the monoclonal antibody 2D8. To block Aβ generation, CHO-7PA2 cells were treated for 24 h with DAPT before immunoprecipitation. C-E, ScN2a, CHO-7PA2, and stably transfected SH-SY5Y cells exhibit an impaired heat shock response. C, N2a and ScN2a cells were subjected to heat shock conditions as indicated. The stress-inducible heat shock protein Hsp72 was analyzed by Western blotting using the monoclonal anti-Hsp72 antibody C92. D, CHO and CHO-7PA2 cells were subjected to heat shock conditions as indicated. Hsp72 was analyzed by Western blotting using the monoclonal anti-Hsp72 antibody C92. Band intensities of the Hsp72 signals from CHO and CHO-7PA2 cells were quantified and normalized to β-actin. The fold induction of Hsp72 in CHO-7PA2 cells in response to various stresses, relative to CHO cells, is shown in the right panel. E, mock- and wild type APP-transfected SH-SY5Y cells were subjected to heat shock conditions as indicated. Hsp72 was analyzed as described under Fig. 1D. The relative amounts of Hsp72 are represented as the mean ± S.E. of three to four independent experiments. *, p < 0.05.

Induction of the HSR or Increased Expression of Hsp72 or Clusterin Protects against PrP Sc- or Aβ-induced Toxicity-To address the possibility that an induction of the HSR can protect cells from the toxic activity of PrP Sc or Aβ, we employed a previously established cell culture model (47, 48). As illustrated in Fig. 4A, left panel, PrP Sc induces cell death in co-cultured SH-SY5Y cells expressing the cellular prion protein (PrP C). Similarly, expression of PrP C sensitizes cells to the toxic effects of Aβ (Fig. 4B). Toxicity of PrP Sc could be suppressed by performing the co-culture in the presence of the monoclonal anti-PrP antibody 3F4 (Fig. 4A, right panel). Likewise, co-cultivation with CHO-7PA2 cells pretreated with the γ-secretase inhibitor DAPT did not induce apoptotic cell death in PrP C-expressing SH-SY5Y cells, indicating that the toxic effect of CHO-7PA2 cells was dependent on the generation of Aβ (48).
To induce the HSR without a stress treatment, we expressed the constitutively active ΔHSF mutant, which increases expression of many heat shock proteins, for example of Hsp72 (Fig. 2B). SH-SY5Y cells transiently co-transfected with PrP C and ΔHSF or GFP as a control were co-cultured with ScN2a or CHO-7PA2 cells, and apoptotic cell death was analyzed after 16 h of co-culturing. ScN2a or CHO-7PA2 cells induced cell death in co-cultured SH-SY5Y cells expressing PrP C and GFP, whereas the co-expression of ΔHSF protected the cells from PrP Sc- or Aβ-induced cell death (Fig. 4C). In a next step we tested whether it is sufficient to express individual chaperones to block PrP Sc- or Aβ-induced toxicity. To analyze chaperones located in different cellular compartments, we chose Hsp72, a cytoplasmic chaperone, and clusterin, an extracellular chaperone that has recently been genetically associated with AD (56, 57). Indeed, expression of either Hsp72 or clusterin was sufficient to inhibit PrP Sc- or Aβ-induced cell death (Figs. 4D and 5).

Hsp72 and ΔHSF but Not Clusterin Protect against a Neurotoxic PrP Mutant-Several PrP mutants can induce neuronal cell death in the absence of infectious prion propagation (for review, see Ref. 8). PrP C can acquire a neurotoxic potential by deletion of the internal hydrophobic domain (HD) (58, 59). Similar to PrP C, PrPΔHD is glycosylated with complex sugars and linked to the outer leaflet of the plasma membrane via a glycosylphosphatidylinositol anchor (35). To assess whether an activated HSR and the expression of chaperones can also interfere with the toxic effects of a pathogenic PrP mutant located at the plasma membrane, we used a cell culture model previously established in our group (47, 60). Upon ectopic expression of PrPΔHD, apoptotic cell death is induced in SH-SY5Y cells. The toxic effects of PrPΔHD are abrogated by co-expression of PrP C (Fig. 6A).
This activity of PrP C has been conclusively documented in various transgenic mouse models and cultured cells; however, the underlying mechanisms are elusive (47, 58-63). To test a possible protective effect of an activated HSR or of individual chaperones, we co-expressed PrPΔHD alternatively with ΔHSF or Hsp72 or clusterin. Indeed, co-expression of either ΔHSF or Hsp72 protected cells against PrPΔHD-induced toxicity (Fig. 6, B and C). In contrast, clusterin, which efficiently interfered with PrP Sc- or Aβ-induced cell death, could not prevent toxic effects mediated by PrPΔHD (Fig. 6D). Importantly, ΔHSF or Hsp72 expression did not reduce PrPΔHD protein levels, nor did expression of PrPΔHD prevent secretion of clusterin.

FIGURE 2. Cells chronically exposed to PrP Sc or Aβ exhibit a higher threshold to mount a heat shock response. A, ScN2a and CHO-7PA2 cells have an impaired heat shock response. N2a, ScN2a, CHO, and CHO-7PA2 cells were subjected to the heat shock conditions as indicated. Hsp72 was analyzed by indirect immunofluorescence using the monoclonal anti-Hsp72 antibody C92. B, expression of a constitutively active mutant of the heat shock transcription factor 1 (ΔHSF) induces expression of Hsp72 in both ScN2a and CHO-7PA2 cells. N2a, ScN2a, CHO, and CHO-7PA2 cells were transiently transfected with wild type HSF (wtHSF) or the constitutively active ΔHSF mutant. 24 h after transfection, expression of Hsp72 was analyzed by indirect immunofluorescence as described under Fig. 2A. Nuclei were stained with ToPro. Scale bars, 10 µm.

DECEMBER 21, 2012 • VOLUME 287 • NUMBER 52

DISCUSSION

Regulation of the cellular stress response is critical to maintain cellular homeostasis and to protect cells from proteotoxicity. Our results indicate that pathogenic oligomers made from different proteins deregulate the HSR; in particular, they can modify the threshold for the stress-induced expression of heat shock proteins.
Furthermore, we present evidence that the toxic effects of three different neurotoxic protein conformers (PrP Sc, Aβ, and PrPΔHD) can be ameliorated by activating the HSR or by increasing the expression of individual chaperones.

The HSR Is Modulated by Different Pathogenic Protein Assemblies; Distinct Effects of Acute and Chronic Exposure-PrP Sc and Aβ form pathogenic protein assemblies within the secretory/endosomal pathway and/or at the plasma membrane. Both protein species are released into the extracellular space, where they can form amyloid plaques. To study how chronic exposure of neuronal cells to these aberrantly folded proteins might modulate the HSR, we made use of previously established cell lines generating neurotoxic PrP Sc or Aβ. ScN2a cells represent a well characterized cell culture model to study pathomechanistic pathways linked to prion diseases. Notably, proteinase K-resistant PrP Sc and infectious prions are released into the cell culture medium. Generation of Aβ is a physiological process; however, it was previously shown that Aβ secreted into the medium of CHO-7PA2 cells is neurotoxic, demonstrated by its ability to potently inhibit long term potentiation in vivo and to interfere with neuronal viability (48, 52, 53). Based on the finding that the HSR is significantly impaired in ScN2a cells (33, 34), we first compared the HSR of CHO to that of CHO-7PA2 cells by analyzing expression of Hsp72, the stress-inducible Hsp70 variant, after moderate, non-lethal heat shock conditions.

FIGURE 3. Acute exposure to PrP Sc lowers the threshold for a heat shock response. A, shown is a schematic model of the co-culture assay. SH-SY5Y cells were plated on glass coverslips. 2 h after transfection, coverslips were transferred into dishes containing a 90% confluent layer of either ScN2a or N2a cells. After 24 h of co-culture, the coverslips were removed, and the SH-SY5Y cells were analyzed. Either luciferase activity was determined in cell lysates (B, C, and D), or SH-SY5Y cells were fixed, permeabilized, and stained for active caspase-3 to assess apoptotic cell death (E). All quantifications were based on at least three independent experiments. B, heat shock induces expression of luciferase. SH-SY5Y cells were transiently transfected with a reporter gene construct (HSE-luc) expressing firefly luciferase under the control of the highly heat-inducible promoter of the human Hsp70B gene. 18 h after transfection, cells were subjected to a heat shock (42°C) for the time indicated or held at 37°C. After an additional 8 h at 37°C, luciferase activity in total cell lysates was determined luminometrically and plotted as fold induction relative to cells held at 37°C. C, acute exposure to PrP Sc does not induce a heat shock response. SH-SY5Y cells transiently transfected with HSE-luc were co-cultured with ScN2a or N2a cells for 24 h at 37°C, and then luciferase activity was analyzed; fold induction relative to cells co-cultured with N2a cells at 37°C is plotted. D, acute exposure to PrP Sc lowers the threshold for a stress response. SH-SY5Y cells transiently transfected with HSE-luc were co-cultured with ScN2a or N2a cells for 16 h at 37°C. Cells were subjected to a heat shock (42°C) for the time indicated or held at 37°C. After an additional 8 h at 37°C, luciferase activity was analyzed as described above. The fold induction relative to cells co-cultured with N2a cells at 37°C is plotted. E, apoptotic cell death was not increased by the heat shock conditions tested. SH-SY5Y cells were co-cultured with ScN2a or N2a cells and heat shocked as described under D. For quantification of apoptotic cell death, SH-SY5Y cells were fixed, permeabilized, and stained for active caspase-3. n.s., not significant; *, p < 0.05; **, p < 0.005; ***, p < 0.0005.
In contrast to ScN2a cells, CHO-7PA2 cells are able to increase expression of Hsp72 in response to heat shock; however, their efficiency to mount a heat shock response is reduced, which is most evident under mild heat shock conditions. To exclude the possibility that the observed effect is specific for CHO-7PA2 cells or the mutant human APP expressed in this line, we show an impaired HSR also in stably transfected SH-SY5Y cell lines overexpressing human wild type APP. Similarly to what we observed in ScN2a cells, forced expression of a constitutively active mutant of HSF1 (ΔHSF) efficiently induced Hsp72 expression in CHO-7PA2 cells.

FIGURE 4. Induction of the heat shock response or increased expression of Hsp72 protects against PrP Sc- and Aβ-induced toxicity. A, scrapie prions induce apoptosis in SH-SY5Y cells expressing PrP C. SH-SY5Y cells expressing the cellular prion protein (PrP C) were co-cultured with ScN2a or N2a cells in the presence or absence of the monoclonal anti-PrP antibody 3F4. B, Aβ secreted by stably transfected cells is toxic to cells expressing PrP C. SH-SY5Y cells expressing PrP C were co-cultured with the indicated cell lines. C, expression of a constitutively active HSF1 mutant (ΔHSF) protects against PrP Sc- and Aβ-induced toxicity. SH-SY5Y cells co-expressing PrP C and ΔHSF were co-cultivated with the indicated cell lines. D, expression of an Hsp70 variant protects against PrP Sc- and Aβ-induced toxicity. SH-SY5Y cells co-expressing PrP C and Hsp72 were co-cultivated with the indicated cell lines. In A-D, after 16 h of co-culture, apoptotic cell death in SH-SY5Y cells was determined as described under "Experimental Procedures". Expression of PrP and Hsp72 was analyzed by Western blotting using the monoclonal anti-PrP antibody 3F4 or the monoclonal anti-Hsp72 antibody C92, respectively. n.s., not significant; *, p < 0.05; **, p < 0.005; ***, p < 0.0005.
These data indicate that the reduced levels of Hsp72 in CHO-7PA2 cells are not due to mutations in the promoter region of the Hsp72 gene but rather to a modulation of the activation/deactivation pathway of HSF1. Such a scenario is in line with our previous finding that the impaired HSR in ScN2a cells is caused by an accelerated deactivation of HSF1 after stress (34). With the help of a co-culture model we were able to study acute effects of pathogenic protein conformers on the HSR. Exposure of SH-SY5Y cells to PrP Sc per se did not induce Hsp72 expression but increased Hsp72 expression in response to heat shock conditions. Mechanistically, it is conceivable that the acute exposure of cells to PrP Sc sensitizes the HSF1 activation pathway, thereby lowering the threshold for efficient Hsp72 expression in response to additional stress. HSF1 activation/deactivation is regulated in the cytoplasmic and nuclear compartments at multiple steps via the interaction with chaperones and by different posttranslational modifications (for review, see Ref. 64). It is difficult to discriminate whether PrP Sc or Aβ modulates any of these steps directly, by interacting with any of the HSF1 modulators, or indirectly, via disruption of proteostasis. Both PrP Sc and Aβ have been found in the cytoplasmic compartment, where they could interact with either HSF1 or chaperones implicated in HSF1 regulation. On the other hand, it has also been shown that accumulation of PrP Sc or Aβ disrupts the proteostasis network. For example, cytosolic PrP Sc inhibits proteasomal activity (65), and Aβ interferes with mitochondrial function (for review, see Ref. 66).

Activation of the HSR or Expression of Cytosolic Hsp72 Protects against Toxic Effects of Aβ, PrP Sc, and a Neurotoxic PrP Mutant-The possibility to harness the stress response therapeutically has been demonstrated in various misfolding disease models previously (22, 26, 64, 67, 68).
Novel in our study are the approaches to assess the ability of cells to mount an HSR under conditions of acute and chronic exposure to PrP Sc and Aβ and to analyze three different neurotoxic proteins under comparable experimental conditions. Moreover, we evaluated the protective effect of individual chaperones located in different cellular compartments. Although the exact mechanisms of how PrP Sc, Aβ, or other pathogenic protein conformers interfere with neuronal function are largely unknown, there appear to be common features. In particular, there is increasing experimental evidence that different toxic protein assemblies are structurally related and can activate similar cellular signaling pathways (6-10, 69, 70). Notably, it has been shown that the cellular prion protein can serve as a cell surface receptor to mediate toxic signaling of both PrP Sc and Aβ (47, 48, 71-85). We cannot exclude the possibility that cytosolic chaperones directly interact with PrP Sc or Aβ. For example, studies in yeast demonstrated that chaperones can interact with and modulate maintenance and propagation of prions (for review, see Refs. 86-91). Similarly, employing Caenorhabditis elegans and yeast as models of polyglutamine-induced toxicity, it was shown that cytosolic chaperones can ameliorate toxic effects of aberrantly folded protein conformers (92-96). However, it is also plausible that the protective activity of ΔHSF and Hsp72 expression is based on a modulation of PrP Sc- and Aβ-induced signaling pathways by cytosolic chaperones. A potential candidate for such an intracellular signaling molecule is the stress kinase JNK, as Hsp72 can alleviate toxic effects of various stressors by suppression of JNK signaling (for review, see Ref. 97). In support of such a scenario are data showing that a JNK inhibitor suppressed toxic effects of PrP Sc (47). A different activity of Hsp72 was recently described in a mouse model of severe muscular dystrophy. This study indicated that Hsp72 can slow progression of disease by interacting with the sarcoplasmic/endoplasmic reticulum Ca2+-ATPase (SERCA) (98). In this context it is important to note that PrP C can restrict Ca2+ influx into the cell by limiting excessive N-methyl-D-aspartate (NMDA) receptor activity. Notably, this inhibitory activity of PrP C is lost upon interaction with Aβ (82, 84, 99).

FIGURE 5. Expression of the extracellular chaperone clusterin protects against PrP Sc- and Aβ-induced toxicity. SH-SY5Y cells co-expressing PrP C and clusterin were co-cultivated with the indicated cell lines. After 16 h of co-culture, apoptotic cell death in SH-SY5Y cells was determined as described under "Experimental Procedures". Expression of PrP was analyzed by Western blotting using the monoclonal anti-PrP antibody 3F4. Secretion of clusterin into conditioned media was determined by TCA precipitation followed by Western blotting using the monoclonal anti-clusterin antibody 41D. n.s., not significant; **, p < 0.005; ***, p < 0.0005.

FIGURE 6. Hsp72 and ΔHSF but not clusterin protect against a neurotoxic PrP mutant. A, expression of PrP C protects against PrPΔHD-induced toxicity. B and C, Hsp72 or ΔHSF interferes with PrPΔHD-induced toxicity. D, the extracellular chaperone clusterin does not prevent toxic effects of PrPΔHD. In A-D, apoptotic cell death in SH-SY5Y cells expressing the indicated proteins was determined as described under "Experimental Procedures". Expression of PrP and PrPΔHD or Hsp72 was analyzed by Western blotting using the monoclonal anti-PrP antibody 3F4 or the monoclonal anti-Hsp72 antibody C92. The presence of clusterin in conditioned media was determined by TCA precipitation followed by Western blotting using the monoclonal anti-clusterin antibody 41D. n.s., not significant; **, p < 0.005; ***, p < 0.0005.
Interestingly, an activated HSR and increased Hsp72 expression also efficiently prevented toxic effects of the pathogenic PrP mutant PrPΔHD. PrPΔHD is located at the plasma membrane and does not form protein assemblies related to PrP Sc or Aβ. Different models have been proposed to explain the toxic activity of PrPΔHD, including the interaction with a yet unidentified receptor or a channel-forming activity of PrPΔHD (for review, see Refs. 70 and 100). Irrespective of the exact mechanism, our results indicate that structurally unrelated pathogenic proteins can activate similar cellular pathways and that PrPΔHD toxicity might be related to that of PrP Sc and Aβ.

An Extracellular Chaperone Interferes with PrP Sc- and Aβ-induced Cell Death but Not with Neurotoxic Signaling of a PrP Mutant-Our study on clusterin revealed interesting activities of this extracellular chaperone. Similarly to Hsp72, clusterin protected against PrP Sc- and Aβ-induced toxicity; however, it could not interfere with toxic effects of PrPΔHD expression. A variety of activities has been reported for clusterin, including modulation of amyloid formation by interaction with prefibrillar structures (101), clearance of extracellular misfolded proteins (102), and sequestration of oligomeric forms of Aβ (103). Thus, we suggest that despite a similar protective activity against PrP Sc- and Aβ-induced toxicity, Hsp72 and clusterin exert different modes of action. Whereas Hsp72 seems to modulate intracellular pathways induced by PrP Sc or Aβ (see above), clusterin apparently interferes with PrP Sc- and Aβ-induced toxicity by a direct interaction with the toxic protein assemblies, most likely in the extracellular compartment. As a consequence, PrP Sc or Aβ no longer interacts with PrP C at the plasma membrane, which in our cell culture model is the major cell surface receptor of PrP Sc- or Aβ-induced toxicity.
The failure of clusterin to interfere with PrPΔHD-induced toxicity indirectly supports such a mode of action, as PrPΔHD-mediated toxicity seems not to be linked to the formation of β-sheet-rich protein assemblies (for review, see Refs. 70 and 100). Our findings emphasize complex interrelations between the HSR and neurotoxic proteins. For example, toxic oligomers can both sensitize and desensitize the HSR in a time-dependent manner. As a consequence, it might be beneficial to interfere with the HSR at an early phase of the disease, whereas HSR stimulation is a possible strategy at later time points. Indeed, the protective effect of Hsp72 and clusterin supports the concept of using forced expression of individual chaperones or pharmacological induction of the HSR to delay progression of neurodegenerative disease. In addition, a combination of chaperones promises additive or synergistic effects, as different chaperones can target distinct steps in neurotoxic signaling pathways.
[6]-Gingerol Decreases Clonogenicity And Radioresistance Of Human Prostate Cancer Cells
The phenolic compound [6]-Gingerol, isolated from Zingiber officinale, has been demonstrated to have antitumor activity against different types of malignant tumours. Prostate cancer is the most common malignancy among males worldwide, being the second leading cause of cancer death in men. In the present study, we investigated the antitumor action of [6]-Gingerol on a human prostate cancer cell line (LNCaP). Our data show that [6]-Gingerol treatment induced a dose-dependent decrease in cell viability. Compared with the vehicle control, the cell viabilities were 79.90 ± 3.56% and 53.06 ± 7.82% when the LNCaP cells were exposed to 150 μg/mL and 300 μg/mL of [6]-Gingerol, respectively. The treatment of LNCaP cells with 300 μM of [6]-Gingerol led to a significant reduction (~25%) in the clonogenic survival of these cells. Furthermore, [6]-Gingerol acted as a radiosensitizer for LNCaP cells. The pretreatment of these cells with [6]-Gingerol significantly enhanced the killing effects of ionizing radiation, with a dose enhancement ratio of 1.25. Our results demonstrate the anti-tumour activities of [6]-Gingerol. Further studies are needed to elucidate the mechanisms involved. © 2019 Maria Helena Bellini. Hosting by Science Repository. All rights reserved.
Introduction
Prostate cancer (PCa) is the second most prevalent malignancy and second leading cause of cancer-related deaths among men in the world [1]. The choice of treatment modality depends on the stage of the disease and the patient's clinical condition [2]. Radical prostatectomy combined with radiotherapy (RT) is the standard treatment for clinically localized PCa. Unfortunately, a significant percentage of RT-treated patients develop locally persistent or recurrent tumours [3,4].
II Cell Culture
LNCaP cells were cultured in RPMI-1640 with 10% Foetal Bovine Serum (FBS) along with 100 U/mL penicillin and streptomycin at a concentration of 300 µg/mL. The cell line was maintained at 37°C in a humidified atmosphere of 5% CO2 and sub-cultured twice weekly.
III Cell viability (MTS)
Cell viability was assessed using a [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium] (MTS)-based assay. The assay is based on the reduction of the tetrazolium salt by the mitochondrial dehydrogenases of intact cells into a purple formazan product. The cells were seeded into 96-well plates (2.5 × 10^3 cells/well) and incubated for 5-6 hr to facilitate attachment. Cells were then treated with 150 and 300 µg/mL of [6]-Gingerol or vehicle alone (0.1% DMSO) in serum-containing media and incubated for 24 hr at 37°C. After incubation, MTS solution was added to the plate at a final concentration of 0.5 mg/mL. The cells were incubated for 2 hr in the dark at 37°C. The resulting MTS products were quantified by measuring the absorbance at 490 nm with an ELISA reader.
IV Clonogenic assay
The clonogenic assay was performed to evaluate in vitro cell survival following treatment with [6]-Gingerol. For the colony formation assay, LNCaP cells (1 × 10^2 cells/dish) were divided into two groups, treatment with 300 µg/mL of [6]-Gingerol and no treatment, and seeded into 60 mm culture dishes, followed by incubation at 37°C. Ten days later, cell colonies were fixed and stained with 20% methanol and 0.5% crystal violet, and colonies of at least 50 cells were counted.
V Radiosensitivity measurements
The LNCaP cells were seeded at a concentration of 100-2,400 cells/dish and were divided into two groups: cells which served as irradiated controls, and cells treated with [6]-Gingerol and irradiated. Cells were irradiated with a 60Co source in the range from 4 to 15 Gy, using the GammaCell 220 irradiation unit of Canadian Atomic Energy Commission Ltd.
(CTR-IPEN). After 14 days of culture in normoxic conditions, cell colonies were fixed and stained with 20% methanol and 0.5% crystal violet; colonies of at least 50 cells were counted. The surviving fraction was calculated as the ratio of the plating efficiency of treated cells to that of control cells. The dose enhancement ratio (DER) was calculated as the dose (Gy) that yielded a surviving fraction of 0.03 for the control divided by that for the [6]-Gingerol-treated cells.
VI Statistical Analysis
The results are presented as the mean ± S.E. Single comparisons of mean values were made via a Student's t-test. Multiple comparisons were assessed by one-way ANOVA, followed by Bonferroni's tests, with GraphPad Prism version 6.0 software. A p-value < 0.05 was considered statistically significant.
VII Results and discussion
Radiotherapy is frequently combined with prostatectomy to treat localised tumours, but many patients present with recurrent or persistent disease [14]. One approach to improve the efficacy of RT is the use of radiosensitizers. The use of natural compounds as radiosensitizers could be a good therapeutic tool in oncology [15,16]. In this work, we first investigated the effect of [6]-Gingerol on the viability of LNCaP prostate cancer cells. Our results demonstrated that [6]-Gingerol treatment induced a dose-dependent decrease in cell viability (Figure 2). Compared with the vehicle control, the cell viabilities were 79.90 ± 3.56% and 53.06 ± 7.82% when the LNCaP cells were exposed to 150 μg/mL and 300 μg/mL of [6]-Gingerol, respectively. The inhibitory effect on cell viability was more prominent at a dose of 300 μM of [6]-Gingerol after 24 h of pre-treatment (P < 0.001). A significant difference in cell viability was also observed between the cells treated with 150 μg/mL and 300 μg/mL of [6]-Gingerol (Figure 2; each bar represents mean ± SE, n = 6; statistical analysis by one-way ANOVA followed by Bonferroni's test).
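The surviving-fraction and dose-enhancement-ratio definitions in the Methods can be sketched in a few lines of Python. This is a minimal illustration of the formulas only (SF as a ratio of plating efficiencies; DER as the ratio of the iso-effect doses at SF = 0.03); the survival-curve numbers below are hypothetical placeholders, not data from this study.

```python
import math

def surviving_fraction(colonies, cells_seeded, plating_efficiency_control):
    """SF = plating efficiency of treated cells / plating efficiency of controls."""
    plating_efficiency = colonies / cells_seeded
    return plating_efficiency / plating_efficiency_control

def dose_at_sf(doses, sfs, target=0.03):
    """Log-linearly interpolate the dose (Gy) giving the target surviving fraction."""
    for (d1, s1), (d2, s2) in zip(zip(doses, sfs), zip(doses[1:], sfs[1:])):
        if s1 >= target >= s2:
            frac = (math.log(s1) - math.log(target)) / (math.log(s1) - math.log(s2))
            return d1 + frac * (d2 - d1)
    raise ValueError("target SF not bracketed by the data")

# Hypothetical survival curves (dose in Gy -> surviving fraction):
doses = [0, 4, 8, 12, 15]
sf_control = [1.0, 0.55, 0.20, 0.05, 0.02]
sf_treated = [1.0, 0.45, 0.12, 0.025, 0.008]

der = dose_at_sf(doses, sf_control) / dose_at_sf(doses, sf_treated)
# DER > 1 indicates radiosensitization by the pre-treatment.
```

With these placeholder curves the treated cells reach SF = 0.03 at a lower dose than the controls, so the computed DER exceeds 1, mirroring the DER of 1.25 reported for [6]-Gingerol.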
We further investigated the effects of 300 μM of [6]-Gingerol treatment in drug-sensitivity and radioresistance assays. The drug-sensitivity effect of [6]-Gingerol on LNCaP cells was determined using a colony formation (clonogenic survival) assay. This assay has previously been employed for the evaluation of drug sensitivity in tumour cell lines [17]. After a 10-day culture period, colonies were stained and counted (Figure 3A). The efficiency of colony formation was 75.11 ± 5.07% when the LNCaP cells were exposed to 300 μg/mL of [6]-Gingerol (P < 0.05) (Figure 3B). These results indicate that [6]-Gingerol decreased the ability of LNCaP cells to form colonies and sustain cell proliferation. A similar effect of [6]-Gingerol was also observed in pancreatic cells [18].
Western Atlantic introduction and persistence of the marine bryozoan Tricellaria inopinata
Most species of bryozoans have short-lived larvae with limited dispersal potential, yet many of these species possess global distributions. In this study, we report the first occurrence from the western Atlantic Ocean of the widely distributed arborescent bryozoan Tricellaria inopinata d'Hondt and Occhipinti-Ambrogi, 1985. This species was collected in Eel Pond, Woods Hole, Massachusetts, in September 2010. At that time, T. inopinata colonies had already formed dense conspecific aggregations at some collection sites, despite the presence of several other arborescent bryozoans. Sites were monitored throughout 2011 to track the success of this introduction, and to assess the reproductive timing of T. inopinata in Eel Pond. To determine the likelihood of T. inopinata persisting in Eel Pond and competing with previously established bryozoans, rates of metamorphic initiation, metamorphic completion, and overall offspring survivability were compared to one of the other dominant arborescent species. Finally, we provide taxonomic details to aid in identifying these animals, consider the potential mode of transport, and discuss the potential ecological implications resulting from this introduction.
Introduction
The unintentional transport of organisms via shipping traffic is a well-known means of dispersal for many marine species (e.g., Allen 1953; Carlton 1985; Carlton and Geller 1993). Indeed, anthropogenic transport has a disproportionate effect in certain phyla, allowing numerous species to achieve distributions that far exceed their inherent dispersal potential. Such is the case for the phylum Bryozoa, which is dominated by sessile species that have short-lived larvae with limited dispersal capability. For instance, Bugula stolonifera Ryland, 1960 releases non-feeding larvae that will usually initiate metamorphosis within four hours of release (e.g., Woollacott et al.
1989; Wendt and Woollacott 1999). Due to anthropogenic dispersal, however, this species can be found in sub-tropical and temperate waters worldwide (see Rodgers and Woollacott 2006; Ryland et al. 2011). Watts et al. (1998) examined the geographic distribution of 197 globally distributed species of bryozoans and found that species abundance, coupled with the animals' ability to foul, best explained the observed distributions. We report here on the introduction, establishment, and potential ecological implications of another widely distributed bryozoan, Tricellaria inopinata d'Hondt and Occhipinti-Ambrogi, 1985. Prior to our study, T. inopinata was not known to occur on the western side of the Atlantic Ocean. In 2010, however, colonies of this arborescent species were recovered in Eel Pond, Woods Hole, Massachusetts, where they have established a persistent population that is poised to spread to surrounding areas. Tricellaria inopinata is a recently described species that was first found in a small portion of the Lagoon of Venice in 1982. Because of the ongoing long-term surveying effort within the lagoon stemming from 1978, it was thought to have been a recent introduction (d'Hondt and Occhipinti-Ambrogi 1985). Although the vector of transport that introduced these animals into the area was unknown, it has been hypothesized that the introduction could have occurred via shipping traffic, or in association with the shellfish fishery (Occhipinti-Ambrogi 1991; 2000). By 1989, T. inopinata colonies could be found throughout much of the lagoon (area ≈ 550 km²), and were seemingly only restricted by areas that routinely received an influx of fresh water (Occhipinti-Ambrogi 1991). Tricellaria inopinata spread throughout the lagoon despite the presence of numerous, previously established bryozoans and was found to overgrow several other species of arborescent bryozoans. Additionally, T.
inopinata was epibiotic on various other organisms, including mussels, sponges, ascidians, and barnacles (Occhipinti-Ambrogi 1991), documenting a generalist larval settlement pattern. Such a pattern could provide these animals with a competitive advantage after being introduced to new areas, particularly when available substrate is a limited resource. Globally, the distribution of T. inopinata is disjointed, with populations reported on the Pacific Coast of North America, Japan, Australia and New Zealand, in addition to those described in the Mediterranean (see Occhipinti-Ambrogi and d'Hondt 1994 and references therein) and northern European waters (De Blauwe 2009). Specimens collected in the Pacific were originally identified as T. occidentalis Trask, 1857, and some confusion existed as to whether or not T. inopinata was synonymous with T. occidentalis (e.g., Gordon and Mawatari 1992). Dyrynda et al. (2000), however, reanalyzed descriptions and specimens of T. inopinata and T. occidentalis, and documented that sufficient anatomical differences existed between them to allow for their identification as separate species. Further, these authors concluded that material collected from the Pacific that was anatomically similar to T. inopinata from the Adriatic and Atlantic should be assigned to T. inopinata. Shortly after its establishment and spread in the Venice Lagoon, T. inopinata was found in the Atlantic in 1996 in the northwest of Spain (Fernández-Pulpeiro 2001). The species was subsequently collected in southern England in 1998 (Dyrynda et al. 2000), in various locations in the Netherlands, Belgium, and France in 2000 (De Blauwe and Faasse 2001), and has recently been reported in Wales and Ireland (Ryland et al. 2009). Prior to our report, however, T. inopinata had not been reported elsewhere in the Atlantic. In this study, we document the first occurrence of T.
inopinata in the western Atlantic Ocean, and provide taxonomic details to aid in identifying this species. Additionally, we provide insight into the reproductive timing of the populations established in Eel Pond, as well as empirical data on offspring survival and timing of metamorphic initiation and completion, in comparison to a dominant Eel Pond bryozoan, B. stolonifera.
Methods
As part of an ongoing research program, bryozoan assemblages in Eel Pond have been continuously monitored since 2006. Tricellaria inopinata was not known to occur in the area, but was found at several collecting sites in 2010. These sites were followed for the remainder of 2010, and throughout 2011, to track the success of this initial introduction, and to assess the survivability and reproductive timing of T. inopinata in Eel Pond. To aid in this, PLEXIGLAS® settling plates (15 × 15 cm) were submerged in early April 2011 under the Woods Hole Marine Biological Laboratory pier. At the time of submergence, none of the species of erect bryozoans that survived over winter in Eel Pond were found to possess polypides. Settling plates were routinely examined for bryozoan ancestrulae using a dissecting scope. To be able to continually monitor new recruitment, the settling plates were scraped clean after examination.
Induction of larval release
To assess reproductive effort over time, as well as to procure larvae for subsequent experimentation, bryozoan colonies were routinely collected and induced to release larvae. Bryozoans collected from Eel Pond were returned to the laboratory and maintained overnight in 38-liter glass aquaria equipped with a power filter providing water flow and aeration. Unfiltered seawater (UFSW) collected from Eel Pond concurrent with animal collection was used in the aquaria, and the temperature was set to mimic ambient water temperature at the time of collection. To induce larval release, dark-adapted colonies were removed from the aquaria, transferred to 1.5-liter glass bowls containing UFSW, and exposed to fluorescent light. Many bryozoan larvae are positively phototactic on release and will aggregate on the illuminated side of the bowl, facilitating larval collection. Groups of larvae were transferred to polystyrene weighing dishes, which were then placed in the dark to induce larval settlement. After the larvae settled, dishes were transferred into the aquaria and maintained there until completion of metamorphosis.
Offspring survival and timing to metamorphic initiation and completion
As a means to assess overall health of T. inopinata colonies in Eel Pond, as well as to determine the likelihood of this species establishing and competing with other bryozoans, experiments were conducted examining the time to metamorphic initiation, time to metamorphic completion, and overall offspring survivability, as compared to one of the other dominant arborescent bryozoans in Eel Pond, B.
stolonifera. Gravid colonies of both species were collected on 27 July 2011 and maintained in the dark in glass aquaria. Larval release was conducted as previously described with one exception. Approximately 45 min after exposure to light, all larvae from both species were removed from the glass dishes and discarded. Larval release was allowed to occur for an additional 15 min, after which larvae were sampled and immediately utilized in the experiment. With this modification, we were able to ensure that larvae only differed in age by up to 15 min. Ten groups of larvae (T. inopinata: n = 14-23; B. stolonifera: n = 19-24) were transferred to polystyrene weighing dishes and placed in the dark, as light has previously been shown to effectively prevent metamorphic initiation in bryozoans under laboratory conditions (e.g., Wendt 1996). Initiation of metamorphosis was assessed hourly for a total of four hours, after which free-swimming larvae were counted and removed, and the dishes then submerged in glass aquaria. Metamorphic completion was initially assessed at 18 h after release, and then in two-hour intervals until both species had achieved a 95% completion rate. The experiment was allowed to continue for a total of 72 h, at which time those individuals that had not completed metamorphosis were counted and the experiment terminated.
Taxonomic description
Colonies of Tricellaria inopinata from Eel Pond appear whitish-grey to straw-colored and grow as erect, compact tufts (Figure 1), generally not exceeding 4 cm in height. Superficially, colonies resemble two other common Eel Pond bryozoans, Bugula stolonifera and B. simplex Hincks, 1886; due to heavier calcification in T. inopinata, however, the species can be distinguished by touch. Additionally, anatomical differences become readily apparent under even slight magnification, and the following characteristics can be used to distinguish T. inopinata from other erect bryozoans. In colonies of T.
inopinata, zooids are arranged bi-serially and do not possess vibracula (Figure 2). Moveable pedunculate avicularia, similar to those described in Bugula spp., are not found in T. inopinata. Large lateral avicularia, however, are found on many, but not all, zooids. Pronounced spination about the operculum of the zooid is common in this species, with generally two internal and three external spines for each zooid. The most basal of the three external spines is often forked, but this characteristic is not constant within colonies. The scutum is prominent in this species, but its shape can vary dramatically within an individual colony, from slender to broad and from forked to wavy (Figure 2). In some instances, the scutum was missing entirely from zooids immediately preceding a bifurcation (Figure 2a). Ovicells are situated distally to the maternal zooid and are multi-pored. The height and width of the ovicells were approximately equal (n = 20), a consistent characteristic within and among colonies. Larvae of T. inopinata have previously been described in detail (Occhipinti-Ambrogi and d'Hondt 1994). They are barrel-shaped, non-feeding coronate larvae, also referred to as buguliform. Expanded coronas extend aborally, equatorially, and orally in position, with small pallial sinuses (type AEO/ps) (see Zimmer and Woollacott 1977). Early-stage embryos can appear pink while in the ovicell, but larvae are cream-colored with orange-red eyespots. As with the larvae of many bryozoans that brood their embryos, T.
inopinata larvae are positively phototactic on release and rapidly initiate metamorphosis once sequestered in the dark. Completion of metamorphosis results in a squat, ovoid ancestrula that lacks a scutum (Figure 3). Ancestrular spines are pronounced, although spine length is highly variable. Spines generally number between 8 and 10, and can be arranged symmetrically or asymmetrically around the operculum. Characteristic of most of these newly metamorphosed individuals are two rhizoids that often proceed down the length of the ancestrula and expand into broad to tripartite tips (Figure 3).
Observation of occurrence in Eel Pond
Tricellaria inopinata colonies were first collected from Eel Pond in September 2010 (salinity = 34 psu, temperature ≈ 25ºC). During a routine collection conducted in July, these animals were not observed. Rather, collection sites were dominated by two arborescent bryozoans common to the area, B. stolonifera and B. turrita Desor, 1848. Prior to the September collection, however, there was a dieback of both of these species, possibly due to decreased salinity in Eel Pond resulting from heavy rainfall in late August and early September (http://water.weather.gov/precip/). At the time of first observation, T. inopinata could be found attached to submerged substrates throughout Eel Pond. Indeed, because of the decrease in abundance of the two previously dominant bryozoans, T. inopinata had already begun to form dense aggregations at several sites. Additionally, T. inopinata colonies were found to be epibiotic on several different Eel Pond organisms, including fucoid algae and the solitary ascidian Styela clava Herdman, 1881 (Figure 4), as well as on surviving B. stolonifera and B. turrita colonies. Aggregations of T. inopinata persisted throughout the fall, but began to diminish in early December (34 psu, 5ºC). By January 2011 (35 psu, 4ºC), T.
inopinata colonies had died back, leaving only a few sporadic, isolated colonies. Some of these colonies survived the near-freezing temperatures and ice formation common to Eel Pond in the winter and persisted through March 2011 (36 psu, 5ºC), but no functional autozooids were found in any collected colony through this time.
Reproductive timing of Tricellaria inopinata in Eel Pond
Collection sites within Eel Pond were monitored weekly beginning in March 2011 for initial colony re-growth. No sign of re-growth was observed until late May (35 psu, 14ºC). Colonies that overwintered were found to possess newly budded autozooids at the tips of the colonies, although no functional autozooids were found in the interior of the colony. Additionally, during this time period, numerous small colonies were observed, potentially having arisen from overwintering rhizoids. Functional autozooids were found throughout these smaller colonies. None of the autozooids on any collected colony possessed a filled ovicell, nor were any ancestrulae found on submerged settling plates. By early June (35 psu, 17ºC), colonies were found to possess brooded embryos, which appeared pink in the multi-pored ovicells. None of the collected colonies were found to release larvae after exposure to light. Within one week, however, collected colonies were found to release larvae, and numerous T.
inopinata ancestrulae and juveniles were found growing on the submerged plates (33 psu, 20ºC). By late June (33 psu, 21ºC), collected colonies possessed numerous brooded embryos within the colony, and exposure to light resulted in the release of thousands of larvae from collected colonies. High rates of larval release were found throughout the summer and fall (31-35 psu, ≤ 25ºC), but began to decrease in mid-December (35 psu, 8ºC). Reduced larval output was observed in collected colonies until early January (35 psu, 6ºC), when approximately 35 colonies released only 8 larvae. None of these larvae initiated metamorphosis, and no brooded embryos were found in any colony examined after release.
Offspring survival and timing to metamorphic initiation and completion
Both species tested experienced high rates of metamorphic initiation and completion over the duration of the experiment (Figure 5). For B. stolonifera, 213 out of 216 (98.6%) of the larvae sampled initiated metamorphosis, and 210 (98.6%) of those that initiated completed metamorphosis. For T. inopinata, 174 out of 185 (94.1%) larvae initiated metamorphosis, of which 169 (97.1%) completed metamorphosis. Timing for metamorphic initiation and completion was similar for the two species as well. For B. stolonifera, 90% of sampled individuals initiated metamorphosis within 1 h, while 90% completed metamorphosis within 30 h after release (Figure 5). For T. inopinata, 90% of sampled individuals initiated metamorphosis within 2 h, and 90% had completed metamorphosis within 32 h.
Taxonomic verification
As documented in previous descriptions of Tricellaria inopinata (e.g., d'Hondt and Occhipinti-Ambrogi 1985; Dyrynda et al.
2000; De Blauwe and Faasse 2001), colonies collected in Eel Pond displayed a high degree of anatomical variation (see Figures 2 and 3). For instance, spine count, pattern, and size were found to vary across ancestrulae. In adults, the presence of a bifid spine was inconsistent from zooid to zooid, as was the presence of lateral avicularia. Perhaps the most striking example, however, occurred in the shape and size of the scutum, which displayed large amounts of variation even within a colony. Indeed, it was this type of anatomical variation that initially led to confusion as to the proper identification of these animals, relative to previous species' descriptions of other Tricellaria congeners. In their description of the bryozoans of New Zealand, Gordon and Mawatari (1992) remarked that it was puzzling that T. inopinata was erected as a new species, as the description given by d'Hondt and Occhipinti-Ambrogi (1985) was within the range of variation for T. occidentalis. As previously mentioned, however, Dyrynda et al. (2000) re-analyzed descriptions and specimens of T. inopinata and T. occidentalis, and concluded that the scutum was one of the distinguishing features for these species, describing the scuta in T. occidentalis as "invariably slender or only slightly spatulate." Further, according to d'Hondt and Occhipinti-Ambrogi (1985) and De Blauwe and Faasse (2001), only two other Tricellaria species possess multi-pored ovicells: T. occidentalis and T. prasescuta Osburn, 1950. In T. occidentalis, the ovicell is reported to be 1.5-2.0 times wider than it is high; in T. prasescuta the ovicell is reported to be 1.5-2.0 times higher than it is wide; while in T. inopinata the ovicell height and width are roughly equal. Hence, colonies collected in Eel Pond are characteristic of T. inopinata.
Vector of transport
As with the invasion by T.
inopinata into the Mediterranean, it remains unclear how these animals were transported across the Atlantic Ocean and introduced to the Woods Hole, MA region. Occhipinti-Ambrogi (1991, 2000) suggested shipping traffic and the shellfish fishery as likely vectors that introduced T. inopinata to the Mediterranean. Due to the lack of an appropriate aquaculture fishery in the Woods Hole region, it is unlikely that T. inopinata could have been introduced in such a manner. Therefore, shipping traffic appears to be the most likely vector. As previously stated, shipping has been implicated in the dispersal of many marine organisms. For instance, Schwaninger (1999) provided convincing genetic evidence that the invading population of the bryozoan Membranipora membranacea Linnaeus, 1767 in the Gulf of Maine stemmed from populations in northern Europe. For the introduction of T. inopinata, there are no major shipping lanes that include the Woods Hole region, but there are vessels that routinely conduct trans-Atlantic voyages that could potentially connect Woods Hole to northern Europe or the Mediterranean. The Woods Hole Oceanographic Institution possesses several ships capable of trans-Atlantic voyages. For example, in 2008 the Research Vessel (R/V) Knorr travelled from Woods Hole to northern Europe and back in late summer and fall (http://strs.unols.org/Public/diu_schedule_view.aspx?ship_id=10037&year=2008). More recently, the R/V Knorr travelled to Aveiro, Portugal, in July 2010, and returned to Woods Hole on August 1, 2010 (http://strs.unols.org/Public/diu_schedule_view.aspx?ship_id=10037&year=2010). Interestingly, T. inopinata was reported in a nearby, heavily used port in Ria de Aveiro (Marchini et al.
2007). While it seems unlikely that an erect bryozoan colony attached to a ship's hull could survive the trans-Atlantic voyage, it is worth noting that many arborescent bryozoans undergo an annual cycle of colony die-back and re-growth. During this cycle, the arborescent portion of the colony will die off, most likely due to deterioration in environmental conditions. When conditions improve, however, colonies will grow back, presumably stemming from the root-like projections that remained attached to the substrate. Numakunai (1967) found that B. neritina Linnaeus, 1758 rhizoids collected during winter budded zooids after approximately 10 days of incubation at 20ºC. Hence, if even a portion of the rhizoids survived the trans-Atlantic trip, it remains possible that at the completion of the voyage, a new zooid could form that would eventually develop into a reproductively mature colony.
Ecological implications
Shortly after its initial description in the Venice Lagoon, T. inopinata was documented to undergo a rapid range expansion, colonizing most of the lagoon and spreading to various localities in the northeastern Atlantic (e.g., Occhipinti-Ambrogi 1991; De Blauwe and Faasse 2001). Further, this species not only spread rapidly, but also appears to have had a negative effect on previously established bryozoan populations. For instance, T. inopinata in the Venice Lagoon was initially observed to coexist with several bryozoan species that possessed similar growth forms (Occhipinti-Ambrogi 1991). Shortly thereafter, however, the previously established bryozoan populations decreased in abundance, such that T. inopinata became the dominant species at these collection sites (see Occhipinti-Ambrogi 2000). A similar phenomenon could be occurring in Eel Pond. Prior to 2010, the dominant bryozoans in Eel Pond for the majority of the reproductive season were Bugula stolonifera and B. turrita. Indeed, B.
stolonifera was commonly found forming dense aggregations on much of the available substrate, essentially carpeting floating docks and pier pilings where it occurred. After the observed introduction of T. inopinata in 2010, all three species were found to become abundant late in the reproductive season. Throughout 2011, however, B. stolonifera never reached the abundance that had been observed in previous years, and by mid-season was completely absent from several collecting sites, which were dominated by T. inopinata. Bugula turrita was also found in reduced abundance, although its decline was not as drastic. It is unclear why this decrease in abundance occurred, but it could be a consequence of reproductive timing and competitive advantage by T. inopinata. Although the timing to metamorphic initiation and completion and overall survival between B. stolonifera and T. inopinata were similar (Figure 5), there were differences in the onset of reproduction in the two species. In 2011, the onset of reproduction in T. inopinata occurred in early June, and by mid-June, numerous ancestrulae and young colonies were found on the submerged settling plates. In contrast, the onset of reproduction in B. stolonifera did not occur until late June. This difference in timing could have provided T. inopinata sufficient time to recruit to available substrate and begin growing, preventing B. stolonifera from forming dense aggregations where it had done so previously. Alternatively, the ability of T. inopinata to overgrow local species, as has been previously documented, could be the overriding factor. Throughout the summer in Eel Pond, numerous T. inopinata ancestrulae and young colonies were found attached to B. stolonifera and B. turrita colonies. Conversely, very few T.
inopinata colonies were observed with nonconspecific individuals attached. Conspecific larval settlement, whereby larvae attach and metamorphose on adults of the same species, can be common in some bryozoans (e.g., Johnson and Woollacott 2010). The ability of T. inopinata larvae to foul and grow on other bryozoan species, coupled with the inability of other species to settle on T. inopinata adults, could provide a competitive advantage that allows this species to outcompete previously established arborescent bryozoans, even after a recent introduction. It remains unknown what effect this type of settlement has on the growth and reproductive output of the previously established bryozoans. What appears clear, however, is that within a year of its first observance in Eel Pond, T. inopinata has established itself as the dominant bryozoan despite the presence of several previously established arborescent species, and appears poised to spread to surrounding areas. The species' rapid range expansion and increase in the northeastern Atlantic since its introduction to European and British shores in the 1990s, particularly its recent success in southern England (Arenas et al. 2006), highlights the need for periodic monitoring of nearby coastal areas.

Figure 1. Tricellaria inopinata colony (A) and close-up of an individual branch (B) showing biserially arranged autozooids, large lateral avicularia, and filled ovicells. The specimen was fixed in 95% EtOH prior to imaging, causing the embryos to lose pigmentation and appear white. Scale bars = 5 mm (A) and 150 µm (B).

dishes were transferred into the aquaria and maintained there until completion of metamorphosis.

Figure 2.
SEM of non-ovicellate (A) and ovicellate (B) Tricellaria inopinata autozooids. The scutum, a modified spine that partially covers the frontal membrane, is prominent and highly variable, ranging in shape from slender to broad and from forked to wavy. Occasionally, it is missing entirely (A). Autozooid spines are prominent as well, and the most basal of the 3 external spines is often forked (B). Scale bars = 200 µm (A) and 150 µm (B).

Figure 5. Percentage of individuals initiating and completing metamorphosis over time for Bugula stolonifera and Tricellaria inopinata offspring. Similar rates of overall survival, metamorphic initiation, and metamorphic completion were observed between the two species throughout the duration of the experiment. Bars = 1 S.E.
Investigation on Durability Performance in Early Aged High-Performance Concrete Containing GGBFS and FA

The significance of concrete durability increases since RC (Reinforced Concrete) structures undergo degradation due to aggressive environmental conditions, which affects structural safety and serviceability. Steel corrosion is the major cause of the unexpected failure of RC structures. The main cause of corrosion initiation is the ingress of chloride ions prevailing in the environment. Hence quantitative evaluation of chloride diffusion becomes very important to obtain a chloride diffusion coefficient and the resistance to chloride ion intrusion. In the present investigation, 15 mix proportions with 3 water-to-binder ratios (0.37, 0.42, and 0.47) and 3 replacement ratios (0, 30, and 50%) were prepared for HPC (high-performance concrete) with fly-ash and ground granulated blast furnace slag. The chloride diffusion coefficient was measured under nonstationary conditions. In order to evaluate the microstructure characteristics, porosity through MIP was also measured. The results of compressive strength, chloride diffusion, and porosity are compared with electrical charges. This paper deals with the results of concrete samples exposed for only 2 months, but it is a part of a total test plan for 100 years. From the work, time-dependent diffusion coefficients in HPC and the key parameters for durability design are proposed.
Introduction

Reinforced concrete (RC) is an economical and versatile construction material used in civil infrastructure such as bridges, buildings, and nuclear reactors, representing construction investments of millions of dollars [1,2]. However, the long-term durability and service life of RC structures have been among the major problems faced by the construction industry for the past few decades [3]. The durability of reinforced concrete is largely affected by the migration of aggressive ions (chloride and sulphate) via capillary absorption and hydrostatic pressure through the cementitious matrix. The ions reach the reinforcing rebar and destroy the passive film [4,5], subsequently corroding the steel. Chloride-induced corrosion of RC structures has become a major problem worldwide [6][7][8], especially in buildings, bridges, parking decks, tunnels, and other structures exposed to seawater or deicing salts. As a result of this deterioration of RC structures, repair costs nowadays constitute a major part of spending on infrastructure. Further, corrosion of steel not only damages the RC structure but also causes safety concerns.
Corrosion of steel in RC structures can be mitigated by adopting various preventive measures, namely, cathodic protection, corrosion inhibitors, coating of the steel rebar, coating of the concrete, use of blended cement, and realkalization of the concrete [9][10][11][12][13][14]. The use of corrosion inhibitors is often considered the more practical method, although its maintenance cost is high. An alternative way to prevent corrosion is to improve the impermeability, resistance to chloride ion diffusion [15], and abrasion resistance of high-performance concrete (HPC) through the partial replacement of cement with industrial byproducts (supplementary cementing materials (SCMs)) such as fly-ash (FA) [16], ground granulated blast furnace slag (GGBFS) [17], silica fume, rice husk ash [18], and micro silica. In South Korea, the annual production of FA in coal-fired power plants was 8.5 million tons, and that of GGBFS (produced by POSCO and Hyundai Steel) was about 12 million tons/year as of 2011 [19]. Compared to Ordinary Portland Cement (OPC), these byproducts impose a very low environmental load. Hence, they have been widely used as components of low carbon emission concrete [20]. Replacing OPC with GGBFS and FA in high-performance concrete (HPC) and self-compacting concrete (SCC) is becoming increasingly common in civil engineering structures [21]. Utilizing FA and GGBFS for the production of HPC and SCC not only reduces the total material cost for the construction industry but also results in considerable benefits to the environment [22]. In addition, the use of FA and GGBFS in SCC has a unique and distinctive effect on the properties of the HPC. However, SCC in which OPC is replaced with GGBFS and FA has a lower resistance to carbonation than pure OPC-blended SCC, and this effect appears to be more pronounced with an increase in the replacement level of FA and GGBFS [23]. Moreover, utilizing FA and GGBFS in SCC is more effective in resisting chloride ion and sulphate
ion migration and reducing the capillary pores. The chloride ion diffusion coefficient of SCC mixtures with FA and GGBFS was lower than that of the control SCC [24][25][26]. In addition, GGBFS and FA have pozzolanic activity, which is attributed to the presence of SiO2 and Al2O3. These react with calcium hydroxide during cement hydration to form additional calcium silicate hydrate (CSH) and calcium aluminate hydrate (CAH), which are effective in forming a denser matrix leading to higher strength and better durability [27][28][29]. Furthermore, according to Yuan et al., GGBFS and FA increase chloride binding due to the high content of aluminate hydrates, and hence chloride migration is reduced in HPC [30]. The research field on the evaluation of chloride diffusion in concrete is growing, with consideration for diffusion, permeation [31], and the binding capacity of chloride ions [32,33]. Recently, numerical techniques covering chloride diffusion in partially saturated conditions [34], chloride behavior in concrete with early-age cracking [35], and microstructure formation modeling in high-performance concrete [36,37] were proposed based on behavior in early-age concrete considering hydration and micro pore structure.
In this work, 4000 specimens for 15 mix proportions with various water-to-binder (w/b) ratios (0.37, 0.42, and 0.47) and various replacement percentages of GGBFS and FA (0, 30, and 50%) were prepared and cured in tap water. As a part of the work, half of the samples are being analyzed at KCL (Korea Conformity Laboratory, Korea) by exposure to tidal, atmospheric, and submerged conditions in order to assess long-term durability over 100 years. The apparent diffusion coefficient and porosity will be measured using RCPT (rapid chloride penetration test) and MIP. The samples will be measured at different time intervals: 0.6, 1, 2, 3, 4, 5, 7, 10, 20, 40, 80, and 100 years. The compressive strength, electrical charges, and chloride diffusion coefficients under nonsteady state and steady state conditions were measured. In order to evaluate the microstructure characteristics, porosity through MIP was also measured. From this work, time-dependent diffusion coefficients in HPC and key parameters for durability design are proposed. The results showed that partial replacement of OPC with GGBFS and FA contributed considerable improvement to various properties of HPC.

Concrete Mix Proportion. In this study, a total of 15 HPC mixtures were prepared; partial replacement of OPC with GGBFS and FA at three replacement ratios (0, 30, and 50%) was considered for the HPC mixtures. Three water-to-binder (w/b) ratios (0.37, 0.42, and 0.47) were used. The details of the mixing proportions of the HPC are shown in Table 2.

Compressive Strength. The compressive strength of the various concrete mixes was determined according to ASTM C39/C39M [39] using cylindrical specimens of 100 mm in diameter and 200 mm in height, cast with different percentages of FA and GGBFS (HPC). After curing in room conditions for 24 hours, the specimens were demoulded and immersed in water for curing at 25 °C.
The compressive strength was measured after 28 and 49 days of curing; the concrete cylinders were tested in a compression-testing machine of 100 t capacity at a loading rate of 140 kN/min. The ultimate load at which the cylinder failed was taken.

Rapid Chloride Ion Penetration Test (RCPT). The rapid chloride ion permeability test (RCPT) was conducted in accordance with ASTM C1202-10 [38] using a concrete disc of 100 mm diameter and 50 mm thickness. After 28 days of curing, the concrete specimens were subjected to the RCPT test by impressing a voltage of 60 V between two containers filled with 3% NaCl solution and 0.3 N NaOH solution, as shown in Figure 1. The electrical current was measured every 30 minutes for up to 6 hours. The amount of electrical current passing through the specimen was measured, and the total charge passed (in coulombs) was used as an indicator of the resistance of the concrete to chloride ion penetration. The total charge passed through the concrete specimens was calculated using the following formula [38]:

Q = 900 (I0 + 2 I30 + 2 I60 + ... + 2 I300 + 2 I330 + I360), (1)

where Q is the charge passed (coulombs), I0 is the current (amperes) immediately after the voltage is applied, and It is the current (amperes) at t min after the voltage is applied.

Chloride Diffusion.
The chloride diffusion coefficient method is an extension of the RCPT test. The chloride diffusion coefficient values were calculated for two conditions: the steady state condition based on the results from ASTM [38,40] and the nonsteady state condition from Tang's method [40,41]. In the RCPT test, the duration is 6 hours, during which the specimen is in a nonsteady state condition. The diffusion cell and experimental setup are provided in ASTM C1202 [38], and the calculation of the diffusion coefficient is performed by an electrical method proposed in previous research [40,41]. Silver nitrate solution (0.1 N AgNO3) is used as an indicator [42,43]. In this test, the chloride diffusion coefficient in the nonsteady state condition was calculated using (2a) and (2b), and that in the steady state condition (effective diffusion) using (3):

D_cpd = (R T L / z F U) · (x_d − α √x_d) / t, (2a)

α = 2 √(R T L / z F U) · erf⁻¹(1 − 2 c_d / c_0), (2b)

where D_cpd is the diffusion coefficient in the nonsteady state or steady state condition from RCPT (m²/s), R is the universal gas constant (8.314 J/mol K), T is the absolute temperature (K), L is the thickness of the specimen (m), z is the ionic valence (=1.0), F is the Faraday constant (=96,500 J/V mol), U is the applied potential (V), t is the test duration time (s), x_d is the penetration depth at which the color changes when using the colorimetric method [40,42], c_d is the chloride concentration at which the color changes, c_0 is the chloride concentration in the upstream solution (mol/l), erf⁻¹ is the inverse error function, and the experimental constant was taken as 23,600 (m⁻¹) in this study. D_eff denotes the effective diffusion coefficient in the RCPT test.

Porosity Measurement.
The porosity of concrete specimens at the age of 28 days was also investigated through the mercury intrusion porosimetry (MIP) test (ASTM D 4404) [44]. MIP (Micromeritics, Autopore IV 9520, USA) has been one of the most widely used methods to analyze the pore structure of HPC samples. When preparing samples for the MIP test, aggregates were excluded from sampling, and samples of about 1 cm³ volume (2.4 to 3.6 g) were placed in the porosity analyzer; one sample was used for each measurement. The intruded volume could be read to an accuracy of ±0.001 cm³.

Compressive Strength. The compressive strength of HPC with different percentages of GGBFS and FA at the ages of 28 and 49 days, tested as per KS F 2405, is shown in Figure 2. It was observed from the results that, at the age of 28 days and above, there was an increase in compressive strength up to the 30-50% replacement level of GGBFS at a w/b ratio of 0.37. The increase in the 28-day strength of the HPC mixes is due to the improvement in the effectiveness of the mineral admixture. The pozzolanic action of GGBFS reacting with OPC yields early strength at lower w/b ratios. Similar results were also reported by Tripathi et al. [45]. They reported that, at the age of 28 days at a lower w/b ratio, the compressive strength of concrete with ISF slag was higher than that of the control mix concrete even at a 60% replacement level. Moreover, the compressive strength values of GGBFS concrete at the 49-day curing period were higher when compared to all the other HPC mixes, where the mix proportions of OPC : GGBFS were 70 : 30 and 50 : 50 at a w/b ratio of 0.37.
On the other hand, the compressive strength of HPC with different percentages of FA at the ages of 28 and 49 days increased up to the 30% replacement level of FA at a w/b ratio of 0.37 and decreased thereafter (50% replacement of FA). The increase in the compressive strength of HPC at up to 30% FA replacement when compared with the control mix may be due to the pozzolanic action and packing effect of the FA particles. FA contains more silica and alumina; these react with calcium hydroxide to form C-A-S-H and C-S-H, which contributes to the higher strength of 30% FA in HPC. Further, on replacing FA up to 50%, the strength was reduced due to weak bonding between the cement paste and fly-ash particles and insufficient alkali from the reduced OPC amount.

In addition, the compressive strength was slightly reduced with increasing w/b ratio (0.42 and 0.47) in the HPC mixes at both GGBFS and FA replacement levels. An increasing w/b ratio dilutes the cement paste and creates more water-filled pore space between the grains, that is, fewer nuclei for the hydrates in each volume unit. Hydrates have to grow larger and larger to cover the spatial gap (the water) between them and to interact and develop strength, either physically (interlocked growth) or chemically (e.g., van der Waals attraction).

Rapid Chloride Ion Penetration Test (RCPT). The rapid chloride permeability test was conducted to investigate the performance of HPC against chloride ingress. A lower total charge passed through the concrete matrix means a higher resistance to chloride penetration. Table 3 shows the classification of concrete for chloride ion penetrability based on the total charge passed, per ASTM C1202 [38]. Figure 3 shows the RCPT test results at the end of 28 days for the HPC with various percentages of GGBFS and FA.
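The charge values reported below come from the ASTM C1202 integration of the 30-minute current readings over 6 hours, as given in equation (1). A minimal sketch of that integration (the function and variable names are illustrative, not from the paper):

```python
def total_charge(currents_a):
    """Total charge passed (coulombs) per ASTM C1202.

    currents_a: the 13 current readings (in amperes) taken every
    30 min for 6 h, i.e. at t = 0, 30, 60, ..., 360 min.
    """
    if len(currents_a) != 13:
        raise ValueError("expected 13 readings: t = 0, 30, ..., 360 min")
    # Trapezoidal rule with 30-min (1800 s) steps:
    # Q = 900 * (I0 + 2*I30 + 2*I60 + ... + 2*I330 + I360)
    return 900.0 * (currents_a[0] + currents_a[-1] + 2.0 * sum(currents_a[1:-1]))
```

For a constant current of 0.1 A over the full test, this returns 2160 C, which is simply 0.1 A sustained over the 21,600 s test duration.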
Figure 3 illustrates that cement replacement with GGBFS in HPC (w/b of 0.37) gave lower coulomb values when compared to the control mix. For example, the 30% and 50% replacement levels of GGBFS at a w/b ratio of 0.37 gave charge passed values of 1659.6 and 829.8 coulombs. This may be due to pozzolanic materials like GGBFS reacting with Ca(OH)2 to form C-S-H gel. The C-S-H gel considerably reduces the pores between fine aggregate and coarse aggregate, so the charge passed values may be significantly reduced in the GGBFS-replaced HPC. At the same time, with increasing w/b ratio (0.42 and 0.47), the charge passed coulomb values slightly increase. This may be due to the more dilute cement paste, which creates more water-filled pore space between the grains.

Figure 3 also shows the charge passed results for cement replacement with FA in HPC. From the figure, it is observed that 30% FA in HPC has lower values when compared to the control mix. For example, the charge passed in the system with 30% replacement and a w/b ratio of 0.37 is 2012.4 coulombs. At 50% FA replacement, however, the charge passed (2541 coulombs) increases significantly compared to 30% FA in HPC. These results indicate that the presence of 50% GGBFS or 30% FA in HPC can be more efficient in preventing chloride ion migration.

Nonsteady State Condition. The chloride diffusion coefficients in the nonsteady state condition (6 hours during the test) in HPC containing various replacement ratios (0, 30, and 50%) and w/b ratios after 28 days of curing are shown in Figure 4. In Figure 4(a), the results for HPC with a replacement level of 50% GGBFS at w/b ratios of 0.37, 0.42, and 0.47 at 28 days are 6.4640 × 10⁻¹², 6.5330 × 10⁻¹², and 7.1102 × 10⁻¹² m²/s, respectively. The incorporation of GGBFS into the HPC resulted in a lower chloride ion diffusion coefficient in the nonsteady state condition when compared to the control mix.
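The nonsteady-state coefficients above follow the Tang-type electrical method described in the Chloride Diffusion section. A simplified sketch of that calculation (it omits the voltage-drop corrections some variants of the method apply, and the numeric inputs in the test below are illustrative, not measured values from the paper):

```python
import math
from statistics import NormalDist

R_GAS = 8.314        # universal gas constant, J/(mol K)
FARADAY = 96485.0    # Faraday constant, C/mol

def erfinv(y):
    """Inverse error function via the standard normal quantile (stdlib only)."""
    return NormalDist().inv_cdf((1.0 + y) / 2.0) / math.sqrt(2.0)

def d_nssm(U, L, T, t, x_d, c_d, c_0, z=1.0):
    """Nonsteady-state migration coefficient (m^2/s), Tang-style formula.

    U: applied voltage (V); L: specimen thickness (m); T: temperature (K);
    t: test duration (s); x_d: AgNO3 colour-change penetration depth (m);
    c_d, c_0: colour-change and upstream chloride concentrations (mol/l).
    """
    k = (R_GAS * T * L) / (z * FARADAY * U)
    alpha = 2.0 * math.sqrt(k) * erfinv(1.0 - 2.0 * c_d / c_0)
    return k * (x_d - alpha * math.sqrt(x_d)) / t
```

With plausible test inputs (60 V, 50 mm disc, 6 h, a 20 mm penetration front), the result lands in the 10⁻¹¹ m²/s range, the same order as the coefficients reported in Figures 4 and 5.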
Figure 4(b) shows the results for HPC with FA from 0% to 30%, where the diffusion coefficients of the HPC drastically decrease. At the 50% FA replacement level, the chloride diffusion coefficient in the FA-blended HPC begins to increase with increasing w/b ratio, and it also increases significantly when compared to the lower w/b ratios of HPC. Zhao et al. [21] also reported that, at the age of 28 days at a lower w/b ratio, the chloride diffusion coefficient of concrete with FA was lower than that of the control mix even at a 30% replacement level. Further, they also reported that, on increasing the FA replacement level up to 50%, the chloride ion diffusion coefficient also increased.

Steady State Condition (Effective Diffusion). The chloride diffusion coefficients in the steady state condition in HPC with replacement of GGBFS and FA in the different w/b ratio systems are shown in Figure 5. The chloride diffusion coefficient also decreases at the 30% and 50% GGBFS replacement levels in HPC when compared to the control mix. For example, the values for the control mix and 50% GGBFS in HPC (w/b 0.37) are 1.2053 × 10⁻¹¹ m²/s and 0.2916 × 10⁻¹¹ m²/s, respectively. The reduction in chloride diffusion is 75.8% when compared with the control mix. This diffusion coefficient data confirms the better performance of 50% GGBFS in HPC.
On the other hand, it was observed that 30% FA in HPC has a lower chloride diffusion coefficient when compared to the control mix. For example, the values for the control mix and 30% FA in HPC (w/b 0.37) are 1.2053 × 10⁻¹¹ m²/s and 0.6137 × 10⁻¹¹ m²/s, respectively. The reduction in chloride diffusion is 49.1% when compared to the control mix. At the same time, the chloride diffusion coefficients of HPC with replacement of GGBFS and FA (at lower w/b ratios) in the steady state condition decrease significantly when compared to the nonsteady state condition. This may be due to the absorption of chloride ions and the formation of bound chloride (Friedel's salt) [46]. Moreover, the HPC with GGBFS exhibits lower chloride diffusion coefficient values than the HPC with FA. This may depend on the aluminate content: aluminate in GGBFS and FA forms AFm phases which react with chloride and produce calcium chloroaluminate hydrate and Friedel's salt [47]. Moreover, at a given replacement level, the HPC samples with GGBFS had a lower chloride ion diffusion coefficient than those with FA.

Porosity Measurement. Figure 6 shows the total porosity measurements for GGBFS (a) and FA (b) replaced HPC by the MIP test at 28 days of water curing. As shown in Figure 6(a), an increase in the percentage of GGBFS in HPC reduces the average pore size, with porosities of 8.53 and 7.33% for 30 and 50% GGBFS replacement, respectively. Further increasing the w/b ratio (0.42 and 0.47) at 30% and 50% GGBFS significantly increases the porosity at 28 days of curing. For example, at w/b ratios of 0.37, 0.42, and 0.47 with 50% GGBFS replacement of HPC, the porosity results are 7.33, 8.95, and 10.38%. As shown in Figure 6(b), for 30% FA in HPC the porosity is 7.29% at a w/b ratio of 0.37. With increasing percentage of FA and w/b ratio, the porosity slightly increases. At a given replacement level, the HPC samples with GGBFS had a lower pore diameter than those with FA.
Correlation between Test Results. Figures 7-9 present the relationships between compressive strength (CS), chloride diffusion coefficient (CDC), and porosity (P) and the charge passed in coulombs (CPC). Table 4 shows the numerical representation used to determine the correlation between these properties of HPC with GGBFS and FA. Figures 7(a) and 7(b) show that the increase in CS is accompanied by a decrease in CPC for all mixes of HPC. However, CS decreases with an increase in w/b ratio for all mixes of HPC. It also increases slightly on replacing GGBFS and FA in HPC (Figures 7(a) and 7(b)). In addition, a good correlation is observed between CPC and CS. The same CS value is obtained in mixes of HPC that showed different values of CPC; for example, at a CS of 40 MPa, the CPC value is around 5266.8 coulombs for the control mix. The correlation can be expressed by a single formula relating the durability index DI to CPC, as reported elsewhere [51], where a and b are experimental constants. The constants a and b were obtained through regression analysis of the data in Table 4. The best-fit values of the constants a and b and the coefficient of determination (R²) are summarized in Table 5. R² over 0.85 indicates an excellent correlation between the fitted parameters [52]. Therefore, the data in Table 5 indicate a valid agreement between the CPC and CS of HPC with GGBFS and FA replacement. Furthermore, the data in Table 5 also indicate an excellent fit when CPC is plotted against CS and CDC for all types of mix concrete. However, in the case of the control mix of concrete, the degree of fit between the CPC and porosity of the concrete is on the lower side (R² < 0.83) but reasonably correlated.

Conclusion

The conclusions drawn from this work are as follows.
(1) The 0.37 w/b ratio of HPC containing 50% GGBFS and 30% FA yields the highest compressive strength values at 28 and 49 days of curing. (3) Among the 15 mixes of concrete, 30% FA and 50% GGBFS in HPC with a 0.37 w/b ratio show the lowest porosity, minimum current flow, and lowest chloride diffusion coefficient. This study indicates that better durability performance in a chloride environment is obtained in HPC containing GGBFS and FA compared to a control mix. (4) Through regression analysis, several durability performance measures such as compressive strength, diffusion coefficient, and porosity were compared with the charge passed from the RCPT results. They show a high determination coefficient of over 0.85 in all cases, which indicates that the electrical charge from RCPT can serve as another index for both durability and structural performance.

Figure 3: RCPT results of HPC with GGBFS and FA.
Figure 7: Correlation graph between charge passed coulombs versus compressive strength: (a) GGBFS in HPC and (b) FA in HPC.
Figure 8: Correlation graph between charge passed coulombs versus chloride diffusion coefficient: (a) GGBFS in HPC and (b) FA in HPC.
Figure 9: Correlation graph between charge passed coulombs versus porosity: (a) GGBFS in HPC and (b) FA in HPC.
Table 1: Chemical composition and physical properties of OPC, GGBFS, and FA.
Table 2: Mixing proportion of HPC.
Figure 1: Photograph of the RCPT experiment setup.
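The two-constant regression behind Table 5 can be sketched as follows. The exponential form DI = a·exp(b·CPC) is assumed here for illustration (the paper only states that a durability index DI is related to CPC through experimental constants a and b), and the fitting itself is an ordinary log-linear least squares with R² computed on the original scale:

```python
import math

def fit_exponential(x, y):
    """Least-squares fit of y = a * exp(b * x) via log-linearisation.

    Returns (a, b, r_squared), with r_squared computed against the
    untransformed y values.
    """
    n = len(x)
    ly = [math.log(v) for v in y]          # linearise: ln y = ln a + b x
    sx, sy = sum(x), sum(ly)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, ly))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    # coefficient of determination on the original scale
    pred = [a * math.exp(b * v) for v in x]
    ybar = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

Fed with measured (CPC, durability-index) pairs, this returns the constants a and b and the R² value in the format summarized in Table 5.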
Outcomes of Sutureless Iris-Claw Lens Implantation

Purpose. To evaluate the indications, refraction, and visual and safety outcomes of iris-claw intraocular lenses implanted retropupillary with a sutureless technique during primary or secondary operation. Methods. Retrospective study of a case series. The Haigis formula was used to calculate intraocular lens power. In all cases the wound was closed without suturing. Results. The study comprised 47 eyes. The mean follow-up time was 15.9 months (SD 12.2). The mean preoperative CDVA was 0.25 (SD 0.21). The final mean CDVA was 0.46 (SD 0.27). No hypotony or need for wound suturing was observed postoperatively. Mean postoperative refractive error was −0.27 Dsph (−3.87 Dsph to +2.85 Dsph; median 0.0, SD 1.28). The mean postoperative astigmatism was −1.82 Dcyl (min −0.25, max −5.5; median −1.25, SD 1.07). Postoperative complications were observed in 10 eyes. The most common complication was ovalization of the iris, which was observed in 8 eyes. The mean operation time was 35.9 min (min 11 min, max 79 min; median 34, SD 15.4). Conclusion. Retropupillary iris-claw intraocular lens (IOL) implantation with sutureless wound closing is an easy and fast method, ensuring good refractive outcome and a low risk of complication. The Haigis formula proved to be predictable in postoperative refraction.

Introduction

The development of intraocular surgical techniques for refraction correction in aphakic eyes has been observed recently. Aphakia is commonly the result of complications arising from cataract surgery. The most common risk factor for intraoperative complications is weakness of the zonular fibers, mostly due to PEX or trauma. Even with a lack or insufficiency of capsular support, when implantation of an intraocular lens (IOL) into the ciliary sulcus is unmanageable, it is still possible to achieve satisfactory refraction.
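The abstract notes that IOL power was calculated with the Haigis formula. As a rough sketch of how that formula works (not the authors' implementation: the a0/a1/a2 lens constants below are illustrative defaults, and clinical biometry software applies surgeon-optimised constants and further refinements), the effective lens position is predicted from the preoperative anterior chamber depth and axial length, and then fed into a thin-lens vergence formula for the emmetropic IOL power:

```python
def haigis_iol_power(axial_len_mm, acd_mm, corneal_power_d,
                     a0=0.62, a1=0.4, a2=0.1):
    """Emmetropic IOL power (D) from a thin-lens vergence formula with the
    Haigis effective lens position d = a0 + a1*ACD + a2*AL.

    a0, a1, a2 are lens-specific constants; the defaults here are
    illustrative assumptions, not values from the study.
    """
    d = a0 + a1 * acd_mm + a2 * axial_len_mm   # effective lens position (mm)
    n = 1336.0                                 # aqueous/vitreous index, scaled for mm and dioptres
    return n / (axial_len_mm - d) - n / (n / corneal_power_d - d)
```

For an average eye (axial length 23.5 mm, ACD 3.2 mm, corneal power 43.5 D) this sketch yields roughly 19 D, a plausible magnitude; longer eyes yield lower powers, as expected.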
There are many possibilities to provide acceptable refraction in such eyes by implanting an IOL in the anterior or posterior segment of the eye during primary or secondary operation, which is still debatable. The location of the implantation and its method of fixation determine the complexity of the surgery and the potential side effects. Placement of an IOL in the anterior chamber (AC-IOL) is technically easy and fast, but such a location can harm the corneal endothelium and the structures of the anterior chamber angle. Growing evidence in the 1980s of complications connected with rigid closed-loop, angle-supported AC-IOLs, such as endothelial cell loss leading to pseudophakic bullous keratopathy, uveitis, uveitis-glaucoma-hyphema syndrome, chronic macular edema, angle structure damage, formation of peripheral anterior synechiae, fibrosis of haptics into the angle, pupillary block, and hyphema, led to the development of open-loop AC-IOLs; however, these also induced complications [1]. For that reason they are contraindicated especially in patients with glaucoma or endothelial problems [2]. Sclera-fixated IOLs (SF-IOLs) are affordable and readily available. The IOL is located in a natural position, near the focal point of the eye and further from the corneal endothelium and the structures of the angle. Different variants of the scleral fixation procedure have been proposed, but they are all characterized by difficult intraocular manipulation and time-consuming surgery. Potential degradation of the suture and its interaction with the sclera may be associated with suture erosion in the long term. Knot exposure may result in an increased incidence of endophthalmitis. Other possible complications include tilt and decentration of the IOL, open angle glaucoma, suprachoroidal hemorrhage, and retinal detachment [2,3].
Although it has been demonstrated that secondary SF-IOL implantation is associated with fewer early postoperative complications than primary AC-IOL implantation, there were no long-term differences in the visual outcomes and complication profiles [3]. The iris-claw lens method was invented by Worst in 1980 in order to correct the refraction in aphakic eyes [4]. The principle of the lens fixation has remained unchanged for 30 years. Because a decrease of endothelial cell density is observed [5], the technique of posterior fixation of iris-claw lenses was proposed by Amar [6] and later modified by Mohr et al. [7] in order to avoid complications characteristic of the presence of an IOL in the anterior chamber. This technique preserves the natural anatomy of the eye. The popularization of this implantation technique has been observed recently. Although its implantation is technically easy, disadvantages of this method include the size of the incision, which when sutured usually generates astigmatism, and the relatively high cost of the IOL. The aim of this study, therefore, was to analyze the results of sutureless iris-claw IOL retropupillary implantation during primary and secondary surgery.

Materials and Methods

The case series comprised consecutive patients operated on at the Ophthalmologic Clinic University Erlangen between March 2007 and May 2013 who underwent retropupillary implantation of an Artisan/Verisyse iris-claw IOL (Ophtec BV; Advanced Medical Optics, Inc.). Data were collected retrospectively from hospital documentation for the preoperative, intraoperative, and early postoperative period. Late postoperative follow-up data were collected with a questionnaire from regional ophthalmological offices. All patients were routinely fully informed about the risks and benefits of the surgery, and written consent was obtained.
Preoperative data included demographic data, corrected distance visual acuity (CDVA) measured with Snellen's decimal scale, refraction, intraocular pressure (IOP), preexisting pathology, history of the disease and former operations, cause of the lack of the posterior capsule, and biometry. Intraoperative data included operation time, size and place of incision, course of operation, documentation of additional procedures, and intraoperative complications. Postoperative data from follow-up visits included CDVA, refraction, and slit lamp findings, especially iris-related abnormalities. CDVA was measured with Snellen's chart and decimal notation. For CDVA analysis, finger counting and hand movement were calculated as decimal values [8].

Refraction. Preoperatively, the refraction was measured with autorefractometry. The eyes which did not allow for autorefractometry due to cataract density were examined with a subjective method. Corneal astigmatism was measured with the IOL Master (Zeiss Meditec, Jena, Germany). Refraction was measured postoperatively with an autorefractometer (RK-700 A; Nidek Co. Ltd., Gamagori, Japan). For this device, the deviation from the nominal value of spherical and cylindrical vertex power is ±0.25 D for 0.00 to ±10.00 D, and the deviation from the nominal value of the ARK cylinder axis is ±10° for cylinder powers of 0.25 D to ±0.50 D, ±5° for >0.5 D to 3.00 D, and ±3° for >3.00 D.

Surgical Technique. All operations were performed by one experienced consultant. Because of the variety of cases and preexisting pathologies, the surgical procedures differed and were individually modified. All patients, however, had the iris-claw IOL attached to the posterior surface of the iris. Anterior vitrectomy, posterior vitrectomy, removal of remnants of the capsule, and removal of the IOL were performed if necessary. For IOL implantation a corneal or sclerocorneal tunnel was used. In most cases the existing cataract operation tunnel was extended to 5.5 mm.
The IOL was implanted into the anterior chamber and moved with special tweezers through the iris to the posterior chamber. With the help of a second instrument (spatula), the haptics were attached to the iris at the 3 and 9 o'clock positions. No incision was sutured. Statistical Analysis. For statistical analysis the Kolmogorov-Smirnov test was applied to test for a normal distribution. A parametric t-test was used for comparison of variables (GraphPad Software Inc., La Jolla, USA). Differences were considered statistically significant at p < 0.05. Patients. The study comprised 47 eyes (45 patients: 30 female and 15 male). The mean age of the patients was 73.6 years (range 35 to 91 years; median 78, SD 14.5). The mean follow-up time was 15.9 months (range 1 to 47 months; median 13, SD 12.2). Observation time is shown in Figure 1. Coexisting pathologies of the patients are shown in Table 1. Indications. Primary Operation. The iris-claw IOL was implanted during primary operation in 6 eyes (12.8%), in which local conditions did not allow for intracapsular or sulcus IOL implantation. In this group zonulysis occurred intraoperatively during complicated cataract surgery: in four eyes it was caused by PEX, in one eye by trauma, and in one eye by intraoperative floppy iris syndrome. In all these eyes anterior vitrectomy and capsule removal were performed. Changes of CDVA in all groups are shown in Table 2. In the group with glaucoma postoperative CDVA was significantly lower than in the rest of the groups (p = 0.017, unpaired t-test). Apart from this, no significant differences in preoperative and postoperative CDVA were observed in the remaining groups. The results are presented in Figure 2. In the eyes with postoperative complications and abnormalities connected with the iris and IOL, astigmatism was reduced in 40% and increased in 60% of eyes.
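The unpaired comparison described above (significance at p < 0.05) can be sketched as follows; a minimal illustration of the Student's t statistic with pooled variance, using hypothetical decimal CDVA values rather than the study's data. The p-value would then be read from the t distribution with n_a + n_b − 2 degrees of freedom, as statistical packages such as the one cited do internally.

```python
import math
from statistics import mean, variance

def unpaired_t(group_a, group_b):
    """Two-sample Student's t statistic with pooled variance
    (the 'unpaired t-test' used for the group comparisons)."""
    na, nb = len(group_a), len(group_b)
    # Pooled sample variance over na + nb - 2 degrees of freedom
    pooled = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean(group_a) - mean(group_b)) / se

# Hypothetical decimal CDVA values, for illustration only
t = unpaired_t([0.2, 0.3, 0.4, 0.5], [0.5, 0.6, 0.7, 0.8])
```

A large |t| relative to the critical value for the given degrees of freedom corresponds to p < 0.05.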
The mean difference between preoperative corneal astigmatism and postoperative total astigmatism was −0.52 Dcyl (maximal reduction −1.45 Dcyl, maximal rise 2.9 Dcyl; SD 1.27). It was lower than −1 Dcyl in 40.0% of eyes. The mean shift of the cylinder axis was 38.25° (SD 32.7), which is not significantly different from the rest of the eyes (p = 0.32, unpaired t-test). The postoperative astigmatism of all eyes is shown in Figure 3; further data are presented in Tables 4 and 5. The mean duration of operation was 35.9 min (SD 15.4). The shortest operation (11 min) was performed after complicated cataract surgery in an aphakic eye, which did not require anterior or posterior vitrectomy. The most time-consuming operation (79 min) was performed in an eye with luxated IOL material. Eyes which required posterior vitrectomy took longer to operate on, although the difference was not statistically significant. Discussion. The best method of achieving acceptable refraction in eyes without capsular support for an IOL is still a matter of discussion. Such conditions can appear after lens or IOL luxation due to trauma or insufficient zonular fibers, for example, associated with PEX. In the present study PEX coexisted in 62% of the eyes. In most of the eyes it resulted in aphakia due to a failed primary operation or to IOL luxation. Apart from trauma, PEX was also the cause of preoperative lens subluxation. Retropupillary localization, owing to the increased distance from the corneal endothelium and angle structures, has protective significance for the endothelium and against IOP rise, which is especially important for PEX and glaucoma patients. Neither in this nor in similar studies was any clinical influence on corneal condition [9,10] or intraocular pressure [9,10] observed, whereas after anterior fixation of an iris-claw IOL, IOP tended to rise in 9.5% of cases [11].
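The astigmatism metrics reported above can be reproduced with simple scalar arithmetic; a minimal sketch of the conventions apparently used in the text (signed difference in cylinder magnitude, and the acute angle between axes, which repeat every 180°). Full vector analysis of surgically induced astigmatism is a separate technique and is not shown here.

```python
def astigmatism_change(pre_cyl_d, post_cyl_d):
    """Signed change in cylinder magnitude (Dcyl); negative = reduction."""
    return post_cyl_d - pre_cyl_d

def axis_shift(pre_axis_deg, post_axis_deg):
    """Acute angle between two cylinder axes (axes are modulo 180 degrees)."""
    d = abs(pre_axis_deg - post_axis_deg) % 180
    return min(d, 180 - d)

# Illustrative values only: 2.00 Dcyl @ 170 deg pre-op, 1.48 Dcyl @ 10 deg post-op
change = astigmatism_change(2.00, 1.48)  # -0.52 Dcyl, a reduction
shift = axis_shift(170, 10)              # 20 degrees, not 160
```

Taking the modulo-180 minimum matters: axes of 170° and 10° are only 20° apart, not 160°.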
Implantation of an iris-claw IOL onto the anterior surface of the iris led to a reduction of endothelial cell density by 9.78% within 3 years [12] and up to 12.35% within 5 years [13], resulting in corneal decompensation in 1.7% within 2 years [11]. One possible explanation is intraocular manipulation in the anterior chamber [13]. It could also be attributed to mechanical irritation of the anterior chamber due to IOL donesis [4]. The impact of mechanical manipulation in the case of retropupillary IOL implantation should be even higher, and donesis is the most frequent and obvious finding after iris-claw IOL implantation, although it is not considered a complication. The most common complication found in this study was ovalization of the iris. It had no influence on postoperative CDVA. A comparable frequency of ovalization of the iris was observed in other studies [9,10,14]. Ovalization, which could be explained by too tight enclavation in the midperipheral iris stroma, tended to normalize over time [9]. Although the biconvex architecture of the IOL and its reduced contact with the iris surface should ensure no influence on stromal blood perfusion, one study indicated an association of iris ovalization with a lack of iris perfusion after anterior implantation in a phakic eye [15]. This factor could be related to iris atrophy, the second most frequent abnormality observed in this study. Atrophy of the iris is most common in places of enclavation and theoretically could be associated with pigment dispersion [7]. Iris atrophy could also explain the small tendency (up to 9% [9]) to decentration of the IOL. PEX leads to degenerative and atrophic changes of the iris muscle cells [16]. Ovalization of the iris was observed in 3 eyes with PEX, suggesting that the significance of PEX in the explanation of this phenomenon is limited. In the present study, improvement of CDVA was achieved in 63% of the eyes. This result is similar to corresponding studies [10,13].
Theoretically, noncomplicated IOL implantation should not influence CDVA. The observed rate of CDVA deterioration agrees with the results of similar studies [10]. It could be explained by progressive coexisting pathologies such as PEX glaucoma and macular atrophy. The reduction of CDVA did not correlate with the iris/IOL abnormalities which occurred during the observation time. Except for one eye diagnosed prior to the operation, no case of postoperative macular edema was reported. In other studies, macular edema after retropupillary IOL fixation is observed in 1.2% to 8.7% of cases [7,14]. The same frequency occurs in the case of anterior chamber iris-claw IOLs [11]. It is comparable with the cases of AC-IOL and SF-IOL, where edema is observed in 2.7% to 10.4% of cases [1,17]. However, limitations of this study are its retrospective character and the lack of regular OCT screening. Iris-claw IOL implantation is rarely associated with retinal detachment [11,14]. It is difficult to determine whether it had any association with the IOL or the operation technique. In one particular case in this study the eye had undergone a former vitrectomy due to IOL luxation into the vitreous. It should be remembered that most of the eyes which required secondary IOL implantation had pathologies which may cause retinal detachment. In the cases of scleral fixation, retinal detachment is observed more frequently and suprachoroidal hemorrhage can occur [3], which was not observed with iris fixation. In the case of scleral fixation this could be explained by major intraoperative mechanical manipulation in the posterior segment. Most manipulations during retropupillary iris fixation are performed in the anterior chamber, where the haptics are more controllable and can be easily observed. Even so, retropupillary iris-claw IOL implantation is quite an easy technique, resulting in an operating time half as long overall, and significantly shorter in aphakic cases, in comparison to scleral fixation.
Even during primary complicated cataract surgery combined with posterior vitrectomy, the mean operation time was shorter than that reported for scleral fixation in aphakic eyes [16]. In this study the Haigis formula was used for IOL power calculation, yielding a mean postoperative refraction error of −0.27 ± 1.28 D. Other authors used the SRK II formula with an A-constant of 116.8 [10] or the SRK/T formula with a constant of 116.9 [9], which resulted in refractive errors of 0.43 ± 1.93 D and 0.00 ± 1.21 D, respectively. The SRK/T formula with an A-constant of 116.5 resulted in −1.42 ± 1.22 D in the posttraumatic group and −1.50 ± 1.15 D in the postcataract-surgery aphakic group [14]. In the case of anterior chamber implantation an A-constant of 115.0 was used, resulting in +0.12 ± 1.76 D [11]. Although the Haigis formula in this study gave better postoperative refractive results than the other formulas [18], it requires the anterior chamber depth, defined as the distance from the corneal vertex to the anterior lens capsule, which cannot be measured in aphakic eyes. Therefore, it could be used only in cases with biometry performed before the primary operation. The Verisyse IOL has a rigid PMMA construction; it requires a large incision of at least 5.5 mm, which is likely to induce a high amount of surgery-induced astigmatism. In this study the mean difference between postoperative and preoperative corneal astigmatism was −0.86 Dcyl. In 72.8% of eyes it was less than 1 Dcyl, which is even less than in cases with implantation of such an IOL through a scleral tunnel incision, where it reached −2.01 Dcyl [10]. Closing the wound with a Nylon 10-0 suture with the use of the same implantation technique generated slightly higher astigmatism (−3.64 ± 3.34 Dcyl), suggesting that the suture played a moderate role in deformation of the corneal surface [10].
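The role of the A-constant in the regression-type formulas cited above can be illustrated with the original SRK formula, P = A − 2.5·AL − 0.9·K; a minimal sketch only, since SRK II and SRK/T as used in the cited studies add further refinements, and the axial length and keratometry values below are purely illustrative, not taken from the study.

```python
def srk_iol_power(a_constant, axial_length_mm, mean_k_d):
    """Original SRK regression formula for an emmetropia target:
    P = A - 2.5 * AL - 0.9 * K (AL in mm, K in diopters)."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_d

# Illustrative eye: AL = 23.5 mm, mean K = 43.5 D
p_srk2_const = srk_iol_power(116.8, 23.5, 43.5)  # A-constant cited from [10]
p_srkt_const = srk_iol_power(116.9, 23.5, 43.5)  # constant cited from [9]
```

In this regression form the A-constant enters linearly, so a 0.1 D change in the constant shifts the calculated IOL power by exactly 0.1 D, which is why constant choice matters for the net refractive error.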
To reduce postoperative astigmatism, a different incision site and design can be used, alternatively in combination with corneal refractive surgery techniques such as limbal relaxing incisions, LASIK, or PRK [19]. In the presented study, in most cases the incision architecture was determined by the localization of the primary tunnel incision. The problem of postoperative astigmatism could probably be reduced with foldable lenses, which can be inserted through 3 mm incisions. Another possible problem with such a large, nonsutured incision could be leakage and hypotonia. Although no wound in this study was closed with sutures, no signs of leakage, bleb formation, or hypotony were observed postoperatively. Owing to its retrospective character, limitations of this study are the lack of a statistical power analysis, the small subgroup sample sizes, and the variable observation times. This is, however, to the best of our knowledge the first analysis of the results of sutureless iris-claw IOL retropupillary implantation during primary and secondary surgery. Conclusions. Retropupillary iris-claw IOL implantation combines the ease of anterior chamber IOL implantation with the optical and physiological advantages of a posterior IOL location, ensuring a good refractive outcome and a low risk of complications. With careful wound construction the surgery does not require suturing, which can reduce the generated astigmatism. To our knowledge, this is the first report of the application of the Haigis formula to postoperative refraction calculation for a retropupillary iris-claw IOL. This type of implantation should be considered especially in all aphakic patients with contraindications for an anterior chamber implant because of glaucoma or endothelial abnormality. The most common abnormalities after retropupillary iris-claw IOL implantation are ovalization and atrophy of the iris, which have no influence on visual or refractive outcomes or on intraocular pressure. The same applies to patients with glaucoma and PEX.
Retropupillary iris-claw IOL implantation is a safe and relatively fast method in cases of iatrogenic failure that do not allow for intracapsular or sulcus implantation during primary complicated cataract surgery.
Tau Ser262 phosphorylation is critical for Abeta42-induced tau toxicity in a transgenic Drosophila model of Alzheimer's disease. The amyloid-beta 42 (Abeta42) peptide has been suggested to promote tau phosphorylation and toxicity in Alzheimer's disease (AD) pathogenesis; however, the underlying mechanisms are not fully understood. Using transgenic Drosophila expressing both human Abeta42 and tau, we show here that tau phosphorylation at Ser262 plays a critical role in Abeta42-induced tau toxicity. Co-expression of Abeta42 increased tau phosphorylation at AD-related sites including Ser262, and enhanced tau-induced neurodegeneration. In contrast, formation of either sarkosyl-insoluble tau or paired helical filaments was not induced by Abeta42. Co-expression of Abeta42 and tau carrying the non-phosphorylatable Ser262Ala mutation did not cause neurodegeneration, suggesting that the Ser262 phosphorylation site is required for the pathogenic interaction between Abeta42 and tau. We have recently reported that the DNA damage-activated Checkpoint kinase 2 (Chk2) phosphorylates tau at Ser262 and enhances tau toxicity in a transgenic Drosophila model. We detected that expression of Chk2, as well as a number of genes involved in DNA repair pathways, was increased in the Abeta42 fly brains. The induction of a DNA repair response is protective against Abeta42 toxicity, since blocking the function of the tumor suppressor p53, a key transcription factor for the induction of DNA repair genes, in neurons exacerbated Abeta42-induced neuronal dysfunction. Our results demonstrate that tau phosphorylation at Ser262 is crucial for Abeta42-induced tau toxicity in vivo, and suggest a new model of AD progression in which activation of DNA repair pathways is protective against Abeta42 toxicity but may trigger tau phosphorylation and toxicity in AD pathogenesis. INTRODUCTION Alzheimer's disease (AD) is a progressive neurodegenerative disease without effective therapies (1,2). 
Pathologically, AD is defined by an extensive loss of neurons and by the formation of two characteristic protein deposits in the brain, extracellular amyloid plaques and intracellular neurofibrillary tangles [NFTs (1)]. The major components of amyloid plaques are the 40 or 42 amino acid amyloid-b peptides (Ab40 or Ab42) (3,4). Ab peptides are derived from a type 1 transmembrane protein, the amyloid precursor protein (APP), by sequential cleavage by β- and γ-secretases (5). Molecular genetic studies of early-onset familial AD patients have identified causative mutations in APP, Presenilin 1 and Presenilin 2 (6), which increase Ab42 production and/or Ab aggregation (7,8). These results provide a strong causative link between Ab42 and AD (7). NFTs are intracellular protein inclusions composed of the hyperphosphorylated microtubule-associated protein tau (9-12). NFTs are detected in many neurodegenerative diseases (13), and multiple tau gene mutations and polymorphisms are associated with tauopathies, including hereditary frontotemporal dementia and parkinsonism linked to chromosome 17 (13). Tau mutations have not been associated with any known form of familial AD to date; however, tau haplotypes driving slightly higher tau expression increase the risk of AD (14,15), suggesting that tau plays a role in the pathogenesis of AD as a modulator of disease progression.
* To whom correspondence should be addressed at: 900 Walnut Street, JHN410, Philadelphia, PA 19107, USA. Email: koichi.iijima@jefferson.edu (K.I.); kanae.iijima-ando@jefferson.edu (K.I.-A.)
An imbalance in phosphorylation and/or dephosphorylation of tau has been suggested to initiate the abnormal metabolism and toxicity of tau in AD (13,16,17). At least 30 putative Ser/Thr phosphorylation sites in tau are phosphorylated in NFTs (16).
In vitro and in vivo studies have demonstrated that tau phosphorylation at some of the disease-associated sites plays critical roles in tau binding to microtubules (18-20) and tau fibril formation (21-24). Approximately half of the AD-related sites are targets for serine/proline (SP) or threonine/proline (TP) kinases (25). In transgenic animal models, overexpression of kinases that phosphorylate tau at SP/TP sites, including GSK-3β and Cdk5, modifies tau phosphorylation, NFT formation and tau toxicity (26-33). In addition, phosphorylation of tau at AD-related, non-SP/TP sites such as Ser262/356 increases tau phosphorylation at SP/TP sites and promotes tau toxicity (34-36). Accumulating evidence suggests that Ab and tau synergistically contribute to the pathogenesis of AD (17). Studies in human AD cases following Ab immunization have shown decreases in amyloid burden and in phosphorylated tau in neurites surrounding the amyloid plaques (37,38). In transgenic mice overproducing human Ab and tau proteins, Ab facilitates the abnormal phosphorylation of tau at AD-related sites and enhances the formation of NFTs (39-41). Ab immunization removes amyloid pathology as well as early-stage tau lesions (42), and ameliorates cognitive decline in transgenic mice that form plaques and tangles (43-45). Knockdown of tau expression suppresses Ab-induced neurotoxicity in cultured neurons (46,47), and lowering or eliminating endogenous tau expression in transgenic mice suppresses Ab-induced behavioral deficits (48). In a Drosophila model expressing human Ab42 and tau, Ab42 synergistically enhances tau-induced neurodegeneration, and tau phosphorylation at AD-related SP/TP sites is important for Ab42-induced tau toxicity (49). These reports suggest that Ab lies upstream of aberrant phosphorylation and toxicity of tau in the pathogenesis of AD.
However, the molecular mechanisms by which Ab induces abnormal phosphorylation and toxicity of tau in vivo are not fully understood. Transgenic Drosophila models of tauopathy, in which human tau is overexpressed, have been used as effective genetic model systems to reveal the mechanisms underlying human tau-induced neurodegeneration (33,34,49-57). Accumulation of disease-associated conformational changes and phospho-epitopes in tau has been detected in the fly brain and eye (34,50,53). NFT formation is not observed in fly neurons (50), indicating that tau toxicity is not conferred by large insoluble aggregates of tau in the Drosophila models. These results suggest that Drosophila models of tauopathies may recapitulate early pre-tangle events in tau-associated neurodegeneration (58). In order to study the pathogenic interactions between Ab42- and tau-induced toxicity in vivo, we use transgenic Drosophila expressing human tau (50) in combination with a transgenic Drosophila model of human Ab42 toxicity (59-61) (please see the 'Materials and Methods' section for details of the Ab42 fly model). Double transgenic flies expressing human Ab42 and tau enable the examination of the effect of Ab42 on tau pathology and toxicity in the absence of the effects of APP and other fragments of APP, including C-terminal and N-terminal fragments and other Ab species. Here we show that tau phosphorylation at Ser262 plays a critical role in Ab42-induced tau toxicity. We also demonstrate that expression of DNA repair genes, including DNA damage-activated Checkpoint kinase 2 (Chk2), is increased in Ab42 fly neurons as a protective response against Ab42 toxicity. Since Chk2 phosphorylates tau at Ser262 and enhances tau toxicity (36), our results suggest that the increased activity of DNA repair pathways in response to Ab42 may be one of the mechanisms that mediate Ab42-induced tau phosphorylation and toxicity in vivo.
Human Ab42 enhances human tau-induced toxicity in transgenic fly eyes and brains
Expression of wild-type human tau (0N4R, see the 'Materials and Methods' section) in Drosophila eyes using the pan-retinal gmr-GAL4 driver causes eye degeneration characterized by small eye size, a rough surface and reduced retinal thickness (33,50) (Fig. 1B, F and I). Co-expression of Ab42 with tau significantly enhanced the reduction in the external size of the eyes (Fig. 1C), as previously reported (49), and in internal retinal thickness (Fig. 1G and I), whereas Ab42 expression alone did not significantly affect eye structures at this level of expression (Fig. 1D, H and I). Similar results were obtained from two independent Ab42 transgenic fly lines (Ab42#1 and Ab42#2, Fig. 1I). These results indicate that Ab42 expression exacerbates tau toxicity in vivo and that the fly eye can be used to investigate the molecular mechanisms of Ab42-induced tau toxicity. Expression of tau in neurons by the pan-neuronal elav-GAL4 driver caused an abnormality in fly brain structures. The mushroom bodies are paired structures, formed from approximately 2500 cells in each hemisphere, which play a central role in olfactory learning and memory in flies (Fig. 1J) (62). The calyx is a dendritic region in the mushroom body, and the calyx neuropil in flies expressing tau was smaller than in controls (Fig. 1K) or was missing in some of the fly brains (Fig. 1N) (63). Co-expression of Ab42 and tau caused a complete loss of calyx structures (Fig. 1L and N). In contrast, a normal calyx structure was observed in flies expressing Ab42 alone (Fig. 1M and N). These results indicate that Ab42 enhances tau-induced toxicity also in fly brain neurons.
Human Molecular Genetics, 2010, Vol. 19, No. 15
Human Ab42 increases human tau phosphorylation at Ser202, Thr231 and Ser262 in transgenic fly eyes and brains
We examined whether the enhancement of tau-induced toxicity by co-expression of Ab42 in fly eyes and brains was accompanied by an increase in tau phosphorylation levels at AD-related sites. Tau phosphorylation at 16 AD-related sites (indicated in Fig. 2A), in the presence or absence of Ab42 expression, was examined by western blotting using phospho-tau specific antibodies. This systematic analysis revealed that tau phosphorylation levels were significantly increased at Ser202, Thr231 and Ser262 when tau was co-expressed with Ab42 in both eyes (using the gmr-GAL4 driver, Fig. 2B) and brains (using the pan-neuronal elav-GAL4 driver, Fig. 2C). These results were confirmed in three independent transgenic fly lines carrying Ab42, and using two different antibodies to detect Ser202, Thr231 and Ser262 phosphorylation. To examine whether Ab42 affects tau solubility in fly brains, fly brains expressing human tau in the presence or absence of Ab42 were extracted with sarkosyl, and the sarkosyl-soluble and -insoluble fractions were subjected to western blotting with an anti-human tau antibody. Expression of Ab42 did not affect the distribution of tau between the sarkosyl-soluble and -insoluble fractions (Fig. 2D). Moreover, paired helical filaments were not detected in the sarkosyl-insoluble fractions from fly brains co-expressing Ab42 and tau by transmission electron microscopy.
The Ser262 phosphorylation site of tau is critical for the pathogenic interaction between Ab42 and tau in transgenic flies
Tau phosphorylation at Ser262 has been shown to play a critical role in tau toxicity (34-36).
To test whether the Ser262 phosphorylation site is required for the pathogenic interaction between Ab42 and tau, we have established transgenic fly lines carrying human tau with an alanine mutation at the Ser262 site (S262A tau), which express levels of S262A tau comparable to wild-type human tau (Fig. 3A) (36). Consistent with the previous report using S262A/S356A tau (34), phosphorylation at Ser202 (AT8 epitope) was found to be significantly lower in S262A tau than in wild-type tau, whereas phosphorylation at Thr231 (AT180 epitope) was not significantly altered (Fig. 3A). The S262A single mutant dramatically suppressed tau toxicity in the retina, as reported previously (Fig. 3B) (36), and also in the brain (Fig. 3D and F). Co-expression of Ab42 and S262A tau using the pan-retinal gmr-GAL4 driver did not cause any reduction in eye size (Fig. 3C). In addition, co-expression of Ab42 and S262A tau by the pan-neuronal elav-GAL4 driver did not cause structural defects in the mushroom body (Fig. 3E and F). These results indicate that the Ser262 site is critical for the pathogenic interaction between Ab42 and tau.
Expression of Chk2, as well as a number of genes involved in DNA repair pathways, is increased by human Ab42 expression in transgenic fly brains
We have recently shown that human DNA damage-activated Chk2 phosphorylates tau at Ser262 in vitro (36). Overexpression of Drosophila Chk2 enhances tau toxicity, and the Ser262 phosphorylation site is critical for the toxic interaction between Chk2 and tau in a transgenic fly model (36).
[Figure 2 caption: Positions of the phosphorylation sites tested in this study are shown. Gray boxes, four repeats of the microtubule-binding domain. (B and C) Fly heads expressing human tau alone (tau) or with Ab42 (tau + Ab42) driven by the pan-retinal gmr-GAL4 at 1 dae (B) or pan-neuronal elav-GAL4 at 25 dae (C) were subjected to western blotting with anti-tau (total tau), anti-phospho-tau (pS202, pT231 and pS262) and anti-Ab42 antibodies. Flies carrying the driver only were used as the negative control (control). The phosphorylation levels in the eye and brain of flies co-expressing tau and Ab42 (tau + Ab42) are shown as a ratio relative to that in flies expressing tau alone (tau). Representative blots are shown. Asterisks indicate significant differences from tau alone (tau) [n = 4 or 5, *P < 0.05 (Student's t-test)]. (D) Co-expression of Ab42 did not increase sarkosyl-insoluble tau in the fly brain. Western blotting of sarkosyl-soluble and -insoluble fractions of head extracts from flies expressing tau alone (tau), or tau and Ab42 (tau + Ab42), driven by the pan-neuronal elav-GAL4 driver. Head extracts from flies expressing Ab42 alone (Ab42) were used as a negative control. Flies are at 35 dae.]
Using quantitative real-time PCR, we found that mRNA levels of Chk2 were increased in the fly brain by the expression of human Ab42 in neurons (Fig. 4). The increase in Chk2 mRNA levels was more prominent in the fly brains expressing Ab42 with the familial Alzheimer's disease Arctic mutation (Ab42Arc: E22G substitution), which shows greater accumulation and toxicity than Ab42 (60) (Fig. 4). These results suggest that the Ab42-induced increase in Chk2 expression may underlie the increase in tau phosphorylation and the enhancement of tau toxicity. We examined whether a genetic reduction of Chk2 ameliorates the enhancement of tau toxicity caused by Ab42. Because homozygous null mutants of Chk2 are lethal (64), we tested the effect of a heterozygous loss-of-function mutation of Chk2 on tau-induced retinal degeneration in the presence or absence of Ab42. A heterozygous loss-of-function mutation of Chk2 did not significantly suppress the enhancement of tau-induced retinal degeneration caused by Ab42 (data not shown), suggesting that loss of one copy of Chk2 is not sufficient to reduce the enhancement of tau toxicity caused by Ab42.
Why is Chk2 expression upregulated in the Ab42 fly brain? Chk2 is a DNA damage transducer, which is activated in response to double-strand breaks in DNA (65,66). The non-homologous end-joining DNA repair pathway is the major repair pathway for DNA double-strand breaks in post-mitotic neurons (67). Genes involved in non-homologous end-joining DNA repair pathways (Ku70, rad50, rad54 and Ligase 4) and Nijmegen breakage syndrome (nbs), a component of the double-strand break sensor MRN complex, were upregulated in Ab42 and Ab42Arc fly brains (Fig. 4F). Increased expression of the replication factor C subunit 40 (RfC40) and proliferating cell nuclear antigen (PCNA), which is important for both DNA synthesis and DNA repair (68) and is abnormally re-expressed in human AD brains and animal models of AD (51,69,70), was also detected. In addition, expression of the Drosophila homologs of genes involved in direct repair (O-6-alkylguanine-DNA alkyltransferase), base excision repair (XRCC1) and mitochondrial single-strand …
Activation of DNA repair pathways is a protective response against Ab42-induced toxicity
While genes involved in the DNA repair response are upregulated (Fig. 4), damaged DNA and apoptosis were not detected in the brains of flies expressing Ab42, as indicated by TUNEL staining (71) and EM analysis (60). These results suggest that expression of DNA repair genes is induced as a protective response against Ab42 toxicity. We tested this possibility by blocking the function of the tumor suppressor p53, a transcription factor that regulates DNA damage-induced transcription (72-74). Drosophila p53 regulates induction of pro-apoptotic genes and DNA repair genes, including components of the non-homologous end-joining repair pathway, after DNA damage (75). Two dominant negative forms of p53 have been used to disrupt p53 functions in Drosophila (76).
DN-p53-259H carries a point mutation in the p53 DNA-binding domain, and DN-p53-Ct is a C-terminal p53 fragment. Both mutants form tetramers with endogenous p53 but fail to bind DNA, thereby disrupting p53 functions (76). To test whether a reduction of p53 function would enhance Ab42 toxicity, we examined the effect of neuronal expression of the dominant negative forms of p53 on Ab42-induced locomotor defects. Ab42 flies show age-dependent, progressive locomotor dysfunction starting around two weeks after eclosion, which can be detected by a climbing assay (59,60). In this assay, flies were placed in an empty plastic vial and tapped to the bottom. The number of flies at the top, middle or bottom of the vial was scored after 10 s. Neuronal expression of DN-p53-259H or DN-p53-Ct enhanced the locomotor defects induced by Ab42 (Fig. 5A, day 19 and day 29). In contrast, neuronal expression of DN-p53-259H or DN-p53-Ct alone did not cause locomotor defects up to the age of 36 days after eclosion (Fig. 5B). These results indicate that the induction of DNA repair responses is protective against Ab42 toxicity.
DISCUSSION
Elucidation of the mechanisms by which Ab42 induces abnormal phosphorylation and toxicity of tau is crucial to understanding the complex pathogenesis of AD. We have demonstrated here that, in transgenic Drosophila expressing both human Ab42 and tau, Ab42 increases tau phosphorylation at AD-related sites including Ser262 and enhances tau-induced neurodegeneration (Figs 1 and 2). Co-expression of Ab42 and tau carrying the non-phosphorylatable Ser262Ala mutation did not cause neurodegeneration (Fig. 3), suggesting that the Ser262 phosphorylation site is required for the pathogenic interaction between Ab42 and tau. Tau phosphorylation at Ser262 is increased in pre-tangle neurons in AD (77,78).
Increased tau phosphorylation at Ser262 is observed in cellular and animal models such as cultured neurons treated with Ab42 (79), brains of double transgenic mice expressing human APP and tau (80), and monkey cortex after injection of Ab (81). Our results are consistent with these reports and suggest that the double transgenic fly model co-expressing human Ab42 and tau recapitulates a pathological phosphorylation of tau induced by Ab42 in mammalian neurons. In transgenic mice overproducing human Ab and tau, Ab enhances the formation of NFTs (39-41). In contrast, in the double transgenic fly model, neither sarkosyl-insoluble tau nor PHF tau was detected (Fig. 2). These results suggest that large tau aggregates are not involved in the enhancement of Ab42-induced tau toxicity in the transgenic fly model. What are the mechanisms by which Ab42 enhances tau toxicity through Ser262 phosphorylation? Ser262 is located in the microtubule-binding domain of tau, and phosphorylation at Ser262 reduces tau binding to microtubules (19), which may increase the chances of abnormal phosphorylation at other AD-related sites and, consequently, enhance tau toxicity (17,82,83). In the Drosophila model, tau phosphorylation at Ser262 triggers a temporally ordered series of phosphorylations at several proline-directed kinase target sites (SP/TP sites) and generates disease-associated phospho-epitopes (34). In the double transgenic fly model, we detected that phosphorylation of tau at two of the SP/TP sites, Ser202 and Thr231, was increased by Ab42 (Fig. 2). Moreover, studies of transgenic flies co-expressing Ab42 and tau have revealed that phosphorylation at AD-related SP/TP sites is involved in Ab42-induced tau toxicity (49,84). These results suggest that the increase in tau phosphorylation at SP/TP sites following Ser262 phosphorylation may be one of the mechanisms underlying the Ab42-induced enhancement of tau toxicity.
Interestingly, a recent study has shown that the introduction of the S262A/S356A mutation to tau can suppress the toxicity of tau hyperphosphorylated at SP/TP sites (35). This raises the possibility that tau phosphorylation at Ser262 affects tau toxicity not only by increasing tau phosphorylation at SP/TP sites but also by controlling the toxicity of tau phosphorylated at SP/TP sites. Widespread single- and double-strand DNA breaks have been detected in neurons in the brains of patients with AD and with mild cognitive impairment (85-98). More DNA damage was found in the aging hippocampus, one of the vulnerable regions of the brain in AD, than in the aging cerebellum (99). In postmortem brains from patients, the neurons that show NFT formation in AD are the same as those that show age-related accumulation of DNA damage (100). The Ab42 peptide is known to cause oxidative stress (101,102), and damage to nucleic acids caused by reactive oxygen species includes base modifications such as 8-hydroxydeoxyguanosine, single-strand breaks and double-strand breaks if single-strand breaks are in close proximity (103). Expression of genes involved in DNA repair responses, including the DNA damage-activated Chk2, was increased in response to human Ab42 expression in the fly brain (Fig. 4). Since Chk2 phosphorylates tau at Ser262 and enhances tau toxicity in a transgenic Drosophila model (36), these results suggest that increased expression of the DNA repair transducer Chk2 may be one of the mechanisms underlying Ab42-induced phosphorylation and toxicity of tau in vivo. While genes involved in the DNA repair response were upregulated (Fig. 4), damaged DNA and apoptosis were not detected in the brains of flies expressing Ab42 (60,71). Furthermore, blocking the function of p53, which mediates expression of DNA repair genes, enhanced Ab42-induced behavioral deficits (Fig. 5).
These results suggest that the upregulation of genes involved in DNA repair pathways is protective against Ab42 toxicity, but the increased activity of DNA damage-activated kinases such as Chk2 may cause tau phosphorylation and toxicity in AD progression. In summary, this study has demonstrated that tau phosphorylation at Ser262 is critical for Ab42-induced tau toxicity. Additionally, our results suggest that the activation of DNA damage-activated kinases by Ab42 may be involved in the pathogenic interaction between Ab42 and tau. Increases in DNA repair gene expression have been reported in aged brains (104) and in brains from Down's syndrome patients (105), and DNA repair efficiency is changed in AD brains (90,106-112). The DNA damage-activated Chk1 and Chk2 are expressed in post-mitotic neurons in the brain (113,114), and it will be important to investigate whether Chk1 and Chk2 are activated in AD brains.

Transgenic fly models of Ab42 toxicity

We have established transgenic fly lines carrying human Ab42 and Ab42 with the Arctic mutation, which have been described previously in detail (59,60,115). Briefly, to produce human Ab42 in the secretory pathway of fly neurons, the Ab42 peptide sequence is directly fused to a secretion signal peptide at the N-terminus. Mass spectrometry analysis has revealed that the Ab42 transgenic flies produce the intact human Ab42 peptide in the fly brain (59,60), and immuno-electron microscopy has shown that the expressed Ab42 is localized to the secretory pathways in neurons in the fly brain (60). These Ab42 flies show late-onset, progressive short-term memory defects, locomotor dysfunctions, neurodegeneration and premature death, accompanied by the formation of Ab42 deposits (59-61).

Fly stocks

The transgenic fly line carrying the human 0N4R tau, which has four tubulin-binding domains (R) at the C-terminal region and no N-terminal insert (N), was a kind gift from Dr Mel Feany (Harvard Medical School) (50).
We have previously established the transgenic fly lines carrying S262A mutant tau (36). Other fly stocks were obtained from: Dr Wei Du (the University of Chicago) (Chk2[E51]) and the Bloomington Drosophila Stock Center (Indiana University) (UAS-DN-p53-Ct, UAS-DN-p53-259H, gmr-GAL4 and elav-GAL4). Crosses were maintained on standard cornmeal-based Drosophila medium at 25°C.

Histological analysis

To analyze internal eye structure, heads of female flies at 1 day-after-eclosion (dae) were fixed in Bouin's fixative (EMS) for 48 h at room temperature, incubated 24 h in 50 mM Tris/150 mM NaCl and embedded in paraffin. Serial sections (6 µm thickness) through the entire heads were prepared, stained with hematoxylin and eosin (Vector), and examined by bright-field microscopy. Images of the sections that include the retina were captured, and retina thickness was measured using ImageJ. To analyze fly brain structures, paraffin sections of heads of 1 dae males were prepared as described previously (115). Heads from five to ten flies were analyzed for each genotype.

Western blotting

Total tau was probed with anti-tau monoclonal antibody (Tau46, Zymed), and phosphorylated tau was probed with phospho-tau specific antibodies against phospho-Thr175/181 (AT270, Pierce), phospho-Ser199 (Biosource), phospho-Ser202 (CP13, Peter Davies), phospho-Ser409 (Biosource) and phospho-Ser422 (Biosource). Fifteen fly heads for each genotype were collected at 1-3 dae and homogenized in SDS-Tris-Glycine sample buffer, separated by 10% Tris-Glycine gel and transferred to nitrocellulose membrane. The membranes were blocked with 5% milk (Nestle), blotted with the antibodies described above, incubated with appropriate secondary antibody and developed using ECL plus Western Blotting Detection Reagents (GE Healthcare). The signal intensity was quantified using ImageJ (NIH). Western blots were repeated a minimum of three times with different animals and representative blots are shown.
Extraction of sarkosyl-insoluble tau

Sarkosyl-insoluble tau was prepared as described in (50,116,117). Briefly, fly heads were homogenized in 10 volumes of buffer and centrifuged for 20 min at 15,000 g. The supernatant was brought to 1% N-lauroylsarcosinate, incubated for 1 h at room temperature with shaking and then further centrifuged for 1 h at 100,000 g. The resultant high-speed pellet was re-suspended at 10 µl per 50 mg of starting material. Tau levels in the sarkosyl-soluble and -insoluble fractions were analyzed by western blot. The sarkosyl-insoluble fraction was subjected to transmission electron microscopy. Flies were at 35 dae.

Climbing assay

The climbing assay was performed as previously described (60). Approximately 25 flies were placed in an empty plastic vial. The vial was gently tapped to knock the flies to the bottom, and the number of flies at the top, middle, or bottom of the vial was scored after 10 s. Experiments were repeated more than three times, and a representative result is shown.
Utility of 3T MRI in Women with IB1 Cervical Cancer in Determining the Necessity of Less Invasive Surgery

Simple Summary: 3T MRI can estimate the tumor volume of early cervical cancer more precisely than physical examination. Women with IB1 cervical cancer that is invisible on 3T MRI have no parametrial invasion, so parametrectomy can be skipped or minimized. Vaginal invasion or lymph node metastasis is rare in these women, so vaginectomy or lymph node dissection can be performed less aggressively. Therefore, less invasive surgery can be one of the treatment options if IB1 cervical cancer is invisible on 3T MRI.

Abstract: Purpose: Cervical cancer that is invisible on magnetic resonance imaging (MRI) may suggest lower tumor burden than physical examination. Recently, 3 tesla (3T) MRI has been widely used prior to surgery because of its higher resolution than 1.5T MRI. The aim was to retrospectively evaluate the utility of 3T MRI in women with early cervical cancer in determining the necessity of less invasive surgery. Materials and methods: Between January 2010 and December 2015, a total of 342 women with FIGO stage IB1 cervical cancer underwent 3T MRI prior to radical hysterectomy, vaginectomy, and lymph node dissection. These patients were classified into cancer-invisible (n = 105) and cancer-visible (n = 237) groups based on the 3T MRI findings. These groups were compared regarding pathologic parameters and long-term survival rates. Results: The cancer sizes of the cancer-invisible versus cancer-visible groups were 11.5 ± 12.2 mm versus 30.1 ± 16.2 mm, respectively (p < 0.001). The depths of stromal invasion in these groups were 20.5 ± 23.6% versus 63.5 ± 31.2%, respectively (p < 0.001). Parametrial invasion was 0% (0/105) in the cancer-invisible group and 21.5% (51/237) in the cancer-visible group (odds ratio = 58.3, p < 0.001).
Lymph node metastasis and lymphovascular space invasion were 5.9% (6/105) versus 26.6% (63/237) (odds ratio = 5.8, p < 0.001) and 11.7% (12/105) versus 40.1% (95/237) (odds ratio = 5.1, p < 0.001), respectively. Recurrence-free and overall 5-year survival rates were 99.0% (104/105) versus 76.8% (182/237) (p < 0.001) and 98.1% (103/105) versus 87.8% (208/237) (p = 0.003), respectively. Conclusions: 3T MRI can play a great role in determining the necessity of parametrectomy in women with IB1 cervical cancer. Therefore, invisible cervical cancer on 3T MRI will be a good indicator for less invasive surgery.

Introduction

Radical hysterectomy, vaginectomy, and lymph node (LN) dissection have been considered the standard treatment for International Federation of Gynecology and Obstetrics (FIGO) stage IB1 cervical carcinoma. Many investigations have demonstrated that these surgical procedures provide good long-term survival rates in women with this early cervical cancer [1-4]. However, these surgical procedures may also induce various complications. Because radical hysterectomy includes parametrectomy, which can injure autonomic nerves, bladder anatomy gradually deforms and bladder function deteriorates [2-4]. Furthermore, this nerve injury may cause anorectal motility disorder and sexual dissatisfaction [5-7]. Excessive vaginectomy can further impair postoperative sexual satisfaction. Lymph node dissection may lead to lymphedema in women with IB1 cervical cancer [8-10]. Wider availability of screening examinations helps detect early cervical cancer in relatively young women, who then face reduced quality of life from life-long postoperative morbidities. Magnetic resonance imaging (MRI) is more precise in estimating tumor volume than physical examination because MRI enables accurate measurement of three-dimensional tumor axes [11-13].
Only a few investigations have reported on the usefulness of such MRI findings, showing that early cervical or endometrial cancer can be treated with less invasive surgery if the tumor is invisible on MRI [14-16]. However, these studies did not deal with the role of 3 tesla (3T) MRI in evaluating early cervical cancer. 3T MRI provides a higher image resolution or shorter scan time compared to 1.5T MRI [17-19]. Therefore, we hypothesized that 3T MRI can provide useful imaging findings to determine whether less invasive surgery is necessary. The aim of this study was to retrospectively evaluate the utility of 3T MRI in women with early cervical cancer in determining the necessity of less invasive surgery.

Materials and Methods

This study (File No.: 2018-06-114) was approved by our institutional review board at Samsung Medical Center, and informed consent was waived due to the retrospective design.

Patients

Between January 2010 and December 2015, a total of 427 patients with FIGO IB1 cervical cancer underwent MRI prior to radical hysterectomy (Figure 1). Among them, 85 patients were excluded because their MRI examinations were performed with a 1.5T scanner or at a local hospital. The remaining 342 patients, all of whom underwent 3T MRI at a single institution, were included in the study population. Of them, 105 women (cancer-invisible group) had a cancer that was invisible on MRI. The remaining 237 women (cancer-visible group) had a cancer that was visible on MR images. The medical records of the cancer-invisible group (age range, 27-81 years; mean ± standard deviation, 48.1 ± 11.4 years) and cancer-visible group (25-81 years; 50.1 ± 11.5 years) were reviewed. Colposcopic biopsy and conization were performed in 70.8% (242/342) and 29.2% (100/342), respectively. Bimanual pelvic and rectovaginal examinations were done to identify the disease extent.
Laboratory tests, chest radiography, cystoscopy, and sigmoidoscopy were routinely performed for the clinical FIGO staging [1]. The time interval between MRI and hysterectomy ranged from 1 to 115 days (15.4 ± 12.8 days) in the cancer-invisible group and from 0 to 79 days (14.3 ± 9.3 days) in the cancer-visible group. The MR images were preoperatively interpreted by one of two radiologists who had approximately 5 years of experience in gynecologic imaging and were additionally reviewed by one radiologist who had approximately 19 years of experience in gynecologic imaging. The MRI diagnoses of only three cases were changed from the cancer-invisible group to the cancer-visible group. Radical hysterectomy, vaginectomy, and LN dissection were performed in all women. Additional surgical procedures depended on the clinical stage and the surgeons' decision. When pelvic lymph nodes were suspicious for metastasis at frozen sectioning, para-aortic LNs were dissected. Two pathologists examined radical hysterectomy, vaginectomy, and LN specimens. They recorded the size of the cervical cancer, histologic type, depth of stromal invasion, lymphovascular space (LVS) invasion, parametrial invasion, vaginal invasion, resection tumor margin, and LN metastasis. After primary treatment, all patients received adequate follow-up procedures. During this period, patients underwent physical examination, Pap smear, and tumor marker every 3 months for the first 2 years, and every 6 months for the next 3 years.
Imaging studies, such as abdominopelvic CT or pelvic MRI, were conducted every 6-12 months for the first 2 years and then annually for the next 3 years.

MR Imaging

The pelvis was scanned with a 3T MRI scanner (Intera Achieva 3T; Philips Medical System, Best, The Netherlands). The upper abdomen was scanned with 3T MRI or CT. MRI sequences of the pelvis included T2-weighted images, T1-weighted images, diffusion-weighted images, and dynamic contrast-enhanced images. T2-weighted images were obtained in axial, sagittal, and coronal planes. The other sequences were obtained in the axial plane. The upper abdomen was scanned from the lower lung to the aortic bifurcation. T2-weighted and T1-weighted axial images were obtained using a fast spin echo sequence. We used the same MR parameters as those that Park et al. used [2].

Data Analysis

Invisible cancer was defined when the cervical tumor was not seen on the T2-weighted, diffusion-weighted, or contrast-enhanced T1-weighted images (Figure 2) [3]. Visible cancer was defined when the cervical cancer was hyperintense on T2-weighted images, hyperintense on diffusion-weighted images, hypointense on apparent diffusion coefficient map images, and poorly enhanced on contrast-enhanced T1-weighted images, as compared to neighboring cervical tissue (Figure 3) [3]. Post-biopsy inflammation was differentiated from cervical cancer with the following findings: it was hyperintense on T2-weighted images; however, it had no diffusion restriction and showed iso- or higher enhancement compared to neighboring cervical tissue on post-contrast MR images [3]. Cancer-invisible and cancer-visible groups were compared regarding patient age, biopsy type, histologic type, and squamous cell carcinoma (SCC) antigen. These groups were also compared in terms of residual tumor size, depth of stromal invasion, LVS invasion, parametrial invasion, vaginal invasion, and LN metastasis.
Post-biopsy tumor sizes on MR images were correlated with those on radical hysterectomy. Recurrent tumor was assessed on the follow-up CT or MR images. Recurrence-free and overall 5-year survival rates were calculated. Cancer-invisible and cancer-visible groups were compared regarding the recurrence rate and recurrence-free or overall 5-year survival rate.

Statistical Analysis

Patient age, tumor size, SCC antigen, and invasion depth were compared with a Mann-Whitney test because these data had a non-Gaussian distribution. Proportions of biopsy types, cancer histology, LVS invasion, parametrial invasion, vaginal invasion, LN metastasis, and recurrence rate were compared with Fisher's exact test. Odds ratios and 95% confidence intervals were calculated using the approximation of Woolf. When a value was zero, 0.5 was added to each cell to make the calculation possible. Recurrence-free and overall 5-year survival rates were compared with Kaplan-Meier survival curves. Commercially available software SPSS 24.0 for Windows (SPSS Inc., Chicago, IL, USA) was used for statistical analyses.
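The odds-ratio calculation described above, Woolf's method with a 0.5 added to every cell when any cell is zero, can be sketched in plain Python. This is an illustrative reimplementation, not the SPSS workflow the authors used; the example counts are the parametrial-invasion figures reported in the abstract (0/105 invisible vs. 51/237 visible).

```python
import math

def odds_ratio_woolf(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with Woolf's 95% CI.
    Adds 0.5 to every cell (Haldane correction) when any cell is zero,
    as described in the Statistical Analysis section."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Parametrial invasion: 51 of 237 in the cancer-visible group,
# 0 of 105 in the cancer-invisible group (the zero cell triggers
# the correction). OR comes out at about 58.3, matching the abstract.
or_, ci = odds_ratio_woolf(51, 237 - 51, 0, 105 - 0)
print(round(or_, 1), ci)
```

With a zero cell the uncorrected odds ratio is undefined, which is exactly why the 0.5 correction is applied before taking the ratio and its log-scale confidence interval.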
A p-value of <0.05 was considered statistically significant. Among the post-hysterectomy histologic findings, parametrial invasion provided the highest odds ratio, 53.8, in the cancer-visible group versus the cancer-invisible group (Table 3). The other odds ratios were 31.4, 15.7, 5.8, 5.1, and 0.3 for recurrent tumor, residual tumor, LN metastasis, LVS invasion, and vaginal invasion, respectively.

Discussion

Our study showed that the cancer-invisible group had no parametrial invasion postoperatively. Furthermore, the residual tumor and invasion depth were much smaller than those in the cancer-visible group. The incidences of LN metastasis and LVS invasion were also much lower than those in the cancer-visible group. Subsequently, the cancer-invisible group had a much higher long-term recurrence-free or overall survival rate than the cancer-visible group. From this point of view, invisible cancer on 3T MRI can be a much stronger indicator than small tumor size. Our study showed that parametrial invasion was postoperatively absent in the cancer-invisible group. Moreover, the odds ratio of this histologic finding was much higher compared to those of the other histologic findings. Accordingly, invisible cancer on 3T MRI can strongly suggest no sign of parametrial invasion [3]. Park et al. also have reported that there was no parametrial invasion in patients with 1B1 cervical cancer that was invisible on 1.5T or 3T MRI [2]. However, the number of patients undergoing 3T MRI was small in their study. Kamimori et al. and Yamazaki et al. also have reported that parametrial invasion is rare in small cervical cancer measured on preoperative MRI [12,13]. They did not state whether their MRI scanner was 1.5T or 3T.
The other prognostic factors are still difficult to identify precisely with preoperative MRI. LN metastasis was most commonly assessed with MRI. Previously reported papers have shown that MRI sensitivity for detecting LN metastasis was only 30-73% [14-17]. They used the lymph node size to determine if there was metastasis. Diffusion-weighted MRI improves the diagnostic accuracy, but the sensitivity and specificity were 86% and 84%, respectively [18]. Our study also showed that if IB1 cancer was invisible on 3T MRI, the incidence of LN metastasis was much smaller than that reported with diffusion-weighted imaging [18]. However, further investigation is necessary to determine if LN dissection can be skipped in invisible IB1 cervical cancer. The incidence of LN metastasis was slightly higher in the cancer-invisible group compared to that in the Park et al. investigation [2]. LVS invasion or depth of stromal invasion is still impossible to detect with preoperative MRI. Overall, tumor visibility on 3T MRI is a good prognostic factor for predicting parametrial invasion in patients with 1B1 cervical cancer. Ultrahigh-field MRI at 7T or higher will be introduced in the near future, and these prognostic factors can then be assessed more precisely [19,20]. Parametrectomy may injure the ureter or nerves in the parametrium [5-7,21]. Ureter injury manifests postoperatively as urine leakage or ureter obstruction [21]. This complication requires interventional or surgical procedures. Furthermore, many bundles of autonomic nerves are interrupted by parametrectomy, so the autonomic function of the bladder, vagina, or rectum can be impaired. Subsequently, follow-up CT and MRI can show gradual deformity of the urinary bladder, such as a thick wall, coarse trabeculation, residual urine, or an over-distended bladder [5-7].
Neurogenic bladder is not uncommon after radical hysterectomy, as many patients experience urinary frequency changes, recurrent cystitis, or self-catheterization. The autonomic nerve injury may also induce sexual or anorectal dysfunction [22-24]. Therefore, parametrectomy should be avoided in patients who do not have parametrial invasion. The incidence of invisible 1B1 cancer on MRI is not well known. Park et al. have reported that it accounts for 24.9% (86/346) of IB1 cervical cancers [2]. Our study showed that it slightly increased to 30.7% (105/342). More available screening tests and MRI examinations may increase the early detection of IB1 cervical cancer [3]. Moreover, the ongoing development of MRI techniques will provide more precise information on cancer detection. Importantly, most of those patients have early stage disease. As a result, parametrectomy can be skipped in a larger number of patients with 1B1 cervical cancer. The incidence of postoperative complications will be reduced with less invasive surgery. Nevertheless, gadolinium has been found to cross the placenta and to stimulate malformations in animal models [25]. Hence, its use is contraindicated in the first trimester of pregnancy in patients with cervical cancer, and improved MRI techniques are warranted [26]. Compared to the results of Park et al.'s study, the incidences of residual tumor, vaginal invasion, lymph node metastasis, and lymphovascular space invasion were slightly higher in our study. Our study showed that the proportion of adenocarcinoma (38.1%) was relatively higher compared to that in their study (29.1%). Thus, we think that the higher proportion of non-SCC cervical cancer might explain the differences in the incidences of the other pathologic findings. An invisible tumor on 3T MRI does not justify skipping vaginectomy and lymph node dissection in patients with 1B1 non-SCC cervical cancer.
Further investigations are necessary to compare SCC and non-SCC in terms of MRI findings or clinical outcomes. This study has some limitations. First, it was conducted using a retrospective design. The likelihood of selection bias cannot be excluded. This limitation may influence the histologic type of cervical cancers. Second, baseline characteristics were not matched with propensity scores between the groups. Third, the incidence of non-squamous cell cancers was relatively higher than in previously published studies. This finding might result from selection bias in that many squamous cell carcinomas that did not fit the inclusion criteria were excluded. Non-SCC cancers showed more aggressive behavior than SCC, which might influence the residual tumor, vaginal invasion, LVS invasion, or LN metastasis. Fourth, postoperative complications were not qualitatively or quantitatively assessed to compare between the groups. We relied on follow-up CT or MRI findings to detect anatomical changes. However, this assessment is not sufficient to precisely identify functional changes. Fifth, our scanners were an initial version of 3T MRI; they have since been replaced with upgraded 3T systems. Furthermore, non-SCC cervical cancer had a greater number of residual tumors than SCC cervical cancer. These factors may result in discordant findings between 3T MRI and pathologic examination in terms of residual tumors. Advanced ultrasonography with 3D and color Doppler display as well as transvaginal elastography may also be useful to evaluate early cervical cancer [27].

Conclusions

3T MRI can be a useful tool to guide surgical interventions in patients with IB1 cervical cancer. If a cervical cancer is not depicted on 3T MRI, gynecologists can skip parametrectomy or remove as little as possible of the parametrial tissue.
Subsequently, postoperative complications, such as bladder dysfunction, sexual dissatisfaction, or anorectal dysfunction, will be reduced in patients with invisible IB1 on 3T MRI. However, invisible cancer on 3T MRI cannot completely exclude the likelihood of vaginal invasion or LN metastasis, even if the incidences are much lower than those in patients with visible IB1 cervical cancer on MRI. Future introduction of higher than 3T MRI will contribute to better determining if less invasive surgery is necessary.
Characterization of the Plasmidome Encoding Carbapenemase and Mechanisms for Dissemination of Carbapenem-Resistant Enterobacteriaceae

Global dissemination of carbapenem-resistant Enterobacteriaceae (CRE) threatens human health by limiting the efficacy of antibiotics even against common bacterial infections. Carbapenem resistance, mainly due to carbapenemase, is generally encoded on plasmids and is spread across bacterial species by conjugation. Most CRE epidemiological studies have analyzed whole genomes or only contigs of CRE isolates. Here, plasmidome analysis on 230 CRE isolates carrying blaIMP was performed to shed light on the dissemination of a single carbapenemase gene in Osaka, Japan. The predominant dissemination of blaIMP-6 by the pKPI-6 plasmid among genetically distinct isolates was revealed, as well as the emergence of pKPI-6 derivatives that acquired advantages for further dissemination. Underlying the vast clonal dissemination of a carbapenemase-encoding plasmid, heteroresistance was found in CRE offspring, generated by the transcriptional regulation of blaIMP-6, stabilization of blaIMP-6 through chromosomal integration, or broadened antimicrobial resistance due to a single point mutation in blaIMP-6.

KEYWORDS Enterobacteriaceae, IMP-1, IMP-6, carbapenem resistance, carbapenemase, chromosomal integration, heteroresistance, plasmid analysis, plasmid dynamics, plasmidome

The rapid global dissemination of multidrug-resistant Enterobacteriaceae threatens health care systems worldwide (1). Carbapenem-resistant Enterobacteriaceae (CRE) are of major concern because alternative treatment options are limited (2). Carbapenem resistance is primarily conferred by carbapenemases that hydrolyze carbapenem (3).
KPC, NDM, and OXA-48 are the most commonly detected carbapenemases (3). Carbapenemase genes are generally plasmid encoded and are frequently transmitted across species (4). Therefore, genetic tracking of plasmids encoding carbapenemase genes has allowed the monitoring of the spread of CRE isolates. For example, structural similarities among plasmids from isolates obtained in a single hospital outbreak allowed elucidating links between patients carrying the isolates (5-7), and plasmid data accumulated globally revealed the worldwide spread of an epidemic plasmid carrying bla KPC (8). However, most regional surveillance studies compared the whole genomes or only contigs of CRE isolates without analyzing the clonality of the spreading carbapenemase-encoding plasmids, and few studies have comprehensively analyzed carbapenemase-encoding plasmids broadly spreading in a certain region (9). We previously conducted a surveillance study of CRE in 1,507 patients from 43 hospitals in northern Osaka (population, 1,170,000; area, 307 km²), Japan (10), and we reported that 12% of the patients carried CRE and 95% of CRE isolates harbored bla IMP-6, the predominant carbapenemase in Japan. The predominance of this particular carbapenemase gene might have resulted from vigorous horizontal spreading of a specific plasmid carrying bla IMP-6 in this region. The aim of the present study was to analyze the plasmidome transmitting carbapenemase genes in order to unveil the mechanisms for their regional dissemination.

RESULTS

Dissemination of pKPI-6. All bla IMP-positive CRE isolates of Escherichia coli (n = 135) and Klebsiella pneumoniae (n = 95) were classified into seven groups based on the results of S1-PFGE followed by Southern blotting hybridization with probes for the bla IMP and repA genes encoded on the IncN-type plasmid pKPI-6, sporadically reported as a plasmid carrying bla IMP-6 (11) (Fig. 1). Ninety-nine of the 135 E. coli isolates (73%) and 88 of the 95 K.
pneumoniae isolates (93%) carried plasmids classified as group pKPI-6 based on plasmid size and replicon type (see Fig. S1 in the supplemental material). Next, we compared the similarity between pKPI-6 and 39 representative plasmids categorized as group pKPI-6 based on whole-genome sequencing (WGS) data using Illumina HiSeq 3000 or Illumina MiSeq (see Fig. S1). The overall sequence identity was 99% ± 0.28%, and the sequence coverage was 98% ± 4.0% (mean ± standard deviation). The complete sequences of three plasmids were previously confirmed as clonal with pKPI-6 using a combination of PacBio RS II, Illumina HiSeq 3000, and Southern blot methods (12). These analyses confirmed that pKPI-6 was the predominant plasmid responsible for the transmission of bla IMP-6 in the study area (187 of 230 [81.3%] bla IMP-positive CRE isolates). Barring occasional isolations of organisms coharboring different carbapenemase genes (13,14), few studies have shown the coexistence of two identical carbapenemase genes on different plasmids within an isolate (15). WGS revealed that isolate E119 carried pKPI-6 and an IncF-type plasmid (pEC743_1) that had a bla IMP-6 cassette from pKPI-6 integrated (49) (see Fig. S3B and C).

Characterization of IncF plasmids encoding bla IMP-6. In addition to the K. pneumoniae isolates carrying group non-IncN KP plasmids, E. coli isolates carrying plasmids without an IncN replicon were found in a single hospital (hospital D; Fig. 1A). WGS of these isolates revealed that they harbored nearly identical bla IMP-6-encoding plasmids with an IncFIA-type replicon (categorized as group IncF) (Fig. 2A; see also Table S1). These plasmids were generated by integration of a cassette carrying bla IMP-6 on pKPI-6 into another IncF plasmid at IS26. This IncF plasmid (pEC302/04; Fig. 2B) has been reported to transmit antimicrobial resistance since 1965 (16). The MICs of meropenem for the E. coli isolates carrying group IncF plasmids were low compared to those of E.
coli isolates harboring other bla IMP-6-encoding plasmids, such as pKPI-6 (see Fig. S4). Mutations or deletions in the porin (OmpF) gene in E. coli have been reported to enhance resistance to β-lactams (17). However, all E. coli isolates carrying group IncF plasmids had a premature termination codon within ompF, whereas the other isolates carried wild-type ompF (Table 1; see also Table S2). MICs of meropenem were low for these group IncF plasmid-carrying isolates, despite them being OmpF deficient. To investigate carbapenem resistance in the same genetic background, plasmids from representative isolates in each bla IMP-6 carriage group were transformed into the E. coli TOP10 strain, and MICs for the transformants were determined. Transformant T305, carrying pE305_IMP6 single of group IncF from E. coli isolate E305, was more susceptible to meropenem than transformants carrying bla IMP-6-harboring plasmids of the other groups (Table 2). The transcription of bla IMP-6 in the pE305_IMP6 single transformant was significantly lower than that in the pKPI-6 transformant (see Fig. S5A), although the plasmid copy numbers in the bacterial cells were comparable (see Fig. S5B). These results indicated that the lower MICs of meropenem in E. coli isolates carrying group IncF plasmids were due to the reduced transcription of bla IMP-6. WGS of E305 and E318 revealed the complete sequence of pE318_IMP6; however, it failed to determine the complete sequence of pE305_IMP6. Therefore, to analyze the structure of pE305_IMP6, we used a combination of WGS, Southern blotting, and qPCR analyses. The length and depth of each contig of pE305_IMP6 deduced from WGS are shown in the de novo assembly graphs generated using the Bandage software (19) in Fig. 3A. The total length of pE305_IMP6 deduced from WGS data was ~149 kbp. In addition to showing high similarity to each other, the region containing bla IMP-6 bracketed by a set of IS26 was identical to a part of pKPI-6.
Block arrows indicate confirmed or putative open reading frames (ORFs) and their orientations. Arrow size is proportional to the predicted ORF length. The color code is as follows: red, carbapenem resistance gene; yellow, other antimicrobial resistance gene; light blue, conjugative transfer gene; blue, mobile element; and purple, toxin-antitoxin. Putative, hypothetical, or unknown genes are represented as gray arrows. The gray-shaded area indicates regions with high identity between the two sequences. Accession numbers of the plasmids are indicated in brackets. (B) Ancestor of plasmid pE301_IMP6. The backbone of plasmid pE301_IMP6, which is representative of the plasmids in group IncF, corresponded to the structure of plasmid pEC302/04, reported in Malaysia in 2004. However, according to Southern blotting results, pE318_IMP6 and pE305_IMP6 were ~145 and ~200 kbp in size, respectively (Fig. 3B). Based on the depth of each contig, the copy number of each contig was predicted as follows: Contig3, 1 copy; Contig2 and Contig5, 6 copies; Contig1 and Contig6, 3 copies; and Contig4, 5 copies (Fig. 3A). Therefore, pE305_IMP6 was predicted to have an ~19-kbp repeat region consisting of triplication of Contig1 and Contig6, sextuplication of Contig2 and Contig5, and quintuplication of Contig4 (Fig. 3C). Except for the repeat region, pE305_IMP6 and pE318_IMP6 exhibited high sequence similarity (identity, 99.27%; coverage, 100%) (Fig. 3D). The bla IMP-6 gene was located on Contig6 and was predicted to be triplicated. qPCR analysis corroborated that pE305_IMP6 carried three copies of bla IMP-6, whereas pE318_IMP6 harbored a single copy (see Fig. S5C). bla IMP-6 transcription was significantly higher in isolate E305 than in isolate E318 (Fig. 3E), even though the bla IMP-6-carrier plasmid copy numbers in the cells of these isolates were not significantly different (see Fig. S5D).
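The per-contig copy numbers above were inferred from relative sequencing depth. A minimal sketch of that arithmetic, assuming Contig3 as the single-copy baseline; the depth values are illustrative placeholders, not data from the study:

```python
# Sketch (not the study's actual pipeline): estimating per-contig copy number
# from relative sequencing depth, as described for pE305_IMP6. The depth
# values and the choice of Contig3 as the single-copy baseline are
# illustrative assumptions.

def estimate_copy_numbers(depths, single_copy_contig):
    """Round each contig's depth against the single-copy contig's depth."""
    baseline = depths[single_copy_contig]
    return {name: max(1, round(depth / baseline)) for name, depth in depths.items()}

# Hypothetical depths consistent with the predicted copy numbers
# (Contig3 = 1 copy; Contig1/Contig6 = 3; Contig4 = 5; Contig2/Contig5 = 6).
depths = {"Contig1": 92, "Contig2": 183, "Contig3": 30,
          "Contig4": 152, "Contig5": 178, "Contig6": 95}
print(estimate_copy_numbers(depths, "Contig3"))
```

With these hypothetical depths the ratios round to the copy numbers quoted in the text (1, 3, 5, and 6).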
Triplication of bla IMP-6 in tandem resulted in a higher transcription level in E305 and thus a higher level of resistance to meropenem. Subculture of the clonal isolate E305 in broth medium revealed a mixture of subpopulations of bacteria carrying a plasmid with multiple bla IMP-6 copies (which represented the majority) and bacteria carrying a plasmid with a single bla IMP-6 copy. In Southern blotting analyses for bla IMP-6, a faint band at ~145 kbp was observed in addition to the major band at ~200 kbp (Fig. 3B). It was also found that T305 (a transformant of pE305_IMP6 single extracted from E305) carried an ~145-kbp plasmid without bla IMP-6 amplification due to recA deficiency in the recipient E. coli TOP10 strain (see Fig. S5E) (20). qPCR analysis confirmed that T305 carried one bla IMP-6 copy on its plasmid (see Fig. S5F). These results indicated the existence of a subpopulation carrying a plasmid with one bla IMP-6 copy within E. coli isolate E305, whereas the majority of the population carried a plasmid harboring three copies of bla IMP-6. Comparison of CRE isolates carrying pKPI-6 with those carrying other groups of plasmids harboring bla IMP-6. bla CTX-M-2, which is an ESBL gene located distant from bla IMP-6 on pKPI-6, compensated for the narrow range of hydrolysis of β-lactams by IMP-6 (11, 18). However, these two β-lactamase genes were not always transferred together from pKPI-6 to another plasmid. Plasmids categorized as group non-IncN KP and group IncF did not carry ESBL genes (see Table S3) and rarely conferred resistance to penicillins, in contrast to pKPI-6, which confers broad resistance to β-lactams (Fig. 1). We next measured the conjugation efficiency of representative plasmids in each group (Table 2).
pKPI-6 plasmids and group IncN plasmids, which had the entire pKPI-6 plasmid incorporated, showed a higher conjugation efficiency than group non-IncN KP and group IncF plasmids. These characteristics may have facilitated the vast horizontal dissemination of pKPI-6 in the study area. Compared with the chromosomal diversity among E. coli isolates bearing pKPI-6, K. pneumoniae isolates carrying pKPI-6 exhibited higher clonality, as indicated by pulsed-field gel electrophoresis with XbaI (XbaI-PFGE) analysis (Fig. 1). This may be explained by the presence of the kikA gene on pKPI-6, the product of which reportedly promotes cell death of K. pneumoniae following conjugation (21). The conjugation efficiency of pKPI-6 into K. pneumoniae ATCC 13883 was considerably lower than that into E. coli TUM3456 (3.3 × 10⁻⁴ and 3.7 × 10⁻¹, respectively). Possibly, only "kikA-resistant" K. pneumoniae are able to acquire pKPI-6, leading to clonal similarity among the K. pneumoniae isolates bearing pKPI-6. DISCUSSION IMP-producing Enterobacteriaceae have been reported sporadically on a global basis (2). IMP-4-producing Enterobacteriaceae are endemic to Australia (22), and IMP-1, -4, and -8 producers have been occasionally detected in China (23). Our study revealed the exclusive dissemination of IMP-6 producers (95% of CRE isolates) in northern Osaka, Japan, consistent with findings in previous studies (11, 24, 25). By analyzing the plasmidome transmitting bla IMP, we clarified the relationships between bla IMP-harboring isolates that seemed diverse based on XbaI-PFGE analysis or comparison of short-read WGS results. The present study revealed predominant dissemination of pKPI-6 in the study area, which may have resulted in the emergence of diverse derivatives. Group IncF plasmids possessed similar genomic structures, consisting of the globally disseminated IncF plasmid and a bla IMP-6 cassette cointegrated on the pKPI-6 genome, without accompaniment of bla CTX-M-2 (Fig. 2).
Our analysis revealed that bla IMP-6 transcription was lower from the group IncF plasmid (pE305_IMP6 single) than from pKPI-6 in E. coli cells of the same genetic background (see Fig. S5A). Low carbapenemase gene transcription is considered one of the reasons for reduced resistance to meropenem (26). Therefore, CRE isolates carrying group IncF plasmids might have a reduced fitness cost for the carriage of bla IMP-6, leading to further environmental dissemination of bla IMP-6 (27). Unlike the other plasmids in group IncF, the complete sequence of pE305_IMP6 could not be obtained by long-read or short-read sequencing because of a signature 19-kbp repeat sequence unit. Based on combined WGS, Southern blotting, and qPCR data, we proposed a hypothetical structure of pE305_IMP6 (Fig. 3C). Our results indicated that, despite its clonal origin, CRE isolate E305 comprised two different populations: a major population carrying pE305_IMP6 with multiple bla IMP-6 copies and a minor population carrying pE305_IMP6 single with a single bla IMP-6 copy (Fig. 3B; see also Fig. S5E and F). Moreover, the amplification of bla IMP-6 on the IncF plasmid enhanced the transcription of bla IMP-6 (Fig. 3E), resulting in increased resistance to meropenem (Table 3). These results are consistent with previous studies reporting higher resistance to carbapenem through amplification of bla OXA-58 (28) and bla NDM-1 (20). All E. coli isolates carrying group IncF plasmids were found to possess ompF with a premature termination codon (see Table S2). When an isolate producing wild-type OmpF carries this plasmid with a single copy of bla IMP-6, the isolate is difficult to detect due to weaker resistance to meropenem. However, when an isolate with a porin mutation acquires a group IncF plasmid with multiple bla IMP-6 copies, it may abruptly exhibit strong resistance to meropenem without any direct trace of horizontal transfer. These types of plasmids may act as "hidden transmitters" of bla IMP-6.
Moreover, we demonstrated chromosomal integration of group IncF plasmids in some E. coli isolates. Carbapenemase genes have been reported to be transmitted primarily through plasmid conjugation (4), and chromosomal integration has been reported in a limited number of strains (29). In our study, 3 of 135 E. coli isolates (2.2%) exhibited chromosomal integration of bla IMP-6, which presumably occurred during the vast horizontal spread of pKPI-6. Compared to bla IMP-6 on plasmids, chromosomal bla IMP-6 was not readily transmissible to another patient. However, these isolates may stably possess bla IMP-6 within a patient and not lose carbapenem resistance through the elimination of plasmids harboring bla IMP-6. In the early 1990s, some unique metallo-β-lactamases were reported in Japan (30, 31), followed by the identification of IMP-1 (32). Since then, these β-lactamases have been frequently identified in Japan (33). The genomic structure of pE105_IMP1 (group IMP1) was compared to plasmids pKPI-6 and pE013_IMP6 (group pKPI-6) obtained from K. pneumoniae isolate E013. Differences between pE105_IMP1 and pE013_IMP6 are visually extended at the bottom. The color code is the same as that described in the legend of Fig. 2. (B) Schematic chart of homologous recombination. The 713-bp region of plasmid pE013_IMP6 was removed by homologous recombination at the 32-bp region. The single amino acid variant, IMP-6, was identified in 2001 (18). IMP-1 producers have disseminated mainly in eastern Japan, including Tokyo (24, 34), whereas IMP-6 producers have been almost exclusively found in western Japan, including Osaka (7, 10, 11, 25). Consistent with these findings, in the present study only one K. pneumoniae isolate carrying bla IMP-1, E105, was isolated in hospital A, where CRE carrying pKPI-6 were dominant. The patient carrying CRE isolate E105 was hospitalized for 512 days with other inpatients carrying CRE with pKPI-6, and the isolate showed ~83% similarity with a cluster of K.
pneumoniae isolates carrying pKPI-6 in the XbaI-PFGE phylogeny (Fig. 1B). In addition, WGS of the plasmids revealed that a 714-bp region bracketed by 32-bp homologous regions was the only difference between pE105_IMP1 and pE013_IMP6 (Fig. 5A). This very small fragment appeared to have been removed by homologous recombination in pE105_IMP1 (Fig. 5B). Our results suggest that bla IMP-6 had disseminated via the transmission of pKPI-6, and spontaneous mutation may have generated the bla IMP-1-encoding plasmid providing broader antimicrobial resistance, resulting in increased fitness in the clinical setting. This multi-institutional surveillance study uncovered the clonal dissemination of a plasmid encoding a specific carbapenemase, IMP-6, and demonstrated that a seemingly clonal horizontal dissemination of CRE isolates had embraced heterogeneous minor subpopulations, which exhibited broadened antimicrobial resistance, stable carriage of bla IMP-6 through chromosomal integration, or heteroresistance related to covert bla IMP transmission. Such diverse gene adaptations might also be common among CRE isolates carrying other carbapenemase genes. By multifaceted analysis of the plasmidome, this study revealed the vast regional dissemination of a carbapenemase-encoding plasmid, along with the presence of diverse derivatives that would ensure and facilitate the dissemination of carbapenemase genes in various environments, resulting in serious complications in clinical settings. MATERIALS AND METHODS CRE isolates and PFGE phylogenetic analysis. We performed a CRE surveillance study of 1,507 patients hospitalized in 43 hospitals located in northern Osaka between December 2015 and January 2016 (10). In the present study, we analyzed 230 CRE isolates carrying bla IMP obtained in the surveillance study, including 135 E. coli isolates and 95 K. pneumoniae isolates. All isolates were subjected to XbaI-digested PFGE for phylogenetic analysis (35).
Dendrograms were generated from PFGE patterns by the UPGMA method using BioNumerics software (version 6.6; Applied Maths NV, Sint-Martens-Latem, Belgium). Classification of bla IMP carriage by PFGE and Southern blotting. The size and replicon type of bla IMP-harboring plasmids were determined by S1-nuclease-digested PFGE followed by Southern hybridization (S1 nuclease was obtained from TaKaRa Bio, Shiga, Japan). S1-PFGE and Southern blot hybridization for the bla IMP-6 and repA genes encoded on the IncN-type plasmid were performed as described in our previous study (12). The sizes of bla IMP-encoding plasmids were determined using BioNumerics software (version 7.5; Applied Maths NV). The modes of bla IMP carriage were classified into seven groups based on the sizes and replicon types of the plasmids carrying bla IMP. The groups and their associated characteristics are as follows: group pKPI-6, a pKPI-6-like bla IMP-6-encoding plasmid (~50 kbp, encoding repA for the IncN plasmid); group IncN, a bla IMP-6-encoding plasmid (not ~50 kbp, encoding repA for the IncN plasmid); group non-IncN KP, a bla IMP-6-encoding plasmid (without repA for the IncN plasmid) harbored by K. pneumoniae isolates; group IncF, a bla IMP-6-encoding plasmid (without repA for the IncN plasmid) harbored by E. coli isolates; group double bla IMP-6, multiple plasmids with bla IMP-6 harbored by a single isolate; group chromosome, chromosomal bla IMP-6; group non-typeable, a bla IMP-6-encoding plasmid of unknown size; and group IMP1, a bla IMP-1-carrier plasmid. Isolates classified as chromosomal bla IMP carriers were further analyzed to identify the location of bla IMP. In brief, I-CeuI endonuclease-digested PFGE followed by Southern blotting using probes for the bla IMP-6 and 16S rRNA genes was performed to confirm the location of the bla IMP gene in three E. coli isolates (E138, E300, and E302), as previously described (29). Antimicrobial susceptibility testing.
Susceptibility to ampicillin, ampicillin-sulbactam, piperacillin-tazobactam, piperacillin, cefotaxime, cefepime, imipenem, and meropenem was determined by the broth microdilution method according to the Clinical and Laboratory Standards Institute document M100-S28 (36). MICs of meropenem were also determined using Etest (bioMérieux, Marcy l'Étoile, France), following the manufacturer's instructions. E. coli ATCC 25922 was used as a control strain. Whole-genome sequencing and genomic analysis. Genomic DNA for long- and short-read sequencing was extracted by using a DNeasy PowerSoil kit (Qiagen, Hilden, Germany). Short-read sequencing was conducted on an Illumina HiSeq 3000 sequencer using the KAPA Library Preparation kit (Kapa Biosystems, Woburn, MA) or on an Illumina MiSeq sequencer using the KAPA HyperPlus Library Preparation kit (Kapa Biosystems). Long-read sequencing was conducted on a Nanopore GridION sequencer (Oxford Nanopore Technologies, Oxford, UK) using an SQK-LSK109 1D ligation sequencing kit and an EXP-NBD103 native barcoding kit. The reads were assembled and polished using Unicycler (37). In cases where the complete plasmid sequences could not be constructed, sequences were assembled with Canu (version 1.8) (38) or Flye (39) and improved using Pilon (40) or Racon (41). The ResFinder (43) and PlasmidFinder (42) databases were used to identify antimicrobial resistance genes and plasmid replicon types, respectively. A detailed analysis of the insertion sequences was performed using ISfinder (44). The sequences were annotated with RASTtk (45), and the genomic structures were compared with Easyfig (46). Plasmids similar to those found in this study were identified using BLAST. Conjugation assay. Bacterial conjugation assays were performed using the transformants as donors and the sodium azide-resistant E. coli strain TUM3456 (47) as a recipient.
After mixing overnight cultures of donors and recipients at a 1:10 volumetric ratio, the mixture (10 μl) was incubated on LB agar for 24 h at 37°C. Transconjugants were selected on LB agar containing cefotaxime (2 μg/ml) and sodium azide (150 μg/ml). The conjugation frequency was calculated from the CFU as the number of transconjugants divided by the number of donors plus transconjugants. Determination of the plasmid copy number per host bacterial cell. DNA of E. coli isolates E305 and E318 and of E. coli transformants with plasmids pE188_IMP6 and pE305_IMP6 single (T188 and T305, respectively) was extracted using the DNA minikit (Qiagen). Using qPCR, the copy numbers of the repA2 gene on plasmids pE305_IMP6 and pE318_IMP6 and of the bla IMP-6 gene on pE188_IMP6 were compared to the copy number of the rrsA gene encoding 16S rRNA on the chromosome. qPCRs were carried out using Thunderbird SYBR qPCR Mix (Toyobo Life Science, Osaka, Japan) on a LightCycler 96 system (Roche Life Science, Penzberg, Germany). Primers used for this assay are listed in Table S4 in the supplemental material. qPCR analysis was performed using data from repeated experiments (n = 6), and the plasmid copy number per cell was calculated from cycle threshold (C_T) values using the comparative C_T method (48). Determination of the copy number of bla IMP-6 per plasmid. Plasmids of E. coli isolates E305 and E318 were extracted using a plasmid miniprep kit (Qiagen). Using qPCR, the copy numbers of the bla IMP-6 gene were compared to those of the repA2 gene on plasmids pE305_IMP6 and pE318_IMP6. qPCRs were carried out using Thunderbird SYBR qPCR Mix on a LightCycler 96 system. Primers used for this assay are listed in Table S4. qPCR analysis was performed using data from repeated experiments (n = 5), and the bla IMP-6 copy number per plasmid was calculated from C_T values using the comparative C_T method. Transcription of bla IMP-6. E. coli isolates E305 and E318 and E.
coli transformants T188 and T305 were incubated in LB broth until the optical density at 600 nm reached 0.3 to 0.4. The total RNA was extracted using the RNeasy minikit (Qiagen). RNA was treated with ReverTra Ace qPCR RT Master Mix with gDNA Remover (Toyobo Life Science) to remove contaminating DNA and to reverse transcribe the RNA into cDNA. For quality control, DNase-treated RNA that had not been reverse transcribed was subjected to a DNA contamination test by qPCR. The rrsA gene encoding 16S rRNA served as an endogenous control for normalization. qPCRs were carried out using Thunderbird SYBR qPCR Mix on a LightCycler 96 system. Primers used for this assay are listed in Table S4. qPCR analysis was performed using data from repeated experiments (n = 7), and transcript levels were calculated from C_T values using the comparative C_T method. Data availability. The WGS data are available from the DDBJ (DNA Data Bank of Japan) database under accession numbers AB616660, AP019402, AP019405, and AP022349 to AP022369. Raw data of isolate E305 are available at NCBI under accession numbers DRX184368 and DRX182679. SUPPLEMENTAL MATERIAL Supplemental material is available online only. ACKNOWLEDGMENTS We thank Isao Nishi and Akiko Ueda, Osaka University Hospital, for assistance with antimicrobial resistance assays, and we thank Yoshikazu Ishii, Toho University Graduate School of Medicine, for providing E. coli TUM3456.
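The comparative C_T calculations used in the copy-number and transcription assays above reduce to simple exponent arithmetic. A sketch, assuming ideal (100%) amplification efficiency; the C_T values below are illustrative placeholders, not measurements from the study:

```python
# Sketch of the comparative C_T calculation used in the qPCR assays.
# Assumes ideal (100%) amplification efficiency; the C_T values are
# illustrative, not measured values from the study.

def relative_quantity(ct_target, ct_reference):
    """Target quantity relative to the reference gene: 2^(C_T,ref - C_T,target)."""
    return 2 ** (ct_reference - ct_target)

# A target amplifying ~1.6 cycles earlier than a single-copy reference
# corresponds to roughly 3 copies (2^1.6 is about 3.0).
print(round(relative_quantity(ct_target=18.4, ct_reference=20.0), 1))
```

The same formula applies whether the reference is chromosomal rrsA (copies per cell) or plasmid-borne repA2 (copies per plasmid).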
Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors. Project delivery planning is a key stage used by the project owner (or project investor) for organizing the design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experiment results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. Besides, this paper points out that the Design-Build method will be the prior choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance. Introduction. In the construction industry, the project delivery method not only distributes rights and responsibilities but also organizes the coordination between the project owner and contractors [1]. The PDM has a very important impact on the project's strategic objectives, including project cost control, time planning, quality, and construction operations. Because different methods suit different situations, project owners or project managers have to consider their experience and biases when they evaluate PDMs. More importantly, they need to seriously analyze many complex factors, such as the construction environment, project features, the ability of the project owner, and the abilities of potential contractors.
Several studies have examined the relationship between the PDM and project performance. Based on a comparative study of 351 cases in the American construction industry, Konchar and Sanvido [2] indicated the performance differences among cost control, quality, and time planning under different PDMs. Ling et al. [3] built a prediction model for project performance in Design-Build (DB) and Design-Bid-Build (DBB) projects by using a multivariable method. In addition, there are other researches on decision-making methods for the selection of the PDM based on different approaches. The research of Mahdi [4] pointed out the indicators affecting the selection of the PDM and the index weights with the help of the analytic hierarchy process (AHP). Similarly, Mafakheri et al. [5] applied an improved AHP to establish a selection model of the PDM with multiple standards and multiple levels; as a result, the negative influence caused by the uncertainty from experts could be decreased. Luu et al. [6, 7] showed the mechanism of how the project owner's demands, project features, and project environment affect PDM selection based on empirical conclusions. By using an artificial neural network (ANN), Ling and Liu [8] measured and calculated the project performance in DB projects. Chen et al. [9] also used an ANN to find the cardinal rules of PDM selection. Unlike [8], [9] verified the validity of the model by using cases from China.
As mentioned above, PDM selection is very important to the project owner. Recent researches mainly focused on the foundation of an index system. However, establishing a reasonable index system is usually affected by the rationality of the indicators, the objectivity of expert evaluations, the validation of large-scale data, and so forth. Apart from this, recent researches did not describe the mechanism of how these factors affect PDM selection. Therefore, this paper focuses on two methods which are widely applied in the construction industry, DBB and DB. In this paper, the project owner and project contractors are regarded as agents. Based on an analysis of the behavior and decision-making rules of the owner agent and contractor agents, we established a computational experiment model and discussed how factors including project complexity, governance strength, and market environment affect PDM selection. Complex Factors Affecting PDM Selection. The project delivery method defines the stakeholders involved in the implementing sequence of project design, procurement, and construction, and it also defines the benefits and risks of the various stakeholders through the types of contract and the design of the system. As a result, we can use three dimensions to redefine the project delivery method: (1) logical dimensions, including tasks in three stages: design, procurement, and construction; (2) dimensions of the management subject, including the project owner, design units, construction units, and the general contractor; and (3) contract dimensions, including a general contract for design and construction or separate contracts for design and construction. These elements can be combined into different PDMs according to the different dimensions. For example, Design-Build contains two tasks, design and build; the general contractor is the subject manager, who signs a single contract with the project owner covering both design and build. In addition, there are some methods which are frequently used in the construction industry,
including Design-Bid-Build, Engineering-Procurement-Construction (EPC), Turnkey, and Fast Track. In the early years, the traditional DBB mode occupied the majority share of the construction market in China. Currently, with the development of both engineering technology and management science, DB is gradually being applied to a number of large-scale infrastructure constructions, such as the Hong Kong-Zhuhai-Macao Bridge, which applied DB to its most complex subproject, the Island-Tunnel project. Therefore, this paper mainly focuses on the DBB and DB modes, and we assumed that the project owner had to make a choice only between DBB and DB. There are several advantages and disadvantages of using DB compared with the traditional DBB mode. The advantages include a closer coordination relationship between design units and construction units, a more specific boundary of rights and responsibilities, and a better ability to cope with risks. The disadvantages include harder measurement of the contributions of construction and design and harder maintenance of the individuality of the design. In summary, any method has specific application conditions. Before project owners decide on a PDM, they have to seriously analyze the specific construction environment (not only the natural environment but also the political and economic environments) and construction demands in order to find the balance among environment, features, and method. Mahdi suggested that the project owner needs to consider 7 aspects, including the owner's features, the project's features, design features, the legal system for contracts, the ability and bias of potential contractors, project risk, and compensation [4]. Similarly, Chen et al. pointed to project targets (time planning, cost control, and project risk), project features (type, scale, and complexity), the owner's features, contractors' abilities, and the project environment (construction industry, technology environment, and legal system) as affecting project delivery planning [9].
Here, we mainly discuss factors from inside and outside of the project. We establish a computational experiment model to examine how project complexity, governance strength, and market environment affect the project owner in selecting the project delivery method, Design-Build or Design-Bid-Build, as seen in Figure 1. Specifically, we use 4 items, including project scale, technical complexity, construction environment, and construction targets, to describe project complexity. As for governance ability, project owners have different abilities to govern. For the DBB mode, the project owner must have the ability not only to revise the plans of both the contractor and the designer but also to perform a delicate balancing act between the different sides in a conflict. For the DB mode, the project owner should have definite demands and the ability to choose appropriate contractors; besides, the project owner has to give the general contractor enough rights while transferring project risks to them. Finally, how do we define the market environment? There are many different angles from which to define the market environment, such as the angles of economy and technology. In this paper, we mainly discuss a market consisting of different levels of construction companies, and we focus on whether the growth of the market can affect PDM selection. Accordingly, technology level, market scale, output, and development are indispensable for describing the market environment. In addition, we use resource integration, internal coordination, and technical innovation to describe a contractor's ability. Multiagent Design. 3.1. Owner Agent. We consider a model which includes one project owner and 50 potential contractors (both designers and constructors). The owner has three activities in the experiment. The decision-making process for the owner agent is set in this experiment as in Figure 2.
(1) Market Investigation. The project owner has to evaluate the potential contractors before bidding. In this model, we regarded potential contractors as competent contractors depending on whether the contractors' ability indexes reach the project complexity index or not. When the number of competent contractors is up to 3, the DB mode will be available in the owner agent's optional list. (2) Choosing Bidding Mode. In the scenario where the DB mode can be applied, the probability for the owner to select the DB mode is calculated as a weighted sum of project complexity and governance strength: P_DB = w_c * C + w_g * G, where we use w_c as the weight of project complexity and w_g as the weight of governance strength, C is the project complexity, and G stands for the governance strength. If the project owner selects the DBB mode, the project owner will require the potential bidders to join the design and construction bidding, respectively. (3) Evaluating and Determining. In this model, we use a synthetic evaluation method to calculate the weighted average value of each bidder and the project owner on the design and construction tasks. Each contractor gets a random initial value describing its ability at the beginning of each experiment, and the synthetic score of the market environment can be calculated as a weighted average between the project targets and the contractors' abilities. The decision-making process of the contractor agent can be described in Figure 3. 3.2. Contractor Agent. (1) Bidding. According to the bidding mode selected by the project owner, the contractor agent chooses different strategies. If the project owner selects the DB mode, the contractor agents submit their entire design and build values as general contractors. Otherwise, for the DBB mode, the contractor will choose design bidding or construction bidding, relying on the higher value.
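The owner agent's first two activities can be sketched as follows. Since the probability formula is only partially legible in the source, the weighted-sum form and the weights w_c = w_g = 0.5 are assumptions, and the ability values are illustrative:

```python
import random

# Sketch of the owner agent's market investigation and bidding-mode choice.
# The weighted-sum probability and the 0.5/0.5 weights are assumptions;
# ability values are illustrative.

def db_available(contractor_abilities, complexity, min_competent=3):
    """DB joins the option list once at least 3 contractors match the complexity."""
    return sum(a >= complexity for a in contractor_abilities) >= min_competent

def choose_mode(complexity, governance, w_c=0.5, w_g=0.5, rng=random):
    """Pick DB with probability w_c*C + w_g*G; otherwise fall back to DBB."""
    p_db = w_c * complexity + w_g * governance
    return "DB" if rng.random() < p_db else "DBB"

abilities = [0.60, 0.75, 0.80, 0.90, 0.55]
if db_available(abilities, complexity=0.7):
    print(choose_mode(complexity=0.7, governance=0.5))
```

When fewer than three contractors can match the complexity index, only DBB remains in the option list, matching the market-investigation rule above.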
(2) Growing Up. The contractors have the ability to grow after each experiment, and the value of a contractor's ability will increase. In this paper, we build a growth model to make the contractor agents increase their ability values after each experiment tick. In the model, the self-growth of an agent's ability can be described as a_i(t+1) = min((1 + r_growth) * a_i(t), a_i^max), where a_i(t) stands for the index array of contractor i's ability at tick t. In the construction industry, there will be a development bottleneck for each enterprise, based on enterprise theory. Therefore, we use a_i^max to describe the maximum ability a contractor can reach; a_i^max is set up at the beginning and stays unchanged until the end of the experiment. The market will grow, and the growth rate is r_growth. Experiment Scenarios and Initial Settings. In order to make the model work, we adopt NetLogo to implement the agent-based modeling and simulation. The initial parameters are shown in Table 1. The initial values of the contractors' abilities follow a Gaussian distribution. We focus on the impacts of the different complex factors on PDM selection. First, we research the impact of project complexity. The method is that we change the parameter of project complexity in each experiment and keep the other parameters, such as the market environment, constant during all experiments. We change the value of project complexity from 0.1 to 0.9 to observe the selection probability. Then, we run experiments to observe the relationship between governance strength and PDM selection by using the same method as the first one. Finally, we build two comparative experiment scenarios to study the relationship between the market environment and PDM selection. The contractors' growth is ignored in one experiment whilst the contractors' growth is considered in the other experiment. In order to acquire reliable data and to decrease the deviation of the experiment results, each experiment is repeated ten times and we use the average level [10, 11].
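The growth rule can be sketched as a capped multiplicative update. Because the source formula is garbled, the capped form, the rate, and the starting values here are assumptions:

```python
# Sketch of the contractor growth model: each tick, ability grows at rate
# r_growth but is capped at the contractor's preset maximum (the enterprise
# "development bottleneck"). The form, rate, and values are assumptions.

def grow(ability, max_ability, r_growth=0.02):
    """One tick of self-growth, bounded by the contractor's maximum ability."""
    return min(ability * (1.0 + r_growth), max_ability)

ability = 0.60
for _ in range(50):  # 50 experiment ticks
    ability = grow(ability, max_ability=0.85)
print(round(ability, 2))
```

After enough ticks the ability saturates at the preset maximum, reproducing the bottleneck behavior described above.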
Experiment Results and Discussions

5.1. The Impact of Project Complexity. Project complexity affects the project delivery method mainly through two aspects. For one thing, the complexity of the particular project directly determines whether to select the Design-Build mode, due to risk transfer from the project owner to general contractors, which can also be considered a risk reduction. For another, project complexity affects the project owner's power of administering, including construction contracts, time planning, construction operations, the estimating process, and construction labor [12].

In this experiment, the abilities of both owner and contractor remain unchanged, and the parameter of project governance strength is set to 0.5 during the whole experiment. Moreover, neither designers nor constructors grow up. For the given values, the average ability of the construction contractor agents is 0.751 and that of the designers is 0.749. We make the experiment run 100 times, and each experiment is repeated 10 times to acquire average data for statistical analysis; Figure 4 shows the results.

Analysed from Figure 4, the following conclusions and discussions can be drawn. (1) The project owner inclines towards the Design-Bid-Build mode in simple or less complex projects. When the complexity degree of a project is less than 0.5, the project is easy to design and build relying on existing engineering technology and management. Besides, the project owner has enough experience and ability to administrate the building process. In low-risk and regular projects, a case study in China has shown that the project owner/manager is more likely to choose DBB mode to enforce their administration over the whole construction period and the entire process; in this way, the project can be implemented with high quality and low cost [13].
(2) With the increase of the project complexity degree, the project owner is more likely to choose the Design-Build mode. When the complexity degree of a project is between 0.5 and 0.8, the project owner needs high ability to control the building process and to coordinate the multiple relationships (such as the designer, different contractors, the consultant, and even the local government) due to the increase in project scale, the difficulty of construction technology, and environmental uncertainty. Particularly, in many countries the contractors are usually more professional than the project owners. As a result, the project owner inclines to allocate more project-management tasks to experienced and competent contractors in order to control the budget and risk and to reduce project changes and claims [14]. (3) The project owner prefers to select DBB mode in complex megaprojects. To be specific, when the complexity degree is up to 0.8, the project can only be realized by general contractors with extraordinarily qualified credentials in both design and build. It is very hard to find a construction company, or a joint venture consisting of several companies including designer, constructor, and consultant, able to deal with the technical and management difficulty. Therefore, the project owner has to decompose the project into different modules and subprojects and then find qualified contractors to fulfill the decomposed project under DBB mode.

5.2. The Impact of Governance Strength. How can a good project owner be evaluated? First of all, a good owner has to define the project clearly in order to organize the implementation effectively. Moreover, the capacity to estimate the quality of design and to coordinate conflicts during the whole construction period is indispensable [15]. Finally, a good owner can deal with emergencies and risk properly and efficiently.
During the experiment, we keep the project complexity and market environment unchanged; the average abilities of the constructors and designers remain 0.742 and 0.756, respectively. The parameter of project complexity is set to 0.5, and neither designers nor constructors grow up after each experiment. Figure 5 shows the probability for owners to decide which project delivery mode to choose.

As shown in Figure 5, project owners rely on DB mode to a great extent when they do not have enough ability to handle the project management. Under DBB mode, all of the involved contractors, including designer, constructor, supervisor, and consultant, take responsibility directly to the project owner according to the contract, and the project owner needs to coordinate the different sections and administrate the whole construction procedure [16]. However, general contractors are in charge of the design and build process under DB mode, and the project owner only has to oversee the work of one general contractor.

In summary, with the enhancement of governance strength, the project owner will have a strong desire to participate in every procedure, and this makes DBB mode more and more popular with project owners.

5.3. The Impact of Market Environment. Through project practice, contractors not only accumulate abundant experience, but also cultivate talents in different construction fields. Even if contractors fail in project bidding, they will recognize their inadequacies better, and then they will also grow up. Indeed, it is very useful to examine how the average level of the construction industry affects PDM selection. For this purpose, we repeat the former two groups of experiments on the basis of adding the self-growth model, and then we compare the new results with the former experiments.

5.3.1. The Impact of Project Complexity under Market Growing.
We let each index degree of the contractors' ability increase every 50 experiment ticks and repeat the experiment process as in Section 5.1. Besides, in order to estimate the relationship between the growth rate and PDM selection, we set different growth-rate values and run three groups of experiments. To be more precise, all three groups share the same parameters except growth, which stands for the contractor's growth rate. The growth rate takes the values 0, 0.05, and 0.1, meaning remain-stagnant, slow-growth, and fast-growth, respectively.

We can see from Figures 6(a), 6(b), and 6(c) that, when remain-stagnant agents are compared with growing agents, DB mode has a chance to be chosen by the project owner, while it is never considered in highly complex projects with a complexity degree of up to 0.8. With the help of practice, contractors' abilities improve constantly. In this way, a number of experienced contractors with high reputations and excellent performance have emerged in the construction industry [17]. These enterprises have the ability to become DB general contractors, and they can reduce the risk and cost if the project owner decides to use DB. In consideration of the potential benefits for project time control and quality, the project owner takes DB mode as an alternative solution.

Comparing slow-growth agents with fast-growth agents, as seen in Figures 6(b) and 6(c), the project owner will incline to select DB mode even in the most complex projects. Actually, the project owner usually does not have enough confidence to handle rather complex projects, especially so-called unparalleled megaprojects, and the project owner will hope to transfer management risk to contractors in order to reduce project risk. The faster the contractor grows, the more likely the contractor's ability suffices to administrate the complex project. As a result, the project owner will be apt to use DB mode when there are fast-growth contractors in the construction industry.
5.3.2. The Impact of Governance Strength under Contractor's Self-Growth. Similar to the experiment procedure in Section 5.3.1, we also repeat the experiments in Section 5.2 and obtain three groups of experiment results. Figure 7 shows the probability of the project owner choosing DB or DBB as the project delivery mode under different growth rates of the contractor agents. It is evident from Figures 7(a), 7(b), and 7(c) that project owners tend to choose DB for managing the building process when their ability level is under about 0.5; we can define these project owners as weak owners. Weak owners do not have enough experience and knowledge to administrate the whole building procedure. As mentioned earlier in this paper, weak owners need contractors who can undertake much of the risk of project management.

When contractor agents grow up, the values describing their abilities increase continuously. Therefore, there will be many contractors who can successfully act as designers, builders, and even general contractors. For strong owners (with an ability level above 0.5) who can easily administrate the project delivery, it is difficult to determine which method is the best choice, as we can see from the comparison between Figures 7(a) and 7(b). The project owner will make the PDM decision according to their preference and experience.
When the value of governance strength rises to a certain level, the probabilities of selecting the DB and DBB modes are not far from each other. What, then, is the key factor affecting the project owner's decision? From Figures 7(b) and 7(c), it is obvious that the project owner is more likely to select DB mode when the contractor agents have a higher growth rate. Compared with slow-growth contractors, fast-growth contractors have many advantages in planning, monitoring, coordinating, controlling, communication, and decision-making [18]. All of these advantages greatly help the project owner reduce project risk and ease their burden. Therefore, in order to achieve project success efficiently, DB mode becomes the project owner's first choice when the potential companies in the construction industry develop quickly.

Based on the former experiments, we find that the market environment is a crucial factor affecting PDM selection. Above all, in order to promote the Design-Build project delivery method, it is very important for companies to develop their abilities through self-growth.

Conclusions

The scientific and reasonable selection of the project delivery method is crucial for project owners to deal with project complexity and to achieve project success. This paper assumes that the project owner focuses on many factors that may affect the PDM selection, and these factors and their interrelationships constitute a complex factor system. We establish an agent-based simulation model in order to analyze the selection mechanism of the project delivery method based on computational experiments, and we examine the relationship between the given factors and the selection probabilities by statistical analysis of different groups of experiment results. This paper mainly discusses three factors: project complexity, governance strength, and market environment.
Experimental results show that project complexity, governance strength, and market environment have significant influences on PDM selection. Project owners are more likely to use Design-Build to deliver the project when their abilities are weak and the projects are complex. Otherwise, when project owners have the ability to administrate the project procedure, they are willing to be in charge of every process of design and building, so they prefer the DBB method. In addition, the increasing ability of contractors reduces the risk for project owners of delivering the project by DB; thus, they have a greater preference for choosing DB general contractors. Across different types of construction markets, project owners prefer the Design-Build method to the Design-Bid-Build method when the potential contractors develop quickly.

In summary, there are some other factors that should be examined, although we have obtained some useful results about PDM selection. In the near future, we will study more micro factors that affect project owners' selection of project delivery modes.

Figure 6: PDM selection under different project complexities considering market growing. Figure 7: PDM selection under different governance strengths considering market growing. Table 1: Initializations of experiment parameters.
Datasets Construction and Development of QSAR Models for Predicting Micronucleus In Vitro and In Vivo Assay Outcomes

In silico (quantitative) structure–activity relationship modeling is an approach that provides a fast and cost-effective alternative to assess the genotoxic potential of chemicals. However, one of the limiting factors for model development is the availability of consolidated experimental datasets. In the present study, we collected experimental data on micronuclei in vitro and in vivo, utilizing databases and conducting a PubMed search, aided by text mining using the BioBERT large language model. Chemotype enrichment analysis on the updated datasets was performed to identify enriched substructures. Additionally, chemotypes common for both endpoints were found. Five machine learning models in combination with molecular descriptors, twelve fingerprints and two data balancing techniques were applied to construct individual models. The best-performing individual models were selected for the ensemble construction. The curated final dataset consists of 981 chemicals for micronuclei in vitro and 1309 for mouse micronuclei in vivo, respectively. Out of 18 chemotypes enriched in micronuclei in vitro, only 7 were found to be relevant for in vivo prediction. The ensemble model exhibited high accuracy and sensitivity when applied to an external test set of in vitro data. A good balanced predictive performance was also achieved for the micronucleus in vivo endpoint.
Introduction

Evaluation of genotoxicity represents an integral part of the authorization of any industrial or pharmaceutical substance due to the association with severe health hazards, including cancer. A standard test battery is required by regulatory bodies for comprehensive assessment of major genotoxicity endpoints, covering gene mutation and structural (clastogenicity) and numerical (aneuploidy) chromosome damage [1]. The common strategy for genotoxicity testing, with slight modifications among various industrial sectors, includes in vitro mutagenicity testing by the reverse gene mutation (Ames) test, while chromosome damage is usually evaluated by in vitro micronucleus (MN) or chromosomal aberration (CA) assays, followed by in vivo tests. The choice of in vivo test largely depends on the range of genotoxic events detected in the in vitro studies [2]. Thus, a positive in vitro MN test is commonly followed by an in vivo MN assay.

The increasing number of chemicals under development represents a challenging task for regulatory agencies, as a significant backlog of chemical substances that have either not undergone genotoxicity evaluation or have received insufficient assessment has appeared [3,4]. On the other hand, developers of any industrial chemical are deeply interested in assessing the genotoxic potential of new candidates before investing significant resources.
Thus, there is an urgent need for alternative high-throughput genotoxicity assessment methods. One such approach is in silico (quantitative) structure–activity relationship ((Q)SAR) modeling [5]. (Q)SAR models aim to find the relationships between chemical structural features and biological activity [6]. The cost-effective and time-saving nature of (Q)SAR approaches, along with their ability to address the concerns associated with the 3 Rs (replacement, refinement and reduction) principles of animal use, provide advantages over conventional testing methods. These characteristics make the in silico approach a valuable tool in early phases of product development, particularly for screening purposes. In recent years, (Q)SAR models have also been gaining importance in the regulatory frameworks [7–9]. The development of (Q)SAR models for genotoxicity prediction has been boosted with acceptance of the ICH M7 guideline, which focuses on evaluating and managing DNA reactive (mutagenic) impurities in pharmaceuticals and accepts in silico models for their evaluation [7]. During recent years, various models, both commercially and publicly available, for the prediction of the reverse gene mutation (Ames) test have been developed. The performance of these models on average reaches 80% accuracy, which is close to the reported inter-laboratory variation [5,10]. In contrast, models for predicting other genotoxicity endpoints, such as chromosome damage, are relatively scarce and less reliable [11]. One of the limiting factors appears to be the availability and quality of experimental test results databases [10,11]. Another constraining element is the models' ability to handle imbalanced data, which is a very common problem in biomedical datasets, including genotoxicity data. In machine learning, imbalanced data represents a significant challenge, leading to a bias in a model's predictive performance towards a majority class [12]. Thus, a classifier would have a good ability to predict samples
that make up a large proportion of the data but perform poorly in predicting the minority. The selection of the algorithm and/or model architecture best suited for a particular task also presents a significant challenge. Moreover, (Q)SAR models should be constantly updated with new data to ensure broad chemical coverage, because models developed on small datasets have low predictive ability for new compounds.

Taking these into account, in the present study:

• We constructed a database for both in vitro and in vivo MN assays. This was achieved by searching through 35 million PubMed abstracts and extracting relevant data using the BioBERT pretrained large language model, which is designed for biomedical text mining [13]. The extracted data was subsequently reviewed and normalized by human experts.
• Chemotype enrichment analysis was performed to identify substructures enriched in both datasets.
• Conventional and cutting-edge individual QSAR models were constructed based on the consolidated datasets.
• Finally, an ensemble model was developed by combining these individual models.

Data Collection and Curation

In the present study, two approaches were adopted for in vitro and in vivo MN dataset collection. First, data were retrieved from non-proprietary, publicly available databases, which included:

• the CHEMBL database (version 29), which contains data on chemical compounds' structure and bioactivity extracted mainly from scientific literature [18].
Next, to extract data from publicly available literature, we employed a pipeline based on the BioBERT model [13]. BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) is a state-of-the-art biomedical language representation model based on the BERT architecture and pretrained on large-scale biomedical corpora. In the present study we used BioBERT-Base v1.0 (+PubMed 200K) with the named entity recognition (NER) mode, freely available at https://github.com/dmis-lab/biobert (accessed on 5 December 2022). Since BioBERT fine-tuning requires the availability of annotated task-specific corpora, we first downloaded the relevant titles and abstracts from PubMed using simple keywords, such as "in vitro", "in vivo", "micronucleus" and "micronuclei". This resulted in 20,000 abstracts, out of which 2000 were manually annotated by four annotators. Controversial cases were verified by the domain expert. The collected and annotated data were used to fine-tune BioBERT [13] using default parameters. The Transformers library [19] on top of the PyTorch [20] framework was used. The subsequent results were reviewed by domain experts, and data on the experimental results and compounds used were extracted from the publications. At the same time, studies were reviewed for their compliance with the OECD 487 [21] MN in vitro and OECD 474 [22] MN in vivo test guidelines, respectively. Equivocal or technically compromised studies were removed from the datasets. Specifically, for the MN in vitro database, only experiments conducted on human peripheral blood lymphocytes, CHO, V79, CHL/IU, L5178Y, TK6, HT29, Caco-2, HepaRG, HepG2, A549 and primary Syrian Hamster Embryo cells were included, taking into account the use of rat liver extract (S9) for negative results. As for the in vivo MN database, results on bone marrow and/or blood erythrocytes were selected considering the highest tested dose and duration of treatment. Additionally, only studies reporting a statistically significant increase in
micronucleated cells in one or more experimental groups were included as positive results. In cases where conflicting records existed for the same compound, the compound was either excluded from the final dataset, or the record that complied with the current regulatory criteria was retained. Two separate datasets for experimental results on mice and rats were constructed. To obtain the SMILES of the tested chemicals, PubChem querying was performed based on the CAS numbers and/or names provided in the original source. Data were further cleaned by removing mixtures, polymers and inorganic and organometallic compounds, and by neutralization of salts. Finally, duplicates were removed from all datasets by InChIKey comparison, and canonical SMILES were generated using the RDKit package [23].

The curated final dataset consists of 894 organic chemicals with binary (positive/negative) MN in vitro experimental data, containing 70% positive and 30% negative compounds. Accordingly, the mouse MN in vivo database includes 1222 chemicals with 32% positive and 68% negative experimental data. Additionally, a set of 87 chemicals with MN in vitro and 87 with MN in vivo results was obtained from Baderna et al. [24] and Morita et al. [25], which was used as an external test set (see Section 2.6). The names, SMILES and CAS numbers of the chemicals are provided in Tables S1 and S2 for MN in vitro and in vivo, respectively.
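The InChIKey-based deduplication step can be sketched with RDKit as follows. This is a minimal sketch of that one step; the paper's full cleaning pipeline also handles salts, mixtures and organometallics, which are omitted here.

```python
from rdkit import Chem


def dedupe_by_inchikey(smiles_list):
    """Remove duplicate structures: parse each SMILES, compare InChIKeys,
    and keep one canonical SMILES per unique structure."""
    seen, unique = set(), []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:          # skip unparsable records
            continue
        key = Chem.MolToInchiKey(mol)
        if key not in seen:
            seen.add(key)
            unique.append(Chem.MolToSmiles(mol))  # canonical SMILES
    return unique
```

Structurally identical records written as different SMILES strings (e.g. "CCO" vs. "OCC" for ethanol) collapse to a single canonical entry, which is exactly what the InChIKey comparison achieves.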
Structural Features Analysis by Chemotypes

To identify chemical substructures (i.e., chemotypes) that differentiate negative and positive chemicals in the target dataset and to compare chemical spaces, ToxPrint chemotypes were generated using the freely available ChemoTyper application version 1.0 (https://chemotyper.org/, accessed on 12 May 2023). In total, 729 chemotypes were developed by Molecular Networks GmbH and Altamira, LLC for the US Food and Drug Administration Center for Food Safety and Applied Nutrition and Office of Food Additive Safety based on different toxicity databases [26]. The ToxPrint chemotype enrichment analysis workflow (CTEW) described previously by Wang et al. [27] was applied. Based on a binary CT fingerprint table, a confusion matrix was generated, where true positives (TP) were defined as chemicals that contained the chemotype (CT) and had a positive label; true negatives (TN) were compounds that neither contained the CT nor had a positive label; false positives (FP) had a negative label but contained the CT; and false negatives (FN) did not contain the CT but had a positive label. The ODDs ratio was calculated using the following formula:

ODDs = (TP * TN)/(FN * FP)

A one-sided Fisher's exact test was performed to evaluate the significance of each CT, and CTs were filtered based on the thresholds ODDs ≥ 3 and p-value < 0.05. Additionally, the balanced accuracy (BA) for each CT and for the full set of enriched CTs, and the positive predictive value (PPV) for each CT, were calculated by:

BA = (SE + SP)/2

PPV = TP/(TP + FP)

Descriptors Calculation and Selection

For each of the datasets, 1D and 2D molecular descriptors were generated using the RDKit package [23]. In total, 208 descriptors were calculated, consisting mostly of physicochemical properties and substructure fractions. Highly intercorrelated (R^2 > 0.9), constant and low-variance (std < 0.5) descriptors were removed at the preprocessing step. Finally, the optimal subset for each target dataset was determined using a Genetic Algorithm [28]. In
all, 12 types of molecular fingerprints, namely ToxPrint, MACCS, Daylight, ECFP2, FCFP4 and ECFP6 with various lengths, were calculated. ToxPrint fingerprints were generated using the ChemoTyper application version 1 (https://chemotyper.org/, accessed on 12 May 2023) based on ToxPrint chemotypes, while the rest were calculated using RDKit.

Data Balancing

To address data imbalance, class weighting [29] and/or the Synthetic Minority Oversampling Technique (SMOTE) [30,31] was applied to the training set within ten-fold cross-validation, using a ratio of minority- to majority-class samples corresponding to that of the training set. Class weighting assigns a weight to each class during the training step, resulting in a balanced contribution from each one. The same balancing strategy was also applied for GCN using the Balancing Transformer as implemented in the DeepChem library [32]. The idea behind the SMOTE technique is to create new synthetic data similar to existing samples in the minority class by finding their k nearest neighbors. For comparison, models trained without balancing were benchmarked against the same models trained using class weighting and SMOTE.
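The SMOTE idea can be sketched in a few lines of NumPy. This is a minimal stand-in, not the implementation actually used (which the text does not name): each synthetic point is an interpolation between a minority sample and one of its k nearest minority-class neighbours.

```python
import numpy as np


def smote_sketch(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: each synthetic point lies on the segment
    between a random minority sample and one of its k nearest
    minority-class neighbours."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                        # position along the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)


# Oversample a 4-point minority class with 10 extra synthetic samples:
X_new = smote_sketch([[0, 0], [1, 0], [0, 1], [1, 1]], n_new=10, k=3)
```

Because each synthetic point is a convex combination of two real minority samples, the new data stays inside the minority class's local neighbourhood rather than being drawn at random.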
Model Development

In the present study, five ML models, namely random forest (RF) [33], Support Vector Machine (SVM) [34], eXtreme Gradient Boosting (XGB) [35], Graph Convolutional Networks (GCN) [36] and BARTSmiles [37], were evaluated. The first three are conventional ML algorithms applied to descriptors and fingerprints. GCN is a type of neural network that operates directly on graph-structured data, while the recently proposed BARTSmiles is a large language model with a BART-like architecture that has demonstrated results competitive with state-of-the-art self-supervised models in a wide range of chemical and biological tasks [37]. The BARTSmiles model is publicly available at https://github.com/YerevaNN/BARTSmiles/ (accessed on 16 June 2023). The hyperparameter optimization for the RF, SVM and XGB models was carried out on the training set using a grid search in an inner ten-fold cross-validation with the scikit-learn library for Python [38]. To reduce computational cost, GCN and BARTSmiles were optimized with respect to their hyperparameters using the Butina split as implemented in RDKit [39].

The best-performing models were used to build an ensemble classifier. As has previously been shown, ensemble methods, which combine multiple individual models via voting or averaging, in general show better predictive performance than individual ones [40].
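The voting combination behind such an ensemble can be sketched as follows. The tie-breaking rule towards the positive class is an illustrative assumption; the text does not state how the ensemble resolves ties.

```python
import numpy as np


def ensemble_vote(predictions):
    """Combine binary predictions of several individual classifiers by
    majority voting; ties are resolved towards the positive class, which is
    an arbitrary illustrative choice (the paper does not state its rule)."""
    votes = np.asarray(predictions)            # shape: (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)


# e.g. three models (rows) voting on four samples (columns):
combined = ensemble_vote([[1, 0, 1, 0],
                          [1, 1, 0, 0],
                          [0, 1, 1, 0]])      # -> [1, 1, 1, 0]
```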
Model Performance Evaluation

All models were evaluated using ten-fold cross-validation by splitting the data into 90% training and 10% validation sets using the Stratified shuffle split of scikit-learn [38]. Additionally, models were evaluated on the external test set (see Section 2.1). For evaluating the performance of the models, the following metrics were used: the area under the curve (AUC), accuracy (Acc), sensitivity (SE) and specificity (SP). All metrics were calculated based on the confusion matrices created from the number of true-positive (TP), true-negative (TN), false-positive (FP) and false-negative (FN) predictions using the following formulas:

Acc = (TP + TN)/(TP + TN + FP + FN)

SE = TP/(TP + FN)

SP = TN/(TN + FP)

where Acc displays the ability of the model to correctly classify all samples; SE reflects the potential of the model to correctly classify a sample as positive, while SP is the ability to correctly classify a sample as negative, taking into account all positive or negative data points, respectively. The AUC is the measure of the predictive ability of a model: the higher the AUC, the better the classifier's performance at differentiating between the negative and positive classes.

The parameters were determined for each fold of validation, and average values of each scoring metric, including Acc, SE, SP and AUC, were calculated to select the best model.

Datasets

Chemical libraries for (Q)SAR models should constantly be updated to ensure better predictive performance and high coverage. To the best of our knowledge, only recently was the first dataset on MN in vitro, consisting of 380 samples, reported by Baderna et al. [24]. By utilizing a cutting-edge text-mining technique, we managed to increase this number by almost three times. The mouse MN in vivo database increased by 308 chemicals compared to the recently published one by Yoo et al. [41].
The distribution of the main physicochemical properties, namely molecular weight (MW), octanol–water partition coefficient (logP) and aqueous solubility (logS), of the chemicals in the final MN in vitro and MN in vivo databases is shown in Figure 1. MW and logP were calculated using the RDKit package, while the ALOGPS software was used to compute logS [42]. As is evident from Figure 1, both datasets contain mostly small molecules (MW < 500), though a slightly higher number of heavier compounds with MW > 500 is found in the in vivo data. The majority of chemicals in both datasets are characterized by logP values between −2 and 6 and logS above 10^−2, which correlate with good bioavailability and solubility. Thus, there is no bias towards any specific type of chemicals with certain properties in either dataset.

For a more detailed description of the chemicals in the datasets, we compared the chemical space occupied by these compounds to the one covered by chemicals from databases which include REACH registered substances, FDA drugs, pesticides, biocides, substances of very high concern (SVHC) and endocrine disruptor candidates (ED candidates) [43–48]. For comparison, Principal Component Analysis (PCA) was performed based on MACCS fingerprints. It is worth mentioning that for some parts of these databases no structures could be retrieved; thus, the final number of chemicals in each dataset is as follows: REACH: n = 14,790; FDA Drugs: n = 3234; pesticides: n = 1028; biocides: n = 235; SVHC: n = 470; and ED candidates: n = 145. The results are shown in Figure 2, where structurally dissimilar chemicals are found far apart from each other. Both the MN in vitro and in vivo datasets covered vast areas of the chemical space, indicating that the datasets contain highly diverse chemicals. The exceptions are the top-right and bottom-right areas, sparsely populated by substances from both datasets, which are primarily occupied by REACH chemicals and FDA Drugs.

We also performed an evaluation of the MN in vitro and in vivo substances by their main use and manufacturing using the PubChem database. The results are shown in Figure 3. The majority of substances in both datasets are pharmaceuticals, followed by cosmetic ingredients and food additives.
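The two-dimensional PCA projection of fingerprint bits can be sketched with plain NumPy. This is a minimal SVD-based stand-in; the paper does not state which PCA implementation was used.

```python
import numpy as np


def pca_2d(X):
    """Project rows of a (binary) fingerprint matrix onto the first two
    principal components via SVD of the mean-centred data."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T          # shape: (n_samples, 2)


# Four toy 4-bit "fingerprints"; similar bit patterns land close together:
coords = pca_2d([[1, 0, 1, 0],
                 [1, 0, 1, 1],
                 [0, 1, 0, 0],
                 [0, 1, 0, 1]])
```

In a chemical-space plot such as Figure 2, each row would be a full-length MACCS fingerprint and each output row the (PC1, PC2) coordinates of one compound.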
Structural Feature Analysis by Chemotypes To search for potential structure-activity associations, we applied chemotype enrichment analysis based on ToxPrint chemical features.Chemotype enrichment analysis results for MN in vitro yielded 18 positively enriched CTs.The full lists and statistics are provided in Table S3.Among the significantly positively enriched CTs were nitroso, steroids, alkyl halides and PAH-phenanthrene.In order to give a rough estimate of the coverage, 1 or more of the 18 positively enriched CTs was found in 263 compounds or only 39% of the MN in vitro positives.However, 95% of the 169 chemicals that contain 2 or more CTs were correctly predicted as MN positives.To evaluate a predictive performance of the full set of 18 positively enriched CTs, overall BA was calculated that reached 0.65, indicating overall a moderate predictive performance. For MN in vivo, 40 positively enriched CTs (Table S4) were identified.CTs significantly enriched in a positive space included nitroso, metal and phosphorous substructures, usually found in environmental chemicals, and "ring:hetero_" CTs common for drug-like compounds.Despite the high number of positively enriched CTs, 1 or more of these CTs was found only in 37% of TPs (157 out of 426 chemicals), while 77% of chemicals that contain 2 or more CTs were correctly predicted as positives.Using all the CTs enriched in positive space, the overall BA of 0.64 was found, which indicates a moderately good predictive performance of the full set. The positive CTs that are common for both endpoints represent a particular interest.Previously, based on expert assessment, Canipa et al. 
[49] reported 19 structural alerts that can predict both in vitro and in vivo chromosome damage without differentiating between chromosomal aberration and MN in vivo tests. In our study, we identified only 4 CTs enriched in the positive space of both datasets, particularly nitroso substructures, PAH_phenanthrene and S(=O)O_sulfonicEster_alkyl_O-C_(H=0). To further explore the relevance of CTs over-represented in the MN in vitro dataset for in vivo prediction, the PPV for each CT enriched in MN in vitro was calculated for the MN in vivo dataset. CTs with PPV ≥ 70% were considered highly relevant for MN in vivo, while CTs with 50% < PPV < 70% and PPV < 50% were considered as moderately and poorly correlated with in vivo data, respectively (Figure 4). Among the 18 CTs positively enriched in MN in vitro data, only 1 was found to be strongly associated with MN in vivo (PPV ≥ 70%), specifically "bond:S(=O)O_sulfonicEster_alkyl_O-C_(H=0)". Alkyl esters of alkyl or sulfonic acids induce genotoxicity via a DNA intercalating mechanism and present a significant safety challenge to drug producers and regulators [50]. Meanwhile, 6 CTs showed PPVs between 50% and 70%, indicating moderate relevance for in vivo prediction.
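The statistics used above are simple to reproduce. The following hedged sketch (function names are ours, not from the study's code) computes the positive predictive value (PPV) used to rank chemotypes, the balanced accuracy (BA) used to rate the full alert set, and the CT-hit coverage figures quoted in the text:

```python
def ppv(tp, fp):
    """Positive predictive value of an alert: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def balanced_accuracy(tp, fp, tn, fn):
    """Mean of sensitivity and specificity; less misleading than plain
    accuracy on imbalanced genotoxicity data."""
    se = tp / (tp + fn) if (tp + fn) else 0.0
    sp = tn / (tn + fp) if (tn + fp) else 0.0
    return (se + sp) / 2.0

def ct_relevance(tp, fp):
    """Bin a chemotype by the PPV thresholds quoted in the text."""
    p = ppv(tp, fp)
    return "high" if p >= 0.70 else "moderate" if p > 0.50 else "poor"

def coverage(ct_hits_per_compound, min_hits=1):
    """Fraction of compounds carrying at least `min_hits` enriched CTs."""
    n = len(ct_hits_per_compound)
    return sum(h >= min_hits for h in ct_hits_per_compound) / n if n else 0.0
```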
For further illustration, we concentrated on the "bond:C(=O)N_carbamate" CT, which was found positively enriched in the MN in vitro dataset and moderately associated with in vivo activity (PPV < 70%). Figure 5 demonstrates images of four representative compounds with their indicated CAS numbers (CASN) and MN activity. Three out of four representative compounds induce MN both in vitro and in vivo, while urethane (CASN 51-79-6), a member of the carbamates chemical class, has been reported to induce MN only in vivo. Though structural alerts for carbamate mutagenicity [51,52] have been reported, a more recent thorough evaluation of this group revealed that only a small number of compounds, particularly urethane, demonstrate mutagenic activity in Ames tests via DNA adduct formation. Moreover, this effect is observed only when urethane is tested at very high concentrations (above limits for relatively non-toxic compounds). In contrast, it tests positive in an MN in vivo test. The most widely accepted explanation for this discrepancy is that urethane-associated DNA adducts are rather formed by its metabolite [53]. The S9 fraction used in an in vitro test is likely deficient in some cytochrome P450 enzymes responsible for urethane metabolism, while the DNA-reactive metabolite is readily formed in vivo. Contrary to urethane, the other three chemicals, namely carbendazim (CASN 10605-21-7), albendazole (CASN 54965-21-8) and thiophanate-methyl (CASN 23564-05-8), have been reported to induce MN both in vitro and in vivo by directly interacting with tubulin and thus causing aneugenicity [54,55].

In summary, CT enrichment analysis revealed a range of substructures, such as nitroso, quinone, polycyclic hydrocarbons and aziridine, all of which have previously been identified as genotoxicity-related structural alerts [24]. Overall, the data mining approach employed in this study using ToxPrint CTs is chemically intuitive and straightforward to implement and interpret.
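The flag-then-count workflow behind such alert screens can be mimicked in a few lines. Note this is a deliberately toy illustration: real ToxPrint chemotype matching uses SMARTS patterns (e.g. in ChemoTyper or RDKit), whereas the naive substring test on SMILES strings below, with made-up alert patterns, only conveys the shape of the workflow:

```python
# Toy alert table (our own, illustrative only): alert name -> crude
# SMILES fragment. Real alerts are SMARTS patterns, not substrings.
TOY_ALERTS = {"nitroso": "N=O", "aziridine": "C1CN1"}

def flag_alerts(smiles):
    """Return the set of toy alert names whose pattern appears in `smiles`."""
    return {name for name, pat in TOY_ALERTS.items() if pat in smiles}

def hits_per_compound(smiles_list):
    """Number of distinct toy alerts matched by each compound, in order."""
    return [len(flag_alerts(s)) for s in smiles_list]
```

The resulting per-compound hit counts are exactly what feeds coverage statistics such as "1 or more CTs" versus "2 or more CTs".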
Selection of Data Balancing Method

In this study, to deal with highly imbalanced data, we tried two types of data balancing methods, namely class weights [29] and SMOTE [30,31], aiming to obtain a model that can consistently predict positive and negative samples with balanced SE and SP, while maintaining a high AUC value. It is worth mentioning that no balancing method is available for BARTSmiles.

To reduce the number of combinations and computational time, we assessed balancing strategies using the combination of RF with descriptors and MACCS fingerprints. The main reason for choosing the above-mentioned algorithm/fingerprint combination is that MACCS fingerprints and RF have proven to be one of the most common and successful combinations in various fields of chemoinformatics over the years [56,57].

As shown in Figure 6, both balancing strategies improved the model's predictive balance for both datasets compared to the performance without balancing, despite similar AUC values. A comparison of strategies for MN in vitro data (Figure 6a) revealed that though SE and SP were comparable among the techniques, class weight balancing is characterized by a slightly lower AUC value (0.746 for descriptor- and 0.73 for fingerprint-based models, respectively) as opposed to SMOTE (0.77 for descriptor- and 0.75 for fingerprint-based models, respectively). In contrast, training on the mouse MN in vivo data using SMOTE resulted in low SE (0.54 and 0.52 for descriptor- and fingerprint-based models, respectively) and high SP (0.8 and 0.81 for descriptor- and fingerprint-based models, respectively) (Figure 6b). At the same time, the class weight approach was found to give a more stable prediction accompanied by a higher AUC for the descriptor-based model. The detailed evaluation results are presented in Tables S5 and S6.

Selection of Molecular Fingerprints and Model Development

In the present study, we developed multiple models for each target endpoint using the combination of three classical ML algorithms (RF, SVM and XGB) with molecular descriptors and 12 types of fingerprints (MACCS, Daylight, Toxprint and ECFP with different bit sizes) through ten-fold cross-validation. All models were trained using an appropriate balancing method.

Following feature selection (see Section 2.3), 17 and 20 molecular descriptors were used for building MN in vitro and in vivo models, respectively. The full list of descriptors is presented in Table S7. It is worth mentioning that for both endpoints the selected descriptors predominantly represent structural fragments rather than physico-chemical properties. Fingerprints were used without feature reduction. We selected the best performing combination based on the AUC values and balanced performance, ensuring an equal ability to predict both positive and negative classes. The performance of descriptor-based models for both datasets is presented in Table 1. The obtained results suggested that all models performed equally well, with a slight superiority of the RF algorithm for MN in vitro and XGB for MN in vivo. The performance of the various fingerprint/model combinations is shown in Figure 7. All models demonstrated AUC values around 0.7 for both datasets and across all combinations, indicating good predictive ability. However, based on the most optimal parameters of internal validation (i.e., AUC/SE/SP), MACCS with RF was chosen as the final combination for MN in vitro, while Toxprint and MACCS fingerprints with XGB were selected for MN in vivo.
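The two balancing strategies compared above can be illustrated minimally. In practice one would use scikit-learn's `class_weight='balanced'` option and imbalanced-learn's `SMOTE`; the pure-Python sketch below (our own simplification, not the study's code) shows the core ideas: inverse-frequency class weights, and synthetic minority samples interpolated between a point and one of its k nearest minority-class neighbours:

```python
import random

def balanced_class_weights(labels):
    """Inverse-frequency weights, as in scikit-learn's 'balanced' scheme:
    w_c = n_samples / (n_classes * n_c)."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return {c: n / (len(counts) * k) for c, k in counts.items()}

def smote_like(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority points, SMOTE-style: pick a
    minority sample, pick one of its k nearest minority neighbours, and
    interpolate a random fraction of the way between them."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()
        out.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return out
```

Because synthetic points lie on segments between existing minority samples, they never leave the convex hull of the minority class, which is both SMOTE's strength and, on sparse chemical data, a known limitation.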
Model Validation

Two conventional ML methods (RF and XGB) combined with the selected molecular descriptors and fingerprints (MACCS for MN in vitro; MACCS and Toxprint for MN in vivo) and two cutting-edge algorithms, namely GCN and BARTSmiles, were used for target endpoint prediction. The performance of the models obtained through a ten-fold cross-validation framework using balanced data where appropriate is presented in Figure 8. Among the individual models, the best predictive performance for the MN in vitro dataset was achieved with RF in combination with descriptors using SMOTE balancing (0.77, 0.81 and 0.64 for AUC, SE and SP, respectively). In contrast, for MN in vivo, GCN showed a superior performance with an AUC of 0.74, SE of 0.58 and SP of 0.77. It is worth mentioning that though both target datasets are highly imbalanced, BARTSmiles performed comparably to the other models for the MN in vitro dataset.

To further assess the predictive power, the models were evaluated on the external test set. RF_Desc + SMOTE and RF_MACCS + SMOTE displayed equally good predictive potential on the MN in vitro dataset (Table 2a). On the MN in vivo external test set, most models showed a comparable prediction performance, with a slight predominance of the XGB model built using MACCS fingerprints (Table 2b).

To overcome the limitations of individual models, an ensemble model via majority voting was built. As expected, the ensemble model outperformed any single base classifier, achieving higher Acc (78.4% and 73% for MN in vitro and in vivo data, respectively).

Comparison with Previous Models

Recently, Baderna et al. [24] reported a fragment-based model for MN in vitro prediction with Acc, SE and SP of 0.85, 0.98 and 0.62 in the validation set. Using the same set, which allowed us to directly compare the results, we achieved a lower prediction performance. Nonetheless, taking into account the high diversity of our dataset and the size of the training set, our model may have broader applicability and better predictivity for new compounds, which is highly practical for the early screening purposes of in silico models.

Conversely to MN in vitro, a number of in vivo prediction models exist [40,41,58,59]. Using the commercial CASE Ultra software for MN in vivo prediction, Morita et al. [25] reported Acc, SE and SP of 0.72, 0.91 and 0.57 on an external dataset of 337 chemicals. Though the SE obtained in our study is lower, the SP is particularly high. Moreover, the authors mention a possibility that the test and training sets included the same chemicals, which is not the case in our study. More recently, Yoo et al. [41] developed a statistics-based model for the mouse dataset comprising 1001 compounds using Leadscope and CASE Ultra software. On the external test set of 42 compounds, the new models achieved SE of 67% and 83% and SP of 84% and 29% for Leadscope and CASE Ultra, respectively. Thus, compared to the models of Yoo et al.
[41] our model reached balanced SE and SP, resulting in greater stability.

Conclusions

In this study, we first enriched the datasets for MN in vitro and mouse in vivo assays by leveraging freely available databases and conducting an extensive PubMed search, supported by an advanced text-mining approach based on the BioBERT large language model. Using the updated datasets, we identified chemotypes, i.e., structural features associated with MN induction in vitro or in vivo. At the same time, seven chemotypes that are positively enriched in the MN in vitro dataset and possess predictive value against MN in vivo were found. We constructed a number of individual models using conventional ML methods, such as RF, SVM and XGB, in combination with various fingerprints, molecular descriptors and balancing methods. Our findings from ten-fold cross-validation highlighted the superior performance of the MACCS fingerprint for MN in vitro prediction, while Toxprint and MACCS fingerprints excelled for MN in vivo prediction. Additionally, our analysis of various balancing techniques revealed that SMOTE for MN in vitro and class weights for MN in vivo achieved the optimal balance in terms of SE and SP in predictive performance. We also explored advanced modeling approaches, such as GCN and BARTSmiles, a large pre-trained generative masked language model. The performance of individual models on MN in vitro achieved accuracy values ranging from 66.7% to 75.9%, while for in vivo the accuracy values ranged from 56.3% to 65.5%. To further enhance predictive performance, an ensemble model based on majority voting was built.
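The majority-voting ensemble reported above is the simplest way to combine base classifiers; a hedged sketch (function name and tie-breaking rule are ours, not the paper's):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label lists into one consensus label per compound.

    `predictions` is a list of equal-length lists, one per base model.
    Ties are broken in favor of the positive class (1) here, a conservative
    choice for a genotoxicity screen (our assumption, not the study's rule).
    """
    consensus = []
    for votes in zip(*predictions):          # one tuple of votes per compound
        counts = Counter(votes)
        top = max(counts.values())
        winners = [label for label, c in counts.items() if c == top]
        consensus.append(1 if 1 in winners else winners[0])
    return consensus
```

With an odd number of binary base models, as when pooling three classifiers, ties cannot occur and the rule reduces to a plain majority.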
Toxics 2023, 11, x FOR PEER REVIEW

Figure 1. Distribution of physicochemical properties for MN in vitro (a) and MN in vivo (b) datasets. From top to bottom the following properties are presented: molecular weight (MW), octanol-water partition coefficient (logP) and water solubility (logS). Dots are values of the property for each chemical; the violin plots represent the number of compounds with the same values (density). Positive compounds are colored in red and negatives in blue.

Figure 2. 2D PCA visualization of the chemical space of compounds found in MN in vitro (a) and in vivo (b) datasets and in the lists of REACH registered substances, FDA Drugs, pesticides, biocides, substances of very high concern (SVHC) and endocrine disruptor candidates (ED candidates). Data points represent compounds encoded as 166-bit MACCS fingerprints on the first two principal component dimensions.

Figure 3. Product type categories within datasets: (a) MN in vitro; (b) MN in vivo.

Figure 4. ToxPrint CTs enriched in the positive space of the MN in vitro dataset (light orange) relative to the MN in vivo dataset (dark orange), with positive predictivity values (PPV) indicated on the right. PPV values ≥ 70% and 50% < PPV < 70% are enclosed in red boxes.

Figure 5. Representative images of chemicals containing the "bond:C(=O)N_carbamate" CT (highlighted in red), labeled by CAS number and MN in vitro and in vivo activities.

Figure 6. Performance of the RF models without balancing and using class weighting and SMOTE balancing methods in combination with molecular descriptors and MACCS fingerprints trained on the MN in vitro (a) and MN in vivo (b) data, respectively. Average values of ten-fold cross-validation are presented.

Figure 7. AUC values of models in combination with different fingerprints on (a) MN in vitro and (b) MN in vivo, respectively. Models were trained using SMOTE or class weights balancing for MN in vitro and in vivo, respectively. Average values of ten-fold cross-validation are presented. Numeric values are presented in Tables S8 and S9.

Figure 8. Distribution of AUC, specificity and sensitivity values obtained within the ten-fold cross-validation framework for (a) MN in vitro and (b) MN in vivo. Models were trained using SMOTE or class weights balancing for MN in vitro and in vivo, respectively. Average values of ten-fold cross-validation are presented.

Table 1. Performance of models in combination with selected molecular descriptors on the MN in vitro and MN in vivo datasets. Models were trained using SMOTE or class weights balancing for MN in vitro and in vivo, respectively. Average values of ten-fold cross-validation are presented. The best performing model is in bold.

Table 2. Performance of individual and ensemble models on the (a) MN in vitro and (b) MN in vivo mouse external datasets. The best model in terms of Acc and balanced performance is in bold.
How police officers juggle work, a life partner, and kids

Police officers frequently encounter stressful social situations during their working days. Furthermore, previous research on policing and families shows that police officers' families are impacted in different ways when at least one member of the family has the role of a police officer. Despite work spilling over to family life, there is currently little research on police officers' role-balancing. Thus, the purpose of this study was to explore and describe the challenges that arise at the intersection between police officers' professional roles and their private life roles as parents and life partners, as well as how police officers balance between these roles. We used qualitative content analysis after interviewing 13 uniformed police officers. The findings show how the police officers' professional roles affect their private life roles within three different sub-themes and are summarized under the theme of "Balancing conflicting roles: Coping with professional and private life commitments". The theme revolves around the various challenges of working as a uniformed police officer, such as hypervigilance and risks, as well as the enrichments and conflicts of working shifts while also juggling private life roles. The results also touch on gender and equality in life-partner relationships. The study raises an important question about how these challenges can be mitigated within police authorities to enable uniformed police officers to balance their professional and personal lives in a healthy and sustainable manner.

Introduction

Uniformed police officers - officers interacting with the public daily, responding to emergency calls, or patrolling specific areas on foot or in a vehicle while keeping the public safe and upholding the law (the Riksdag, 1998; PATROL OFFICER, 2022) - frequently engage in challenging contexts and environments while at work.
The different features of their social environments at work, such as adequacy of communication or emotional support and collaboration, impact the police officers' lives and health (Granholm Valmari et al., 2022b). Their physical environment at work can also include risky and traumatic work activities throughout the course of their working day (Mona et al., 2019). Both social and physical environments at work may provide barriers as well as resources for the police officers' balance in life (Granholm Valmari et al., 2022a) and affect their personal lives (Qureshi et al., 2019). Previous research on policing and family issues has been summarized in a prior review (Violanti et al., 2017), indicating that police officers' marriages and families are impacted in different ways when at least one member of the family has the role of a police officer (Miller, 2007; Tuttle et al., 2018). Hence, being a police officer may lead to stress spilling over to family life, challenging the social relationships at home (Tuttle et al., 2018). For example, Canadian researchers have found police officers taking work home with them (Duxbury and Halinski, 2018). Furthermore, they struggle to maintain a healthy work-family balance while caring for their elderly and children as well as working full-time (Duxbury et al., 2021). A Swedish research report also shows that police officers worry that their job will affect their families' health, such as their loved ones being threatened (Sundqvist et al., 2021). Being a police officer may also result in secondary victimization, where spouses can be affected by trauma experienced by their significant other (Friese, 2020). Conflicts between work and family life for police officers have also been linked to stress and mental ill-health in both Norway (Mikkelsen and Burke, 2004) and India (Lambert et al., 2017; Qureshi et al., 2019).
Additionally, it has been demonstrated that police officers who experience work-life conflict are more likely not to recover mentally after their shifts (Elgmark Andersson et al., 2013).

Roles in life, role conflict, and role balance

The interface between work and family, including roles, has been researched in various ways. Role balance (Marks and MacDermid, 1996), work-family role conflict (Greenhaus and Beutell, 1985), and work-family spillover (Bakker and Demerouti, 2013) are some examples. Roles organize how we behave and relate to others, as well as shape what we do. Roles also divide our daily and weekly cycles into time periods when we inhabit certain roles. During our days, and within our social environments, roles overlap, and some roles involve a succession of other roles. A person may also have several roles in the same social environments, which occupy the person's routines, time, and space (Taylor, 2017; Wook Lee and Kielhofner, 2017). Hence, in the workplace, a person is mainly in the worker role, and at home, for example, has the role of parent or life partner. Having complementing roles gives rhythm and change between different identities and modes of doing things in life (Taylor, 2017). Roles are also affected by culture and gender (Taylor, 2017), indicating that gender-related patterns affect life partner relationships (Connell, 2021). For instance, for men, male stereotypical behavior, such as being aggressive or managerial at work, may conflict with exhibiting the same behavior at home, because family members expect different behaviors from them in their private lives compared to their professional lives, such as being nurturing (Greenhaus and Beutell, 1985). Having a more traditional gender role attitude, such as the belief that women should perform household chores and men are the main breadwinners, also results in lower well-being as well as a higher work-family conflict for both men and women (Chen et al., 2022).
Additionally, for men but not for women, stress-based work-family conflict may be lowered by life partner support (Adams and Golsch, 2021). According to Van der Lippe et al. (2004), Sweden has taken the lead in promoting gender equality within the police force, for example by implementing government policies such as gender mainstreaming and reaping benefits from publicly funded childcare. Furthermore, work-family arrangements, such as working part-time or offering flexible working hours and replacement while on leave, have been important for gender equality within the Swedish police force (Van der Lippe et al., 2004). Despite this, a Swedish research project has shown gender inequalities regarding health between men and women. Male patrolling police officers had a 1.2 times greater likelihood of better mental recovery after work than female police officers (Elgmark Andersson et al., 2013). One reason for the complexities of role conflict for both male and female uniformed police officers may be that they work in a male-dominated organization (Duxbury et al., 2021). Furthermore, Haas and Hwang (2019) concluded that flexible work practices in male-dominated organizations are not possible due to workplace culture and work structure, despite Sweden's efforts to promote gender equality. Thus, there is an expectation of fathers working and not taking parental leave, regardless of Swedish policy requiring employers to allow fathers to take parental leave (Haas and Hwang, 2019). Having different roles in life also necessitates balancing them, which could entail investing more time or attention in certain roles. Another example of balancing roles is to even-handedly allocate personal resources among the different roles to balance one's life (Marks and MacDermid, 1996). Hence, in the same way that the professional role may collide with private life roles, they can also enrich one another by transferring resources or emotions from one role to the next (Greenhaus and Powell, 2006).
Role balance is the result of ongoing adjustments to priorities and activities within various roles, including both enrichment and conflict between roles, and is, therefore, both dynamic and complex (Evans et al., 2014). According to Greenhaus and Beutell (1985), there are three sources of stress in terms of role conflict: time-based conflict, strain-based conflict, and behavior-based conflict. According to Marks and MacDermid (1996), people with more balanced role systems will also report less role strain, greater role comfort, and better well-being than those with a less balanced role system. Role balance has also been found to be related to self-esteem (Marks and MacDermid, 1996), as well as to be important for health and well-being (Moll et al., 2015). This is also why, for example, the European Foundation for the Improvement of Living and Working Conditions (Eurofound) is concerned with, and strategically works towards, work-life balance as a way to raise the bar on job quality (Eurofound, 2019, 2023). However, when a person cannot meet the obligations or aspirations represented in several roles within their lifestyle, role strain occurs (Kielhofner, 2008). Role imbalance is therefore the opposite of role balance (Evans et al., 2014). According to Perreault and Power (2023), work and life have been treated as two separate domains within organizational psychology. Work has been regarded to be on one side and equal to everything else in life, indicating work to be of primary importance. Balance is regarded to occur in-between these two domains. However, role balance has been suggested to better explain between-role conflicts when utilizing an individual perspective instead of an organizational perspective (Perreault and Power, 2023). According to Marks (2014), role balance is also a gendered issue. For instance, men feel less balanced the more they work, whereas women feel less balanced the more time they spend with their children (Marks, 2014).
A longitudinal study in Sweden also showed that work-family conflict was related to an increased risk of poor health for women, while for men there was an increased risk of problem drinking. Thus, men's and women's health is affected in different ways by their professional role affecting private life roles (Leineweber et al., 2013). Moreover, a Swedish research report discovered that police officers working in vulnerable areas were dissatisfied with their life partner if they also experienced high levels of operational stress at work (Sundqvist et al., 2021). Hence, despite existing research on police officers' contexts and environments (Mona et al., 2019; Granholm Valmari et al., 2022b), work-life conflicts (Duxbury and Higgins, 2012), and role conflicts (Wu, 2009), we are not aware of any earlier research on police officers' role balance from an individualistic perspective, where professional and private life roles are included on similar terms. Furthermore, since role balance also affects health (Marks, 2014), and the police profession is a male-coded organization (Duxbury et al., 2021), the role balance of police officers warrants further investigation. Moreover, there is relatively little research on the challenges police officers face in both their professional and private life roles and how they balance their roles regarding these challenges. Consequently, we sought to gain a better understanding of how uniformed police officers balance their roles between work and private life. Accordingly, this study aims to explore and describe the challenges that arise in the intersection between the police officers' professional roles and the private life roles of being a parent and partner, and how the police officers balance these roles.

Method

The approach of conducting semi-structured interviews was selected for its capacity to increase understanding of the experiences of uniformed officers' role balance.
Graneheim and Lundman's (2004) qualitative content analysis methodology includes a variety of ways of analyzing the data, from describing similarities and differences to describing a red thread within the data and illuminating themes of meaning (Lindgren et al., 2020). Therefore, their method was chosen to answer the study's aim of exploring these police officers' balance between roles. To ensure transferability, the methods and results (including quotations) were described in detail according to the Qualitative Design Reporting Standards (JARS-Qual) (Levitt et al., 2018). JARS-Qual was also used for performing the study and writing the paper. Ethical approval for this study, as part of a larger research project, was obtained from the Swedish Ethical Review Authority (2020-07170).

Participants and procedure

The dataset includes 13 uniformed police officers from Sweden working within different patrol services, for example, focusing on emergency response, road, or community policing duties. These participants are not only police officers but also parents and current or ex-life partners. The dataset is part of a larger dataset aiming to understand police officers' lifestyles and health. After analyzing the data from 13 participants, the results reached the point of information power (Braun and Clarke, 2022). Consequently, the author group decided no further data needed to be collected. To ensure the adequacy of data, both purposive and snowball sampling were used to gain a rich dataset of police officers' experiences regarding balancing their roles (gender, family situation, living and working in an urban or rural area, as well as locations in Sweden); see Figure 1 for the sampling procedure as well as further participant characteristics. The first author conducted and audio-recorded all interviews.
An interview guide, see Figure 1, was used in the first few interviews, but after that, the participants' narrative steered the conversation, as well as the order of topics and questions. Questions from the interview guide were added if the participants did not address the various topics themselves. The interviewer strived to encourage a relaxed atmosphere to make the participants feel comfortable and receive their undivided attention. More information regarding the interview and interview guide is found in Figure 1. Due to the sensitive nature of the content, the interviews were transcribed verbatim by professional transcribers, the majority of them by a medical secretary. The first author read the transcripts and checked for any deviation between the recordings and the transcripts. For reasons of confidentiality, identifying details were omitted or altered, and fictitious names are used for presenting the findings.

Data analysis

Qualitative content analysis offers a systematic process for interpreting qualitative data, by focusing on expressions of experiences while identifying similarities and differences in manifest and latent data by going back and forth in the data analysis (Graneheim and Lundman, 2004; Graneheim et al., 2017; Lindgren et al., 2020). The analysis was performed mainly by the first author. The other authors also took part in the analysis, for example by participating in triangulating the analysis, reading manuscripts, locating potential analytic interests, and comparing transcripts with the initial analysis. Thus, following the aim of this study and using qualitative content analysis, the data were first decontextualized and then recontextualized according to Graneheim et al. (2017):

• The first author located the already de-contextualized data, consisting of condensed meaning units that had been abstracted in a previous study.
Then, both the manifest and latent meanings in the data were labelled with codes, using MAXQDA (VERBI Software, 2019). Most codes were, however, at a high level of abstraction due to the complex phenomenon of balancing roles. Furthermore, all transcripts were read again with the study's aim in mind to ensure nothing was missed.

• Re-contextualization was done by comparing codes according to "police role and parent" and "police role and life partner" and categorizing them based on differences and similarities. Then, to make sense of the data despite its richness and high abstraction level, the codes were unified in underlying meanings. Furthermore, both the manifest and latent content were interpreted into themes and sub-themes.

Results

The analysis resulted in one theme, "Balancing conflicting roles: Coping with professional and private life commitments", and three sub-themes, see Table 1. The results highlight the significant challenges faced by uniformed police officers in balancing their professional responsibilities with their private life roles. The findings shed light on the unique difficulties experienced in managing their time, emotions, and life priorities. The impact their professional role has on their private life roles includes being torn between work and family commitments, causing conflicts between roles. Hence, the police officers express concerns about their private life feeling overshadowed by their demanding profession, leading to feelings of guilt. Furthermore, the findings highlight how the uniformed police officers adopt behaviors to mitigate risks in their private lives. For example, to avoid running into people and having to intervene as police officers while out with their families, they might exclude certain leisure activities from their private life roles. Nonetheless, the police officer role also enriches their private life roles in different ways, such as knowing what an actual risk is.
For example, since it is their job to assess risks, they also know what an actual risk is instead of worrying about everything. Another example is shiftwork, which provides opportunities during the daytime to spend more time with their families. Additionally, the study presents findings related to navigating priorities in life as they fulfill their passion for police work while meeting their family responsibilities. Adjacent to this is the building of strong life partner relationships, where emotional and social support is important, particularly in the context of the police officers' demanding professional role.

Finding harmony between professional and private life roles

This sub-theme revolves around balancing the different roles in life, where they sometimes enrich each other, and sometimes are conflicting. The professional role is considered by the police officers to augment their family life and private life roles in different ways. One example was the advantage of understanding the difference between perceived and actual risks concerning danger when, for instance, raising teenagers. Since Levi knows the city he works in, he also […]

[Figure 1. Information regarding participants and sampling procedure.]

Another aspect of parenting is finding oneself in a different phase of life, which means that suddenly one's own needs must be set aside because of children and family. This is not always easy to combine with the demands of their police officer role, such as always being fit. This specific demand is usually put upon themselves to function physically at work. However, the activity must be performed within their private life roles due to the limited time to work out during work hours. Hence, when other things in life take over, such as taking care of children and household chores, the demands of their professional roles must take the back seat, although staying fit has not become less valuable. Consequently, the time within private life roles is prioritized.
For some of these police officers, this means that the number of roles they have time and energy for is limited to being a police officer, a parent, a life partner, and maybe a friend to somebody. As a result, previous roles, such as youth sports trainer, are now on hold. Another way for these police officers to juggle their priorities in their private life roles is to multitask. They take kids with them while exercising or combine exercise time with being on the way to work. To be able to fit everything into their private life roles, they constantly evaluate and prioritize activities to balance the demands of both their professional role and their private life roles. Hence, the parental role was found both conflicting with and enriching their professional role. Another aspect of role balance is if one role takes priority over other roles. For the uniformed police officers in this study, this happens in relation to their hypervigilance. Hence, they avoid and limit their activities in private life roles due to the risk of what might happen. The hypervigilance which comes with their professional role and spills over to their private life roles is explained by Samuel this way: "I do not like going out to eat in town with my wife, for example … I always walk a few meters ahead of my wife, for example, so that we do not walk together, and I'm on my guard. So, I avoid things like that …." Police officers also generally choose where to live and work more cautiously, especially after having children. For example, they may choose not to work in the same area as where they live or specifically choose living areas they know are more crime-free than other areas. As Rebecca puts it: "And cops usually live in areas where you do not make arrests on your neighbors." Thus, another pertinent concern is that the police officers do not always feel safe in their private life roles, because of their professional role.
They worry that their life partners or children will be innocently affected. Other conflicting challenges were also raised, such as the feeling of not being compensated adequately for what they do and the danger they face, or the time they spend away from their parental or life partner roles. The professional role also takes priority in other ways, such as the feeling that everyday life pales in comparison to it. For example, Samuel feels he is doing a good job in his police officer role, where he also receives praise. Praise at work is also more rewarding to him than praise from his children, even though saying this makes him feel ashamed. He explains that being a police officer is like a drug that he cannot get off. Another example is from Daniel, who explains his feelings at home when he is in his parental role: "… it is a bit slow … these everyday things like hanging around the house, reading the same books, taking a stroll in the garden and things like that. But of course, I'm glad that they are feeling well, when you notice that they are happy." To be able to understand this contrast in how everyday private life roles can feel so boring, Levi offers an explanation comparing the police profession to other professions: "… the adrenaline rushes are actually what make this job so special … Here we are still talking about this old classic 'life and death' … it becomes so tangible, and that's probably what I still find fun about this job, that it is real." He also elaborates on why he cannot switch to working day shifts only, even though his wife wants him to: "… It's a job that offers freedom, and just the uncertainty. When I go to work, I do not know where I will end up, I do not know what will happen, and to some extent, I do not know when I will get home. So that's probably the attraction anyway.
"

Navigating life's priorities while working shifts

This sub-theme captures how the uniformed police officers describe the pros and cons of working shifts as a police officer. Many of the police officers believe that working shifts gives them an advantage when they have small children. They can take their children swimming or to indoor playgrounds while most people are at work and there are fewer people where they want to go. If the other parent is on parental leave, they can also have more family time during the week. Financial difficulties and the feeling of not earning enough were also addressed as a male concern in the study. Consequently, working shifts and deciding to take on extra shifts was an enrichment to the male police officers regarding private life in terms of providing for their family. A conflict emanating from shift work impacting their private life roles concerned time management. The police officers frequently had to solve problems and adapt to collisions in life due to shift work and overtime. For example, Wyatt feels stressed over doing […]. Another practical complication is when the family is home during the daytime and the police officer needs to sleep. Then resting and sleeping becomes a challenge also for their life partner, who must "carry out a sleight of hand" to juggle kids as well as a sleeping husband, as Levi puts it. Another issue is that not all municipalities have daycare where children can be left overnight. The police officers feel that the Police authority should aid in the transition after having children, to maximize the possibility of continuing to work as a uniformed police officer. Not only could the Police authority assist, but municipalities also need to provide childcare during the night. An additional challenge is working shifts and being in a life partner relationship where both partners work shifts.
Sophie gives a personal example of how her ex-life partner used to work shifts, even though he was not a police officer himself: "… when we looked at our schedules … which were a two-shift and my three-shift schedule, it was like someone had put them in overlap. So that if I worked Monday, Tuesday, and the weekend then he worked like Wednesday, Thursday, Friday, or Tuesday, Wednesday, Thursday. Er, so that, and then we got help from his mother then … to pick up at kindergarten and sleep over …." For Sophie, this was one of the reasons why she and her life partner separated. Ava, who has also separated from her husband, elaborates on why she believes overcoming these obstacles and balancing professional and personal life roles is difficult as a police officer: "… partly that the working hours can consume much more than expected. And then family life comes along and sort of changes the situation and that is why there are many who, after a few years, move on to slightly, what can I say, more comfortable tasks … more comfortable working hours and a slightly more comfortable work environment." Managing conflicts between the professional role and private life roles was not easy and required strategies such as compromising with time or finding smart solutions to juggle the different roles. At some workplaces, the organization also provided some support, such as flexibility regarding scheduling or the possibility to agree upon a joint schedule among co-workers who had children. Many of the police officers with a life partner also tried to properly plan what happens during the week with their significant others so that things would run smoothly, for example when picking up kids during the week. For single parents, it was found to be the most challenging, and they had to find alternative solutions to continue working as a uniformed police officer.
After his divorce, Liam tried to optimize his time with his daughter to suit both his professional role and his parental role. He continued working shifts as before and had his daughter every other week. Additionally, he worked weekends every other week. Hence, he was a single parent working one weekend, so he had no time to rest. The following week, when he had his daughter, he had the weekend off, but he was still exhausted because he felt he had not had time to unwind: "… working the hours that three shifts entail, means they are fixed. They do not correspond so well with dropping off and picking up at preschool and other stuff. So, I tried for a couple of years, but … I went into a stress-related illness after that which … is a consequence of that." Jack, on the other hand, has found another way to continue working as a uniformed police officer after divorce. He has his children only every other weekend. As he puts it: "I could for example keep them every two weeks, but then I would not be able to have the job I have." Hence, he has chosen his professional role over his parental role. Also, if single parents have their children every other week and are always working every other weekend, they must schedule and plan their work around being a single parent. This usually means putting in long hours the week the children are away and taking more time off the following week when the children are back home. Nevertheless, this only works if it is organizationally feasible, and not all of them have scheduling they can influence. As a result, balancing shiftwork and private life roles is especially difficult for those who are separated and have children.

Building strong relationships through emotional and social support

This sub-theme is defined by how the uniformed police officers try to keep life partner relationships sustainable and alive, despite having a demanding profession. Gender inequalities and dividing the workload at home are also addressed in this sub-theme.
Some of the police officers, particularly men, emphasized the importance of having a functioning sexual life, which did not always imply sexual intercourse, but intimacy and communication. Additionally, taking time for mutual activities, or relaxation, together as a family or as a couple, was important. It was also crucial to share more things than just their children, as well as to have the opportunity to spend some time alone. For example, one police officer who shares both her profession and workplace with her longtime partner describes the importance of having something that they do not share. So, for her, not seeing her husband that often due to shiftwork was a way of getting time alone. The police officers also raised aspects such as overcoming individual differences and being understanding towards each other, as well as showing respect, as important issues to succeed in a relationship. Samuel elaborates on the topic: "… the key is that you have an understanding and respect for each other's thoughts and feelings … it probably works differently for different people, but … you still have to find a way to give both space … if I were to compare it to when my wife was pregnant or when we just had a baby, all the focus is on mom, but it will never last in the long run unless dad also gets attention." Striving for equality in the relationship was also touched upon by the police officers. It was found to be a way to build a strong relationship and overcome inequality and gender differences. For example, Mia is in a heterosexual relationship with another uniformed police officer, and both work shifts. She explains: "… family-wise, I feel that it is pretty much me who sacrifices to make it work." Thus, there is a distinction between male and female roles. Female police officers believe they must shoulder more responsibility for the family and that their life partners' professional roles usually take precedence.
This kind of inequality led Ava to divorce her husband. She felt that things were better after the divorce because they shared the responsibilities better when the children lived with them for a week in turn. She explains the reason for her separation like this: "That he had his own business and worked a lot and I took a lot of responsibility at home … which eventually led to a separation." Thus, some of the police officers had changed life partners during their years working as police officers or were on the verge of separating. Several reasons were specified, but they centered around a lack of communication and gender inequalities in the relationship. Most men, including those still in a life partner relationship, admitted that their life partner took greater responsibility for the family. Female police officers also agreed that they often did more when it came to family and household. Additionally, the male police officers explained how they tried to do their part to unload family work from their life partner. However, it was more in terms of "helping out" than sharing the load. As Ethan describes it: "… my partner, she is on parental leave and at home, and … I kind of feel that now I have to go home because I need to sort of relieve her …." This inequality makes the male police officers feel as though they have a debt of gratitude that must be paid when they get home from work. It also gives the men a bad conscience, pushing them to perform better at home while feeling inadequate in their private life roles. As Lucas puts it when talking about the difficulty of being enough as the 'great challenge': "that I can feel that I'm not quite enough, that I cannot make it work. It is important for me to get this exercise that I talked about [as a police officer], it is important that you perform at work, at home you must perform. It can make me stressed … that I do not really feel like I can make it work.
" For the male police officers, it was also more about the sense of validation, than their life partner carrying the heavier burdens of family life. Nevertheless, they did want to share the responsibilities at home and saw their home as a mutual project in some ways. Wyatt offers this description: "Yes, we try to help each other out. However, as a matter of fact, she pulls a bigger load than me. Anything else would be a lie. I'm so darned blind … if there's a coffee cup on a workbench, I do not think about it, I just walk past it. While she sees everything … But then I tried. I have to lighten the burden some way, otherwise, I will not have a wife. " Hence, this feeling of guilt also appears to be linked to the division of labor and the sacrifices needed to make family life work. However, no matter how hard they try, they do not feel they can compensate in their private life roles. This feeling was intensified, especially if their life partner had a daytime job while they were home "resting" because they had a day off in the middle of the week. Yet often their life partner would still not be satisfied with their efforts if they tried to contribute to the household chores. As Leo puts it: "…mostly try to sort of compensate for both my personal shortcomings and then the shortcoming of me working three shifts and maybe being away a bit, so that my wife does not have to do as much. And that makes me really go beyond myself to try to be there for my wife and my family and I do not know, it probably does not matter what I do, it does not really work. " Female police officers also expressed guilt, but it was more directed at them not spending enough time with their children than not "helping out" enough at home. Another aspect relating to support was that many male police officers described that having a life partner who was understanding toward their professional role was important if their life partner did not work in the police force themselves. 
Those who had life partners who were also police officers were relieved that their life partner could relate. This was an aspect only mentioned by the male police officers. As Leo puts it: "… one reason why my relationship has lasted with my wife is probably because she is a police officer herself so she has some kind of knowledge or some understanding otherwise I would have probably been divorced by now I think (laughs) …." The feeling of having a life partner who is understanding towards their professional role seems to be important, at least for the male police officers. Furthermore, both men and women mentioned that talking to each other about everything, from minor details to major issues, was important, even if they did not always agree on everything. The police officers emphasized that it had not always been easy and that they had also attempted to seek family counseling to improve communication in their partnership. Hence, many mentioned communication and support in various forms as important for taking on the challenges that exist when building strong life partner relationships, especially during difficult times, such as when struggling within parental roles or experiencing life crises. Another aspect was the importance of having someone they could talk to about their professional role if they needed to. Thus, social and emotional support was sometimes sought from family members. This, however, included a schism between roles, because they were unable to fully describe what they had been through at work during the day, either due to the confidentiality of the job or because they did not want to horrify or appall their life partner. This meant that they did not always get what they needed when searching for emotional support from their life partner. Nevertheless, they were still clear about the fact that if it became too much to handle for their life partner, they should show respect and seek support from their workplace.
Ethan describes this as "… I just need to be able to tell someone … and it becomes natural that you want to talk about it at home. But … then I do not because she kind of does not want to know everything … she likes to listen … but she does not want to hear about horrible things … she cannot handle that well." Hence, even though they are clear on how they should handle this conflict between their professional role and private life roles, they are still left with emotional strain.

Discussion

Overall, the role balance of the uniformed police officers in this study is a multifaceted issue that encompasses finding harmony between professional and private life roles, managing priorities while working shifts, as well as trying to build strong relationships through emotional and social support. It sheds light on the complexities that police officers face struggling to maintain a balanced and fulfilling life both on and off duty. The police officers describe their everyday life and the challenges they experience in both their professional and private life roles. Police-specific challenges include both enrichments and conflicts when having a family while working as a uniformed police officer. As an example, one conflict is their hypervigilance when in private life roles, while an enrichment is being able to assess risks to know whether there is actual danger or not. Another more general issue identified in the study was gender-patterned role conflicts. Both the female and male police officers in the study felt that the women in their respective relationships were pulling the heavier load regarding household and children. In addition, preference was given to the men's professions in the relationships, which seemed to cause guilt. This resulted in a role conflict based on emotional strain for both the men and women in the study.
Our study revealed that police officers feel conflicted about their roles and struggle to balance their professional and private life commitments. Maintaining relationships when shifts clash, overtime is a reality, and there is no social network to fall back on were among the contentious issues our study identified. The answers to this problem should not only come from the police officers themselves but also from their employers and local governments, for example, by offering night care for children. This is especially important if the police officers are single parents. Our findings concur with another Swedish study on single working mothers who work shifts, who have been found to have frequent difficulties matching preschool opening hours with their employment schedules (Roman, 2017). Our study found that the police officers also needed a family-friendly workplace. Supportive workplace examples included flexible solutions regarding shift work; however, this was not possible for everybody. Consequently, the findings indicate that to be able to work as a uniformed police officer, a supportive social network is needed. This is in line with another Swedish study, in which Alsarve (2017) found access to social support to be especially crucial for single working parents. Hence, despite all the legislation and family policies promoting gender equality in Swedish workplaces, not all aspects seem to be implemented at different police departments in Sweden, as seen in our study. Although our sample is small, we can still find gender inequalities, where, for example, childcare for shift workers is regarded as insufficient. Hence, the findings highlight the stress that arises between roles because of time demands, emotional strain, or behavioral requirements for key roles in life, in line with Evans et al. (2014).
The consequences of role balance are also important when raising awareness of uniformed police officers' work-life balance, as professional roles tend to spill over into private life domains. In our study, there were differences in how role conflict was perceived based on gender. Thus, if police authorities want to keep both male and female police officers in uniform, providing, for instance, a family-friendly work environment with flexible schedules for parents is important. Furthermore, it would be important to investigate the different types of role conflict strain experienced by male and female police officers. Moreover, more research building on our findings is needed to determine whether work-life conflict is experienced similarly or differently between genders. For example, a previous study on work-family conflict has revealed that while both men and women may experience the same amount of work-family conflict, there is also a gender suppression effect indicating that women work fewer hours or leave their jobs to adjust for their work-family conflict, while men generally will not; this results in the same levels of work-family conflict for both genders (Young et al., 2023). Grönlund and Öun (2018) also found that women make choices to keep work-family conflict at a bearable level while at work, often avoiding family-unfriendly work conditions; their study also controlled for female-coded and male-coded professions, such as police officers. Hence, more research is also needed on the work-family conflict of Swedish police officers, especially focusing on gendered patterns within the police force and how they affect police officers' private life domains.
According to Rawski and Workman-Stark (2018), there are four elements of a masculinity contest culture (showing no vulnerability, being strong and showing stamina, putting work first, and having the desire to hurt others to thrive in one's own right), and these have also been studied in relation to police organizations in other countries. Thus, working in a male-dominated organization may also be one of the reasons why the police officers preferably searched for emotional support from their life partners. This should, however, be studied further. A Norwegian study on help-seeking behavior within the police service showed that less than 10% of police officers experiencing depression and suicidal ideation had contacted a psychologist or psychiatrist. Instead, they sought help from physiotherapists or chiropractors (Berg et al., 2006). Although we were unable to find evidence to support all four elements of masculinity contest culture according to Rawski and Workman-Stark (2018), we did find one of them, "putting work first". Hence, it is reasonable to assume that this phenomenon also exists among Swedish police officers, despite Sweden's efforts to advance gender equality and its ranking as one of the most gender-equal nations in the European Union (Gender Equality Index 2020 Report, 2020). The male police officers in the study also expressed being stressed that they were not doing enough to participate in parenting and domestic work. This is in line with Caroly (2011), who regards the division of labor within a male environment to be marked by workload regulations based on sexual stereotypes, whereas in female environments, like nursing, collective regulations allow schedule adjustments to meet domestic needs. Consequently, despite some positive examples found in our study, within a hierarchical and inflexible structure, adjustments to schedules are difficult to achieve.
This implies a social injustice in a male-dominated context, which contributes to maintaining a gendered division of labor, also visible in our study. These gender-related factors may also be amplified for women in male-dominated professions, particularly if their life partner is also a police officer. Regardless, our results need further investigation on a larger scale. According to Guppy et al. (2019), gender behavior in households is also changing towards a more gender-friendly dynamic, in part due to changes in men's behavior and involvement in parenting. Thus, this could be another reason for the male police officers feeling guilty for not sharing the household workload enough and that their careers are prioritized before their life partners'. It could also be the reason for the female police officers' feelings of discontent. Hence, guilt is one way of addressing gender issues in life-partner relationships and the attempt to shoulder an equal share of the load. However, the gender-related issues found in this study should be studied on a larger scale since they could be related to health issues. This is especially important since, according to Harrysson and Elwér (2017), our society's gender system affects our health negatively, and previous studies show that females have been found to experience more sick leave than men. According to Grönlund and Öun (2020), having a parent-friendly schedule and being able to share the workload at home are important; this would also alleviate some of the issues raised in our study regarding the emotional stress of the police officers, particularly how sharing childcare would reduce women's stress. In our study, we discovered a tendency for women to experience a double-work burden with both paid and unpaid work, which the male police officers are also aware of in their respective relationships and try to mitigate.
For example, the female police officers were concerned with not having enough time with their children. The male police officers instead indicated that they worried about financial issues and their life partners carrying a heavier household burden than them. According to Duxbury and Higgins (2012), life partners of police officers tend to spend more time on domestic care than the officers do. Moreover, studies on role balance in heterosexual relationships support our findings of gender differences in the data (Marks et al., 2001). This is also in line with Connell (2021), who suggests that there is a gendered division between paid and unpaid work within the gender system in our society, with unpaid work at home being more of a female responsibility, whereas paid work is more of a male concern. As a result, this experience left the male police officers with feelings of guilt towards their life partners for not spending enough time on household chores. The female police officers were not content either, as they spent too much time on household chores instead of with their children. The issue of role balance as a uniformed police officer also comes to a head with some of the police-specific aspects. One role-conflicting example is their hypervigilance, which comes with their professional role. This hypervigilance seems to cause a behavioral strain for the police officers, indicating a between-role conflict, where spare-time activities in private life are avoided. Conflicts between work and family life have, according to Nilsen et al. (2017), been found to be associated with later sickness absence, indicating the need to mitigate the risks of conflicts between professional and private life roles.
The findings in this study are also in line with previous research, where police officers have been found to worry about their family's safety, for example checking for bombs under cars (Sundqvist et al., 2021). Worrying about the well-being of their families while also juggling more common life demands may place additional emotional strain on the police officers, which requires further research beyond this study. However, being concerned with risks could also cause role enrichment among the police officers. Thus, the knowledge of risk assessment was carried over from their professional role to their private life roles, resulting in emotional relief instead of stress. Another police-specific aspect causing both conflict and enrichment between roles is when the police officers, due to their job tasks, are left with emotional strain. By seeking emotional support in their private life, this strain can, however, at times be eased. It may also be temporarily lifted by having children, since at home, everything focuses on them. Another example of both role conflict and enrichment is how shift work may cause conflicts between roles, but it can also ease role strain. For example, the police officers felt that shift work allowed them to have more time with their families. According to Evans et al. (2014), when skills, resources, and energy are shared between various roles through enrichment, positive experiences in one role may help the person avoid negative experiences in another.

Limitations

According to Elo et al. (2014), the trustworthiness of a qualitative content analysis should be regarded from collecting the data to finalizing the study in writing. For this study, dependability issues need to be raised regarding sampling strategies; this might be one of the study's important shortcomings, since the sample largely includes Caucasian heterosexual couples, possibly limiting the variety of perspectives.
Although the data over time might be stable, the other aspect of dependability, namely stability under different conditions, might be impossible to prove. For the same reasons, the transferability of results might also be an issue. Thus, studying the challenges in the intersection of police officers' professional and private life roles should be conducted on a larger scale to include more diversity in the future. The intention was to utilize semi-structured interviews when gathering data for this study, but the choice was made to concentrate on the participants' narratives as data collection went along, with a final check to make sure all subjects had been covered. While changing methods might seem like a limitation, in terms of the credibility of qualitative content analysis it should instead be seen as a strength, since it also reflects the self-awareness of the researchers. Transcripts were also read several times to obtain a sense of the whole, and notes of emotional expressions in the transcripts were kept to ensure the transmission of intrinsic meaning and strengthen trustworthiness. To increase credibility, the findings were presented and discussed with other researchers, and feedback was solicited from those not involved in the study. To increase the confirmability of the data, the entire author group reviewed the final analysis and checked that quotations within a theme were consistent with the theme itself. Rich descriptions of findings, illustrative quotes, as well as quotes from all participants, were used to demonstrate the grounding of the data. Defining the study context, participants, and settings as clearly as possible should also strengthen the issue of transferability.
Also, to gain a researcher's perspective on the data and strengthen methodological integrity, as well as the credibility and dependability of the results, each author contributed their expertise, linking existing literature to the themes and expanding on similarities and differences identified in the data. The authors all have different expertise; thus, throughout the entire process, different competencies and perspectives were brought to the analysis.

Conclusion and practical implications

This study sheds light on how police officers balance their professional roles with personal responsibilities as life partners and parents. The study advances our knowledge of the extra hardships that police officers experience because of their job duties and how this impacts their personal lives. For instance, feeling torn between roles, experiencing the overshadowing effects of their demanding profession, being constantly alert and vigilant, and dealing with feelings of guilt. However, the study also highlights that the police officer role can positively impact their private life roles in various ways, such as how being a police officer can bring enrichment and fulfillment to their personal lives. The results from this study may also apply to other male-coded contexts, such as firefighters or military personnel, where the consequences of work are also a challenge to combine with family life. Furthermore, the study touches upon other wider-ranging challenges like working shifts, which pose a crucial question about how difficulties, particularly those arising from gender inequality, can be mitigated within police forces. These findings have practical implications for police officers, who must be able to successfully balance their personal and professional lives while taking care of a family.
For instance, encouraging a gender-inclusive workplace would be crucial, where both male and female police officers have the option of continuing to work as uniformed police officers even after starting a family or becoming single parents, and to do so in good health. Hence, the findings may also be helpful in clinical practice when working with police officers' health or the well-being of police officers' families.

Data availability statement

The datasets presented in this article are not readily available because of the sensitive nature of the data, which needs special ethical considerations. Thus, the data cannot be shared with a third party. Requests to access the datasets should be directed to elin.granholm@umu.se.

Ethics statement

The studies involving human participants were reviewed and approved by the Swedish Ethical Review Authority. The patients/participants provided their written informed consent to participate in this study.
Recognizing affiliation in colaughter and cospeech

Theories of vocal signalling in humans typically only consider communication within the interactive group and ignore intergroup dynamics. Recent work has found that colaughter generated between pairs of people in conversation can afford accurate judgements of affiliation across widely disparate cultures, and the acoustic features that listeners use to make these judgements are linked to speaker arousal. But to what extent does colaughter inform third party listeners beyond other dynamic information between interlocutors such as overlapping talk? We presented listeners with short segments (1–3 s) of colaughter and simultaneous speech (i.e. cospeech) taken from natural conversations between established friends and newly acquainted strangers. Participants judged whether the pairs of interactants in the segments were friends or strangers. Colaughter afforded more accurate judgements of affiliation than did cospeech, despite cospeech being, on average, over twice the duration of colaughter. Sped-up versions of colaughter and cospeech (proxies of speaker arousal) did not improve accuracy for either identifying friends or strangers, but faster versions of both modes increased the likelihood of tokens being judged as being between friends. Overall, results are consistent with research showing that laughter is well suited to transmit rich information about social relationships to third party overhearers—a signal that works between, and not just within, conversational groups.

Introduction

During social interactions, people produce a variety of dynamic behaviours that not only serve to function within an interacting group [1], but can also inform third parties about the nature of their social relationships and their intentions in a broad sense. Imagine encountering a group of people talking and suddenly everybody in that group erupts in spontaneous laughter.
There are many inferences an overhearer might draw about the people laughing together, including their history, current emotional states, shared information and even the interpersonal affiliation between specific dyads within a larger collection of laughers. To what extent does a group laughing together provide information about their relationship beyond just hearing them speak with one another? How well can overhearers generate accurate inferences, what information is offered, and how do acoustic features of the laughter itself play a role? Here, we explore the relative roles of laughter and talk in intergroup communication. Human laughter is characterized by a series of rapid bursts of vocal energy, often called bouts [2] that occur primarily in conversational interactions [3][4][5]. The initial burst is typically loudest with successive bursts often decaying in both frequency and amplitude [6]. Laughs can be either voiced or unvoiced, with voiced laughter following rather simple vowel production rules [7,8]. Laughter is highly recognizable across widely disparate societies and has proven to be a very robust indicator of positive emotion [9,10]. But by contrast, it is also associated with social ostracism and humiliation [11][12][13]. Evidence for highly distinct laugh types related to positive and negative emotions is limited, with some research suggesting distinctions across social functional categories (e.g. [14]), and other work pointing to the importance of context, with a high level of ambiguity in the physical properties of different laughs related to affect (e.g. [15]). The complex pragmatic utility of human laughter is not well understood. A fairly substantial body of work strongly suggests that laughter functions, at least in part, to communicate positive affect and cooperative intentions among groups of mutually trusting individuals in ongoing relationships. 
The notion that laughter helps people signal affiliative intentions is central to all current functional approaches [13,[15][16][17][18][19][20][21][22][23][24][25][26]. But few theorists have considered seriously the ubiquitous phenomenon of laughter in groups. Dezecache & Dunbar [20] provided an interesting exception in their ethological analysis of the typical sizes of natural conversational and laughter groups, which were similar, and generally around three or four individuals. They proposed a 'grooming at a distance' hypothesis, explaining group laughter (and conversation) as a means to regulate cooperative relationships beyond the limit of one imposed by physical grooming. When people laugh in groups they often do so together, but limited work has explored how people laughing together affects perceivers or what information it might contain. The focus of the current study is the perception of colaughter. Here we define colaughter as temporally coincident laughter between two or more individuals who are in close spatial proximity to one another and engaged in shared attention. People often indicate affiliation when colaughing, and sometimes direct it negatively, either in groups or alone, toward those with whom they do not wish to affiliate. Colaughter production (also called coactive, shared, reciprocal and antiphonal laughter) arises early in development [27] and promotes affiliative feelings, higher perceived personal similarity, and reports of subsequent increases in relationship satisfaction and intimacy in those who produce it [28][29][30]. Additionally, colaughter has been associated with mutual sexual interest in brief cross-sex encounters [31], cognitive similarity [32], and the behaviour is notably different between men and women, as well as between friends and strangers. For example, in one study, female friends reported engaging in shared laughter earlier in their friendships than male friends (three weeks versus six weeks) [33]. 
Bryant [34] found that in recorded conversations, female friends generated significantly more frequent colaughter than cross-sex friends, or male friends, and friends in general produced higher rates of colaughter than strangers. Acoustically, colaughter was louder than individual laughs produced by the same speakers, and colaughter between friends was louder and had greater pitch variability than colaughter between strangers. Overall, a variety of research findings suggest that colaughter is rich with social information for overhearers. There are clear benefits for individuals to develop sensitivity to subtle indicators of alliance structure, as well as benefits for allied signallers to convey that information reliably [35]. If these positive pay-offs held consistently for typical group interactions, social vocalizations such as colaughter that reliably correlated with cooperative relationships could have evolved for intergroup purposes. This social group signalling approach extends the functions of laughter beyond the immediate interactive context and, as explained below, can help elucidate some of laughter's more unique acoustic and psychological features. Nonhuman animal examples of group signalling are abundant-animals collectively signal group territory boundaries, mateships and other identifying information by chorusing for other groups and individuals. Humans exhibit many group-level vocal signals as well, including ritualized chanting, group singing, coscreaming, among others-signals that broadcast to those outside the group information about identity, intentions and coalition strength [36][37][38]. Could colaughter function similarly? From a signal design perspective (i.e. how signals are shaped by natural selection to solve adaptive problems of communication), laughing is well suited for intergroup communication. Laughter has acoustic features that are traditionally associated with wide broadcast and the penetration of noisy environments [39]. 
First, laughs often contain alerting components, such as high energy voice-onsets (e.g. high pitch and loudness with abrupt onset) effective for grabbing attention [40,41]. Second, laughter is conspicuous-it contains fairly distinctive acoustic attributes that differentiate it from most other vocal signals (crying is an interesting exception with several similar acoustic characteristics). Third, laughter comprises small repertoires-a reasonably stereotyped form resulting from automatic and rhythmic neuromuscular oscillations. Fourth, laughter is typically repetitious. A single laugh bout with simple acoustic elements can last for many seconds, and laugh epidemics have been documented with laugh episodes continuing for hours and even days [42]. Finally, laughter is highly contagious [43], pointing to design for group production. This collection of physical characteristics makes human laughter fairly exceptional across the known variety of play vocalizations in social mammals, most of which are difficult to hear at even a small distance. Human spontaneous colaughter can be construed as a derived activity originating from the co-production of play vocalizations [44], with the features described above resulting from ritualization-the process of a by-product cue becoming physically modified into a functional signal [45]. A form-function account that predicts and explains these acoustic characteristics is needed. Recent research suggests that dyadic colaughter might be particularly effective in signalling information about interpersonal relationships. In a study examining the detection of friends versus strangers from short isolated clips of colaughter, listeners from 24 societies, ranging from small-scale hunter gatherers to industrialized college students, were able to reliably judge whether two people laughing together were friends or strangers [18]. 
Moreover, participants from different societies tended to use the same acoustic information to identify friends, specifically arousal-linked laughter elements such as shorter burst duration and irregularities in pitch and intensity cycles. Sensitivity to the social relevance of laughter appears to develop early. Children begin laughing as early as six weeks, and before long, social events will trigger it [46]. Laughter is quite frequent in young children, but somehow it has escaped extensive empirical scrutiny in the developmental literature, a situation that is now changing [47]. Using a looking time paradigm, Vouloumanos & Bryant [48] found that five-month-old infants preferred colaughter between friends over that of strangers. A second group of five-month-olds were surprised when the source of the colaughter (friends or strangers) was incongruent with a video sequence of two people displaying either affiliative or non-affiliative behaviour. Another recent study documented the social nature of laughter production in preschoolers, showing that children as young as 3 years old produced up to eight times more laughter at funny cartoons when in the presence of at least one other child [49]. Larger groups did not elicit more laughter, and the amount of laughter was not associated with subjective levels of funniness of the stimuli in the children. This work conceptually replicated previous research done with 7 year olds [50]. The mere co-presence of others is enough to inspire the behaviour, suggestive of a developmental programme calibrating colaughter social signalling. Adults and children are attuned to the acoustic features of colaughter between friends, with a likely attentional focus on speaker affect. Perceptually, one possibility is that high arousal is associated with spontaneous laughter [19,51] which, in turn, is judged as more likely to be produced between friends (see SI in [18]). 
Relative to volitional laughs, spontaneous laughs tend to be higher in pitch, higher in rates of intervoicing intervals (i.e. more time between voiced bursts), more irregular in pitch and intensity cycles, and noisier (for a review see [17]). In a large cross-cultural study, participants from 21 societies reliably distinguished spontaneous laughter from volitional laughter, again relying on arousal-linked acoustic phenomena similar to those listeners use when identifying friends in colaughter across cultures [9]. Other types of spontaneous emotional vocalizations differ from their volitional counterparts on similar acoustic features tied to speaker arousal, such as raised and more variable fundamental frequency, and lower harmonicity [52]. Sensitivity to indicators of speaker arousal could help overhearers track emotional engagement between interactants. This provides the basis for possible subsequent positive selection on groups of senders to amplify the signal, resulting in a ritualization process [44]. Proximately, this can manifest itself in familiar group members as heightened experiences of mirth and joy that motivates further colaughter. Moreover, there could be additional elements of group interactions that contribute to people's affective experiences and subsequent colaughter features. For example, reduced behavioural inhibition and increased overall comfort between familiar speakers could afford colaughing episodes that are not necessarily reflecting greater arousal, but more honest sounding vocal emotions and relaxed engagement. While recent studies strongly suggest arousal-linked qualities are playing a role in people's judgements of familiarity, affiliation is potentially revealed in a variety of ways through interactive dynamics.
royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 7: 201092
In summary, there are now well-documented, wide cross-cultural consistencies in the perception of affiliation in colaughter, as well as the perception of laughter as spontaneous or volitional. These results provide preliminary but compelling evidence that laughter has recognizable universal structural aspects that communicate important information about affect, relationships and intentions. Here, we investigated the relative efficacy of laughter to communicate information about social affiliation by comparing colaughter to overlapping speech (cospeech). We consider cospeech to be a useful co-interactive behavioural control to make this comparison. As in laughter, there are many reasons that people overlap in their talk, including backchannelling (i.e. brief utterances while listening in conversation), interruptions, and ordinary turn-taking in which the vocalized offset of one speaker minimally overlaps with the onset of an interlocutor in fairly precise ways [53]. These conversational phenomena can be associated with a variety of pragmatic effects [54]. Perhaps laughter is not special in its ability to reveal affiliation, and instead the dynamics of other vocal interactions can reveal equally reliable information to overhearers. Thus, we used cospeech here to act as an appropriate baseline behaviour in conversational interaction rather than a viable alternative as a potential signal of affiliative status between speakers. Emotional signals such as colaughter contain rich information about the mutual affective intentions between socially interacting individuals, including honest signals of the physiological states of the interactants. Colaughter between friends is more likely to be judged as containing individual spontaneous laughter [9,18], a product of the evolutionarily conserved vocal emotion system [55][56][57]. These signals can reveal social information in a way that ordinary speech typically does not.
By this logic, we expected that colaughter would be more effective in conveying speaker affiliation than cospeech. Additionally, we explored whether vocal production speed (an arousal-linked feature present in both laughter and speech) would affect judgements of affiliative status. Again, because laughter is a carrier signal of both physiological arousal and emotional valence, we expected that increased laughter burst rate (an index of production speed in laughter) would increase judgements of affiliation. But an equivalent increase in speech rate (i.e. syllables per second) would not result in increased judgements of affiliation in cospeech by the same speakers. While arousal can have perceptible effects on speech rate [58], there are many non-affective reasons that people alter their speech rate, and intra- and interspeaker variability are high.

Method

Using edited clips from conversational interactions between friends and strangers described below, we presented listeners with interspersed recordings of colaughter and cospeech, hearing either a sped-up version or the original version of any given token. Listeners were asked to identify whether the pair of vocalizers were friends or strangers and were additionally asked to rate how well they believed the interactants liked one another.

Participants

We tested 108 participants (28 male) (mean age = 19.2; range = 17-23; s.d. = 0.9) who completed the experiment for credit in an introductory communication course at University of California, Los Angeles. The experiment was approved by the UCLA Institutional Review Board, and all participants provided informed consent prior to participating.

Materials and procedure

Laughter and speech stimuli

All stimuli were extracted from 24 recorded conversations between either established friends (mean length of acquaintance = 20.5 months; range = 4-54 months) or newly acquainted strangers who met immediately prior to the recording.
All conversationalists were college-aged students [59]. The original set of 48 colaughter segments has been used in two previous studies [18,48], with detailed descriptions available, including acoustic information. In sum, colaughter was defined as simultaneous (offset of an initial laugh within 1 s of onset of a subsequent laugh) laughter produced by two speakers without verbal or other audible sounds present. From 24 conversations (12 between friends and 12 between strangers), the first and last occurrences of qualifying bouts of colaughter were used, yielding a set of 48. Between friends and strangers, colaughter segments were not different in duration (M = 1.1 s; s.d. = 0.37 s) or laughter onset asynchrony (M = 313 ms; s.d. = 257 ms). Cospeech was defined as simultaneous speech production (energy onset within 1 s) between two speakers. The first and last instances of cospeech were excised from the same conversations as the colaughter described above, providing a set of 48 cospeech clips. These initial criteria were the same as those used to extract colaughter samples. But cospeech, of course, differs from colaughter on many dimensions, including its likely greater heterogeneity in pragmatic functioning. Additional criteria were used for choosing cospeech samples that probably increased their coherency, and thus their potential ability to reveal relationship dynamics. Instances had to contain no greater than a 4:1 ratio of speech time between speakers, no laughter, no successful interruptions and no verbal information that would identify the speakers as friends or strangers. The average duration of cospeech samples was 2.3 s (s.d. = 0.93), more than twice the length of the corresponding colaughter samples. Speech rates (measured as syllables per second) of cospeech between friends (M = 3.7, s.d. = 1.6) and strangers (M = 3.9, s.d. = 1.75) were similar, t(94) = 0.67, p = 0.51.
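The temporal qualification rule above (offset of an initial laugh within 1 s of the onset of a subsequent laugh) can be sketched as a small check. The function name and the (onset, offset) tuple format are illustrative, not part of the original coding procedure:

```python
def qualifies_as_colaughter(laugh_a, laugh_b, max_gap=1.0):
    """Check whether two laugh bouts are close enough in time to count as
    colaughter under the stated rule: the offset of the earlier laugh must
    fall within `max_gap` seconds of the onset of the later laugh.
    Bouts are (onset, offset) tuples in seconds; names are illustrative."""
    first, second = sorted([laugh_a, laugh_b], key=lambda bout: bout[0])
    # second[0] - first[1] is the silent gap; negative when the bouts overlap.
    return second[0] - first[1] <= max_gap
```

Overlapping bouts trivially qualify, and order of the two arguments does not matter because the bouts are sorted by onset first.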
Pilot work in developing these stimuli revealed that clips of cospeech matched in duration to the corresponding colaughter clips from the same interactants (approx. 1 s) did not afford accurate judgements of friends versus strangers, providing initial support for the expectation that laughter provides richer information about affiliation than does speech. To confirm that the verbal information alone would not reveal the relationship status, cospeech samples in the current study were transcribed verbatim and the text was presented randomly on a computer (iMac; SuperLab 4.0) to a separate group of participants from the same pool as the main experiment. Participants were equally likely to judge the text as being between friends whether the speakers were actually friends or strangers.

From the complete set of 96 tokens, we created a speed-manipulated set. Samples were sped up (duration reduced 33%) with pitch held constant using the Adobe Audition 2.0 (www.adobe.com) constant stretch effect function (stretching mode: time stretch, high precision, splicing frequency: 51 Hz, overlapping: 30%, ratio = 150). All tokens were normalized to peak amplitude. See figure 1 for spectrogram examples of colaughter and cospeech for both friends and strangers. See electronic supplementary material for one audio example from each condition.

Procedure

Two stimulus lists were created with half of the manipulated colaughter and cospeech clips in each list. Participants were then presented one of the two lists, thus only hearing one version of each clip (96 trials). The experiment was presented using SuperLab 4.0 (www.superlab.com) on an iMac desktop computer in an experimental cubicle in a quiet room. Participants wore headphones (Sony MDR-V250) and loudness levels were checked prior to each session.
The 96 colaughter and cospeech clips were presented in random order, and after each recording, participants were asked to (i) decide whether the people interacting were friends or strangers, by pressing either '0' for 'strangers' or '1' for 'friends' on a computer keyboard, and (ii) rate how much they thought the people liked each other on a scale of 1 to 7, where 1 is 'not at all', 4 is 'somewhat' and 7 is 'very much'. Prior to beginning, participants were told that some of the pairs of people were friends at the time of the recording, and others were complete strangers who were meeting for the first time. After one practice trial, the experiment began. See electronic supplementary material for text of complete instructions.

Participants answered two questions after each trial. The first question was to identify whether the presented pair of speakers were friends or strangers. Raw response rates for answering 'friends' in the judgement task across the conditions of speaker familiarity, mode of communication and vocal production speed are presented in figure 2. The second question was 'How much do you think these people liked each other?' Results for this second question are presented in the electronic supplementary material.

Statistical modelling

We performed a signal detection analysis in the form of Bayesian multilevel probit regressions [60] with weakly regularizing priors. The binomial response (judgement of 'friends' versus 'strangers') was predicted by intercept (equivalent to criterion, or response bias) and familiarity (familiar versus nonfamiliar dyads; equivalent to sensitivity or d-prime). Criterion indexes participants' bias in judging stimuli as more or less likely to comprise friends, independently from the actual category. The sensitivity indicates how well familiarity can be detected in the stimuli, controlling for the criterion (see Models 0 and 1).
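The criterion and sensitivity quantities described here come from a Bayesian multilevel probit model fitted in R; as a simplified, non-Bayesian illustration, the same two quantities can be computed as classic equal-variance signal detection point estimates from raw response rates. The function and its Python form are ours, not the paper's analysis:

```python
from statistics import NormalDist

def sdt_point_estimates(hit_rate, false_alarm_rate):
    """Equal-variance signal detection point estimates.
    hit_rate: P('friends' | actual friends);
    false_alarm_rate: P('friends' | actual strangers).
    Returns (d_prime, criterion): sensitivity and response bias on a
    z-score scale. A simplified sketch of the probit quantities in the
    text, not the multilevel Bayesian model itself."""
    z = NormalDist().inv_cdf  # probit (inverse normal CDF), stdlib
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Illustration with the raw rates reported later for original-speed
# colaughter: 77% 'friends' responses for friends, 49% for strangers.
d, c = sdt_point_estimates(0.77, 0.49)
```

Sign conventions for the criterion vary across texts; here a negative value indicates a bias toward responding 'friends'.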
Further, we tested whether our experimental manipulations affected criterion and sensitivity by introducing main effects (relations to criterion) and interactions (relations to sensitivity) of cospeech versus colaughter, and of speed with familiarity (see Models 2, 3 and 4). All predictors were modelled as multilevel parameters (also called random effects); that is, varying by participant, and as possibly correlated. Potential stimulus heterogeneity was also accounted for by varying the intercept by stimulus. Note that multilevel models perform partial pooling of information, so estimates for each participant and each stimulus are influenced by the data available for all participants. This might reduce differences between participants, but it also provides more conservative estimates and has been shown to improve the generalizability of the models [61]. This generated the following models:

Model 0: Probit(Response_ij) = α_ij
Model 1: Probit(Response_ij) = α_ij + β_1j Familiarity_ij
Model 2: Probit(Response_ij) = α_ij + β_1j Familiarity_ij + β_2j Talk_ij + β_3j Familiarity_ij × Talk_ij
Model 3: Probit(Response_ij) = α_ij + β_1j Familiarity_ij + β_2j Speed_ij + β_3j Familiarity_ij × Speed_ij
Model 4: Probit(Response_ij) = α_ij + β_1j Familiarity_ij + β_2j Talk_ij + β_3j Speed_ij + β_4j Familiarity_ij × Speed_ij + β_5j Familiarity_ij × Talk_ij + β_6j Talk_ij × Speed_ij + β_7j Familiarity_ij × Talk_ij × Speed_ij

The subscript i indicates 'for the ith stimulus' and the subscript j indicates 'for participant j'. Thus, the i and j subscripts on the α coefficient indicate random intercepts by participant and stimulus, and the j on a β coefficient indicates a random slope for that predictor over participants. We performed prior predictive checks to choose weakly regularizing priors for the model, excluding implausibly high values for the effects of the experimental manipulations while not critically affecting our results [62].
We tested normally distributed priors with a mean of zero and standard deviations of 1, 0.5, 0.3 and 0.1. We chose a standard deviation of 0.3 as the related predictive prior could generate a broader distribution of outcomes than our data and had a low probability for extreme rates of choosing 'friends'. The models were run on two parallel chains with 3000 iterations each, an adapt delta of 0.99 and a tree-depth of 20 to ensure no divergence in the estimation process. Estimates from the models are reported as mean and 95% credibility intervals (CI) of the posterior estimates. We also report the credibility of the estimated parameter distribution: the probability that the true parameter value is above 0 if the mean estimate is positive, or below 0 if it is negative. The quality of the models was assessed by: (i) ensuring no divergences in the estimation process, (ii) visual inspection of the Markov chains to ensure stationarity and overlapping between chains, (iii) ensuring Rhat statistics to be approximately 1.00 and number of effective samples to be above 200, and (iv) comparing prior and posterior estimates to ensure the model was able to learn from the data. The relevance of the predictors was assessed first by model comparison relying on estimated out-of-sample error via Pareto-smoothed leave-one-out information criteria (LOOIC, [63]). We also calculated the accuracy of judgements in terms of area under the curve (AUC) from the predictions of the above models. AUC is a robust measure of performance, accounting for baseline accuracy and the cost function of the judgement; that is, the possible relative weights put on the different kinds of errors (false positives and false negatives). 
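The Rhat convergence diagnostic mentioned above compares between-chain and within-chain variance. A minimal pure-Python sketch of the classic (non-rank-normalized) form is shown below; it is illustrative only, not the implementation used in the analysis:

```python
from statistics import mean, variance
import random

def gelman_rubin_rhat(chains):
    """Classic potential scale reduction factor (Rhat) for a list of
    equal-length MCMC chains, each a list of draws. Values near 1.00
    indicate the chains have mixed; this sketch omits the modern
    rank-normalized refinements."""
    m = len(chains)          # number of chains
    n = len(chains[0])       # draws per chain
    chain_means = [mean(c) for c in chains]
    grand_mean = mean(chain_means)
    # Between-chain variance B and mean within-chain variance W.
    B = n / (m - 1) * sum((cm - grand_mean) ** 2 for cm in chain_means)
    W = mean(variance(c) for c in chains)
    var_plus = (n - 1) / n * W + B / n
    return (var_plus / W) ** 0.5

# Two well-mixed chains drawn from the same distribution give Rhat near 1.0;
# shifting one chain apart inflates Rhat well above 1.
random.seed(0)
chains = [[random.gauss(0, 1) for _ in range(2000)] for _ in range(2)]
```

The "chains overlapping and stationary" visual check in the text is the qualitative counterpart of this statistic.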
To draw receiver operating characteristic (ROC) curves and calculate AUC performance, we estimated the predictions of the best signal detection model above and employed them to assess the effects of varying decision thresholds on the sensitivity and specificity of the model's binomial predictions. To maintain intelligibility of the plots, we only represent ROC curves estimated from the means of the model posteriors. The results reported here are all calculated from the posterior estimates of the best model according to the model comparison procedure; that is, if the model included a three-way interaction, we calculated the actual contrast necessary to test our specific hypothesis and assessed its credibility instead of relying on the less specific three-way interaction. All analyses were performed in RStudio 1.

Results

Model comparison indicated that the experimental conditions credibly affected the participants' responses (Model 4 having a lower estimated out-of-sample error than Models 0 to 3) (see electronic supplementary material, table S1). See figure 2 for raw response data on judgements of 'friends' in Question 1. Estimates from Model 4 are presented in table 1, and figures 3 and 4. As predicted, participants were able to judge colaughter and cospeech above chance (AUC greater than 0.5, table 1 and figure 4), but were credibly better at judging colaughter: the difference in sensitivity between conditions was 0.37 (on a z-score scale), 95% CI [0.24, 0.51], 100% credibility; that is, 100% of the estimated parameter values indicated a higher sensitivity for colaughter. In particular, colaughter was judged with 9% higher accuracy than cospeech.
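The AUC used above can be computed from model predictions without committing to any single threshold, via the rank (Mann-Whitney) formulation, while the ROC curve itself comes from a threshold sweep. A small sketch with illustrative data (the paper's actual computations were done in R from posterior predictions):

```python
def roc_points(scores, labels, thresholds):
    """Sweep decision thresholds over predicted P('friends') scores: at
    each threshold, a score at or above it is categorized as 'friends'
    (label 1). Returns (false positive rate, true positive rate) pairs,
    i.e. (1 - specificity, sensitivity)."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def auc_from_scores(scores, labels):
    """Area under the ROC curve via the rank formulation: the probability
    that a randomly chosen 'friends' item scores higher than a randomly
    chosen 'strangers' item, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With perfectly separated scores the AUC is 1.0; with uninformative constant scores it is 0.5, the chance level referenced in table 1.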
More exploratorily, we observed that participants were more likely to judge colaughter as produced by friends (62%) than cospeech (47.5%), a pattern that also held for stimuli produced by strangers only, with a false positive rate of 41% for cospeech and 49% for colaughter (mean difference in criterion between colaughter and cospeech: −0.19, 95% CI [0.07, 0.32], 99.97% credibility). The overall pattern indicated that cospeech tended to generate more false negatives (i.e. cospeech produced by friends judged as produced by strangers) and colaughter more false positives (i.e. colaughter produced by strangers judged as produced by friends), but the total error rate was higher in cospeech.

As expected, speeding up colaughter increased the likelihood of it being judged as friends: 68% of 'friends' responses, against 62% for the original non-sped-up colaughter. The effect is due to participants developing a positive bias for colaughter produced by strangers: the 'friends' responses for the latter went from 49% to 56%, with an increase in the criterion of 0.19, 95% CI [0.09, 0.29], 100% credibility. But their ability to judge colaughter produced by friends did not credibly change (going from 77% to 80% of 'friends' responses; mean difference in sensitivity between original and sped-up colaughter: −0.01, 95% CI [−0.16, 0.14], 44.97% credibility). Detailed analyses of individual and stimulus variability in the effects are reported in the electronic supplementary material (see electronic supplementary material, figures S1-S4 and table S2). Analyses reported in the electronic supplementary material on the ratings of liking (Question 2: How much do you think these people liked each other?) reflected the findings for judgements of friendship.

Table 1. Estimates of criterion, sensitivity and AUC (area under the curve, a measure of accuracy) for each condition, calculated from the posterior estimates in the signal detection model.
Mean estimates for criterion and sensitivity are reported first on a percentage scale (0-100: for criterion, the percentage of 'friends' responses when the stimulus is produced by strangers; for sensitivity, the difference in probability of choosing 'friends' between the stimulus being produced by friends and by strangers), then on a z-score scale (as in the SDT model), followed by the 95% CI in parentheses (also on the z-score scale). Area under the curve is on a 0-1 scale, followed by the 95% CI in parentheses, and indicates the accuracy of the responses, with 0.5 being chance-level accuracy and 1 perfect accuracy. N = 108 participants × 96 trials (i.e. recordings) per participant = 10,368.

On the x-axis is the proportion of strangers stimuli correctly identified; on the y-axis, the proportion of friends stimuli correctly identified. The ROC is derived by generating predictions from the model and calculating their sensitivity and specificity as a function of the threshold for categorizing a stimulus as produced by 'friends'. In other words, we can set the threshold so that an estimated probability of 0.1 or more of being 'friends' leads to a 'friends' categorization, and estimate sensitivity and specificity. We can then increase the threshold to 0.2 and estimate sensitivity and specificity. This process (on a much finer scale) is then plotted, generating the ROC curve.

They revealed that listeners gave higher ratings to: (i) actual friends over newly acquainted strangers, (ii) colaughter over cospeech, and (iii) faster colaughs and cospeech relative to slowed versions (see electronic supplementary material, figure S5 for rating data).

Discussion

Colaughter afforded more accurate judgements of friends and strangers than did cospeech, despite cospeech being over double in relative duration.
This finding is consistent with the notion that colaughter constitutes an intergroup signal of affiliation to which listeners are highly sensitive. Given the ritualized, species-specific features of human laughter related to its apparent suitability for wide broadcast and signalling of affective information (e.g. colaughter is loud, conspicuous and contagious), it is not surprising that listeners were able to rapidly draw accurate inferences about the affiliative status of those laughing together. Cospeech, while revealing certain dynamics between interlocutors, is not well suited to rapidly and widely broadcast information, but instead is probably a by-product cue of conversational turn-taking and interpersonal coordination during discourse. As mentioned earlier, cospeech reflects a quite heterogeneous category of conversational linguistic phenomena given the complexity of language use and pragmatic signalling. Rather than thinking of cospeech as a communicative behaviour that can be contrasted with colaughter as a means of communicating affiliative behaviour, we instead consider it to be an appropriate baseline behaviour that affords an analysis of colaughter as a potentially specialized behaviour geared towards communicating affiliative relationships in group contexts. An important signalling property of colaughter is its ability to efficiently transmit social information. Cospeech clearly also conveys social information to overhearers, but over much longer timescales. Laughter, like other emotional signals such as crying and fear screams, works well at short timescales and over relatively long distances. It is also the case that individual laughter is effective at rapidly conveying emotional intent between interlocutors. 
In fact, the effects of colaughter on overhearers are probably due to the characteristics of the individual laughs making up the colaughter, as synchrony in colaughter does not seem to affect listeners' judgements of affiliation, and ratings of the individual laughs on the dimensions of valence and arousal predict judgements of 'friends' in the same laughs when paired [18]. While our speed manipulation had an overall effect of causing listeners to judge sped-up covocalizations as more likely to be between friends, this was due to differential changes in response biases across laughter and speech. When cospeech was sped-up, negatively biased judgement patterns disappeared, possibly reflecting a sensitivity to arousal that might better characterize interacting friends. Conversely, sped-up colaughter introduced a positive bias in listeners, making them more likely to judge a colaugh segment as between friends. While our production speed manipulation differentially affected judgements in colaughter and cospeech, it did not do so exactly according to our predictions. Listeners are clearly drawing some inferences based on speech rate in cospeech related to the affiliation of speakers, and these inferences are probably tied to inferred arousal at some level. This is the first study, to our knowledge, that has examined whether overhearers can detect affiliation in overlapping talk-our results suggest that listeners can make this judgement, independent of verbal information, if they have at least two seconds of cospeech. The effect is not large, however, and in the context of multimodal information, might be easily obscured. Theories of the social function of laughter always focus on pragmatic actions within the interacting group, and usually just the dyad. But recent research, including the current findings, suggests that its adaptive reach could quite easily extend beyond that. 
Laughter acoustic forms indicate a broadcasting function, and the high arousal common within strongly affiliative interactive groups can contribute to extremely loud and coordinated vocal outbursts. Moreover, laughter is highly contagious, making group colaughter ubiquitous across social contexts, providing chorusing utility. As stated earlier, there are many examples from the non-human animal behaviour literature documenting coordinated signals between groups of animals that help them advertise different aspects of their social environment [37,38]. Humans' propensity for extensive cooperation beyond kin networks implicates a need for group-level signals that allow coordinated alliances to assess one another. Listeners' high sensitivity to the subtle dynamics of brief slices of colaughter (including preverbal infants and adults from all over the world) strongly suggests the existence of perceptual adaptations that track social alliances through vocal emotion behaviour. The distinction between signals and cues is of paramount importance here. Many behaviours associated with actual cooperation appear to be subtle cues as opposed to robust signals. We believe the evidence presented here reveals a signalling system: a system probably functioning for established friends, and not newly acquainted strangers. One of the primary distinctions between signals and cues is that in order to evolve, signals must benefit senders, whereas cues are given off inadvertently, and in many cases are the by-product of another system and difficult to hide [68]. The benefits are derived primarily by how the signals affect the behaviour of a target audience. In the case of individual spontaneous laughter, the signal often encourages continued interaction with the target and predicts future positive engagement, including cooperative interaction and support [13,19].
Laughter can act as a covert signal of shared information, often mediated through encrypted, intentional humour [69], affording adaptive strategies of social alignment through the mutual recognition of spontaneous co-signals within groups [32,70,71]. Broadcasting such a state of affairs to other groups could serve to influence how outsiders might interact with that group, which can include both encouraging interaction in some contexts (e.g. inviting others to join) and discouraging it in contexts of possible intergroup conflict (e.g. rapidly advertising alliance size in situations where alliance structure might otherwise be ambiguous). Moreover, individuals in groups might strategically (though often unconsciously) amplify aspects of colaughter to enhance the effect, and this could be implemented volitionally. Exaggerated emotional signals such as colaughter can potentially provide unambiguous information about the constitution of a group and affect overhearers in ways that are adaptive for both senders and receivers.

One limitation of the current study is the homogeneity of both our corpus of conversationalists from which the laughter was taken and the experimental participants, both drawn from WEIRD California university populations [72]. Consequently, even strangers in our conversations often have a fair amount of common ground, resulting in behaviours that signal friendly intentions. To a naive outsider, this could be difficult to discriminate from actual friends and thus result in high error rates and response biases like those we see in the current study. Future research should examine speakers who are much more diverse on multiple demographic variables. The task of identifying friends and strangers would probably be much easier, and greater variability in the sample of conversationalists could afford a variety of predictions regarding how colaughter behaviour predicts compatibility.
For example, perhaps newly acquainted strangers who immediately begin colaughing like familiar friends would cooperate more effectively and faster than those who do not engage in such a way. If so, a positive relationship between colaughing behaviour and actual cooperation between newly acquainted individuals could be an important reason why listeners are so attuned to the signal. Empirical explorations of the communicative dynamics of colaughter in groups, the innumerable social functions it might serve, and its effects on subsequent group interactions, might reveal design features of a powerful species-specific signalling system that are only beginning to be understood.

Ethics. All procedures used for original data collection were approved by the UCLA Institutional Review Board (IRB#11-000928).

Data accessibility. All data and analysis scripts can be found at https://osf.io/7egry/.

Authors' contributions. G.A.B. and C.S.W. conceived the study design, created the stimuli and collected the data; R.F. and G.A.B. conducted the data analysis; G.A.B., R.F. and C.S.W. wrote the manuscript.
Development and Evaluation of a MQ-5 Sensor-Based Condition Monitoring System for In-Situ Pipeline Leak Detection

The condition monitoring system for an in-situ pipeline is an innovative concept that uses MQ-5 sensors to detect fuel leaks in pipelines and relay concentration data to a receiver station. The essence of the monitoring process is to ensure the safety and security of engineering assets and lives. The project addresses the crucial requirement for rapid detection of fuel leaks to avoid environmental problems and economic losses.

INTRODUCTION

The world's oil and gas industry depends on pipelines, which provide the most practical, economical, efficient, and environmentally friendly means of transporting crude oil and natural gas from upstream production to downstream refineries [1], power plants, industries, domestic consumers and markets, while traversing nations through oceans and continents. A pipeline is a network of pipes containing pumps, valves, and other control equipment for moving liquids, gases, and slurries (fine particles suspended in liquid) [2]. These pipes range from 2 inches (5 centimeters) in diameter for oil-well collection systems to lines 30 feet (9 meters) across for high-volume water and sewage networks. Although some are built of concrete, clay products, and occasionally polymers, pipelines are often composed of sections of pipe made of metals (such as steel, iron, cast iron, and aluminum) [3,4]. The sections are joined by welding, and they are often laid underground. Most nations have a vast pipeline infrastructure, which is typically out of sight despite its importance to the broader population. Pipelines transmit almost all the water from treatment plants to domestic homes and the natural gas from wellheads to individual customers, and carry long-distance overland oil transportation. According to Khan et al.
[5], Nigeria produces the most crude oil in Africa; still, its income is greatly reduced by theft and attacks on oil pipelines, which greatly influence crude output and gasoline supply. Pipelines in use are the safest method of moving fluids. However, leaks are unavoidable because of human maintenance errors, sabotage, corrosion, and aging pipelines and fittings [6].

Even though pipeline leaks frequently begin small, late identification and detection can have serious consequences. Delays in detection can result in significant financial losses for an oil and gas business and in environmental damage. Since any incident demands a response in terms of security and technology, a system with a very short response time is necessary; shortening the response time to reduce the overall effect is economically significant, and this work implements a system with a quick response time. Parapurath et al. [7] discussed that when gas escapes from a cylinder or pipeline and enters an area it was not intended to, it is said to have leaked. Because these gases are frequently colourless, and some are odourless, it is almost impossible to determine unaided whether there is a gas leak in the environment. If the leak is not found, it can result in life-threatening explosions. Gas leaks have increased recently due to a lack of public knowledge and poor equipment maintenance. A gas leakage detection system must be built to find gas leaks from gas pipelines or cylinders in homes or businesses; it is essential to prevent the loss of life and property. Such systems can be built using wireless sensor networks (WSN), cloud computing, and radio frequency identification (RFID). Due to their benefits, such as low cost, quick availability, simple interface circuitry, endurance, and a wide spectrum of detectable gases, commercial gas sensors based on metal oxide chemo-resistors are quite common [8]. A current trend is the development of smart homes all around the world. Automating regular actions like turning on lights and
fans and regulating the thermostat has become popular among many people and businesses. The project's primary objective is to build an oil and gas leak detector utilising an oil and gas sensor. To increase safety and security, this device will continuously monitor the level of oil and gas leakage and connect to the Internet of Things using an ESP module (ESP32 processor). This device can be used in LPG gas storage spaces in hotels and homes. The finished product is used to notify the user and locate oil and gas leaks from pipes. To prevent intentional or unintentional leaks, this article provides instructions for finding oil and gas leaks in pipelines and alerting the user to the leak [9]. Many people all across the world now use mobile phones, so using smartphones as a surveillance tool to detect gas leaks will be quite useful [7].

As a result, an Android application and a gas leak detection system were developed. This technology can not only detect gas leaks but also prevent the explosions that can result from them. Gas leak detection systems are now popular tools; they work well and significantly lessen the damage, and they could be even more effective if they combined leak detection with other safety measures. Current gas leak detection systems can find leaks and send out audible alerts and text messages to consumers to alert them to where the leak is. However, they take no automatic safety precaution, so if no user is present, an accident can still occur; the proposed approach adds such safety precautions. If there is a gas leak, the user can take immediate action using their mobile smartphone, such as turning off the electricity and gas, to reduce the chance of any damage. Using the created program on mobile devices, they may remotely operate the complete system. The system's main goal is to detect gas leaks and take the required precautions to avoid the disasters they cause. The Arduino is the
system's sole source of power. If a gas leak, smoke, or flame is found, appropriate actions can be taken to alert people.

Pipeline transportation allows oil and gas to be moved globally in a safe, cost-effective, and rapid manner [10]. However, according to current statistics, terrorism, vandalism, and sabotage have become more frequent in this kind of transportation. How to solve these three interconnected issues is thus a significant challenge facing stakeholders in the pipeline business. Although most studies have recommended implementing improved regulations to address the issue, this study offers a technological alternative. Quality assessment of finished products is essential and inevitable for delivering standard goods to end users. Defects initiated during production reduce the quality of pipes and structures [11]. If these are not properly monitored and corrected, customer satisfaction cannot be achieved, as low-quality products will flood the market. Besides, deterioration in pipes and structures takes place with time. Wear and tear begin in the course of use and often result in complete system failure if adequate measures are not put in place to detect defects at an early stage.
This work therefore aims to develop a system with suitable sensors to effectively monitor, detect, and report the quality and status of pipelines. The study also involves:
➢ incorporating a memory system for storing captured data;
➢ exploring the synergy of different sensors to alert the nearest control center;
➢ collecting response-time data for existing condition monitoring methods, and performing system performance evaluation and data analysis.
The system will be developed and equipped with essential features to monitor pipes and other structures effectively, both before delivery and in-situ.
➢ To prevent catastrophic breakdowns and ensure the safe functioning of pipelines and structures, in-service inspection is necessary. It is crucial to effectively measure a structure's conservation state before, after, and throughout its lifetime.
➢ Monitoring structural health can help define an expensive but successful program by optimizing the interventions required to maintain or restore safety and serviceability conditions.

METHODOLOGY

Condition monitoring continuously monitors a particular machine's state (such as temperature, vibration, etc.) [12]. This chapter models the conditioning and monitoring of an underground crude oil and gas pipeline. The proposed system is a mobile-device-based application for hydrocarbon leak detection. The materials employed are described under the system hardware.
System hardware
The system hardware consists of both mechanical and electrical components. These electrical and mechanical components are used to construct the transmitter and receiver stations, the pipeline, and the fittings that comprise the pipeline system. Table 1 lists the materials selected for this study (see also Figure 1). A liquid crystal display (LCD) produces images by modulating light passing through liquid crystal cells and is now the most common choice for graphics, video, and alphanumeric display. The display offers space for 20 columns of characters on four rows, making it ideal for presenting a large amount of text without scrolling. Each character is rendered on a 5 × 8 pixel matrix, ensuring visibility from a considerable distance. Aside from its size, the most useful feature of the display version used is that it communicates via I2C, which means only two wires are needed to connect the display to the Arduino (aside from GND and VCC). This is possible because of the parallel-to-I2C adapter module connected to the display, as seen in Figure 2a. The I2C module can also be purchased separately and connected to the 16-pin version of the display. Specifications: forward current: 30 mA, forward voltage: 1.8 V ~ 2.4 V, reverse voltage: 5 V, luminous intensity: 20 mcd.

The MQ-5 sensor detects various gases such as liquified petroleum gas (LPG), methane, propane, and other hydrocarbons [9]. The MQ-5 gas sensor operates on the following principle: the sensor contains an element with a hypersensitive SnO2 film. When a flammable gas, such as LPG, is present, the conductivity of the film increases, and the proportional change in conductance or resistance can be used to calculate the approximate gas concentration. This sensor is utilized in the in-situ pipeline CMS to detect vaporized PMS that may have leaked from the model pipeline.
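The relationship between the MQ-5's resistance ratio and gas concentration is approximately linear on a log-log plot. The sketch below illustrates that conversion; the constants `a` and `b` are hypothetical placeholders rather than calibrated values from this study, and a real deployment would calibrate the clean-air baseline R0 first.

```python
# Approximate an MQ-5 gas concentration from the sensing resistance ratio.
# The power-law form ppm = a * (Rs/R0)**b mirrors the log-log datasheet
# curves; a and b below are illustrative placeholders, not calibrated values.

def mq5_ppm(rs: float, r0: float, a: float = 1000.0, b: float = -2.5) -> float:
    """Estimate gas concentration (ppm) from sensing resistance Rs and
    the clean-air baseline resistance R0."""
    if rs <= 0 or r0 <= 0:
        raise ValueError("resistances must be positive")
    return a * (rs / r0) ** b

# Conductivity rises (resistance falls) as gas concentration increases,
# so a lower Rs/R0 ratio should map to a higher ppm estimate.
low_gas = mq5_ppm(rs=9000.0, r0=10000.0)
high_gas = mq5_ppm(rs=3000.0, r0=10000.0)
```

Because the exponent `b` is negative, the estimate is monotonically decreasing in Rs/R0, which matches the sensing principle described above.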
Piezoelectric buzzer and Espressif ESP32 DevKit V1 microcontroller
A buzzer, also called a sounder, audio alarm, or audio indicator, is a basic audio device that produces sound in response to an incoming electrical input. Buzzers are classified into two types: piezo buzzers and magnetic buzzers. In this work, a piezo buzzer was used, which possesses piezo crystals between two conductors. When a voltage is supplied, the crystal alters the position of the conductor numerous times per second due to a constant push-and-pull motion, producing a sound wave [7]. The sound frequency can be adjusted in the source code, which permits varied sounds to be created. This component, shown in Figure 3a, is installed in the reception station to generate an audio warning if any transmitter station detects a leak. The ESP32 is an all-in-one system developed for mobile, wearable electronics, and Internet of Things (IoT) applications. The ESP32 is a family of system-on-chip microcontrollers that comes in various low-cost modules. The ESP32 module was selected because of its inexpensive Wi-Fi capability and low power consumption. It is Espressif Systems' prototyping microcontroller board with integrated Wi-Fi and Bluetooth capability. It has 520 KB of RAM and an onboard Tensilica Xtensa 32-bit LX6 microprocessor, as given in Figure 3b. It is a low-cost, low-power system capable of easily powering IoT projects. The ESP32 was selected as the microcontroller for each substation in this project because it offers low power consumption and rapid internet connectivity to capture data for remote monitoring via the website. Unlike most prototyping microcontrollers, all ESP32 boards share a communication protocol that allows them to communicate with one another using Wi-Fi or Bluetooth. This allows information to be sent between substations and the base condition monitoring station. A secure digital card, often called an SD card, is a small, portable electronic device with microchip circuitry for uploading, processing, storing,
and transferring media material such as files, photos, and applications. It is a simple method for transferring data to and from a standard SD card. The Micro SD Card Adapter module is straightforward, with an SPI interface for connecting various SD cards and an onboard 3.3 V voltage regulator for powering the SD card. The SD card module communicates with the ESP32 microcontroller and reads data from the inserted SD card. This module allows the ESP32 to communicate with, read from, and write to the memory card, enabling data logging and analysis. The purpose of the SD card in this work is to store records of leakage occurrences in a datasheet. Operating voltage: 4.5 V ~ 5.5 V DC, current requirement: 0.2 mA ~ 200 mA, 3.3 V onboard voltage regulator, supports the FAT file system and Micro SD cards up to 32 GB, as presented in Figure 4. A jumper wire is an electric connection used to link separate electrical networks on printed circuit boards. Connecting a jumper wire makes it possible to short-circuit and short-cut (jump) to another part of the electrical circuit. Male-to-male, male-to-female, and female-to-female jumper wires are the three types available. A light-emitting diode (LED) is a semiconductor that emits light when current is permitted to flow across it. This component generates a visual alarm when a transmitter detects a leak. It is positioned in the reception station so the operator can see it, as presented in Figure 4. The DS3231 RTC is a low-cost real-time clock that tracks time in hours, minutes, seconds, days, and months. It recognizes leap years and months with 30 days. This module makes it simple to record the time of a leak incident for convenient monitoring and trend analysis. It runs on 3.3 V or 5 V and has a CR2032 3 V battery that can retain information for up to a year. These pins are available on the RTC module: VCC (power supply, 3.3 V or 5 V), GND (ground), SDA (serial data), and SCL (serial clock). Its operating voltage is around 2.3 V ~ 5.5 V, operating temperature
from −40℃ to +70℃, with a 400 kHz I2C interface and CR2032 battery back-up, as presented in Figure 5a. A common 5.5 mm barrel DC power supply jack is used to power the circuit. It can accept a male power connector for power delivery from a bulk DC power supply or DC charger. When the three 3.7 V batteries in the transmitter and reception stations are depleted, this component is utilized to charge them. Specifications: terminal: PCB soldering, voltage rating: 48 V, current rating: 5 A, insulation resistance: 1000 MΩ min, dielectric withstanding: 500 VAC max, gender: female, as given in Figure 5b. A single-pole single-throw (SPST) push-button switch is commonly used as a reset button. When pressed, it closes a circuit and reopens it when released. When necessary, this switch is utilized to reset the receiver station. One side is connected to the ESP32 microcontroller's Enable pin (EN), while the other is connected to ground (GND). When the operator presses the button, the EN pin connects to GND and briefly resets the microcontroller. Specifications: input voltage: 1 V ~ 12 V, input current: 10 µA ~ 50 mA, operating temperature: −25℃ ~ +70℃, switch dimensions: 4.5 mm × 4.5 mm × 3.5 mm, as illustrated in Figure 5e. A printed circuit board (PCB) supports and connects electrical and mechanical electronic components using conductive routes, tracks, or signal traces of copper sheet attached to a non-conductive substrate. It provides a controlled and predetermined platform for circuit development by acting as an electronic board for the assembly and connection of electronic components. Express PCB was used to print the PCBs for the transmitter station circuits and the receiver, as presented in Figure 5f.
This two-position switch turns the electrical power supply to a circuit or device on or off. This SPST rocker switch was utilized to switch the power supply from the batteries to the transmitter and receiver station circuits. Its specifications are: working voltage: 250 V, working current: 3 A, maximum insulation resistance: 100 MΩ, contact resistance: 0.02 Ω. Resistors are passive components used to resist current flow by a specified amount. They are used in this project to lower the supply voltage to the LEDs in the transmitter and receiving stations. The resistor used had four bands, Red, Red, Brown, and Gold, and hence a resistance value of 220 Ω ±5%. Resistor specifications: resistance: 220 Ω, tolerance: 5%, type: carbon film, maximum operating voltage: 350 V, operating temperature: −55℃ ~ 155℃, as shown in Figure 5g. Female pin headers are stiff metallic solderable connectors designed to accept male pins. These were utilized to link the ESP32 male pins to the PCB for both the transmitter and receiver stations. Two single 15-pin headers were used, as depicted in Figure 5h.

Circuitry method
This part of the study describes the whole operation of the system (Figure 6). The circuit building and design used a long-term circuit connection approach, with a PCB (printed circuit board) assembled and ordered via Express PCB. For a simple and durable circuit construction, the components of the receiver and transmitter stations were soldered together rather than utilizing a normal prototyping breadboard. The transmitter stations are responsible for detecting leaks at various locations along the pipeline by utilizing an MQ-5 sensor that detects discharged PMS exiting a leak location or fracture. In each station, an ESP32 microcontroller is used to collect input from the gas sensor and send the concentration of the gas present to the receiver station. The following processes are involved in the building of the transmitter stations:
i. Prototyped the circuit on a breadboard using prototyping (design) software.
ii. Configured the PCB for the transmitter stations using Express PCB online.
iii. Drilled three circuit box connection holes for the ESP32 USB port, the DC power supply jack port, and the MQ-5 sensor.
iv. Etched and soldered the female pin headers, two 7805 voltage regulators, MQ-5 sensor jumper wires, and power supply wires from the batteries onto the PCB using the specified PCB design.
v. Drilled the circuit box and PCB for the transmitter station and secured them with a 5 mm bolt and washer.
vi. Connected the ESP32 microcontroller to the female pin headers and the MQ-5 sensor pins to the corresponding header pins (after observing the preset orientation).
vii. Packed the batteries together and connected the power supply circuit, which includes the batteries, rocker switch, and DC female jack, to the PCB input power line.
viii. Glued the MQ-5 sensor to the circuit box's inner wall, with the sensor facing outside the box via the aperture.
ix. Closed the circuit box, powered the station, and connected the ESP32 to a PC to monitor the MQ-5 sensor output using the Arduino IDE Serial Monitor.
Three 3.7 V rechargeable Li-ion batteries were utilized to power the circuit, resulting in a compact and adaptable structure that eliminated the need for fixed power supply connections while decreasing the system's space requirements. According to manufacturing standards, a minimum of 24 hours of battery life was necessary to guarantee appropriate heating time for the MQ-5 gas sensors. As a result, the maximum power consumption of the transmitter circuit was measured in order to decide on a suitable battery capacity for the sensor and to keep the system functioning even after pre-heating, as shown in Figure 7.
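The 220 Ω current-limiting resistors can be sanity-checked against the LED ratings quoted earlier (forward voltage 1.8 V ~ 2.4 V, maximum forward current 30 mA). The sketch below decodes the colour bands and checks the resulting LED current on a 5 V rail; it is a back-of-the-envelope check, not part of the deployed firmware, and the 2.0 V forward drop is an assumed nominal value.

```python
# Decode a 4-band resistor colour code and verify that the resulting
# LED current on a 5 V supply stays below the LED's 30 mA rating.

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def decode_4band(b1: str, b2: str, multiplier: str) -> float:
    """Return resistance in ohms for a 4-band resistor (tolerance band ignored)."""
    return (DIGITS[b1] * 10 + DIGITS[b2]) * 10 ** DIGITS[multiplier]

# Red-Red-Brown (gold tolerance band) -> 22 x 10 = 220 ohms
r = decode_4band("red", "red", "brown")

# LED current with a 5 V supply and an assumed 2.0 V nominal forward drop
supply_v, led_vf = 5.0, 2.0
led_current_ma = (supply_v - led_vf) / r * 1000.0
```

With these values the LED draws roughly 13.6 mA, comfortably inside the 30 mA limit quoted for the indicator LEDs.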
A multi-meter was connected in series with the batteries to measure the current drawn by the circuit and determine the peak power usage. At a 5 V DC supply to the components, the average peak current consumption was 149 mA. As a result, the following relation was applied to compute the maximum power usage:

P = I × V

where I is the maximum current and V is the voltage supplied. As a result, the circuit's power consumption is

P = 149 mA × 5 V = 0.745 W

The system's battery life is calculated from the three 3.7 V batteries, each rated at 3800 mAh. Taking the efficiency of a Li-ion battery as 0.9 and the battery capacity as 3800 mAh, the discharge time is

Discharge time = (3800 mAh × 0.9) / 149 mA ≈ 22.9 hours

Circuit design and construction of the receiver station
The receiver stations collect sensor data from each transmitter station and alert a local operator to a potential leak using a buzzer and an LED light indicator representing each station. An LCD panel is also employed to display the MQ-5 sensor readings for each transmitter. A real-time clock records the time a leak occurs, while the tabulated report is stored on an SD card for trend analysis and reporting. An ESP32 DevKit V1 microcontroller is responsible for taking inputs from the transmitters through the ESP-NOW communication protocol and providing output for the operator. The microcontroller also provides over-the-internet updates through an IoT platform, enabling remote monitoring. The procedures involved in the construction of the receiver station include the following:
i. Prototyped the circuit on a breadboard using online prototyping software.
ii. Optimized the circuit by removing unnecessary components.
iii. Designed and ordered the PCB for the receiver station using Express PCB online.
iv. Opened two circuit box connection holes for the ESP32 USB port and the DC power supply jack port.
v. Etched and soldered the female pin headers, two 7805 voltage regulators, four 220 Ω resistors, SD card pins, LCD pins, RTC pins, buzzer wires, and power supply wires from the batteries onto the PCB based on the predetermined PCB design.
vi. Drilled a hole in the circuit box and PCB for the receiver station and joined them using a 5 mm bolt and washer.
vii. Used a saw to cut a rectangular hole at the top of the receiver station circuit box and drilled four holes to join the LCD panel to the box with four 5 mm bolts and nuts, with the screen facing the top.
viii. Completed all the connections based on the designed circuit and glued loose wires to keep them stiff and stable.
ix. Closed the circuit box, switched on the station, and connected the ESP32 to a PC to check the functionality of the receiver station and its communication with the transmitter stations through the Arduino IDE Serial Monitor.
Figure 8 shows the schematic structure of the receiver station, and Figure 9 illustrates the system's circuit diagram.

Software and programming implementation
The Arduino Integrated Development Environment (IDE) is an open-source electronic prototyping environment for creating inventive, smart devices [11]. It is compatible with hundreds of microcontrollers from many manufacturers and provides a simple way to utilize this hardware in a user-friendly environment. The Arduino IDE 2.0 platform was used to develop, test, and implement program code on the ESP32 DevKit V1 for the transmitter and receiver stations, allowing the system to detect leaks, acquire and store information on possible leak scenarios, alert a local operator to a potential leak detected by any of the transmitter stations, and provide a means of remote monitoring. The transmitter and receiver stations were programmed separately to fulfil their respective roles but were linked to communicate to achieve the integrated system's aim.
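The peak-power and discharge-time arithmetic described above can be reproduced with a few lines; the figures (149 mA at 5 V, 3800 mAh cells, 0.9 efficiency) are the values reported in this section.

```python
# Reproduce the transmitter station's power-budget arithmetic:
# peak power at the measured current draw, and the expected battery
# life for a 3800 mAh Li-ion cell derated by a 0.9 efficiency factor.

def peak_power_w(current_ma: float, supply_v: float) -> float:
    """Peak power consumption in watts: P = I * V."""
    return current_ma / 1000.0 * supply_v

def discharge_time_h(capacity_mah: float, current_ma: float,
                     efficiency: float = 0.9) -> float:
    """Estimated battery life in hours: t = (C * eta) / I."""
    return capacity_mah * efficiency / current_ma

power = peak_power_w(current_ma=149.0, supply_v=5.0)
hours = discharge_time_h(capacity_mah=3800.0, current_ma=149.0)
```

The resulting estimate of roughly 22.9 hours sits just under the 24-hour target mentioned above, which is why the battery capacity decision mattered.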
Programming of the transmitter stations
The transmitter stations detect leaks in the pipeline and transmit sensor data to the receiver station. The program for the transmitter stations comprises the following parts.
Initialization and transmitter-receiver communication: this comprises initializing variables used later in the program and establishing peer-to-peer communication between the transmitter and receiver by launching the ESP-NOW communication protocol.
• The code includes the necessary libraries: 'esp_now.h' for ESP-NOW functionality and 'WiFi.h' for setting up Wi-Fi as a station, and registers 'esp_now_register_send_cb()' to receive the send status of transmitted packets.
• The 'peerInfo' structure is configured with the receiver's MAC address, channel (0), and encryption status (false).
• The receiver is added as a peer using 'esp_now_add_peer()'. If the addition fails, an error message is printed.
Leak detection and sensor data transmission: in this program section, the ESP32 microcontroller collects MQ-5 sensor data and sends it to the receiver over the ESP-NOW protocol.
In the 'loop()' function:
• The 'id' field of 'myData' is set to 1 (this can be any unique identifier).
• The sensor reading is stored in the payload with 'myData.x = xSensorValue;', and the 'esp_now_send()' function then sends the data stored in 'myData' to the receiver using the 'broadcastAddress'. The data is sent as a byte array, with the size given by 'sizeof(myData)':
esp_err_t result = esp_now_send(broadcastAddress, (uint8_t *) &myData, sizeof(myData));
• The result of the send operation is checked. A success message is printed if it is successful ('ESP_OK'); otherwise, an error message is printed. A delay of 2 seconds is also added before the next iteration of the loop:
if (result == ESP_OK) { Serial.println("Sent with success"); } else { Serial.println("Error sending the data"); }

Programming of the receiver station
By receiving input from the transmitters, showing it on a locally mounted LCD, and saving the information on a mounted SD card, the receiver station serves as the main control centre of the condition monitoring system.
Initialisation of devices and communication:
• The code includes the necessary libraries and defines various constants and variables.
• It initialises the ESP-NOW communication protocol and sets up the Wi-Fi mode for the device to act as a Wi-Fi station.
• The code also initialises the RTC (real-time clock) DS3231 module for timekeeping and the SD (secure digital) card for data storage.
• Additionally, it initialises the LiquidCrystal_I2C library and sets up the LCD.
Transmitter-receiver communication:
• The code defines a structure 'struct_message' to represent the data to be transmitted and received.
• The code then appends the readings and other relevant data (such as time and date) to the CSV file on the SD card for record-keeping.
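ESP-NOW delivers the 'struct_message' payload as a raw byte array, so transmitter and receiver must agree on its layout. The sketch below mimics that packing in Python to show the idea; the two-field layout (a 32-bit 'id' and a float reading 'x') follows the fields named above, while the exact packing and endianness are assumptions for illustration.

```python
# Mimic the byte-level layout of the ESP-NOW payload struct, assumed here as:
#   struct struct_message { int id; float x; };
# '<if' = little-endian 32-bit int followed by a 32-bit float.
import struct

PAYLOAD_FMT = "<if"

def pack_reading(station_id: int, sensor_value: float) -> bytes:
    """Serialize a station id and MQ-5 reading as a raw byte array."""
    return struct.pack(PAYLOAD_FMT, station_id, sensor_value)

def unpack_reading(payload: bytes) -> tuple:
    """Receiver-side decode of the same byte array."""
    station_id, sensor_value = struct.unpack(PAYLOAD_FMT, payload)
    return station_id, sensor_value

msg = pack_reading(1, 512.0)          # 8 bytes on the wire
station, value = unpack_reading(msg)
```

Keeping one shared definition of the payload layout is what lets 'sizeof(myData)' on the transmitter match the receive callback's expectations on the receiver.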
Pipeline setup
The pipeline structure was built to imitate the bends and flow direction of actual in-service pipelines, at a much smaller scale so that the MQ-5 sensor, with its limited range, could provide maximum accuracy. PVC pipe 30 mm in diameter was used to construct the model for a simple set-up. Three 250 mm and four 40 mm pipes were used for the straight flow paths, and 32 mm (90°) elbows were used for the bends, as shown in Figure 10.

RESULT AND DISCUSSION
In this work, while utilizing the MQ-5 sensor, the aim is to focus on detection time with respect to distance and, finally, on the delay in the response of the control system, as shown in the graphs. The system was tested using a hydrocarbon fluid (fuel) in the pipe set-up while the distance was varied. At the same time, data were collected to observe the detection time and distance. The date and time were measured with the real-time clock (RTC) and stored on the SD card inserted in the receiver station. After detecting the LPG gas, the sensor transmits a signal to the microprocessor, which then analyses it. The microprocessor then transmits an active signal to other devices connected to the outside world. As a result, a buzzer signals the presence of the gas concentration; this finding is supported by Somov et al.
[13]. The primary performance parameter assessed to analyze the outcome is response time, quantified as the time it takes to receive an alert at the receiving station after the transmitter senses the leak. The response time for station two (2) was recorded at 1 m, 2 m, and 3 m from the receiver station. The response time was tested five times for each distance, and the average response time was calculated. Figure 11 shows the response times recorded for station two at 1 m to 3 m from the receiver. Figure 12 shows a chart representation of the differences in the response time of station two at distances of 1 m to 3 m. Response times decreased over the first four test runs but increased in the fifth test. This indicates that the minimal response time had been reached and will thus vary around that value based on external factors such as wind speed and direction [14][15][16][17][18]; these parameters were not taken into account. Analysing the response time statistics, the system generally achieves appropriate response times for each station at varied distances. The average response time for all stations is less than 5 seconds, showing that sensor data are transmitted relatively quickly from transmitter stations to the receiver station. As a result, it is possible to conclude that the system and receiving station meet the specified performance standards. There were no reports of data transmission delays or losses during data collection. Data were received uniformly throughout the measurement period, demonstrating reliable communication between the transmitter and receiver stations.
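The averaging of repeated runs, and the paired comparison of response times at two distances, can be sketched as follows. The response-time values below are hypothetical placeholders, not the measurements reported in Figure 11.

```python
# Average repeated response-time runs and compute a paired t-statistic
# comparing two distances. The sample values are hypothetical
# placeholders, not the measurements reported in this study.
import math

def mean(xs):
    return sum(xs) / len(xs)

def paired_t(a, b):
    """Paired two-sample t-statistic: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    m = mean(d)
    var = sum((x - m) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return m / math.sqrt(var / n)

# Five hypothetical runs at 1 m and 2 m (seconds)
rt_1m = [2.1, 2.0, 1.9, 1.8, 2.0]
rt_2m = [3.4, 3.2, 3.1, 3.0, 3.3]

avg_1m, avg_2m = mean(rt_1m), mean(rt_2m)
t_stat = paired_t(rt_1m, rt_2m)   # negative: 1 m responses are faster
```

A negative t-statistic here corresponds to the same pattern reported below, where the nearer station has the lower mean response time.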
T-test (paired two samples for means) analysis of response times
This analysis compares station 2's response times at each distance. A significant difference in the response times at each distance can be identified by comparing the response times and performing a significance test. This helps determine whether there is a significant change in response time when the distance between the transmitter and the receiver station varies. The findings shed light on the effectiveness and dependability of transmitter-receiver communication for identifying and transferring concentration data. A paired two-sample t-test for means was used to compare the response times in pairs. Table 2 shows the t-test output comparing the response times of station 2 at 1 m vs 2 m from the receiver, while Tables 3 and 4 compare 1 m vs 3 m and 2 m vs 3 m, respectively. The p-value is used in hypothesis testing to determine the significance of the results. If the null hypothesis is true, the p-value gives the probability of obtaining the observed results (or more extreme results) [19]. The null hypothesis in this scenario is that there is no difference between the two response times for the compared distances. The two-tailed p-value for the 1 m vs 2 m comparison, stated in Table 2, is 0.000152, less than the generally used significance level of 0.05 in hypothesis testing. The null hypothesis is therefore rejected, and the conclusion is that there is a significant difference between station 2's response times when the receiver is 1 m away vs 2 m away; the difference observed in the sample means is unlikely to occur by chance alone if there is no true difference in the population. The negative t-statistic (−13.9768) indicates that the response time at 1 m has a lower mean than the response time at 2 m, and the p-value confirms that this difference cannot be explained by chance. The two-tailed p-values of 0.000139 and
0.001997378 for the other comparisons likewise reveal a significant difference when the distance between the transmitter and receiver stations is varied. The Pearson correlation in each table also shows a strong association between the response times at each distance, indicating a clear relationship between response time and distance [20].

CONCLUSION
Regarding response time and data transmission reliability, the pipeline condition monitoring system created to identify leaks in a pipeline conveying fuel has shown encouraging results. The system detected vaporized fuel escaping from leak points and sent concentration data to the reception station using four transmitter stations equipped with MQ-5 sensors and ESP32 microcontrollers. The response time, defined as the time it takes for the receiving station to get an alert after the transmitter detects a leak, was recorded and analyzed at various distances. According to the data analysis, the system consistently achieved response times within an acceptable range for all transmitter stations at varying distances. The average response times were less than 5 seconds, demonstrating rather rapid transmission of sensor data. The two-tailed p-values of 0.000139 and 0.001997378 reveal a significant difference when the distance between the transmitter and receiver stations is varied. This illustrates the system's effectiveness and efficiency in detecting and transmitting concentration data in a timely manner. Furthermore, the system's transmission dependability was adequate, with no delayed or lost data transfers observed during the data collection procedure.

RECOMMENDATION
The following recommendations for further improvement and future work are based on the project's results and conclusions.
Environment testing: Conduct comprehensive environmental testing to evaluate the system's performance across various conditions, such as temperature fluctuations, humidity, potential interference sources, and changes in wind speed and direction. This will ensure the system's dependability and efficacy in various operational environments.

Data analysis and visualisation: Explore additional data analysis and visualisation approaches to elicit more insights from the obtained data. Statistical analysis, graphical representations, or machine learning methods may be used to find patterns or anomalies in the sensor data.

Integration with monitoring infrastructure: Consider integrating the pipeline condition monitoring system with existing pipeline monitoring infrastructure, such as supervisory control and data acquisition (SCADA) systems. This interface will allow for real-time monitoring, automated alarms, and smooth integration into the pipeline management framework.

Figure 2. (a) Liquid crystal display and (b) MQ-5 sensor

Figure 3. (a) Piezoelectric buzzer and (b) ESP32 microcontroller

Figure 5. (a) Real-time clock, (b) Rechargeable battery, (c) Rechargeable battery, (d) Printed circuit boards (PCB), (e) Voltage regulator (7805), (f) Rocker switch, (g) Resistor, and (h) Female pin header

This lithium-ion rechargeable battery stores electric charge by reversibly reducing lithium ions. The 18650 designation refers to the dimensions of the battery: 18 mm in diameter and 65 mm in length. These cells are widely employed in electronic equipment because of their low self-discharge, high energy density, and endurance. The battery capacity is 3800 mAh (milliamp-hours) at a nominal voltage of 3.7 V, as depicted in Figure 5c. The 7805 is a popular integrated circuit (IC) that offers a constant 5 V DC supply over a specified input voltage range; in the designation, "78" denotes positive voltage and "05" denotes a 5 V output. This component steps down the combined output voltage of the three 3.7 V Li-ion batteries for both the transmitter and receiver stations. Input voltage: 7 V ~ 35 V, current rating: 1 A, output voltage: 4.8 V ~ 5.2 V, as shown in Figure 5d.

Figure 6. Circuit diagram for the transmitter station
Figure 7. Hardware structure for the transmitter station

2.2.2 Circuit design and construction of the receiver station
The receiver station collects sensor data from each transmitter station and alerts a local operator of a potential leak using a buzzer and an LED indicator representing each station. An LCD panel is also employed to display the MQ-5 sensor readings for each transmitter. A real-time clock records the time a leak occurs, while a tabulated report is stored on an SD card for trend analysis and reporting. An ESP32 DevKit V1 microcontroller takes inputs from the transmitters through the ESP-NOW communication protocol and provides output for the operator. The microcontroller also sends over-the-internet updates to an IoT platform, enabling remote monitoring. The procedures involved in the construction of the receiver station include the following:
i. Prototyped the circuit on a breadboard using online prototyping software.
• In the `loop` function, the firmware retrieves the current time from the RTC module.
• It accesses the readings from each board (stored in `boardsStruct`) and displays them on the LCD. Based on these readings, it controls the LEDs and buzzer to indicate abnormalities.

Figure 11. Response time chart and number of experimental runs
Figure 12.
Table 1. Materials used

The callback function `OnDataSent` is defined to handle the event when data is sent; it prints the status of the packet delivery.
• In the `setup()` function, the serial monitor is initialized with a baud rate of 115200.
• The device is set to Wi-Fi station mode using `WiFi.mode(WIFI_STA)`.
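A receiver-side structure like `boardsStruct` implies a fixed payload format shared by transmitter and receiver. The following minimal Python sketch shows how such a payload might be packed on the sender and unpacked into per-board readings; the field layout, sizes, and names are illustrative assumptions, not the project's actual firmware.

```python
import struct

# Hypothetical ESP-NOW payload: one unsigned byte for the transmitter
# (board) id and an unsigned 16-bit MQ-5 reading, little-endian as on
# the ESP32.
PAYLOAD_FMT = "<BH"

def pack_reading(board_id, gas_adc):
    """Build the byte string a transmitter would send over ESP-NOW."""
    return struct.pack(PAYLOAD_FMT, board_id, gas_adc)

def unpack_into(boards, payload):
    """Receiver side: store the latest reading keyed by board id."""
    board_id, gas_adc = struct.unpack(PAYLOAD_FMT, payload)
    boards[board_id] = gas_adc
    return boards

boards = {}
unpack_into(boards, pack_reading(1, 1830))
unpack_into(boards, pack_reading(2, 2750))
print(boards)  # {1: 1830, 2: 2750}
```

Keeping the payload small matters here, since ESP-NOW frames carry only a limited number of payload bytes per message.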
• The variable `xSensorPin` is assigned the value 36, representing the ADC0 pin (GPIO36) to which the MQ-5 sensor is connected: `int xSensorPin = 36;`. The sketch also initialises the SD card for data storage and checks that it is mounted correctly.
• Furthermore, it sets up the initial display on the LCD and opens a file on the SD card if it exists, or creates a new file if it doesn't.

Alert System and Local Monitoring:

Table 2. T-test data for station 2 at 1 m vs 2 m
Table 3. T-test data for station 2 at 1 m vs 3 m
Table 4. T-test data for station 2 at 2 m vs 3 m
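The raw value read from `xSensorPin` is a 12-bit ADC count (0–4095 on the ESP32). The sketch below converts that count to a voltage, assuming a simplified 3.3 V full scale; real ESP32 ADCs require attenuation settings and calibration, and the function name is hypothetical.

```python
ADC_MAX = 4095       # ESP32 ADCs are 12-bit, so counts run 0..4095
V_FULL_SCALE = 3.3   # simplified full-scale voltage; real parts need calibration

def adc_to_volts(raw):
    """Convert a raw 12-bit reading from the MQ-5 pin (GPIO36) to volts."""
    if not 0 <= raw <= ADC_MAX:
        raise ValueError("reading outside 12-bit range")
    return raw * V_FULL_SCALE / ADC_MAX

print(round(adc_to_volts(2048), 2))  # 1.65
```

A voltage like this is only an intermediate step; mapping it to a gas concentration would additionally require the MQ-5 load-resistor value and the sensor's calibration curve.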
Stages of lipoedema: experiences of physical and mental health and health care Purpose Lipoedema is a progressive adipose (fat) disorder, and little is known about its psychological effect. This study aimed to determine the experiences of physical and mental health and health care across stages of lipoedema. Methods Cross-sectional, secondary data from an anonymous survey (conducted 2014–2015) in Dutch and English in those with self-reported lipoedema were used (N = 1,362, Mdn age = 41–50 years old, 80.2% diagnosed). χ2 analyses of categorical data assessed lipoedema stage groups 'Stage 1–2' (N = 423), 'Stage 3–4' (N = 474) and 'Stage Unknown' (N = 406) on experiences of health (physical and psychological) and health care. Results Compared to 'Stage 1–2', 'Stage 3–4' reported more loss of mobility (p < .001), pain (p < .001), fatigue (p = .002) and problems at work (p < .001), and were seeking treatment to improve physical functioning (p < .001) more frequently. 'Stage 3–4' were more likely to report their GP did not have knowledge of lipoedema, did not take them seriously, gave them diet and lifestyle advice, dismissed lipoedema, and treated them 'badly' due to overweight/lipoedema compared to 'Stage 1–2' (p < .001). 'Stage 3–4' were more likely to report depression (p < .001), emotional lability (p = .033) and eating disorders (p = .018), and to feel lonelier, more fearful, and stay at home more (p < .001), and were less likely to have visited a psychologist (p < .001) compared to 'Stage 1–2'. Conclusions A divergent pattern of physical and psychological experiences between lipoedema stages reflects physical symptom differences as well as differences in psychological symptoms and health care experiences. These findings increase the understanding of lipoedema symptoms and inform psychological supports for women with lipoedema in navigating chronic health care management.
Lipoedema has a considerable negative impact on everyday functioning in terms of appearance and mobility. Often painful for individuals, the progressive abnormal distribution of fat leads to deterioration of joints and reduces mobility [3], impacting the ability to engage in physical activity and work [6,11,12]. Importantly, the physical appearance of lipoedema is distressing and reduces emotional and social functioning [8]. Indeed, mental health is more impaired than physical health on measures of quality of life in those with lipoedema [13]. Consequently, mental health conditions such as eating disorders, attempted suicide, depression, stress, fatigue, and low self-esteem are highly prevalent in those with lipoedema [4,6,8,12,[14][15][16][17][18]. The complex aetiology of lipoedema has contributed to the limited research differentiating symptoms across the progression of the condition [4]. Consequently, accounts of the experiences of lipoedema and associated symptoms based on reported stages of the condition are minimal. Similarly, the experience of engagement with health care within a complex diagnostic process has yet to be explored. By exploring relevant physical, psychological and health care experiences of those with lipoedema, this study seeks to identify the clinical physical and psychological symptoms relevant to stages of lipoedema that are needed to understand, treat and support women with lipoedema. This study aimed to determine the experiences of physical and mental health and health care across stages of lipoedema. Design, setting, and sample size This cross-sectional study is a secondary analysis of international data gathered (May 2014 to January 2015). Data were collected through purposive sampling by inviting individuals who self-identified as having lipoedema to complete the survey. Participants were recruited through online and social media via one of the largest support networks for lipoedema, based in the Netherlands.
The survey was hosted in English and Dutch, and survey responses were collected anonymously at the time. The dataset was provided as a Microsoft Excel sheet and contained 1417 entries. After inspection, 11 duplicate entries were removed, leaving N = 1,406 entries in the dataset, of which 44 entries showed missing data (N = 6 completed no items and N = 38 did not answer items relevant to the current study). Analyses reported here reflect N = 1,362 (N = 1,298 complete and N = 64 partially complete data entries). Materials The current study uses 51 categorical items from the dataset: demographics and comorbidities (four items: country, age, lymphoedema, and obesity), lipoedema characteristics (five items: whether participants were diagnosed, the time it took to become diagnosed, loss of mobility, pain, and fatigue), lipoedema management (seven items: sports (exercise), diet, healthy eating, stress reduction, psychological help, therapy, and liposuction) and motives for treatment (liposuction) (six items: to be thinner, more mobile, walk better, buy clothes more easily, less pain), work (four items: have a job, difficult to find a job due to lipoedema, lost a job due to lipoedema, and problems at work due to lipoedema), experiences with General Practitioners (GPs) (ten items: GP has knowledge about lipoedema, gives information about lipoedema, takes you seriously, is willing to help, is willing to learn, gives mental support, gives diet advice, gives lifestyle advice, said lipoedema is 'bullshit', and treated badly by GP/specialist because of lipoedema or overweight) and perceived impact on psychological characteristics and behaviours (13 items: depression, emotional lability, eating disorder, more lonely, easily depressed, quickly angry, sensitive, cry more, more fearful, inferiority complex, stay at home more, always think about lipoedema, satisfied with life), and psychological support seeking (two items: visited a psychologist about lipoedema and did it help).
Ethics: Permission to analyse and share data was obtained. Ethics approval to analyse and report the secondary dataset was provided by the Central Queensland University Human Research Ethics Committee (reference: HREC 22,882). Statistical analysis: Data were analysed with IBM SPSS Statistics for Windows, Version 27.0. Descriptive and non-parametric inferential statistics were used to describe the sample as a whole and to understand whether results differed significantly by stage of lipoedema. Descriptive statistics on survey items are reported for the overall sample and for each of the three groups. Kruskal-Wallis tests were used to assess group differences in age and time to diagnosis, followed by Mann-Whitney U tests to identify differences specifically between the 'Stage 1-2' and 'Stage 3-4' groups. χ2 analysis assessed stage group differences on the remaining items. Table 1 reports participants' physical characteristics and diagnoses. The median age for the sample as a whole and for each group was 41-50 years old, with a Kruskal-Wallis test identifying a significant difference between age category groups. Follow-up Mann-Whitney U tests showed that the Stage 1-2 group was significantly younger (Mdn rank = 404.8) compared to the Stage 3-4 group (Mdn rank = 485.09), z = −4.77, p < 0.001. Obesity was reported by 53% of participants and lymphoedema by 41%. All three groups differed from one another, with obesity and lymphoedema most often reported by the Stage 3-4 group and least often by the Stage 1-2 group. Participant characteristics: As shown in Table 1, those within the Stage 1-2 and Stage 3-4 groups were significantly more likely to report receiving a formal diagnosis (86% and 89%, respectively) compared to the Stage Unknown group, in which 64% had received a formal diagnosis but did not know their stage.
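The χ2 analyses described above can be reproduced from first principles. The sketch below computes the Pearson χ2 statistic for a contingency table; the counts are invented for illustration and are not the study's data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]          # row marginal totals
    cols = [sum(c) for c in zip(*table)]    # column marginal totals
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical 2x2 table: rows = stage groups, columns = symptom yes/no
print(round(chi_square([[30, 70], [50, 50]]), 3))  # 8.333
```

For a 2×2 table the statistic is compared against a χ2 distribution with one degree of freedom; in practice SPSS (as used in the study) or a library routine reports the p-value directly.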
Median time to diagnosis was 20-25 years for all participants, with follow-up Mann-Whitney U tests revealing that those in Stage 1-2 took less time to diagnose (Mdn = 11-14 years) compared to those in Stage 3-4 (Mdn = 20-25 years). Physical characteristics and impact: Participants often reported pain (85%) and fatigue (82%), and 33% reported a loss of mobility (Table 1). Post-hoc comparisons showed that a significantly higher percentage of those in the Stage 3-4 group reported fatigue compared to the Stage 1-2 group alone, and pain (pressure) compared to the Stage Unknown group alone. All three groups differed significantly on loss of mobility, which was most often reported by the Stage 3-4 group (52%) and least reported by the Stage 1-2 group (18%). Less than two thirds of participants had a job, with the Stage 1-2 group being the most likely to have a job, followed by the Stage Unknown group, and then the Stage 3-4 group, with all groups differing significantly from each other (Table 1). The Stage 3-4 group was also significantly more likely to report that lipoedema caused problems at work, difficulty finding a job and job loss compared to both other groups. Treatment methods and motives: The most common method reported for managing lipoedema symptoms was healthy eating, followed by sports (exercise), dietary changes and therapies (Table 2). The least used were psychological help and liposuction. The Stage 1-2 group was significantly more likely to have had liposuction and to use sports, and less likely to use diet and psychological help, compared to both the Stage 3-4 and Stage Unknown groups. Those of unknown stage were significantly less likely to use therapies and healthy eating compared to both other groups. Over 80% of participants wanted to have liposuction or were considering it (Table 2).
The Stage 3-4 group was less likely to report they wanted liposuction to be thinner, and more likely to report they wanted liposuction to increase mobility and to walk better compared to both other groups, and to decrease pain compared to the Stage Unknown group. Table 3 shows significant differences between groups on all measures of doctor-patient experiences. Over half reported a perception and experience of GPs as being unaware of lipoedema, with the Stage 3-4 group significantly more likely to report that their GP did not have knowledge about lipoedema than both other groups. Additionally, nearly 75% reported their GP did not provide information about the condition, with the 'Stage 3-4' group significantly more likely to report not being provided with information than the 'Stage Unknown' group. Overall, 60% reported their GP took them seriously at least a little bit. The Stage 3-4 group was more likely to report that their GP did not take them seriously compared to the Stage 1-2 group, and the Stage Unknown group was less likely to report that their GP did take them seriously compared to both other groups. Only 58% of participants perceived that their GPs were willing to help, and 46% perceived their GPs were willing to learn, at least a little bit, with the Stage Unknown group significantly less likely to report GPs as willing to help and willing to learn compared to both other groups. Approximately 34% perceived their GP provided at least a little mental support. The Stage Unknown group was significantly less likely to be provided with mental support compared to both other groups. Psychological experiences: Overall, approximately 40% reported depression, 28% reported emotional lability and 16% reported eating disorders (Table 4). The Stage 3-4 group was significantly more likely to report depression and eating disorders compared to both other groups, and emotional lability compared to the Stage Unknown group alone.
Individuals self-reported that their experience of lipoedema 'changed their personality' in terms of 'sensitivity' (75.8%), 'always thinking about lipoedema' (73.4%), 'inferiority complex' (72.8%), 'staying at home more' (67.7%), 'easily depressed' (65.4%), 'more lonely' (59.8%), 'cry more' (55.0%), 'quickly angry' (49.0%), and being 'more fearful' (47.4%) (Table 4). The 'Stage 3-4' group was significantly more likely to report that lipoedema changed their personality in terms of being more lonely, staying at home more, and being more fearful compared to both other groups. Further, those in Stage 3-4 were more likely to report being more easily depressed, always thinking about lipoedema, crying more and being quick to anger compared to the Stage Unknown group alone. Just over a quarter of participants reported they were not satisfied with life, and this was significantly higher in Stage 3-4 compared to both other groups (Table 4). Few participants reported having seen a psychologist (22%), with Stage 3-4 significantly more likely to report having seen a psychologist compared to both other groups. Amongst those who had visited a psychologist (N = 287), 41.1% reported that it was helpful. Discussion: The current study provides unique insight into lipoedema and the negative impact experienced across physical and psychological functioning and health care. Those with lipoedema experience considerable difficulty in gaining timely and appropriate health care, increased pain and fatigue, and reduced mobility and ability to work, which in turn increases negative psychological, emotional, and social experiences. Importantly, a divergent pattern of experiences between those of higher and lower stages of lipoedema was identified.
Those with Stage 3-4 lipoedema were more likely than those in Stage 1-2 to report physical difficulties and difficulty with work, which in turn reduced their use of exercise and increased their motivation for medical treatment to reduce the physical burden of lipoedema. Further, Stage 3-4 were more likely to report difficulties in health care (not being taken seriously and exposure to weight stigma) and to experience psychological and social dysfunction (depression, eating disorders, fearfulness, staying at home and loneliness) compared to Stage 1-2. Despite these difficulties, few accessed psychological support. This differential profile of experience between lipoedema stage groups has not previously been fully understood. Physical characteristics and impact: The results of this study profile the differential effects of lipoedema stages on physical functioning. Those with Stage 3-4 lipoedema were more likely to be older and have comorbid obesity and lymphoedema, likely related to the progression of lipoedema over time with age, the associated increase in Body Mass Index (BMI), and the increased burden on the lymphatic system [3]. As in previous research, pain and fatigue were highly prevalent across stages [4,6,12,14]. The current study showed that those with Stage 3-4 lipoedema disproportionately experienced pain, fatigue, and mobility issues. This may be expected, as mobility issues can result from damaged hip and knee joints and the development of orthopaedic disorders and gait alterations with lipoedema progression [3]. All stage groups reported a negative impact on work, mirroring previous research showing that 51-73% are impacted in their career and restricted in career choices [6,12]. These findings extend this research by showing that those in Stage 3-4 were significantly more likely to report experiencing difficulties at work and in finding and keeping a job due to lipoedema compared to Stage 1-2.
Qualitative research with Stage 3-4 women with lipoedema found mobility issues to be a primary concern, impacting everyday activities and work [11]. Thus, the higher prevalence of impact on work functioning within Stage 3-4 in the current study may reflect the greater prevalence of mobility issues, pain and fatigue found in this group. Those within Stage 3-4, therefore, may need additional physical supports to preserve mobility. Lipoedema management and motives: Results indicated that Stage 3-4 are significantly less likely to exercise to manage lipoedema compared to Stage 1-2, perhaps due to the higher prevalence of pain and mobility issues in this group. For example, UK research found that pain, health issues and lack of mobility drive reduced exercise in lipoedema [6]. This is concerning, as exercise is an important component of conservative treatment to move lymphatic fluid, manage weight, reduce inflammation, and improve overall physical and mental health [2,5,19]. Increasing participation in exercise is, therefore, important, particularly for those in Stage 3-4. Increasing walking capability and mobility and reducing pain are important motivators behind wanting liposuction across stages, mirroring UK findings [12]. This is likely because liposuction treatments remove abnormal lipoedema tissues and improve pain, mobility, lymphatic functioning, social functioning, quality of life, career prospects and ability to exercise, and reduce the need for care [2,12,13,20,21,22]. This study further demonstrates that Stage 3-4 are more likely to report being motivated to receive liposuction to walk better and improve mobility, and less likely to want liposuction to be 'thinner', compared to Stage 1-2. This is likely related to the greater impact on mobility and reduced working capacity found within Stage 3-4.
Despite this, those in Stage 3-4 were less likely to receive liposuction treatment compared to Stage 1-2. Navigating the health care system to receive an appropriate diagnosis and treatment, however, can be difficult. Experiences in health care: A key finding was the difficulty patients experienced with GPs in lipoedema-related health care. Results showed limited awareness of lipoedema by GPs alongside significant delays in receiving a diagnosis (20-25 years). Similarly, UK research in 2012 showed only 5% of those with lipoedema reported their GPs provided a diagnosis [6], and survey research on a Dutch lipoedema website found an average of 18 years and 2.5 doctors to reach a diagnosis after onset [8]. More recently, Fetzer and Warrilow [12] showed a median of 26-40 years to diagnosis in the UK, suggesting delays in diagnosis remain, with the current study showing that difficulties in diagnosis occur internationally. The lack of awareness of lipoedema by GPs may explain why some reported resistance by GPs to treat lipoedema. For example, many participants reported that their GPs were unwilling to learn about lipoedema, unwilling to help and did not take them seriously, and 30% reported their GP dismissed lipoedema as a valid health condition. This mirrors UK research showing that many of those with lipoedema report that their doctors were unhelpful and dismissive in response to complaints of lipoedema symptoms and often misattributed them to obesity [6]. The current study showed that Stage 3-4 are more likely to report not being taken seriously by their GP than Stage 1-2, perhaps due to misattribution of lipoedema symptoms as being related to obesity. Many participants reported being treated badly by their doctor/s due to their weight/lipoedema, consistent with research showing that lipoedema patients report having been blatantly 'weight-shamed' by doctors and told to reduce food intake and increase exercise [6,7].
Our findings provide greater nuance, with results indicating that those in Stage 3-4 are more likely to experience mistreatment due to their weight/lipoedema compared to Stage 1-2, perhaps related to the increased prevalence of comorbid obesity in this group, misdiagnosis or perception of lipoedema as obesity, and a general lack of awareness of lipoedema as a valid health condition. Health care providers are well-known sources of weight stigma. For example, a recent meta-analysis of 40 studies showed that health care providers are a source of both implicit and explicit weight biases [23]. These weight biases and stigmatisation can adversely affect the quality of care provided, delay appropriate diagnosis and care due to misattribution of symptoms to obesity, and lead to reduced health behaviours, avoidance of health care and adverse psychological outcomes [24,25]. Further, a recent systematic review across 17 studies (N = 21,172) found that experienced/perceived weight stigma can become internalised, leading to negative outcomes such as body shame [26], which has been linked to increased self-criticism and depression [27]. Training GPs to identify lipoedema and targeting weight bias and stigmatisation are, therefore, important areas to address in lipoedema-related health care. Psychological experiences: There is significant psychological distress experienced by patients with lipoedema, which differs by stage. Approximately 40% reported depression and 16% reported eating disorders, reflective of research using validated scales showing a prevalence of depression within lipoedema of between 31 and 59% [8,16,17] and eating disorders of 18% [17]. The results indicated that those in Stage 3-4 are more likely to experience depression and eating disorders than those in Stage 1-2.
This adds to research showing that increased lipoedema symptom severity is associated with increasingly severe depression [16] and a higher prevalence of depression and eating disorders in those with a body mass index (BMI) of ≥ 40 compared to < 40 [17]. This study demonstrated symptoms of psychological distress, in that those with lipoedema often experience sensitivity, rumination (always thinking about lipoedema), inferiority, loneliness, isolation (staying at home more), crying, anger, and fearfulness, which participants attributed to lipoedema. These findings indicate that those with lipoedema care about and focus on themselves and their lipoedema, often feeling inferior compared to others. Feelings of inferiority may be linked to shame and fearfulness of the judgements of others, leading to social avoidance and feelings of loneliness. Interestingly, those in Stage 3-4 reported experiencing greater social impairment (fearfulness, loneliness and staying at home) compared to those in Stage 1-2, extending research reporting social impairment in lipoedema [6,8,13]. The higher prevalence of social impairment in Stage 3-4 may be linked to increased exposure to weight stigma. For example, qualitative research with 11 women with stage 3 lipoedema showed they were often exposed to weight stigma and discrimination that impacted their confidence and self-worth and led to impaired social and working lives and social avoidance [28]. The results are concerning, as the effects of social isolation and disconnection are well known to influence mental health. Together, then, this research demonstrates the importance of supporting not only physical, but also social and emotional functioning and support-seeking behaviours in lipoedema.
Psychological support seeking: Best practice guidelines for lipoedema internationally show psychological support is a key component of lipoedema health management [3,5,19], and emotional support, reassurance that lipoedema is not the fault of the individual, and referrals to professional psychological support are important [9,10]. However, only 22% of participants sought mental health support from a psychologist about lipoedema. This could be related to a range of barriers, such as mental health stigma as well as negative experiences with weight stigma in health care, shame, and isolation. Engaging in treatment-seeking behaviour is, therefore, difficult, and together with the finding that few GPs provided mental support, this demonstrates that increased awareness and understanding of the psychological effects of lipoedema and its importance to chronic health care is vital. Limitations: The international, cross-sectional design provides insight into the self-reported experience of those with lipoedema. It is noted that participants were primarily from the United States and the Netherlands, which is to be expected given the strong global presence of the support group networks for lipoedema in these countries. Further, because of the diagnostic ambiguity surrounding lipoedema, it is likely that many low- and middle-income countries (LMICs) may not have health knowledge of lipoedema and, thus, may not have been able to participate in this research. The data from this study highlight that broad health knowledge about lipoedema is low, and as such lipoedema is likely to be under-recognised and under-represented in LMICs. Further, the variables measured were brief and relied on the self-reported presence of mobility issues and mental health concerns such as depression, rather than using validated scales or a clinical diagnostic framework.
Despite this, the current study identified key areas in which future research may follow up using more sensitive and validated measures to provide further clarity to the clinical interpretation of the findings. It is also noted that data collection took place in 2014-2015 and, as such, awareness and recognition by GPs of lipoedema as a valid health condition may have improved due to emerging standards of care and best practice guidelines [2,5]. As such, increasing awareness and knowledge of lipoedema and how it differs from obesity may increase experiences of appropriate health care support and treatments. Conclusions: This study provided a unique description of the lived experience of those with lipoedema, and a divergent pattern of experiences was identified between stages. Lipoedema involves physical, emotional, and social constraints that are misunderstood in health care settings. This study profiles symptoms of lipoedema to promote education about the condition and highlights the negative impact of weight bias and stigmatisation, to better understand the mental health supports and treatments needed to promote psychological and social functioning. As such, there is a need to develop targeted interventions aimed at increasing knowledge and awareness of lipoedema and the use of appropriate strategies to promote physical and psychological functioning. Funding: Open Access funding enabled and organized by CAUL and its Member Institutions. The authors declare no funding or financial support was used for the project and manuscript. Conflict of interest: The authors declare there are no perceived or actual conflicts of interest or relevant financial or non-financial interests to disclose. Ethical approval: Ethical approval was provided by the Central Queensland University Human Ethics Research Committee (01/03/2021/No: 22882) for the conduct of the study and publication. Informed consent: Informed consent was granted when participants took part in the survey.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
EFL Students’ Pronunciation Problems in Presenting Thesis Proposal at Tertiary Level of English Department The aim of the research is to investigate EFL students' pronunciation problems in presenting a thesis proposal at the tertiary level of an English Department. The qualitative study reported in this article focused on segmental feature problems. The instruments used to collect the data needed in this study were the researcher, recordings, and a dictionary. The data analysis covered consonant and vowel pronunciation problems and was based on the theories of phonetics proposed by George Yule and Jacobs, which embrace voicing, manner of articulation and place of articulation for English consonant production, and tongue part and position, sound length, and mouth forming for English vowel production. The results of the study show that the research subjects encountered a number of segmental pronunciation problems involving consonants and vowels, including pure vowels and diphthongs. Furthermore, this research revealed that the problems with consonant sounds were the substitution of the sounds [v], [ð], [θ], [t∫], [ʒ], [ʃ] and [z] and the deletion of the sounds [k], [ɡ], [t], and [s]. The problems with pure vowel sounds were the substitution of the sounds [ɪ], [iː], [ɛ], [ʊ], [ʌ], [ɜː], [ɒ], [ɔː] and [ə] and the insertion of the sound [ə] between two consonant sounds. The problems with diphthongs were the monophthongization of the sounds [aɪ], [aʊ], [eɪ], [ɪə] and [əʊ], and the replacement of the sounds [eɪ] and [ɪə] with other diphthongs. It is suggested that future researchers investigate pronunciation problems related to supra-segmental aspects, phonemic opposition, and factors driving pronunciation problems in the EFL classroom setting.
Article History Received: 29-05-2021 Introduction Pronunciation plays an important part in improving English speaking skills. This is in line with Ellis, as cited in Sahatsathatsana (2017), who points out that good pronunciation is a key to good speaking. In the communication process, good pronunciation can avoid verbal misunderstanding; speakers need to deliver their speech with proper English pronunciation so that the message is clearly delivered and understandable. In the same line, Gilakjani (2011) says that people with incorrect pronunciation will not be successful in communication. However, learning pronunciation is quite difficult for Indonesian students since they are used to speaking their mother tongue, and English differs greatly from Indonesian in its pronunciation system. The common problem in learning English pronunciation is caused by the differences between the sound systems of the two languages. There are some sounds in English which do not exist in Indonesian: vowels such as [ae], [iː], and [uː] and consonants such as [ð], [θ], and [ʒ]. It is therefore difficult for Indonesian students to pronounce them. Pronunciation, as defined by the Oxford English Dictionary, refers to someone's competence in producing the sounds used to deliver meaning. The instruments used to collect the data needed in this study were the researcher, recordings, and a dictionary. The researcher came to the presentations of the thesis proposals in which the subjects of the research were performing their English speaking practices. Then, the researcher analyzed the results qualitatively in three steps following Miles, Huberman and Saldana (2014), i.e., 1) Recording: the data-collecting process in this research was done by recording the speaking practices performed by the English Department students during the presentation of the thesis proposal. 
The recordings were then listened to in order to catch and write down the transcriptions, 2) Dictionary: the Oxford Advanced Learner's Dictionary was used as a determinant in the analysis since it was the dictionary most commonly used among the English Department students. The data of this research were the speaking practices performed by the English Department students in the presentation of the thesis proposal. The researcher recorded students' thesis proposal presentations during thesis proposal seminars from March to April 2021. In total, ten students' thesis proposal presentations were taken as the data source. This research took and recorded the data directly during the presentations of the thesis proposals. Several steps were taken by the researcher to collect the data. The first step was coming directly to the class to observe and listen to the presentation of the thesis proposal. While observing and listening, the students' performances were recorded. The second step was listening to the recordings, writing down the transcriptions, and marking every single English sound that was mispronounced. The last step was noting and classifying the data based on their segmental phonological characteristics (Miles, Huberman and Saldana, 2014). The data analysis of the research covered two stages. The first stage was grouping the mispronounced English sounds. The whole collected data set, which had been written down, was first classified based on segmental phonological characteristics. Second, the data were allocated into two main classifications: consonant and vowel pronunciation problems. The vowel pronunciation problems are divided into three branches (pure vowels, diphthongs, and triphthongs), while the consonant pronunciation problems are not subdivided. The second stage was analyzing how the mispronounced sounds became pronunciation problems. 
It began with analyzing the segmental features of the mispronounced English phonemes which had been classified. To check the accuracy of the analysis, the Oxford Advanced Learner's Dictionary and the theories of English phonetics and phonology proposed by Yule (2010) and Roach (2009) were used as the determinants. The phonological environments of some mispronounced English sounds were explained where this was considered necessary. Findings and Discussion To answer the research question, the findings present the analysis of the research data by categorizing the data into two classes, consonants and vowels (including diphthongs), and explaining the segmental features that became the pronunciation problems encountered by the ten research subjects during the presentations of their thesis proposals. 1) The problem with the English consonants This research identified consonant sounds that were inaccurately pronounced by the subjects of the research in the presentations of their thesis proposals. The accuracy was determined with the phonetic transcription provided in the Latin alphabet standardized by the International Phonetic Association (2015) and the Oxford Advanced Learner's Dictionary. The inaccurate production of the consonants will be described through the phonetic theories of consonants proposed by Roach (2009) and Yule (2010), which focus on the voicing, the manner of articulation and the place of articulation. The English consonant sound [ð] is described as a voiced dental fricative sound, and its production should fulfil those three main features. The problems encountered by the research subjects regarding the sound [ð] arose in two positions: initial and final. In the case of the sound [ð] in the initial position, the research subjects articulated /ð/ with an improper manner and place of articulation while the voicing remained correct. 
One subject did not touch his tongue to the dental area, but to the nearest place of articulation, the alveolar ridge, and changed the manner of articulation to a stop or plosive. This eventually resulted in the production of the sound [d], which substituted for the sound [ð]. This kind of substitution happened, for example, in the words "the" /ðə/ and "then" /ðen/. The substitution of the initial [ð] with the sound [d] led the research subjects to pronounce /də/ and /den/. This initial substitution occurs under one phonological environment: the [ð] in the initial position is always followed by a vowel. For instance, in the subjects' pronunciation of the word "then" as /ðen/, it can be identified that the sound [d] is followed by the sound [e], which is a vowel. c) The sound [θ]. Table 3 (problems with the sound [θ]) shows the sound [θ] substituted with the sound [t] in the initial and final positions and the substitution of the sound [θ] with [s] in the initial position, with initial-position example words such as "thank", "three" and "third". The sound [θ] is described as a voiceless dental fricative sound, and those three phonetic aspects should be present when producing it. However, some of the subjects in some cases did not meet two of the three phonetic aspects when they articulated the sound [θ]. They changed the place of articulation of the sound [θ] from dental to alveolar. Regarding the manner of articulation, the sound [θ], which should actually be articulated with an air stream as it is a fricative, was articulated with the manner of a stop or plosive. The voicing of this sound was produced correctly. Those two changes in the phonetic aspects of the sound [θ] resulted in the production of the sound [t], which substituted for the sound [θ]. This kind of substitution occurred in two positions: initial and medial. 
In these cases, the subjects articulated the sound [θ] with the proper voicing and manner of articulation: the sound [θ] was voicelessly articulated with a fricative manner. However, the place of articulation of the sound [θ] was changed by the subjects from dental to alveolar. This eventually ended up with the production of the sound [s], which is phonetically described as a voiceless alveolar fricative sound. It means that the sound [θ] was substituted with the sound [s] by the research subjects. This kind of substitution happened only once, and it was in the initial position. It happened in the word "third" /θəː(r)d/: the initial [θ] sound was substituted with the sound [s], making it pronounced /səːrd/. The change of the initial sound [θ] to the sound [s] performed by the subject occurs in one phonological environment: the change of the sound [θ] to the sound [s] in the initial position is followed by a vowel. For instance, in the subject's pronunciation of the word "third" as /səːrd/, it can be identified that the sound [s] is followed by the long vowel [əː]. d) The sound [t∫] The following table shows the sound [t∫], which was replaced with the sound [c] by the subjects in the medial position. This problem did not happen in the initial and final positions. The sound [ʒ] is a consonant sound that should be articulated through three phonetic aspects: voiced, palatal and fricative. Those three aspects should be fulfilled all together in order to produce the proper sound [ʒ]; otherwise, the sound [ʒ] will be mistakenly altered into another sound. The problem encountered by the research subjects related to the sound [ʒ] was in the voicing aspect. They devoiced the sound [ʒ], which resulted in the production of a voiceless, palatal, fricative sound. This sound can be identified as the sound [ʃ], which substituted for the sound [ʒ]. This kind of substitution happened only in one position, the medial position. 
It happened in the words "conclusion" /kənˈkluːʒ(ə)n̩/ and "cohesion" /kəʊˈhiːʒ(ə)n̩/. The substitution of the medial [ʒ] sound with the sound [ʃ] made those words pronounced /kɒnˈkluːʃən̩/ and /kəʊˈhɛʃən̩/. It can be seen that some vowel changes also happened; they will be discussed in the vowel section. 2) The problem with the English vowels and diphthongs The findings of the study show that there are some vowel sounds which were inaccurately pronounced by the EFL learners during the presentations of their thesis proposals. This is in line with Purba et al. (2019), who said that these problems occurred because the students had not yet covered the material on phonetic symbols; they were still confused about how to pronounce English short vowel sounds in English words correctly. The accuracy was determined with the phonetic transcription provided in the Latin alphabet standardized by the International Phonetic Association (2015) and the Oxford Advanced Learner's Dictionary. The inaccurate production of the vowels will be described through the phonetic theories of vowels proposed by Yule (2010), which focus on the part and the position of the tongue. In addition, it is important to note that the phonological environment of the vowel substitutions will not be described for each vowel, as there is always only one possible phonological environment: an initial vowel must always be followed by a consonant, a medial vowel must always be preceded and followed by consonants, and a final vowel must always be preceded by a consonant. a) The vowel [ɪ] The sound [ɪ] is produced in the close front area. This means that when the sound [ɪ] is produced, the front part of the tongue is raised towards the roof of the mouth with the lips slightly spread. However, the front part of the tongue is not raised as high as possible to the mouth roof; it is slightly pulled down near the quality of the close-mid vowels. When it came to the sound [ɪ], some of the subjects found it problematic. 
They did not use and position their tongue properly for producing the sound [ɪ]. As a result, the sound [ɪ] was changed into other sounds. The following table shows the changes of the sound [ɪ] performed by the research subjects. This substitution occurred, for example, in the words "this" /ðɪs/ and "examine" /ɪɡˈzaemɪn/: the medial [ɪ] sound was substituted by the research subjects with the sound [i], which made them pronounced /ðis/ and /ɪɡˈzaemin/. Second, the sound [ɪ] was produced with the front of the tongue, but the tongue was not raised sufficiently, so the subjects failed to produce it as a close vowel. The tongue was raised only to the degree of a mid-open vowel, which brought about a dropping jaw. Then the vowel sound [ɛ] was eventually produced instead of [ɪ]. The substitution of the sound [ɪ] with the sound [ɛ] happened in two positions: initial and medial. The initial substitution can be seen in the word "examine" /ɪɡˈzaemɪn/, which was pronounced /ɛɡˈzaemɪn/; this means that the initial [ɪ] sound was substituted with the sound [ɛ]. The medial substitution can be seen in the words "perfect" /ˈpəː(r)fɪkt/ and "preferred" /prɪˈfəː(r)d/, which were pronounced /ˈpəːrfɛkt/ and /prɛˈfəːrd/; this means that the medial [ɪ] sound was substituted with the sound [ɛ]. b) The sound [ʌ] The sound [ʌ] is an open central vowel. Producing the sound [ʌ] involves the central part of the tongue. However, the sound /ʌ/ is not a fully open vowel: the central part of the tongue is raised slightly, near the area of the open-mid vowel quality. The following table shows the sound /ʌ/ that was incorrectly produced by the subjects during the research. This error occurred in the medial position, in the words "multiple" /ˈmʌltɪpl̩/, "public" /ˈpʌblɪk/ and "construct" /kənˈstrʌkt/. The substitution of the sound [ʌ] with the sound [a] made them pronounced /ˈmaltɪpl̩/, /ˈpablɪk/ and /kənˈstrakt/. c) The sound [ɜː] The sound [ɜː] is a long mid-central vowel. 
This vowel sound is thus produced a little longer than the short vowels. The production of the sound [ɜː] makes use of the central part of the tongue, which is raised halfway between the open and close areas of vowel quality; more specifically, it is pulled down slightly, near the area of the open-mid vowel quality. The shape of the lips when producing the sound [ɜː] is neutral. The following table shows the altered sound [ɜː] produced by the subjects during the research. This is in line with Ramasari (2017), who found that participants made an error in pronouncing the word "data" /deɪtə/ as /data/. This happened because the students used the Indonesian language system in pronouncing English words. The diphthong substitutions made by the research subjects in this research can be exemplified as follows: when they had to pronounce "classified" /ˈklasɪfaɪd/ and "main" /meɪn/, which contain the diphthong sounds [aɪ] and [eɪ], the subjects replaced these with the sounds [ɛ] and [aɪ]; that is, they monophthongized the diphthong sound [aɪ] and substituted the sound [eɪ]. As a result, /ˈklasɪfaɪd/ and /meɪn/ were pronounced /ˈklasɪfɛd/ and /maɪn/. From this example, it can be identified that the research subjects substituted and monophthongized some English diphthong sounds. Other than sound substitution, the research subjects also deleted the consonant sounds [k], [ɡ], [t] and [s] and inserted schwa. However, this pronunciation problem needs to be revealed through the discipline of phonological processes rather than phonetic aspects, and it is beyond the limits of this study (Cahya, 2017). All in all, the segmental pronunciation problems encountered by the research subjects can be summed up as sound substitution. In general, the target sounds are substituted with similar sounds or with the sounds usually represented by the orthographic writing. 
Conclusion The conclusions are stated based on the analysis of the pronunciation problems, in terms of segmental phonetic features, committed by EFL students of the English Department in the presentations of their thesis proposals: 1) During the presentations of their thesis proposals, the research subjects encountered pronunciation problems with a number of consonants, vowels and diphthongs. 2) There were two consonant pronunciation problems encountered by the research subjects. First, they substituted some consonant sounds with other consonants sounding similar to the target sounds. Second, they deleted some consonant sounds when they occurred in consonant clusters. Suggestion Based on the results of this research, the researcher suggests that the students of the English Department seriously learn and concern themselves with English pronunciation, in particular distinguishing vowel and consonant sounds, so that they can support their speaking skills with appropriate English pronunciation. Even though the goal of speaking is to deliver meaning without concern for pronunciation, they are still expected to have good pronunciation since they are students of the English Department. Furthermore, the researcher also suggests that lecturers give feedback on and correct the students' mispronunciations during learning activities so that students become accustomed to correct pronunciation. For further research, the researcher suggests that future researchers investigate pronunciation problems related to suprasegmental phonetic features, which include stress, intonation and rhythm, since pronunciation involves both segmental and suprasegmental features. Furthermore, the researcher suggests that future researchers investigate the quality change of a sound driven by its position in the phonological environment; this field of study is called phonemic opposition. 
Finally, as English pronunciation is very problematic even for those who have been studying English for years, the researcher suggests that future researchers find out the factors causing pronunciation problems.
v3-fos-license
2014-10-01T00:00:00.000Z
2012-02-28T00:00:00.000
10209513
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.mdpi.com/1420-3049/17/3/2428/pdf", "pdf_hash": "5c5d890feeb868349600280b5d5fdb879766dde6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2470", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "5c5d890feeb868349600280b5d5fdb879766dde6", "year": 2012 }
pes2o/s2orc
Lycopene/Arabinoxylan Gels: Rheological and Controlled Release Characteristics Arabinoxylan gels exhibiting different rheological and lycopene transport properties were obtained by modifying the polysaccharide concentration from 3 to 4% (w/v). The apparent lycopene diffusion coefficient decreased from 2.7 × 10−7 to 2.4 × 10−7 cm²/s as the arabinoxylan concentration in the gel changed from 3 to 4% (w/v). A low amount of lycopene is released by diffusion from arabinoxylan gels. These results indicate that arabinoxylan gels could be carriers for lycopene delivery to specific sites after network degradation. The possibility of modulating lycopene release from arabinoxylan gels makes these biomaterials potential candidates for the controlled delivery of biomolecules. Introduction Gels are polymeric three-dimensional networks, which swell on contact with water but do not dissolve [1]. The water absorption property enables gels for applications like food additives, enzyme immobilization and controlled release of active compounds. Gels have been used as controlled release matrices in the food, medicine, agronomy and cosmetic industries [2]. Although most studies concern gels made from synthetic polymers, gellable native or tailored polysaccharides, generally non-toxic and highly biocompatible, are receiving increasing attention [3]. Polysaccharide gels can be used as matrices to control the release of functional components. Some polysaccharide networks can protect entrapped molecules while passing through the stomach and small intestine, releasing them in the colon during gel degradation [1]. Studies on the utilization of polysaccharide gels as controlled release matrices usually involve chitosan, alginate, modified dextran and starch derivatives [2]. Other polysaccharides such as arabinoxylans have been studied to a minor extent. 
Arabinoxylans are non-starch polysaccharides from the cell walls of cereal endosperm constituted by a linear backbone of β-(1→4)-linked xylose units containing α-L-arabinofuranosyl substituents attached through O-2 and/or O-3 [4]. Arabinoxylans can present some arabinose residues ester-linked on (O)-5 to ferulic acid (FA, 3-methoxy-4-hydroxycinnamic acid); such arabinoxylans are called "ferulated". Ferulated AX can gel by covalent cross-linking involving FA oxidation by chemical or enzymatic (laccase and peroxidase/H2O2 system) free radical-generating agents [5]. Diferulic acids (di-FA) [6] and tri-ferulic acid (tri-FA) [7] have been identified as covalently crosslinked structures in laccase-gelled arabinoxylans. Both covalent bridges (di-FA, tri-FA) and physical interactions between AX chains have been reported to be involved in the arabinoxylan gelation process and the final gel properties [7]. AX gels present interesting properties like neutral taste and odour, high water absorption capacity (up to 100 g of water per gram of dry polymer) and absence of pH, electrolyte and temperature susceptibility [4], which confers on them potential applications as delivery matrices. Lycopene is an important biological compound found mainly in tomatoes. This molecule has received increasing attention because of its possible role in the prevention of chronic diseases such as atherosclerosis, skin cancer, prostate cancer and colon cancer [8]. However, lycopene can be susceptible to oxidation, especially when stored in the presence of oxygen [9]. Therefore, entrapment of lycopene in polymeric networks could be an alternative to reduce lycopene oxidation. To our knowledge, the lycopene entrapment and release capability of arabinoxylan gels has not been reported elsewhere. The objective of this research was to investigate the entrapment and release of lycopene from arabinoxylan gels exhibiting different rheological characteristics. 
Lycopene Analysis The lycopene content in the sample used in the present study was 438 µg/g of tomato dry basis, which is in the range reported for several tomato varieties and tomato products (from 6.6 to 490 µg/g) [10]. It is known that lycopene is the predominant pigmented compound in red tomatoes, but other carotenoids such as carotene and lutein constitute about 20% of the total carotenoids in fresh red tomato tissue [11]. In this regard, it has been suggested that, on the basis of the levels and the 560 nm extinction coefficients of these minor carotenoids, the contribution of these compounds will result in an overestimation of lycopene content by less than four percent [12]. Lycopene Entrapment Laccase-treated arabinoxylan/lycopene mixtures were monitored by small deformation dynamic rheology. For all arabinoxylan/lycopene treatments the storage (G') and loss (G") moduli rose to reach a pseudo plateau (Figure 1). At the end of gelation (4 h), G' and G" values were 9 and 0.3 Pa and 20 and 0.4 Pa for gels at 3 and 4% (w/v) in arabinoxylans, respectively. No gelation was detected in arabinoxylan and arabinoxylan/lycopene mixtures without laccase addition. In the present study, 12.5 and 16.0 mg of lycopene per g of arabinoxylans were entrapped for gels at 3 and 4% in arabinoxylans, respectively. A previous study reported 0.1 mg of lycopene entrapped per g of gelatin/polyglutamic acid as carrier [13], but in that study the rheological characteristics of the gels were not investigated. On the other hand, most non-modified polysaccharides form physical gels sensitive to temperature, ionic strength or pH changes, while arabinoxylan gels are covalent networks which are not affected by these factors [4]. In addition, it is possible that higher lycopene amounts could be entrapped in arabinoxylan gels, as in the present study the G' values of the networks were not affected, and differences in the gel mechanical properties are determinant in their practical use. 
The mechanical spectra of arabinoxylan and arabinoxylan-lycopene gels after 4 h of gelation (Figure 2) were typical of solid-like materials, with G' linear and independent of frequency and G" much smaller than G' and dependent on frequency [14]. This behavior is similar to that previously reported for arabinoxylan gels [4]. Controlled Release of Lycopene The kinetics of lycopene release from arabinoxylan gels are shown in Figure 3a. Linear relationships between cumulative release (Mt/Mo) and the square root of time were found for lycopene release from arabinoxylan gels, allowing the calculation of the apparent diffusion coefficients (Dm) of this compound from the gels (Figure 3b). The rate of lycopene release from arabinoxylan gels was dependent on the polysaccharide concentration. The apparent diffusion coefficient was 2.7 × 10−7 and 2.4 × 10−7 cm²/s for lycopene in arabinoxylan gels at 3 and 4% in polysaccharide, respectively (Table 1). A previous study [15] reported a higher apparent diffusion coefficient value for lycopene in a tomato juice/soy mixture, which could be attributed to faster lycopene mobility in a liquid phase in comparison to that registered in arabinoxylan gels. As presented in Table 1, the percentages of lycopene released by the end of the test (4 h) were 3.7 and 2.6% for gels at 3 and 4% in arabinoxylans, respectively. These results become important when it is desirable to protect the carried lycopene from the gastric environment for further release inside the intestinal lumen and to promote its uptake after gel degradation by colonic bacteria. It has been previously reported [16] that maize arabinoxylan gels are degraded to oligosaccharides by bacteria from the colon. In the present study, gel incubation was fixed at 4 h as the average transit time from oral ingestion to the colonic region. 
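The slope-based estimation of Dm from the linear Mt/Mo versus √t relationship can be illustrated numerically. The sketch below is not the study's exact procedure: it assumes the common early-time plane-sheet approximation Mt/Mo ≈ 2√(Dm·t/(πL²)) for one-sided release (the study's exact boundary conditions are not given here), and the release data are synthetic, generated with the reported Dm of 2.4 × 10⁻⁷ cm²/s and the 0.4 cm sample thickness.

```python
import numpy as np

def apparent_diffusion_coefficient(t_s, mt_over_mo, thickness_cm):
    """Estimate Dm (cm^2/s) from the slope of Mt/Mo versus sqrt(t).

    Assumes Mt/Mo = 2*sqrt(Dm*t/(pi*L^2)), so a slope k of the
    Mt/Mo-vs-sqrt(t) line gives Dm = pi * L^2 * k^2 / 4.
    """
    k, _ = np.polyfit(np.sqrt(t_s), mt_over_mo, 1)  # slope, intercept
    return np.pi * thickness_cm**2 * k**2 / 4.0

# Synthetic release fractions sampled every 30 min from 0.5 to 4 h
L = 0.4                          # sample thickness (cm), as in the study
Dm_true = 2.4e-7                 # cm^2/s, the value reported for the 4% gel
t = np.arange(1800, 14401, 1800) # time points in seconds
frac = 2 * np.sqrt(Dm_true * t / (np.pi * L**2))

Dm_est = apparent_diffusion_coefficient(t, frac, L)
print(f"Dm ~ {Dm_est:.2e} cm^2/s")  # recovers the input value
```

Because the synthetic data follow the assumed model exactly, the fit recovers the input coefficient; with experimental data the fit would be restricted to the linear early-time portion of the curve.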
The total amount of lycopene released during 4 h was minimal, and thus possibly more than 96% of this compound could reach the intestinal region where it is expected to act. Materials Maize bran arabinoxylans were obtained and characterized as previously described [17]. They presented an A/X ratio of 0.8 and a ferulic acid content of 0.34 µg/mg. The relative percentages of each di-FA structure were 16, 21 and 63% for the 8-5', 8-O-4' and 5-5' structures, respectively. All chemical products were purchased from Sigma Chemical Co. (St. Louis, MO, USA). Lycopene Extraction and Quantification Tomatoes (Lycopersicon esculentum Mill.) cvar. Sedona were kindly provided by a commercial greenhouse in Northern Mexico. Fresh tomato samples were homogenized and then diluted in deionized water to produce a uniform slurry. The extraction and quantification of lycopene were performed as reported before [10]. A UV-Visible spectrophotometer at 503 nm was used to estimate the lycopene content. Preparation of Arabinoxylan Gels and Arabinoxylan-Lycopene Gels Arabinoxylan solutions at 3 and 4% (w/v) and arabinoxylan-lycopene mixtures at 3 or 4% (w/v) in polysaccharide containing 12.5 and 16.0 mg of lycopene per g of arabinoxylans were prepared in 0.05 M citrate phosphate buffer at pH 5 containing sodium taurocholate and Triton X-100. Laccase (1.675 nkat/mg arabinoxylan) was used as the cross-linking agent. Gels were allowed to form for 4 h at 25 °C. Rheological Measurements The formation of the arabinoxylan gel was followed using a strain-controlled rheometer (AR-1500ex, TA Instruments, New Castle, DE, USA) in oscillatory mode as reported before [7]. Arabinoxylan gelation was studied for 4 h at 25 °C. Arabinoxylan solutions were mixed with laccase and immediately placed in the cone-and-plate geometry (5.0 cm in diameter, 0.04 rad cone angle) maintained at 25 °C. 
Exposed edges of the sample were covered with mineral oil to prevent evaporation during measurements. Arabinoxylan gelation kinetics were monitored at 25 °C for 4 h by following the storage (G') and loss (G") moduli. All measurements were carried out at 0.25 Hz and 5% strain. From strain sweep tests, arabinoxylan gels showed a linear behavior from 1.5 to 10% strain. The mechanical spectra of gels were obtained by frequency sweeps from 0.1 to 10 Hz at 5% strain and 25 °C. Lycopene Release Arabinoxylan-lycopene mixtures (2 mL) were poured into a 30 mL cylindrical plastic cell (diameter 30 mm) just after laccase addition. Arabinoxylan-lycopene gels were allowed to form during 4 h at 25 °C. Then, lycopene was released into a 0.02% (w/v) sodium azide solution (6 mL) containing sodium taurocholate and Triton X-100 placed on the gel surface. Gels were incubated at 25 °C with 90 rpm tangential rotation, and the liquid medium was renewed every 30 minutes from 0.5 to 4 h. At the end of the test, the gels were hydrolyzed as described before [18] in order to quantify un-released lycopene. Lycopene recovery (released lycopene + un-released lycopene) was close to 100%. The lycopene was quantified as described elsewhere [10]. Lycopene release was characterized by calculating an apparent diffusion coefficient (Dm). This Dm was estimated from the release kinetics curve, fitted by using an analytical solution of Fick's second law [19], which gives the solute concentration variation as a function of time and distance. In this solution, Mt is the accumulated mass of lycopene released at time t, Mo is the mass of lycopene in the gel at time zero, L is the sample thickness (0.4 cm) and Dm is the diffusion coefficient. By plotting the relative solute mass released (Mt/Mo) at time t versus the square root of time, a simplified determination of Dm can be made, assuming that Dm is constant and that the sample is a plate with a thickness L. 
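The simplified determination described above is commonly based on the early-time approximation of the plane-sheet solution of Fick's second law (Crank). The relation below is a sketch under the assumption of one-sided release from a slab of thickness L; the prefactor depends on the boundary conditions (2 for one-sided, 4 for two-sided release), so this should be read as an assumed form rather than the study's exact equation:

```latex
\frac{M_t}{M_o} \approx 2\sqrt{\frac{D_m\, t}{\pi L^2}},
\qquad
k = \frac{2}{L}\sqrt{\frac{D_m}{\pi}}
\;\Longrightarrow\;
D_m = \frac{\pi L^2 k^2}{4},
```

where k is the slope of the linear part of Mt/Mo plotted against the square root of time.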
In this study, the apparent Dm was calculated from the linear part of the Mt/Mo vs. time curves [18]. The percentage of lycopene released at the end of the test was also calculated. Statistical Analysis All measurements were made in triplicate, and the coefficients of variation were lower than 10%. Results are expressed as mean values. Conclusions The cross-linking method used allowed the formation of arabinoxylan gels in the presence of lycopene without modifying the rheological properties of the gel. The lycopene release rate and quantity are dependent on the polysaccharide concentration in the gel. A low amount of entrapped lycopene is released by diffusion, while most of this compound would be liberated only after gel degradation. These results indicate that arabinoxylan gels could be carriers for lycopene delivery to specific sites after network degradation. Additional studies will be required in order to understand the effect of different lycopene concentrations on the diffusion coefficient value.
v3-fos-license
2022-06-12T05:15:44.765Z
2022-03-01T00:00:00.000
249574003
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "833e62fae7dab75e5b7da7e6b3ab117c7517cf42", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2471", "s2fieldsofstudy": [ "Medicine" ], "sha1": "833e62fae7dab75e5b7da7e6b3ab117c7517cf42", "year": 2022 }
pes2o/s2orc
Nurses' Knowledge, Practice, and Associated Factors with Enteral Nutrition in Adult Intensive Care Units of Public Hospitals Background In critically ill patients, enteral nutrition is recommended as a route for nutrient delivery. Nurses' knowledge and practice of enteral nutrition influence patients' clinical outcomes. Therefore, this study sought to assess nurses' knowledge, practice, and associated factors regarding enteral nutrition in adult intensive care unit patients in public hospitals in Addis Ababa, Ethiopia. Methods A cross-sectional study design was used to collect data from 196 nurses working in public hospitals in Addis Ababa from April 11 to April 30, 2020. The data were entered into Epi Data version 3.1 and analyzed with SPSS version 21. The association between independent variables and the dependent variables was estimated using bivariate and multivariate logistic regression at a 95% confidence level. Results The levels of inadequate knowledge and poor practice of nurses relating to enteral nutrition were 67.7% and 53.8%, respectively. Bachelor's degree holders were less likely to be knowledgeable (AOR = 0.24, 95% CI: (0.61, 0.93)). Nurses' practice of enteral nutrition was significantly associated with nurses' age (AOR = 0.023, 95% CI: (0.001, 0.52)), nurses receiving training on enteral nutrition (AOR = 1.951, 95% CI: (0.06, 0.60)), and nurses from ICUs having a guideline and protocol on enteral feeding practice (AOR = 3.401, 95% CI: (1.186, 9.789)). Conclusions The study revealed that a substantial proportion of nurses had inadequate knowledge of enteral nutrition and poor enteral nutrition practice. INTRODUCTION In the intensive care unit (ICU), nutrition is one of the most important aspects of medical care for critically ill patients (1)(2)(3)(4). A high prevalence of morbidity and mortality is associated with malnutrition, which affects up to 40% of hospitalized patients (5)(6)(7). 
Enteral nutrition (EN) is a delivery system that supplies all the essential nutrients - including water and minerals - into the gastrointestinal tract. The Canadian Critical Care Practice Guidelines (CCPGs, 2013) and the American Society of Parenteral and Enteral Nutrition (ASPEN, 2009) recommended that EN is the preferred feeding method for critically ill patients due to its cost-effectiveness, prevention of intestinal mucosal atrophy, and maintenance of intestinal immunity through gut-associated lymphoid tissue (9,10). In patients with functional gastrointestinal tracts, EN is indicated when oral nutritional intake is inadequate to meet estimated nutritional needs (3,6). The use of EN should begin within the first 24 to 48 hours of admission for patients who receive ventilator support and have stable hemodynamic states with an adequate total caloric intake of 20 to 25 calories per kilogram of body weight for most adults in the ICU (8,11,12). It is crucial to consider the underlying medical condition, nutritional status, and available routes of nutrient delivery when determining the type and amount of nutritional support (13). The ICU nurses play an important role in maintaining the daily nutritional status of patients. To prevent complications related to enteral feeding and improve outcomes, effective nursing practices such as the use of prokinetic agents, decreasing feeding rate, measurement of gastric residual volume, and maintaining the correct positioning of patients are required (14). Although few data are available on nursing knowledge and practice regarding nutritional feeding in ICU, some studies have shown that nurses working in ICU have inadequate knowledge and poor practice that leads to the highest prevalence of malnutrition among hospitalized patients (15)(16)(17)(18). 
In some studies, factors such as lack of guidelines and protocols on EN, no established nutrition committee, a shortage of feeding tubes, patient characteristics such as refusal of tube feeding, differences in nurses' characteristics, and the environment where tube feeding practice is conducted may affect nurses' knowledge and their practice (19). The investigators are unaware of any study conducted on nurses' knowledge, practice, and factors associated with enteral nutrition in ICUs of public hospitals in Addis Ababa, Ethiopia. So, this study aimed at assessing nurses' knowledge, nurses' practice, and associated factors with enteral nutrition in ICUs of public hospitals in Addis Ababa, Ethiopia. METHODS AND MATERIALS Study area and period: The study was conducted from April 11 to April 30, 2020, in ten public hospitals in Addis Ababa, the capital city of Ethiopia. Study design: A cross-sectional quantitative study design was used to assess knowledge, practice, and factors associated with enteral nutrition among nurses working at ICUs of public hospitals in Addis Ababa, Ethiopia. Source population: All nurses who were working at ICUs of public hospitals in Addis Ababa were taken as the source population. Study population: Nurses who were on duty during the data collection time and who fulfilled the inclusion criteria were our study population. Inclusion criteria and exclusion criteria: Those nurses who were present during the study period and volunteered to participate in the study were included, whereas those who were not available (annual leave, maternity leave) during the study period were excluded from the study. Sample size and sampling procedure: Sample size (n) was determined based on a single proportion formula with the following assumption. Since there was no similar study done in Ethiopia, we took the prevalence of nurses' knowledge and nurses' practice on enteral nutrition as 50%. 
The level of confidence (α) was taken at 0.05 (Z(1−α/2) = 1.96); the margin of error was taken as 0.05. By consideration of the 10% non-response rate, the final sample size (nf) of this study was 196. Also, the number of study units to be sampled from each ICU was determined using the proportion-to-size allocation formula. Lists of nurses were taken from each unit of the hospitals, and simple random sampling was used to select respondents from each ICU of the public hospitals in Addis Ababa (Figure 1). Data collection techniques and instruments: A pre-tested, structured, self-administered questionnaire was used to collect data from study subjects. The questionnaire was developed after reviewing different kinds of literature (18,20,21), and some modification was done by different experts in the area from the nursing and nutrition departments of Addis Ababa University. The questionnaire was divided into four sections: demographics of respondents, nurses' knowledge, nurses' practice, and challenges toward delivering enteral nutrition. Demographic data included sex, age, level of education, and work experience of nurses. Nurses' knowledge about EN consisted of sixteen questions with three options (Yes, No, and I don't know). One point was given for each correct answer; and for all other responses, zero points were assigned. The total score for knowledge ranged from zero to sixteen, with high scores indicating adequate knowledge of EN. Nurses' practice about EN consisted of six questions with 4-point Likert scale questions ordered as never, sometimes, almost always, and always. One point was allocated to always and zero points for all other answers. The total score for practice ranged from zero to six, with high scores indicating good practice. Challenges of nurses to apply practice on EN consisted of five questions labeled as dichotomy items (Yes/No). 
A score of one was given to answers that reflected challenges of delivering EN, and a score of zero was given to answers that reflected no challenges of delivering EN. Data collectors (one nurse from each unit of ICU) were experienced nurses who were selected according to their previous experience of data collection. Data entry, processing and analysis: The data were checked for completeness and consistency by the principal investigator, then cleaned and coded for entry. The coded data were entered into Epi Data version 3.1 and exported to SPSS version 21.0 for analysis. Frequency and percentage were used to summarize the findings, while tables and graphs were used to present the data. Bivariate logistic regression was used to show the association of each independent variable with the dependent variables. First, the crude odds ratio (COR) of all independent variables on knowledge and practice of enteral feeding was calculated at a 95% confidence interval (CI), and all variables with a p-value of <0.25 were considered for multivariable logistic regression to control the effect of other confounders. Then, the significance level was set at P < 0.05. Figure 1: Schematic presentation of sampling procedure for the study participants, 2020. Ethics approval and consent to participate: An approved ethical clearance letter was obtained from Addis Ababa University, College of Health Sciences, Department of Emergency Medicine Ethical Review Committee. A support letter was written to the administration of the study hospitals to grant permission to conduct the study, and permission was obtained from each ICU directorate. Participants were informed verbally, and those who were not volunteers were permitted not to participate in the study. Informed written consent was obtained from respondents who participated in the study. 
The voluntary nature of the study and the privacy of the participants during data collection were assured by conducting it in a comfortable private place, and their personal information was protected from the public and secured by the researchers. Level of knowledge: Each answer for the knowledge questions was given a "1" score for correct answers and "0" for incorrect answers. The total score was sixteen, and it was then converted to a percentage and interpreted as follows. Those who scored <65% were considered as having inadequate knowledge, whereas those who scored ≥65% were considered as having adequate knowledge. Level of practice: Practice had six Likert scale questions; for never, sometimes and almost always, a "0" score was given, whereas for always a "1" score was given. Then, all values were converted to percentages and interpreted as follows. A total score of <70% was considered as having poor practice, whereas a score of ≥70% was considered as having good practice. Practice of the study participants on enteral nutrition: In assessing their level of practice about enteral feeding in the ICU, participants were asked six questions. According to the results of the current study, more than half of the respondents, 103 (53.8%), had poor enteral feeding practices, while only 89 (46.4%) had good practices. One hundred fourteen (59.4%), 115 (59.9%), and 98 (51.0%) of the participants responded that they always confirm tube placement before delivery of feeding, flush the tube before and after administration of feeding, and document any nutritional support or complication about their patient, respectively, whereas 117 (60.9%), 125 (65.1%) and 129 (67.2%) of them were not always checking gastric residual volume before initiating feeding, conducting a daily inspection of nostrils, and discussing nutritional management of patients during ward rounds, respectively (Table 3). 
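The scoring rules above can be expressed as a pair of small helpers; the function names are ours, but the cutoffs (65% of the 16-point knowledge total, 70% of the 6-point practice total) follow the text.

```python
def classify_knowledge(score, max_score=16, cutoff=65.0):
    """Knowledge: <65% of the 16-point total is 'inadequate', else 'adequate'."""
    return "adequate" if 100.0 * score / max_score >= cutoff else "inadequate"

def classify_practice(score, max_score=6, cutoff=70.0):
    """Practice: <70% of the 6-point total is 'poor', else 'good'."""
    return "good" if 100.0 * score / max_score >= cutoff else "poor"

# A score of 10/16 is 62.5% (inadequate); 5/6 is 83.3% (good practice).
print(classify_knowledge(10), classify_practice(5))
```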
Bivariate and multivariable analysis of factors affecting level of knowledge and practice of the participants: As a result of bivariate analysis, females were 0.48 times less likely to have adequate knowledge (COR = 0.48, 95% CI: (0.19, 1.16)) than males, and on multivariable analysis, nurses with BSC degrees were 0.24 times less likely to possess adequate knowledge (AOR = 0.24, 95% CI: (0.61, 0.93)) than nurses with MSC degrees. As for the practice of respondents, those in the age group of 20-28 were 0.02 times less likely to have a good practice on enteral nutrition (AOR = 0.02, 95% CI: (0.001, 0.52)) than those in the age group of 46-50, and respondents who participated in school nutritional training were about 2 times more likely to have a good practice on enteral nutrition (AOR = 1.95, 95% CI: (0.06, 0.60)) than those who didn't get the training. Also, participants who had awareness of an enteral nutritional protocol were about 3 times more likely to have a good practice (AOR = 3.40, 95% CI: (1.18, 9.78)) than those who had no awareness (Table 4). DISCUSSION According to the findings, almost two-thirds of the respondents, 130 (67.7%), had insufficient understanding, while just 62 (32.3%) had acceptable knowledge of enteral nutrition. The findings were consistent with a study conducted in Pakistan, which indicated that just 10% of participants had appropriate levels of expertise (22). The findings were similarly congruent with those of Al Kalaldeh (2015), who evaluated 253 critical care nurses from three major Jordanian hospitals; the results revealed that almost 70% of the participants scored less than 60% in enteral nutrition knowledge understanding (23). This was also comparable to a study conducted in Alexandria, Egypt, where just 15 nurses (17.6%) out of 85 possessed adequate understanding (24). One hundred and three participants (53.8%) had poor practice, whereas 89 (46.3%) had a good practice. 
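A crude odds ratio with a Wald confidence interval, of the kind reported above as COR, can be computed from a 2x2 table as sketched below. The cell counts are illustrative only, not the study's data, and the adjusted odds ratios (AOR) additionally require a multivariable logistic regression fit.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table:
        exposed:   a outcome+, b outcome-
        unexposed: c outcome+, d outcome-
    with a 95% Wald CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only.
print(odds_ratio_wald_ci(10, 20, 5, 40))
```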
Enteral nutrition in Addis Ababa ICUs is based on views rather than evidence-based approaches. This conclusion was consistent with research conducted at Egypt's Ismailia General Hospital, which found that more than half of the nurses examined had an unacceptable level of skill when it came to providing care prior to NG tube feeding administration (25). In terms of participant awareness of guideline availability, 104 (54.8%) of study participants were unaware that a guideline was accessible in their ICU, and the majority of them, 116 (60.4%), said their ICU lacked a procedure. This was countered by research conducted in Australia, which found that following procedures improved enteral feeding delivery and improved clinical outcomes in critically ill patients (26). The majority of the people polled said they didn't have access to an enteral nutrition protocol to help them supply nourishment. This conclusion corresponded to research conducted in Pakistan, which found that only a few nurses performed adequately in this area (22). All facilities are supposed to provide safe nutritional assistance based on guidelines and protocols, but only 76 (39.6%) and 88 (45.8%) of the participants knew about EN protocols and guidelines, respectively. This contradicted Hyland et al.'s findings, which showed that protocols can greatly increase nutritional support (11). According to the findings, the most significant factor contributing to unintentional underfeeding habits in our study was a lack of resources, which hampered 45.8% of participants' ability to provide optimum enteral nutrition, followed by families' inability to provide nutritional support (22.4%). Participants recognized barriers modestly, with a higher emphasis on insufficient resources in the ICU, which was consistent with a study conducted in Jordan (21). 
The majority of study participants said that a lack of feeding tubes, a lack of understanding, and work overload were primary obstacles they faced during enteral nutrition procedures. The findings were similar to those of a study conducted in Egypt's Ismailia General Hospital, which found that factors affecting nurses' practice regarding nasogastric tube feeding included a lack of learning, physical exhaustion, stress from being contaminated, a lack of nursing staff, reduced pay, a lack of protective garments, increased workload, and no incentives or rewards for effective medical attendants (25). Multivariable regression demonstrated no statistically significant difference in enteral nutrition knowledge and practice between males and females in this study. This result matched the findings of Al Kalaldeh's (2015) study, which looked at nurses from three Jordanian hospitals and found no significant differences between male and female nurses in terms of knowledge and practice (24). This finding demonstrated that nurses' educational status was closely linked to their understanding of enteral nutrition. Participants with a BSC degree were 0.24 times less likely than MSC degree holders to have appropriate knowledge (AOR = 0.24, 95% CI: (0.61, 0.93)) in multivariable logistic regression. This conclusion contradicted a prior study conducted in Malawi, which found no statistically significant difference in knowledge between certificate nurses and state registered nurses. This disparity could be related to the fact that all Malawian nurses received in-service training on enteral feeding, which is not the case in our nation (18). Study participants in the age group of 20-28 were 0.02 times less likely than the age group of ≥46 to have a good practice on enteral nutrition (AOR = 0.02, 95% CI: (0.001, 0.52)). 
This study was not comparable to a study conducted in Egypt, which found that the majority of respondents were under the age of 30; however, this variable was not associated with any significant knowledge and skill results (22). This study found that nurses who had enteral nutrition training in school were twice as likely as those who did not (AOR = 1.951, 95% CI: (0.06, 0.60)) to have good enteral nutrition practice. This was in line with research conducted in Egypt, which found that nurses who had previously attended knowledge-related educational sessions scored much higher than those who had not (25). In conclusion, the nurses' knowledge and practices related to enteral nutrition in public hospitals in Addis Ababa, Ethiopia, were found to be inadequate, with certain dangerous procedures. Enteral nutrition was dependent on views rather than evidence-based methods in these ICUs.
v3-fos-license
2023-11-15T06:41:21.229Z
2023-11-14T00:00:00.000
265157729
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.109.074018", "pdf_hash": "8ba537a765687699291344d1a51be3b7e93dcccb", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2472", "s2fieldsofstudy": [ "Physics" ], "sha1": "6db1391ce9e0cc05285e032192df039af0fe595a", "year": 2023 }
pes2o/s2orc
Relaxation time for the alignment between quark spin and angular velocity in a rotating QCD medium. We compute the relaxation times for massive quarks and antiquarks to align their spins with the angular velocity in a rigidly rotating medium at finite temperature and baryon density. The rotation effects are implemented using a fermion propagator immersed in a cylindrical rotating environment. The relaxation time is computed as the inverse of the interaction rate to produce an asymmetry between the quark (antiquark) spin components along and opposite to the angular velocity. For conditions resembling heavy-ion collisions, the relaxation times for quarks are smaller than for antiquarks. For semicentral collisions the relaxation time is within the possible life-time of the QGP for all collision energies. However, for antiquarks this happens only for collision energies √s_NN ≳ 50 GeV. The results are quantified in terms of the intrinsic quark and antiquark polarizations, namely, the probability to build the spin asymmetry as a function of time. Our results show that these intrinsic polarizations tend to 1 with time at different rates given by the relaxation times, with quarks reaching a sizable asymmetry at a faster pace. These are key results to further elucidate the mechanisms of hyperon polarization in relativistic heavy-ion collisions. 
INTRODUCTION
Relativistic heavy-ion collisions are the best tool to explore, in a controlled manner, the properties of strongly interacting matter under extreme conditions. The study of the different observables emerging from these reactions has produced a wealth of results revealing an ever more complete picture of these properties for temperatures and densities close to or above the deconfinement transition. However, some other phenomena still miss a clearer understanding and pose a challenge for the evolving standard model of heavy-ion reactions. One of these observables is the relatively large degree of polarization of Λ and Λ̄ hyperons measured in semicentral collisions for energies 2.5 GeV ≲ √s_NN ≲ 27 GeV, which shows an increasing trend as the energy and centrality of the collision decrease. The rising trend is different for Λs than for Λ̄s [1][2][3][4]. For semicentral collisions, the matter density profile in the transverse plane induces the development of a global angular momentum, quantified in terms of the thermal vorticity [5,6]. Such angular momentum could be transferred to spin degrees of freedom and be responsible for the observed global polarization [7]. This expectation is supported by the relation between rotation and spin, nowadays referred to as the Barnett effect, whereby a spinning ferromagnet experiences a change of its magnetization [8], and the closely related Einstein-de Haas effect, based on the observation that a change in the magnetic moment of a free body causes this body to rotate [9]. As a consequence, significant efforts have been devoted to quantifying how this vorticity may be responsible for the magnitude of the observed polarization, assuming that the medium rotation is transferred to the spin polarization regardless of the microscopic mechanisms responsible for the effect [10][11][12][13][14][15][16][17][18]. However, the transferring of rotational motion to spin can only happen provided the medium induced reactions occur fast enough so that the 
alignment of the spin and angular velocity takes place on average within the lifetime of the medium. In the recent literature, this question has been addressed using different approaches [19][20][21][22][23][24][25]. In a couple of recent works, we have explored whether this relaxation time for the alignment is short enough so that the observed polarization of hyperons can be attributed to the transferring of rotation to spin degrees of freedom [26,27]. This is achieved by computing the interaction rate for the spin of a strange quark to align with the thermal vorticity, assuming an effective spin-vorticity coupling in a thermal QCD medium. The findings have been used to compute the Λ and Λ̄ polarization in the context of a core-corona model [28][29][30][31]. The calculation resorts to computing the imaginary part of the self-energy of a vacuum quark whose propagator does not experience the effects of the rotational motion. To improve the description, also in a recent work, we have computed the propagation of a spin one-half fermion immersed in a rigid, cylindrical rotating environment [32]. For these purposes, we have followed the method introduced in Ref. [33], which requires knowledge of the explicit solutions of the Dirac equation. These have been previously studied in different contexts by imposing different boundary conditions [34][35][36][37][38][39][40][41]. In this work we use the propagator found in Ref. 
[32] to compute the imaginary part of the self-energy of a quark immersed in a rotating QCD medium at finite temperature (T) and baryo-chemical potential (μ_B). We show that for values of T and μ_B where the chiral symmetry restoration/deconfinement transition is thought to take place, the relaxation time for quarks turns out to be small enough, compared to the medium life-time, for the inferred, commonly accepted values of the medium angular velocity after a semicentral heavy-ion collision. However, this is not the case for the antiquarks, except for collision energies √s_NN ≳ 50 GeV. The work is organized as follows: In Sec. II we briefly revisit the derivation of the fermion propagator in a rotating environment. In Sec. III we use this propagator to compute the interaction rate for a quark spin to align with the vorticity in a QCD rotating medium at finite temperature and baryo-chemical potential. In Sec. IV we compute the relaxation time for values of T and μ_B close to the chiral symmetry restoration/deconfinement transition and show that for quarks this relaxation time is within the putative life-time of the system produced in the reaction, although this is not the case for antiquarks except for large collision energies. We finally summarize and conclude in Sec. V.
II. PROPAGATOR FOR A SPIN ONE-HALF FERMION IN A ROTATING ENVIRONMENT
The physics within a relativistic rotating frame is most easily described in terms of a metric tensor resembling that of a curved space-time. We consider that the interaction region can be thought of as a rigid cylinder rotating around the ẑ axis with constant angular velocity Ω, which is produced in semicentral collisions. We can thus write the metric tensor as the standard rotating-frame form, g_{μν} with components g_{tt} = 1 − Ω²(x² + y²), g_{tx} = g_{xt} = Ωy, g_{ty} = g_{yt} = −Ωx, g_{xx} = g_{yy} = g_{zz} = −1, and all other components vanishing. A fermion with mass m within the cylinder is described by the Dirac equation [iγ^μ(∂_μ + Γ_μ) − m]ψ = 0, where Γ_μ is the affine connection. In this context, the γ^μ matrices in Eq. 
(2) correspond to the Dirac matrices in the rotating frame, which satisfy the usual anti-commutation relations {γ^μ, γ^ν} = 2g^{μν}. The relations between the gamma matrices in the rotating frame and the usual gamma matrices are γ^t = γ⁰, γ^x = Ωy γ⁰ + γ¹, γ^y = −Ωx γ⁰ + γ², γ^z = γ³. In this notation, μ = {t, x, y, z} refers to the rotating frame, while a = {0, 1, 2, 3} refers to the local rest frame. Therefore, Eq. (2) can be written as Eq. (5). In the Dirac representation, σ₃ = diag(1, −1) is the Pauli matrix associated with the third component of the spin. Therefore, we can rewrite Eq. (5) as Eq. (7), where Ĵ_z = L̂_z + Ŝ_z. This expression defines the total angular momentum in the ẑ direction. The term L̂_z represents the orbital angular momentum, whereas Ŝ_z is the spin. On the other hand, the term −i∇⃗ is the usual momentum operator. We can find solutions to Eq. (7) in the form ψ(x) = e^{−iEt} U(x⃗), and then the function U(x⃗) satisfies a Klein-Gordon-like equation, Eq. (10). Notice that the spin operator Ŝ_z, when applied to U(x⃗), produces eigenvalues s = ±1/2. Consequently, conservation of the total angular momentum, expressed in terms of the eigenvalues j = l + s, imposes solutions with Bessel order l for s = 1/2 and l + 1 for s = −1/2. With these considerations, the solution of Eq. (10) can be written in cylindrical coordinates (t, x, y, z) → (t, ρ sin φ, ρ cos φ, z), where J_l are Bessel functions of the first kind, p⊥² is the transverse momentum squared, and we have defined Ẽ ≡ E + jΩ, representing the fermion energy observed from the inertial frame. Hence, the solution of Eq. 
(9) is obtained. Before writing the expression for the fermion propagator in the rotating environment, it is important to highlight some features of the solution. First, causality requires that ΩR < 1, where R is the radius of the cylinder. Therefore, the solution is valid as long as Ω < 1/R. Second, we can simplify the solution assuming that the fermion is totally dragged by the vortical motion, such that the angular position is determined by the product of the angular velocity and the time, specifically φ + Ωt = 0. This is a reasonable approximation when considering that during the early stages of a peripheral heavy-ion collision, particle interactions have not yet produced the development of a radial expansion. With this approximation, the propagator is translationally invariant and can be simply Fourier transformed. With these features in mind, we write the fermion propagator S(x, x′) as Eq. (14). In this last expression, the eigenvalues and eigenvectors of Eq. (10) appear explicitly. Taking j, p⊥, p_z, and s as independent quantum numbers, the closure relation can be written down. Hence, Eq. (14) becomes a sum over these quantum numbers. Carrying out the integration and the summation, and taking the Fourier transform, we obtain Eq. (21). We can write Eq. (21) in terms of the Dirac gamma matrices, in a form involving the spin projection operator. Notice that the derivation of the fermion propagator is performed in vacuum. To use this propagator including a finite temperature and chemical potential, recall that in equilibrium it is sufficiently general to make the replacement p₀ → iω̃_n + μ, where ω̃_n = (2n + 1)πT are the Matsubara frequencies for fermions. Equation (22) represents our approximation for the fermion propagator in a cylindrical rigidly rotating environment. We now use this propagator to compute the relaxation time for the fermion spin to align with the angular velocity in the rotating medium.
III. 
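The fermionic Matsubara frequencies entering the replacement above can be sketched numerically; the particular temperature value is an illustrative choice (in GeV), taken from the value used later in the text.

```python
import math

def fermionic_matsubara(n, T):
    """Fermionic Matsubara frequency: omega_n = (2n + 1) * pi * T."""
    return (2 * n + 1) * math.pi * T

T = 0.150  # GeV, the temperature used later in the text
freqs = [fermionic_matsubara(n, T) for n in range(4)]
# Successive fermionic frequencies are spaced by 2*pi*T.
print(freqs)
```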
INTERACTION RATE FOR A QUARK SPIN TO ALIGN WITH THE ANGULAR VELOCITY IN A QCD ROTATING MEDIUM
In a QCD plasma in thermal equilibrium at temperature T and baryon chemical potential μ_B, the interaction rates Γ± for a quark with spin components s = ±1/2 in the direction of Ω⃗ and four-momentum P = (p₀, p⃗) to align its spin in the direction of the angular momentum vector can be expressed in terms of the total interaction rate, which in turn is given by the probability (per unit time) for a transition between the same quantum quark state u±, represented by properly normalized spinors with a definite spin projection (±) along the direction of the angular velocity. This transition is mediated by the imaginary part of the self-energy, ImΣ. Shuffling the indexes around, we can also write the rate using the spin projection operators O±. To extract the creation rate for spin-aligned quark states from the total interaction rate, as discussed in Ref. [42], we multiply the total interaction rate by the fermion distribution function f̃ for a grand-canonical ensemble in the presence of a conserved charge to which a quark chemical potential μ_q = μ_B/3 is associated. In previous analyses [26,27] the interaction has been modeled using an effective vertex coupling the thermal vorticity and the quark spin. To improve the description, hereby we consider the case where the fermion is subject to the effect of a rotation within a rigid cylinder. The one-loop contribution to Σ, depicted in Fig. 1, involves the quark propagator in a rotating environment obtained in Eq. 
(22), the effective gluon propagator G*_{μν} in the thermal medium, and the SU(3) group generators. The four-momenta are P = (iω̃_n, p⃗) for the fermion and Q = (iω_m, q⃗) for the gluon, with ω_m being the gluon Matsubara frequencies. In the hard thermal loop (HTL) approximation, G*_{μν} is given by G*_{μν}(Q) = Δ_L(Q) P^L_{μν} + Δ_T(Q) P^T_{μν} (29), where P^L_{μν}, P^T_{μν} are the polarization tensors for three-dimensional longitudinal and transverse gluons [42]. The gluon propagator functions for the longitudinal and transverse modes are Δ_{L,T}(Q), where m_g² is the gluon thermal mass squared, given in terms of C_A and C_F, the Casimir factors for the adjoint and fundamental representations of SU(3). It is convenient to first look at the sum over Matsubara frequencies for the products of the propagator functions for longitudinal and transverse gluons, Δ_i with i = L, T, and the Matsubara propagator for the quark in a rotating environment Δ̃, which, as described in Ref. [42], can be obtained as the inverse of the denominator of each of the components of Eq. (22) with the replacement p₀ → iω̃_n + μ. The sum can be performed introducing the spectral densities ρ_{L,T} and ρ_f for the gluon and fermion, respectively. The imaginary part can be written in terms of n_B(q₀), the Bose-Einstein distribution. The spectral densities ρ_{L,T} are obtained from the imaginary part of Δ_{L,T}(Q) after the analytic continuation iω_m → q₀ + iε and contain the discontinuities of the gluon propagator across the real q₀ axis. Their support depends on the ratio x = q₀/q. For |x| > 1, ρ_{L,T} have support on the (timelike) quasi-particle poles. For |x| < 1, their support coincides with the branch cut of Δ_{L,T}(q₀) and corresponds to Landau damping. On the other hand, the fermion spectral density follows from Δ̃. We now concentrate on the trace factors required for the computation of Eq. (27). The term proportional to the fermion momentum and angular velocity vanishes identically, whereas the terms proportional to the fermion mass survive. The delta functions in Eqs. 
(35) and (36) restrict the integration over gluon energies to the spacelike region, |x| < 1. Therefore, only the Landau-damping parts of the gluon spectral densities contribute to the interaction rate. With all these ingredients we write the interaction rate, and we can integrate Eq. (41) over p′₀. Notice that for the angular velocities appropriate to the early stages of the collision (10 MeV ≲ Ω ≲ 14 MeV), and for a strange quark mass m_s ∼ 100 MeV, the combination E ± Ω/2 can always be safely regarded as being positive. The kinematical constraint imposed by the first of the delta functions in Eq. (44) corresponds to the rate to produce rotating and thermalized quarks, originated by the dispersion of vacuum nonrotating quarks with medium quarks. This is depicted in Fig. 2. We then single out this Feynman diagram, representing a process whereby an initially nonrotating quark is dragged by the medium and aligns its spin either parallel or antiparallel to the angular velocity by means of its interactions with medium particles mediated by soft thermal gluons, as the relevant contribution from the total rate. The kinematical restrictions for the q₀ integration translate into integration regions R±, after integrating over the angle between p⃗ and q⃗ and over the azimuthal angle. The total rate to align the quark spin with the angular velocity is thus given by the difference between the rates to populate the spin projections along and opposite to the angular velocity, which is obtained by integrating the difference between Γ₊ and Γ₋ of Eq. 
(46) over the available phase space, where V is the volume of the collision region. To compute Γ for conditions that depend on the collision energy, we consider a Bjorken expansion scenario where the volume and the QGP lifetime are related by V = πR²Δτ, where R is the radius of the colliding species and Δτ is the QGP lifetime, given as the interval elapsed from the initial formation time τ₀ until the hadronization time [30]. Figure 3 shows the interaction rates Γ± for positive and negative quark spin projections, respectively, as functions of the angular velocity Ω, for semicentral collisions at an impact parameter b = 10 fm and a chemical potential μ = 100 MeV for quarks. Since the occupation numbers for antiquarks are obtained from the occupation numbers for quarks by the replacement μ → −μ ≡ μ̄, the corresponding rates for antiquarks Γ̄± are computed performing such replacement in Eq. (46). Figure 4 shows the rates for the antiquarks using μ̄ = 100 MeV. In both cases, we use a temperature T = 150 MeV. Notice the symmetry Γ̄(μ) = Γ(−μ). Also, hereafter, we take the value of the strong coupling as α_s = 0.3, and, although the calculation is valid for any quark with nonvanishing mass, we consider the computation of the relaxation time for the case of a strange quark/antiquark mass m_s = 100 MeV. This is chosen having in mind to later use the results in the context of the computation of the Λ and Λ̄ polarizations, under the assumption that the whole hyperon polarizations come from the strange-quark polarization. Shown in the figures are also the phase-space integrated differences Γ₊(p₀) − Γ₋(p₀) and Γ̄₊(p₀) − Γ̄₋(p₀), respectively, which represent the rates to align the quark or antiquark spin with the angular velocity. Notice that although the rates Γ± and Γ̄± are both decreasing functions, their difference increases with Ω. This means that overall the rate at which the positive spin component dominates over the negative one increases as the angular velocity increases. The decrease of the individual rates with 
Ω can be traced back to Eq. (22), which for large Ω decreases as 1/Ω. This behavior is translated to the fermion spectral density and through it to the region of integration and ultimately to each of the reaction rates. From the expression for Γ in Eq. (48) we can find the parametric dependence of the relaxation time for spin and angular velocity alignment, defined as τ = 1/Γ, which we proceed to compute.

IV. RELAXATION TIME

We now concentrate on the computation of the relaxation time when varying the parameters involved in the calculation. For a direct comparison with previous results, we will use the values obtained in [26] for the initial angular velocity. Figure 5 shows the relaxation time τ for quarks as a function of the temperature T. The calculation is performed for a collision energy √s_NN = 200 GeV, which corresponds to an angular velocity Ω = 0.052 fm⁻¹, and chemical potentials μ = 0, 100 MeV. Figure 6 shows the relaxation time τ for quarks as a function of T, this time for a collision energy √s_NN = 10 GeV, which corresponds to an angular velocity Ω = 0.071 fm⁻¹, and two values of the chemical potential μ = 0, 100 MeV. In both cases, τ < 5 fm for the considered temperature range. Figures 7 and 8 show the relaxation time τ̄ for antiquarks as a function of T, obtained for collision energies √s_NN = 200 and 10 GeV, which correspond to angular velocities Ω = 0.052 and 0.071 fm⁻¹, respectively, for chemical potentials μ̄ = 0, 100 MeV. Notice that for the largest antiquark chemical potential the relaxation times are larger. This behavior is opposite to that of the quarks, where the relaxation time is lower for larger quark chemical potentials. However, also notice that τ̄ < 6 fm for the considered temperature range. Figure 9 shows the relaxation time for quarks (top) and antiquarks (bottom) as functions of √s_NN for semicentral collisions at impact parameters b = 5, 8 and 10 fm. For each value of √s_NN, the temperature and maximum baryon chemical potential μ_B = 3μ at freeze-out were extracted from the parametrization in Ref.
[43], where T and √s_NN are given in GeV. Also, the values for Ω were obtained from the parametrization found in Ref. [26], where V = (4/3)πR³. The relaxation times for quarks and antiquarks exhibit overall a decrease as functions of √s_NN. Notice that the relaxation times are smaller for the quark case than for the antiquark case. Also, for the largest impact parameters considered, the relaxation times for the quark case in the energy range considered are smaller than 10 fm, which is the ballpark lifetime of the QGP in heavy-ion reactions. However, for the antiquark case, this is true only for energies √s_NN ≳ 50 GeV. This indicates that, although quarks are likely to align their spins within the lifetime of the QGP, this is not the case for the antiquarks, at least for small collision energies.

Recall that the relaxation time can be used to define the intrinsic polarization as the probability to polarize the quark spin along the direction of the angular velocity as a function of time. When an initial number of particles N₀, originally unpolarized, is placed in the rotating medium, the number of particles that remain unpolarized varies as a function of time as N(t) = N₀ exp(−t/τ). Therefore, the number of particles in the polarized state is given by N_p(t) = N₀[1 − exp(−t/τ)]. The factor [1 − exp(−t/τ)] is therefore the intrinsic polarization. Figure 10 shows the intrinsic polarization for quarks (z) and antiquarks (z̄), given by z = 1 − exp(−t/τ) and z̄ = 1 − exp(−t/τ̄), as functions of time for semicentral collisions at an impact parameter b = 8 fm and a collision energy √s_NN = 4 GeV. Notice that z approaches 1 faster than z̄.

V.
SUMMARY AND CONCLUSIONS

In this work, we used a thermal field theoretical framework to compute the relaxation times for massive quarks and antiquarks to align their spins with the angular velocity in a rigidly rotating medium at finite temperature and baryon density. The rigid rotation is implemented using the recently found fermion propagator immersed in a cylindrical rotating environment. In principle, the effects of rotation could also be included in the properties of the gluon propagator. However, notice that the kinematical gluon momentum region that contributes to the calculation corresponds to Landau damping and thus to soft modes. The main role of these modes at finite temperature and baryon density is to mediate the interaction between plasma quarks and the test quark whose spin alignment with the angular velocity is being monitored. Notice that the energy associated with a typical angular velocity for semicentral collisions is of order Ω ∼ 0.05 fm⁻¹ ∼ 10 MeV. In this sense, including the effects of rotation in the gluon propagator at a temperature of order T ∼ 100 MeV, although not negligible, represents a subleading effect of order 10%. The relaxation time is computed as the inverse of the interaction rate to produce an asymmetry between quark (antiquark) spin projections pointing along and opposite to the angular velocity. We found that for conditions resembling a heavy-ion collision the relaxation times for quarks are within the putative lifetime of the QGP. However, for antiquarks this is the case only for collision energies √s_NN ≳ 50 GeV. We quantified these results in terms of the intrinsic quark and antiquark polarizations, that is, the probability to build the spin asymmetry as a function of time. Our results show that these intrinsic polarizations tend to 1 with time at different rates, given by the relaxation times τ and τ̄, with quarks building the asymmetry at a faster pace. These intrinsic polarizations are essential ingredients to describe the polarization
of Λ and Λ̄ hyperons in relativistic heavy-ion collisions. The consequences of the results hereby found are currently being explored and will be reported elsewhere.

FIG. 1. One-loop quark self-energy diagram that defines the kinematics. The gluon line with a blob represents the effective gluon propagator at finite density and temperature. The open circle on the fermion propagator represents the effect of the rotating environment.

FIG. 3. Interaction rates Γ± for positive (+) and negative (−) spin projections for quarks as functions of the angular velocity Ω for semicentral collisions at an impact parameter b = 10 fm and chemical potential μ = 100 MeV for a temperature T = 150 MeV. Shown is also the interaction rate Γ obtained as the phase space integrated difference Γ₊(p₀) − Γ₋(p₀).

FIG. 4. Interaction rates Γ̄± for positive (+) and negative (−) spin projections for antiquarks as functions of the angular velocity Ω for semicentral collisions at an impact parameter b = 10 fm and chemical potential μ̄ = 100 MeV for a temperature T = 150 MeV. Shown is also the interaction rate Γ̄ obtained as the phase space integrated difference Γ̄₊(p₀) − Γ̄₋(p₀).

FIG. 5. Relaxation time τ for quarks as a function of the temperature for semicentral collisions at an impact parameter b = 10 fm for √s_NN = 200 GeV, which corresponds to an angular velocity Ω = 0.052 fm⁻¹, with μ = 0, 100 MeV.

FIG. 6. Relaxation time τ for quarks as a function of the temperature for semicentral collisions at an impact parameter b = 10 fm for √s_NN = 10 GeV, which corresponds to an angular velocity Ω = 0.071 fm⁻¹, with μ = 0, 100 MeV.

FIG. 7. Relaxation time τ̄ for antiquarks as a function of the temperature for collisions at an impact parameter b = 10 fm for √s_NN = 200 GeV, which corresponds to an angular velocity Ω = 0.052 fm⁻¹, with μ̄ = 0, 100 MeV.

FIG. 9.
Top: relaxation time τ for quarks as a function of √s_NN for semicentral collisions at impact parameters b = 5, 8 and 10 fm. Bottom: relaxation time τ̄ for antiquarks as a function of √s_NN for semicentral collisions at impact parameters b = 5, 8 and 10 fm.
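As a numerical illustration of the intrinsic polarization defined above, P(t) = 1 − exp(−t/τ), the following sketch tabulates P for a quark and an antiquark. The relaxation times used here are placeholder values chosen only for the illustration, not the ones computed in the paper (those should be read off Figs. 5-8):

```python
import math

def intrinsic_polarization(t, tau):
    """Probability that an initially unpolarized (anti)quark has aligned
    its spin with the angular velocity after a time t, given the
    relaxation time tau (both in fm): P(t) = 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

# Placeholder relaxation times (fm), assumed only for this sketch:
tau_quark, tau_antiquark = 2.0, 5.0

for t in range(0, 11, 2):
    z = intrinsic_polarization(t, tau_quark)
    zbar = intrinsic_polarization(t, tau_antiquark)
    # e.g. at t = 2 fm, z = 1 - exp(-1) ≈ 0.632 for tau = 2 fm
    print(f"t = {t:2d} fm: z = {z:.3f}, zbar = {zbar:.3f}")
```

Because τ < τ̄, z rises toward 1 faster than z̄ at every time, which is the behavior reported for Fig. 10.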
The Relationship between Stress and Diabetes Mellitus

Diabetes mellitus (DM) is a metabolic disease characterized by chronic hyperglycemia that results from an alteration of the secretion or action of insulin. This metabolic condition is not homogeneous and the World Health Organization distinguishes two main types of diabetes: Type 1 and Type 2 DM (T1DM and T2DM). Despite etiological and clinical differences, these two diseases share some characteristics, including the role played by stress in the occurrence of the disease, its progression and chronicity, which involves habit changes and influences psychological and social life. In this review we will investigate the psychological correlates of T1DM and T2DM. We will focus on the role of stress in the disease and the need for global care plans in diabetic patients in order to improve their quality of life and metabolic control.

Gemma Falco2, Piero Stanley Pirro3, Elena Castellano1, Maura Anfossi2, Giorgio Borretta1 and Laura Gianotti1*

The Concept of Stress

The Canadian physiologist Hans Selye was the first scientist to study the effects of psychological stress on the human body, in 1936 [9]. He made a distinction between stress, stressor and stress reaction, which he considered a complex phenomenon consisting of a set of nonspecific responses that arise when the subject faces a demanding situation. When facing danger, the system enters a defense condition, trying to restore balance in different ways. It is an adaptive mechanism, in which a series of physical changes predisposes the body to a "fight or flight" reaction. However, if the state of arousal continues over time, it can lead to negative consequences, which Selye called the "General Adaptation Syndrome".
Introduction

Diabetes mellitus (DM) is a metabolic disease characterized by an inability to maintain normal glucose homeostasis that results from an alteration of the secretion or action of insulin, the hormone responsible for the uptake of glucose in the body [1]. This metabolic condition is not homogeneous and the World Health Organization distinguishes two main types [2] of diabetes. Type 1 DM (T1DM) is an autoimmune disease that occurs mostly in childhood or youth [3] and is characterized by the cell-mediated destruction of insulin-producing β-cells, leading to impaired glucose homeostasis, insulin insufficiency, and other complications [4]. T1DM sometimes also clusters with other autoimmune disorders [5]. The consequences of the impaired assimilation of sugar are: paradoxical polyphagia, polyuria, polydipsia, and ketosis. Once established, the disease is irreversible and it will require lifelong insulin replacement therapy by injection [6]. Type 2 DM (T2DM), in contrast, usually manifests itself in adults [7]. It does not involve autoimmune destruction of β-cells but is due to a combination of both insulin resistance and an inability of the β-cells to compensate adequately with increased insulin release. It is related to family predisposition, sedentary lifestyle and obesity [8]. Insulin secretion is not completely compromised, but just altered. Sometimes, the pancreas does not produce adequate amounts in relation to the carbohydrates ingested. Other times, the release of insulin is normal, but the body becomes resistant and does not react properly. Generally, T2DM can be controlled through a balanced diet, weight reduction and drugs which stimulate hormone production or reduce insulin resistance. Insulin therapy is necessary only in a few cases, in particular for long-term disease, when a secondary pancreatic failure occurs.
ISSN: 2332-3469

On the contrary, if the stressful stimulus persists, we enter the second stage, called "resistance or adaptation", in which the pattern of biological reactions is modified. As long as the danger is present, visceral functions and physico-chemical parameters are maintained in an altered condition. This implies a considerable energy expenditure and a higher central and peripheral functional level. However, the individual cannot cope with the threatening environment forever, and his resilience varies according to genetic, cognitive and psychosocial factors. In the long run, this results in an "exhaustion" of adaptability (third phase), and the subject can contract illness or even die. This happens especially when the source of the threat is inevitable, unwanted and repetitive [10], while, if the stress is short-lived, the body returns to normal without any negative consequences.

In recent years, researchers have confirmed Selye's hypothesis, showing more and more clearly that chronic stress may favor the onset of somatic disorders in susceptible subjects [11,12]. Prolonged exposure to adverse events, in fact, affects hormonal balance, metabolism and immune function [13]. The prolonged activation of the hypothalamus-pituitary-adrenal (HPA) axis, in particular, increases glucocorticoid levels, causing pathologies related to hypercortisolism. Furthermore, this condition promotes the alteration of immune function and facilitates the development of central obesity, peripheral tissue resistance to insulin and glucose intolerance. These processes, however, are not the same in all individuals, and researchers have found some gender-related variations [14]. Some authors actually believe that gender differences in the stress system may explain, at least in part, the greater vulnerability of men to vascular and infectious disease and the greater susceptibility of women to autoimmune diseases [15]. In fact, emerging data from molecular studies show that estrogenic hormone plays
a central role in the development of autoimmune disease [16]. A brief overview of the functioning of the endocrine system and of the physiological mechanisms involved in the stress reaction follows.

The endocrine system

The word endocrine derives from the Greek words "endo," meaning "within", and "crinis," meaning "to secrete". The endocrine system plays a role in regulating mood, growth and development, tissue function, metabolism, and sexual and reproductive processes [17].

The foundations of the endocrine system are the hormones and glands. As the body's chemical messengers, hormones transfer information and instructions from one set of cells to another. Many different hormones move through the bloodstream, but each type of hormone is designed to affect only certain cells. Some types of glands release their secretions in specific areas. For instance, exocrine glands, such as the sweat and salivary glands, release secretions onto the skin or inside the mouth. Endocrine glands, on the other hand, release hormones directly into the bloodstream, where they can be transported to cells in other parts of the body.

Roughly simplifying, the endocrine system is made up of the pituitary gland, the thyroid gland, the parathyroid glands, the adrenal glands, the pancreas, the ovaries (in females) and the testicles (in males), the adipocytes and the hypothalamus. The latter is a collection of specialized neurons located in the lower central part of the brain and it is the main link between the endocrine and the nervous systems. Nerve cells in the hypothalamus control the pituitary gland by producing chemicals that either stimulate or suppress hormone secretions by the pituitary.
The pituitary makes hormones that control several other endocrine glands. Their secretion can be influenced by several factors, including psychological and physical stimuli. The pituitary is divided into two parts: the anterior lobe and the posterior lobe. The first one regulates the activity of the thyroid, the adrenals, and the reproductive glands. The second one, instead, releases antidiuretic hormone and oxytocin.

A negative feedback regulates the amounts of hormones available by detecting when blood levels rise above a threshold and inhibiting hormone production. This prevents hormone levels in the blood from continuing to rise, which could result in illness. Likewise, a positive feedback may occur [17].

The physiology of stress

From a physiological perspective, the stress reaction is organized into two branches: one governed by the sympathetic nervous system, which operates quickly, and one governed by the neuroendocrine (HPA) axis, which activates a delayed response.

The first one starts in the parvocellular nucleus of the hypothalamus, which is connected by a bundle of nerve fibers to the locus coeruleus, in the brainstem. From there, through the spinal cord, the adrenal medulla is stimulated, so that it produces catecholamines (adrenalin, noradrenalin, and dopamine), with physiological arousal (Figure 1).

The functioning of the HPA axis, instead, begins in the paraventricular nucleus of the hypothalamus, which releases corticotropin-releasing hormone (CRH) and arginine vasopressin (AVP). These substances stimulate the pituitary gland to produce adrenocorticotropic hormone (ACTH), which is released into the bloodstream and induces the adrenal cortex to secrete cortisol [17,18]. The system is integrated because hypothalamic CRH and norepinephrine stimulate each other, according to a positive feedback mechanism. However, there is also a negative feedback, which prevents the physiological activation from lasting too long and damaging the body.
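The stabilizing effect of this negative feedback can be illustrated with a toy difference-equation model. This is purely schematic: the parameters (`stress`, `feedback`, `clearance`) and the single collapsed CRH→ACTH→cortisol step are invented for illustration and are not a physiological parametrization.

```python
# Toy sketch of HPA-axis negative feedback: cortisol inhibits
# hypothalamic drive, so its level settles instead of growing
# toward the much higher level reached without feedback.
# All parameters are arbitrary illustration values.

def simulate_hpa(steps=200, stress=1.0, feedback=0.8, clearance=0.3):
    cortisol, history = 0.0, []
    for _ in range(steps):
        # Hypothalamic drive: stress input minus cortisol inhibition.
        crh = max(0.0, stress - feedback * cortisol)
        # Cortisol rises with drive (ACTH step collapsed) and is cleared.
        cortisol += crh - clearance * cortisol
        history.append(cortisol)
    return history

fb = simulate_hpa()
no_fb = simulate_hpa(feedback=0.0)
print(f"with feedback: {fb[-1]:.3f}, without feedback: {no_fb[-1]:.3f}")
# → with feedback: 0.909, without feedback: 3.333
```

With feedback the level converges to stress/(feedback + clearance); removing the feedback term lets it climb to stress/clearance, a crude analogue of the chronic hypercortisolism discussed below.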
The hypothalamus, in fact, has particular receptors which detect cortisol levels and, depending on the case, further activate the axis or deactivate it.

Stress and the Onset of DM

Type 1 Diabetes Mellitus: the relationship between stress and autoimmunity

The functioning of the immune system is extremely complex and it consists of several circuits, which are activated according to the noxious stimulus. For example, when faced with a virus or an intracellular parasite, the body prepares an inflammatory response, regulated by TH1 lymphocytes. If, instead, the threat consists of a bacterium or an extracellular parasite, an antibody response is activated, mediated by TH2 lymphocytes. Between these two systems, there is a balance, ensured by mutual inhibition [18]. It has been shown that the activation of the HPA axis during the stress reaction can change the homeostasis between the two systems and alter the activity of suppressive cells (T-suppressors) that usually prevent damage to the individual. Accordingly, epidemiological studies indicate that severe stress often precedes the development of certain Th1-mediated autoimmune diseases [19][20][21]. Among these, T1DM is characterized by the development of antibodies against islet cells in the pancreas and autoantibodies against insulin. However, the link between stress and T1DM (and, more generally, between stress and autoimmunity) cannot be reduced down to a linear causality. In fact, it depends on many variables related to the stressor and to the physical and mental characteristics of the patient, as well as to his resilience and to the presence of a supportive family environment. Moreover, it is not certain that psychological events implicated in the development of the disease immediately precede the onset. On the contrary, these events may take place throughout the patient's entire life and, in particular, in childhood [22]. Psychological stress in children, in fact, can affect their immune functions,
altering the activity of the antigens GAD65, HSP60 and IA-2, responsible for diabetes-related autoimmunity [23]. Furthermore, early negative experiences result in lifelong changes in coping strategies. Many studies show that people who have suffered stressful situations during childhood have a greater psychological vulnerability, combined with hyper-vigilance in the face of danger and a low propensity to seek help [24]. These elements, in turn, facilitate the development of the physical disease or otherwise have an impact on the emotions arising out of it. It is to be remembered, however, that there is still controversy within the scientific community about the relationship between stress and T1DM [25].

Type 2 Diabetes Mellitus: direct and indirect impact of stress

T2DM is characterized by chronic hyperglycemia, insulin resistance and a relative insulin secretion defect. At present, the causes of T2DM are not entirely clear, but predictors have been found in recent studies. Among these, obesity, hypertension, sedentary lifestyles, and alterations in glycemic status and lipid metabolism correlate with T2DM and its diffusion.
Insulin resistance and other conditions with minor degrees of glucose intolerance commonly occur together with a collection of clinical and biochemical features that have been called Metabolic Syndrome (MS). This term defines a cluster of components that reflect overnutrition, sedentary lifestyles and an excess of adiposity [26]. MS affects about a quarter of the world's adult population, with significant variations according to age, gender and body mass index [27]. The prevalence of MS is growing to epidemic proportions all over the world, in both urbanized and developing nations. MS predisposes to T2DM with concomitant cardiovascular diseases (CVD) and generates a cluster of cardiovascular risk factors whose core components are impaired glucose metabolism, obesity, dyslipidemia, and hypertension. MS is also associated with other co-morbidities, such as a prothrombotic and proinflammatory state, nonalcoholic fatty liver disease and other disorders in the renal, visual and reproductive systems.
A clearly defined pathophysiology and a universal definition of MS are still lacking. As a result, several definitions for MS have been proposed by various international regulatory bodies: the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) describes Metabolic Syndrome as the presence of any three of the following components: abdominal obesity, dyslipidemia (high levels of triglycerides, low HDL), hypertension, and elevated fasting glucose [28]; the International Diabetes Federation (IDF) considers central obesity a mandatory component for the diagnosis of MS, along with any two of the other components: hypertension, abnormal blood glucose, high serum triglycerides and low high-density lipoprotein cholesterol [29]; recently, the IDF, the National Heart, Lung and Blood Institute (NHLBI), the American Heart Association (AHA), the World Heart Federation (WHF), the International Atherosclerosis Society (IAS) and the International Association for the Study of Obesity (IASO) have proposed a new harmonized definition which requires any three of the five components included in the IDF definition for the diagnosis of MS and does not consider central obesity an obligatory component [30].

As previously said, MS is usually a pre-clinical condition. Both genetic predisposition and lifestyle (such as being overweight, living a sedentary lifestyle and adopting bad dietary habits, among others) lead individuals to develop T2DM.
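As a rough sketch of how an NCEP ATP III-style "any three of five" rule combines the components, consider the following. The cut-off values used here are the commonly cited ones and are included only for illustration; clinical use requires the official criteria in [28-30].

```python
# Illustrative "any three of five" decision rule in the style of
# NCEP ATP III. Thresholds are commonly cited values, shown only
# as an example and not intended for clinical use.

def metabolic_syndrome_atp3(waist_cm, sex, triglycerides_mgdl,
                            hdl_mgdl, systolic, diastolic,
                            fasting_glucose_mgdl):
    components = [
        waist_cm > (102 if sex == "M" else 88),   # abdominal obesity
        triglycerides_mgdl >= 150,                # high triglycerides
        hdl_mgdl < (40 if sex == "M" else 50),    # low HDL cholesterol
        systolic >= 130 or diastolic >= 85,       # elevated blood pressure
        fasting_glucose_mgdl >= 100,              # elevated fasting glucose
    ]
    return sum(components) >= 3

print(metabolic_syndrome_atp3(105, "M", 180, 35, 120, 80, 95))
# → True (abdominal obesity, high triglycerides and low HDL: 3 of 5)
```

The IDF and harmonized definitions mentioned above differ only in the combination logic (central obesity mandatory plus any two, versus any three of five), so the same component list can be reused with a different final condition.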
Stress is one of the triggers [31] and it has been linked to a higher risk of T2DM, especially in women [32,33]. Nervous strain seems to have a direct and an indirect influence on the probability of becoming ill. The reaction to a stressor may consist, in some cases, in the development of an unhealthy lifestyle, including the neglect of physical well-being and eating in a disorderly fashion, often using food in a compensatory or consoling manner. These factors indirectly affect the risk of developing the disease [34]. In addition, physiological changes triggered by stress may directly affect the endocrine and immune systems [35].

Cortisol is one of the main actors mediating the effect of stress on metabolism in general, and on glucose metabolism in particular. Cortisol raises blood glucose levels by stimulating hepatic gluconeogenesis and inhibiting the action of insulin [36]. These reactions, useful for initiating a fight or flight response, are not entirely suited to cope with the stressors triggered by modern life, which are mostly relational, intangible and durable. The pressing rhythms imposed by sedentary work, for example, do not involve an increase in energy requirements. The glucose mobilized from the liver is not used and remains in the bloodstream, causing a rise in blood sugar. Moreover, the way in which individuals evaluate events may influence these reactions: an anxious person may anticipate difficulties and amplify the feeling of danger in the face of everyday situations. This anxiety may generate a state of perpetual alarm, which can induce chronic hypercortisolism, likely to facilitate the onset of metabolic syndrome and T2DM.
The Disease as a Source of Stress

From a biopsychosocial perspective, it is important to consider, on the one hand, that stress is a component that may trigger T1DM and T2DM, but, on the other hand, that it can also be the result of the disease itself. Getting ill, in fact, may cause personal and interpersonal conflicts, where the normal rhythms of life and habits are disrupted, forcing the individual to question personal values and long-term objectives [12]. T1DM and T2DM require a complex and largely self-managed treatment which includes the daily use of drugs (insulin or hypoglycemic agents), the regular measurement of blood glucose levels through invasive means and special attention to diet and everyday activities.

In Type 2 diabetics, who are usually elderly, changing established routines may create emotional and cognitive fatigue. In fact, they should reduce the intake of carbohydrates and learn new dietary guidelines and new procedures for the self-administration of drugs. On the other hand, Type 1 diabetics have no dietary restrictions, but they must make sure that insulin units are proportional to the glucose ingested, through constant monitoring. This operation may be complex, especially for those who need to consume their meals in a restricted lapse of time, as happens in many work environments. In addition, in order to obtain metabolic control, meals must be regular, as well as the measurements of glucose levels and insulin administration. These requirements are difficult to reconcile with the habits of a young person and can generate concrete difficulties and discomfort in social interactions.
The need to control aspects of life which were previously considered "normal" can be experienced as a loss of freedom and spontaneity [37]. This is what some authors have called the frustration of chronicity [38], which makes diabetes a disease that can be managed but never defeated, and which has an impact on mood, as evidenced by the higher percentage of anxiety, depression and eating disorders among diabetic subjects. In particular, when the onset is compounded by other changes and transitions, such as adolescence [39] or aging, physical and social identity are affected. Some people may develop an image of their own body as suffering and "broken". Their body is different from the past and, if compared to that of their peers, this diversity is interpreted in a negative way. Moreover, these external references impact self-esteem, and fear of judgment or contempt may force the patient to hide the symptoms of the disease from others, as if they were something to be ashamed of.

Concerning behavior, one can observe different reactions, depending not only on the severity of the clinical situation, but also on personality, self-efficacy [40] and the social support available [41]. For fear of being a burden, some patients isolate themselves, while others show provocative and hostile attitudes towards family and healthcare staff. One of the greatest risks, however, consists in the denial of the disease [42] and of the limitations that it entails. In order to maintain self-esteem, the patient avoids dealing with reality, claiming, in his fragility, that he is omnipotent and refusing treatment. There is a risk that a vicious circle of poor compliance and metabolic decompensation will set in.
Early identification and treatment of these issues may help the patient develop an adaptive style for coping, which will give positive results on compliance and metabolic balance. In addition, it may prevent the risk of long-term complications, which would further deteriorate the quality of life and introduce new stressors and new blows to personal identity [43,44].

The Impact of Stress on the Disease Progression

The role of stress in the etiology of diabetes is difficult to define and measure, but there is significant evidence of its metabolic consequences in individuals already suffering from chronic diseases, such as DM [45]. Psychological strain, in fact, activates neuroendocrine processes which influence the blood glucose level through the release of cortisol, growth hormone and endorphins. Faced with an external threat, the blood glucose level increases, in order to mobilize energy. This reaction has an adaptive importance for a healthy organism, but, in diabetic patients, stress-induced hyperglycemia may aggravate the disease, since hypoglycemic agents cannot counterbalance it.
These mechanisms mostly affect young people, in whom the endocrine system undergoes continuous adaptation, making them particularly sensitive to the effects of environmental stimuli [46]. On the other hand, in the case of the elderly, stress plays a major role in the development of complications such as neuropathy, nephropathy and retinopathy [47]. In addition, negative emotions may reduce and undermine the willingness to comply with treatment and diet. Thus, a vicious circle of nervousness, poor compliance, poor glycemic control and physical vulnerability is established. This, in turn, makes it more difficult for people to cope with any new problems that arise [48,49]. On the contrary, early intervention on the acceptance of the disease seems to start a cycle of good compliance, adequate monitoring and further improvement of the patients' attitude towards diabetes. Therefore, care must be given through a process which supports resilience and personal self-efficacy, favoring the activation of problem-focused coping [50].

Treatment plans should also involve the families, so they can help their family member adapt to the illness. For instance, some research in developmental psychology has shown that there is a connection between parents' stress and that of their diabetic children [51], and this may influence the quality of glycemic control [52]. Family support is also important in adult patients: a partner's attitude, in fact, influences adherence to treatment [53], and the psychological impact of the disease has been shown to be more serious in type 2 diabetics who live alone [54].
Therapeutic Approaches

In view of the foregoing, we suggest that effective care for diabetes requires a global care plan which also takes into consideration the physical, relational and emotional aspects. Clinicians and patients must establish a collaborative relationship, requiring the clinicians to abandon any paternalism [55]. The main objective is to support the individuals' resources, making them protagonists in their own decisions and capable of making changes [56,57]. Patients must feel they have equal dignity in the care administration process and that they are free to speak openly, also about doubts and fears.

Taking charge of people with chronic disease is clinically challenging. It is therefore important that the working group be made up of different professionals with various responsibilities, including psychologists. They promote the acceptance of the disease, supporting the subjects in the management of conflicts and ambivalences, as well as in the search for resources to promote resilience. Psychological support allows individuals to express emotions such as fear of the future, sadness for changes in lifestyle and anger [58]. Patients wonder why this misfortune has happened to them and may consider the disorder as an enemy to contend with, which cannot be defeated. Special attention must be paid when the disease occurs in childhood and adolescence. The diagnosis of diabetes is critical both for the child and the parents. The family must be helped to understand the illness and to be aware of the importance of the treatment in everyday life. They must be helped to accept the implications of the situation, without however interfering with normal developmental milestones. Frequently, at the onset of the disease, children and caregivers express anxiety and depression, which may lead them to become too apprehensive with regard to care. Parents, in these cases, should be encouraged to promote, age permitting, the independence of their offspring, overseeing their social
adaptation and self-esteem. There must be both collaboration and empowerment with teenagers, who need assistance in achieving autonomy in managing their therapy [59]. In most cases, they welcome the involvement of adults who give them support rather than oppress them. Hence, adults should reinforce positive behaviors and avoid excessive reproaches, never replacing the child in his or her decisions. Dealing with this age group, it is essential to always consider the patients' point of view, respecting their doubts, which often arise from the disease compounding with the adolescents' changes in body, feelings and thoughts. Gradually, the acceptance of the diagnosis allows patients to transform diabetes from an invincible enemy into a part of themselves with which they are able to make compromises. This new perspective generates a positive impact on emotions, behaviors and the overall outcome. The disease may remain a source of suffering and a personal limitation, but it can, nonetheless, still allow patients to take actions, build relationships and develop projects for the future.

Conclusions

This review examines recent literature about the psychological consequences of DM1 and DM2, with particular regard to the role played by stress and emotions. According to this literature, stress is often observed in conjunction with the diagnosis of diabetes, and it alters the glucose metabolism and the immune response. Furthermore, the disease itself is a source of stress, because it requires considerable changes in lifestyle, thereby influencing the patient's identity. The ability to cope with these challenges, finally, may affect the actual management of the therapy and glycemic control. Our conclusion is that patients with DM require global care from a multidisciplinary team who is there to listen to their experiences and emotions, with the goal of helping them to accept and manage the disease.
Seeding for sirtuins: microseed matrix seeding to obtain crystals of human Sirt3 and Sirt2 suitable for soaking

In the present study, microseed matrix seeding was successfully applied to obtain a large number of crystals of the human sirtuin isotypes Sirt2 and Sirt3. These crystals appeared predictably in diverse crystallization conditions, diffracted to a higher resolution than reported in the literature and were subsequently used to study the protein–ligand interactions of two indole inhibitors.

Introduction

Sirtuins are a unique NAD+-dependent enzyme family conserved from bacteria to humans that catalyse the cleavage of acyl groups from the ε-amino group of lysines (Brachmann et al., 1995). Originally thought to function only as lysine deacetylases, it was recently reported that sirtuins are also able to cleave off other acyl groups (Feldman et al., 2013). Sirt5 efficiently converts proteins containing succinylated, malonylated and glutarylated lysines, while the isotypes Sirt1, Sirt2, Sirt3 and Sirt6 preferentially cleave off fatty-acid moieties from the ε-amino group of lysines (Du et al., 2011; Bao et al., 2014; Tan et al., 2014; Liu et al., 2015; Teng et al., 2015). Additionally, several sirtuin isotypes have also been described to be ADP-ribosylases (Du et al., 2009; Pan et al., 2011). However, the physiological roles of post-translational modifications other than acetylation are far from being understood and still need to be further investigated. The initial interest in sirtuins was provoked by reports that sirtuins are critical regulators of aging and that overexpression of specific isotypes may prolong lifespan and impede the onset of age-related diseases by imitating the state of calorie restriction (Howitz et al., 2003; Bordone et al., 2007). However, extensive research has revealed that sirtuins have many other cellular functions (reviewed in Schemies et al., 2010).
Thus, they are involved in the regulation of cell-cycle control (Vaquero et al., 2007; Serrano et al., 2013), metabolism (Hirschey et al., 2010), stress response (Bell et al., 2011), autoimmunity (Chuprin et al., 2015) and mitochondrial biogenesis (Milne et al., 2007). Their misregulation has been linked to a variety of pathological processes involved in neurodegenerative diseases (Outeiro et al., 2007) and metabolic disorders (Hirschey et al., 2011). Generally considered as tumour suppressors (Kim et al., 2010, 2011), Sirt1, Sirt2 and Sirt3 have also been shown to assume a role in promoting tumorigenesis in various forms of cancer (Alhazzazi et al., 2011; Liu et al., 2013; Yang et al., 2013). Sirtuins are therefore considered to be potential drug targets. They may also represent targets for antiparasitic drugs (Lancelot et al., 2013). However, cellular studies as well as animal studies have been hampered by the lack of suitable modulators. Of the seven members of the human sirtuin family, Sirt2 is mainly found in the cytoplasm, while Sirt3 is the main mitochondrial deacylase (North et al., 2003; Lombard et al., 2007). Sirtuins consist of a highly conserved catalytic deacylase domain of around 275 amino acids that is flanked by unstructured N- and C-termini that differ in their length and in their sequence. They have been grouped into four different classes, of which the class I sirtuins Sirt1, Sirt2 and Sirt3 share the highest sequence identity (Fig. 1a). The crystal structures of the deacylase domain of all sirtuin isotypes except Sirt4 and Sirt7 in differently ligated states have been solved to date (Finnin et al., 2001; Jin et al., 2009; Du et al., 2011; Pan et al., 2011; Zhao et al., 2013; Davenport et al., 2014). They show a two-domain structure consisting of a large Rossmann-fold domain and a smaller zinc-binding domain that is typical of sirtuins (Fig. 1b). The two domains are separated by a large cleft that constitutes the catalytic core.
Both domains are connected via a series of loops that play important roles during cofactor binding, acyl-lysine binding and the 'open-to-closed' rotation. During catalysis, NAD+ adopts a kinked conformation which brings the C1′ of its ribose moiety into proximity for a nucleophilic attack by the carbonyl O atom of the acetyl-lysine that is inserted in a hydrophobic tunnel (Fig. 1c). On cleavage of nicotinamide and the formation of an alkylimidate complex, the acetyl group is transferred via several intermediates from the ε-amino group to the ADP-ribose (ADPR) moiety, generating 2′-O-acetyl-ADPR and releasing the deacetylated lysine. Deacylation is thought to proceed in a similar fashion (Feldman et al., 2015).

Figure 1. The NAD+-dependent deacylases share a highly conserved catalytic core that consists of a larger NAD+-binding domain and a smaller zinc-binding domain. (a) Structural sequence alignment of the deacylase domains of human Sirt1 (UniProt Q96EB6; amino acids 244-495), human Sirt2 (UniProt Q8IXJ6; amino acids 65-337) and human Sirt3 (UniProt Q9NTG7; amino acids 126-379). The Sirt2-specific insertion is marked in yellow. The deacylase domains of Sirt1, Sirt2 and Sirt3 share a sequence identity of 44%. (b) Superposition of the deacylase domains of apo Sirt1 (teal; PDB entry 4ig9), apo Sirt2 (brown; PDB entry 3zgo) and apo Sirt3 (light grey; PDB entry 3gls). The NAD+-binding domain of Sirt1, Sirt2 and Sirt3 is very similar, while the zinc-binding domain is structurally more variable. (c) Proposed mechanism of sirtuin-catalyzed deacetylation. The carbonyl O atom of the acetyl group attacks the C1′ atom of the ribose. On nicotinamide cleavage, the acetyl group is then transferred via an alkylimidate and a bicyclic intermediate to the ADP-ribose moiety.
Structural information on sirtuin inhibition is still rather scarce and only recently have several structures of sirtuins in complex with inhibitors been reported (Disch et al., 2013; Gertz et al., 2013; Zhao et al., 2013; Nguyen, Schafer et al., 2013; Yamagata et al., 2014; Rumpf et al., 2015). Additional work is still needed to develop rationales for the design of potent and selective modulators that are suitable for cellular studies. Crystallization of proteins often turns out to be a very demanding objective and initial crystals usually require optimization for diffraction experiments. Seeding is one of the methods that has helped to facilitate the optimization process (Stura & Wilson, 1991; Bergfors, 2003; D'Arcy et al., 2003). It has been shown to enhance the reproducibility of crystal formation and to improve crystal morphology as well as diffraction. Ireton and Stoddard extended the concept of seeding to what they termed microseed matrix seeding (MMS), in which microseeds are transferred to a matrix of diverse buffer conditions (Ireton & Stoddard, 2004). D'Arcy and coworkers incorporated the addition of microseed solutions to screening procedures with standard crystallization robots (D'Arcy et al., 2007). MMS is also widely used in drug-discovery programs. It is usually the best way to obtain large numbers of crystals for high-throughput soaking experiments. Cross-seeding, an approach in which seed crystals of one homologue are used to initiate crystal formation of another one, has also been applied successfully (Obmolova et al., 2010). Current developments in the field of MMS have recently been reviewed and summarized (D'Arcy et al., 2014). In this work, we describe for the first time the successful application of MMS techniques to the human isotypes Sirt2 and Sirt3 from the sirtuin family. We obtained well diffracting crystals of Sirt3 in its apo form and of Sirt2 in complex with the product analogue ADP-ribose (ADPR) in diverse crystallization conditions.
Using MMS, crystal formation was predictable, less error-prone and yielded a large number of crystals. The crystals were used to obtain crystal structures of Sirt2 in complex with ADPR and of apo Sirt3 at an improved resolution. They also proved to be perfectly suited for the investigation of protein–ligand interactions and were subsequently used to solve two novel crystal structures of Sirt2 in complex with ADPR and indole inhibitors. Both Sirt2–ADPR–indole complexes unexpectedly contained two inhibitor molecules in the active site of Sirt2, highlighting the specific characteristics of Sirt2 within the sirtuin family.

Crystallization and soaking experiments

All crystallization trials were performed in 96-well plates (Intelli-Plate 96-3 Low Profile, Art Robbins Instruments, Sunnyvale, USA) using an OryxNano pipetting robot (Douglas Instruments, Berkshire, England). Index screen was obtained from Hampton Research (Aliso Viejo, USA). The composition of the crystallization solutions in the Index screen can be found at http://hamptonresearch.com/documents/product/hr005585_2-134_formulations.pdf. Crystal formation was monitored with a Minstrel HT UV imaging unit (Rigaku, Kent, England). Initial crystals that were used for MMS of apo Sirt3 (18.5 mg ml−1) were obtained in a solution consisting of 0.2 M Li2SO4, 60%(v/v) Tacsimate (Hampton Research) at pH 7.0 and 4°C with a 1:1 ratio of Sirt3 solution to reservoir solution. Initial crystals of the Sirt2(56-356)–ADPR complex (13 mg ml−1, 20 mM ADPR from a 1 M stock solution in 1 M Tris-HCl buffer pH 9.0) were obtained in a solution consisting of 17.5%(w/v) PEG 10 000, 0.1 M ammonium acetate in 0.1 M bis-tris buffer pH 6.75 at 20°C using a 1:3 ratio of Sirt2–ADPR solution to reservoir solution.
Microseed solutions were prepared as follows: 5-10 crystals were harvested, washed, diluted with mother liquor and transferred into an Eppendorf tube, where they were crushed with a seed bead [five cycles of slight vortexing (10 s) followed by incubation on ice (20 s)]. The supernatant was then used for crystallization trials. For crystallization trials using microseed solutions the drop consisted of 17%(v/v) microseed solution, 50-33%(v/v) Sirt3 solution (18.5 mg ml−1) or Sirt2–ADPR solution (13 mg ml−1, 20 mM ADPR) and 33-50%(v/v) reservoir solution. Control experiments to verify that Sirt3 crystal formation was dependent on microseeds were performed with the mother liquor only [60%(v/v) Tacsimate pH 7.0 or 60%(v/v) Tacsimate, 0.2 M Li2SO4 pH 7.0]. Crystals of Sirt2(50-356) in complex with ADPR [20 mg ml−1, 10 mM NAD+ (Sigma-Aldrich, Deisenhofen, Germany), 100 mM stock solution in 25 mM HEPES, 200 mM NaCl, 5%(v/v) glycerol, 5 mM β-mercaptoethanol pH 7.5] were obtained in a solution consisting of 30%(w/v) PEG 3350, 0.2 M NaCl in 0.1 M bis-tris buffer pH 6.25 at 4°C. The crystals formed after 3-4 d and were mounted on nylon loops before flash-cooling in liquid nitrogen. Apo Sirt3 crystals were obtained by MMS in a solution consisting of 25%(w/v) PEG 3350, 0.2 M MgCl2 in 0.1 M bis-tris buffer pH 5.5 at 4°C. The crystals were cryoprotected by the addition of PEG 3350 to a final concentration of 30%(w/v) PEG 3350, mounted on nylon loops and flash-cooled in liquid nitrogen. For soaking experiments with indole inhibitors, crystals of Sirt2(56-356) in complex with ADPR were obtained via MMS in a solution consisting of 18%(w/v) PEG 10 000 in 0.1 M bis-tris buffer pH 5.75 at 20°C. These crystals formed after 1 d and were then soaked in a buffer consisting of 18%(w/v) PEG 10 000, 10%(v/v) DMSO, 0.1 M bis-tris buffer pH 5.75 and either 10 mM EX527 (Sigma-Aldrich, racemic) or 10 mM CHIC35.

Table 1. Data-collection and refinement statistics.
Data collection

Data were collected on beamlines X06SA (Sirt2(50-356)–ADPR) and X06DA (apo Sirt3) at the Swiss Light Source, Villigen, Switzerland using Pilatus detectors (Dectris, Baden, Switzerland) at 100 K with oscillations of 0.2° or 0.5° and an X-ray wavelength of 1.0 Å. Data for Sirt2(56-356) in complex with ADPR and EX243 and data for Sirt2(56-356) in complex with ADPR and CHIC35 were collected using a MicroMax-007 HF rotating-anode X-ray generator (Rigaku) at a wavelength of 1.5418 Å equipped with a MAR345 image-plate detector (MAR Research, Hamburg, Germany). For each structure, one single crystal was used. Data were processed with MOSFLM (Leslie & Powell, 2007) or XDS (Kabsch, 2010) and scaled using the CC1/2 criterion (Karplus & Diederichs, 2012) with AIMLESS (Evans & Murshudov, 2013) from the CCP4 suite. Crystal twinning as well as the twin fractions were detected using phenix.xtriage from the PHENIX suite (Adams et al., 2010). Twin refinement was achieved with the intensity-based twin-refinement option of REFMAC5. Ligands were generated with the Grade web server (Global Phasing Ltd, Cambridge, England) and were placed into 2Fo − Fc electron-density maps using AFITT-CL (v.2.1.0; OpenEye Scientific Software, Santa Fe, USA). All residues of Sirt2 and Sirt3 except those of the flexible N- and C-termini are defined by electron density. In the crystal structures of the Sirt2–ADPR–indole complexes, a methionine and a histidine are seen at the N-terminus. They originate from the NdeI restriction site of the pET vector. All structures were validated using PROCHECK (Laskowski et al., 1993) and the MolProbity server (Chen et al., 2010). Ramachandran plots for each structure are shown in Supplementary Fig. S3. The data-collection and refinement statistics are summarized in Table 1.

Seeding

Initial crystals of apo Sirt3 were obtained in a solution consisting of 0.2 M lithium sulfate, 60%(v/v) Tacsimate at 4°C (Figs. 2a and 2b).
The epitaxically twinned crystals appeared after one week and diffracted to 2.4-3.0 Å resolution. Further optimization of the crystallization conditions did not improve either the crystal morphology or diffraction. Furthermore, initial soaking experiments failed owing to the insolubility of our drug-like ligands in the highly polar Tacsimate condition. We therefore opted for seeding. The use of microseeds in the Index screen (Hampton Research), as described by D'Arcy et al. (2007), proved to be very successful. In the presence of the microseeds, apo Sirt3 crystallized rapidly at different temperatures, in diverse crystallization conditions and in a pH range from 3.5 to 8.5 (Fig. 2c). The best results were obtained with a microseed solution of 1-2 crystals per 10 ml reservoir solution. In most cases, mainly in acidic PEG 3350-containing conditions, apo Sirt3 formed single crystals (Figs. 2d, 2e, 2f and 2g) predictably within several days. They showed an improved diffraction pattern to a resolution of below 2.0 Å. During the initial crystallization screenings we also found that the presence of lithium sulfate was an important nucleation factor, as the initial crystals could only be obtained in the presence of lithium sulfate. We also investigated the influence of lithium sulfate on the crystallization of apo Sirt3 in the presence of microseeds using the Index screen (Hampton Research). Again, apo Sirt3 crystallized rapidly in diverse conditions, over a wide pH range and at different temperatures within days (Supplementary Fig. S1). In a control experiment, we also performed crystallization trials in a similar fashion with the same seeding solutions but without microseeds. Here, we could only observe crystal formation in two different crystallization conditions (data not shown). The formation of the apo Sirt3 crystals was therefore dependent on the presence of microseeds.
Crystal structure of human apo Sirt3

Diffraction data from a crystal obtained using microseeds were used to solve the structure of the apo form of human Sirt3 by molecular replacement at a resolution of 1.83 Å [the resolution of the crystal structure of apo Sirt3 deposited by Jin et al. (2009) with PDB code 3gls is 2.7 Å]. Sirt3 crystallized in space group P2₁ with six monomers in the asymmetric unit. All monomers feature the typical sirtuin-like two-domain structure with a larger Rossmann-fold domain and a smaller zinc-binding domain and adopt the 'open' conformation. However, they do show some differences within the asymmetric unit (r.m.s.d. ranging from 0.2 to 0.8 Å for all Cα atoms of each monomer). A similar behaviour has also been observed in the published crystal structure of human apo Sirt3 (Jin et al., 2009). Superposition of chain A of the published apo Sirt3 structure (PDB entry 3gls) with chain A of the improved structure of Sirt3 shows a high similarity (r.m.s.d. of 0.25 Å for all Cα atoms). Similar to the published X-ray structure of apo Sirt3, the acyl-lysine binding site of all monomers is occupied by a PEG molecule. This has also been observed in other sirtuin structures (Avalos et al., 2004, 2005).

Figure 3. Improved crystal structure of human Sirt2 in complex with the product analogue ADPR. (a) Superposition of the published X-ray structure of human Sirt2 in complex with ADPR (PDB entry 3zgv, chain A, yellow) with the improved X-ray structure of the Sirt2–ADPR complex (chain A, dark grey) shows that the structures are very similar (r.m.s.d. of 0.18 Å for all Cα atoms). (b) Close-up view of the active site of the improved Sirt2–ADPR complex. ADPR (overall B factor of 13.8 Å²) is shown as aquamarine sticks and the σA-weighted 2Fo − Fc electron-density map is contoured at 1.0σ. A σA-weighted Fo − Fc electron-density OMIT map of ADPR is shown in Supplementary Fig. S4. (c) Close-up view of the hinge loop of Sirt2. In the published X-ray structure of Sirt2 parts of this loop were not defined by the electron density. In the improved structure of Sirt2, the conformations of all amino acids that form this loop are well defined by the electron density. The σA-weighted 2Fo − Fc electron-density map is also contoured at 1.0σ. A σA-weighted Fo − Fc electron-density OMIT map of the hinge loop is shown in Supplementary Fig. S4. (d) Schematic representation of crystal hits for the complex of human Sirt2(56-356) and ADPR in a screen to optimize the initial crystallization conditions. Two different crystallization-drop compositions were used. In the presence of microseeds, Sirt2(56-356) crystallizes more rapidly and yields more crystals using diverse drop compositions, pH and PEG 10 000 concentrations. A red condition indicates crystals in both drops, while orange indicates crystal formation in only one. (e) Representative Sirt2–ADPR crystal after 5 d in a solution consisting of 15%(w/v) PEG 10 000 in 0.1 M bis-tris buffer pH 6.0. (f) UV image of (e). (g) Representative crystals of the Sirt2–ADPR complex obtained in the presence of microseeds in the solution mentioned in (e). (h) UV image of (g).

Improved crystal structure of human Sirt2 in complex with ADP-ribose

We used two different truncated forms of Sirt2 (Sirt2(50-356) and Sirt2(56-356)) for crystallization trials of Sirt2. Cross-seeding using microseed solutions of apo Sirt3 failed. However, we succeeded using conventional screening methods and obtained crystals of both Sirt2 forms in complex with ADPR. Initially, Sirt2(50-356) crystallized in the presence of the cosubstrate NAD+ (2 mM) after a month in Index screen condition F10 (Hampton Research) with a 1:1 ratio of reservoir solution to protein solution at 4°C. This crystallization condition was optimized to 30%(w/v) PEG 3350, 0.2 M NaCl in 0.1 M bis-tris buffer at pH 6.25 and 4°C.
Additionally, the NAD+ concentration and the protein:reservoir ratio were increased to 20 mM and 3:1, respectively. Thus, Sirt2(50-356) crystals appeared within 3-4 d. Using the diffraction data obtained from one of these crystals, we determined the space group as P2₁2₁2₁ and were able to solve the structure of Sirt2 by molecular replacement at a resolution of 1.63 Å (Figs. 3a and 3b; the resolution of the deposited crystal structure of Sirt2–ADPR was 2.27 Å; PDB entry 3zgv; Moniot et al., 2013). Each Sirt2 molecule contains the product analogue ADPR formed through the hydrolysis of NAD+. The structure of Sirt2(50-356) in complex with ADPR is very similar to the structure of Sirt2–ADPR recently published by Moniot et al. (2013), but everything is observed in greater detail. The asymmetric unit also contains two very similar monomers (r.m.s.d. of 0.19 Å for all Cα atoms) that adopt the 'closed' sirtuin conformation. In contrast to the recently published structure, we were also able to include all of the residues of one loop of the hinge region (amino acids 136-144) that connects the Rossmann-fold domain to the zinc-binding domain in our model (Fig. 3c). However, the handling of the Sirt2(50-356) crystals turned out to be difficult and the crystals obtained were pseudomerohedrally twinned. We were also able to obtain crystals with the other truncated form, Sirt2(56-356), in the presence of ADPR. These crystals formed after just 1 d in an acidic solution of PEG 10 000 at 20°C. They crystallized in the same space group, diffracted equally well, were easier to handle and were not twinned. We also performed MMS with these crystals. As observed for apo Sirt3, the use of microseeds significantly improved the crystallization process in either fine screens or initial screens such as Index screen (Hampton Research; see Figs. 3d, 3e, 3f, 3g and 3h and Supplementary Fig. S2). Again, the crystal count and the predictability of crystal formation were better compared with conventional screening methods set up in the absence of microseeds, while the diffraction quality was equally good.

Figure 4. The indole inhibitors CHIC35 and EX243 occupy the extended C-site (ECS) as well as the selectivity pocket with two molecules and induce a conformational change at the hinge region. (a) Chemical structures of CHIC35 and EX527 (racemic) and their IC50 values for the inhibition of deacetylation by Sirt1, Sirt2 and Sirt3 (taken from Napper et al., 2005). The S-enantiomer of EX527 is termed EX243. (b) Surface representation of the catalytic core of Sirt2 in complex with ADPR. The corresponding subpockets of the catalytic core are labelled according to the literature. (c) An overlay of the improved structure of Sirt2–ADPR (grey), the Sirt2–ADPR–EX243 complex (light pink) and the Sirt2–ADPR–CHIC35 complex (salmon) reveals only minor differences in the overall structure; however, significant differences can be observed at the hinge region. (d) Close-up view of the hinge region of the structures shown in (c). The binding of the two indole molecules induces a 6 Å shift of one hinge loop. (e, f) Close-up view, using the same orientation as shown in (c), of the active site of the Sirt2–ADPR–CHIC35 complex (e) and the Sirt2–ADPR–EX243 complex (f). ADPR is shown as turquoise (Sirt2–ADPR–CHIC35) or light yellow (Sirt2–ADPR–EX243) sticks. CHIC35 is shown as pale green sticks and EX243 as light blue sticks. To better differentiate between the two indole molecules, they are termed the ECS molecule and the hinge molecule, respectively. The σA-weighted 2Fo − Fc electron-density map is contoured at 1.0σ. The cofactor-binding loop is not shown for the sake of clarity. σA-weighted Fo − Fc electron-density OMIT maps for ADPR and the indole inhibitors are shown in Supplementary Figs. S4 and S5.
Crystal structures of Sirt2 in complex with ADPR and the indole inhibitors EX243 and CHIC35

Owing to the lack of potent Sirt3 modulators, we focused our studies on Sirt2 to validate the suitability of the MMS crystals for soaking experiments. Napper and coworkers published a set of indole inhibitors that show a preference for inhibiting Sirt1 but also block Sirt2 and Sirt3 (Napper et al., 2005; Gertz et al., 2013). Of these indoles, EX527 (selisistat) and CHIC35 are the most potent inhibitors (Fig. 4a). Both have been widely used to study sirtuins in vivo (Solomon et al., 2005; Smith et al., 2014), and EX527 is the first sirtuin inhibitor to date that has been evaluated in clinical trials for the treatment of Huntington's disease (Süssmuth et al., 2015; Westerberg et al., 2015). Furthermore, the indole inhibitors are some of the few inhibitors that have been cocrystallized with human Sirt1, human Sirt3 and the archaeal sirtuin Sir2Tm (Zhao et al., 2013). The crystal structures revealed that the more potent S-enantiomer of the indoles is bound to the active site of Sirt1 and Sirt3. The carboxamide of the indole occupies the C-pocket and mimics the physiological inhibitor nicotinamide (Fig. 4b). The chlorinated hydrophobic indole moiety also extends into another binding pocket adjacent to the C-pocket that has been termed the extended C-site (ECS). In addition to the indole, all sirtuin–indole complexes contain either the cofactor NAD+ or the product analogue ADPR. This is in line with kinetic studies, which concluded that the presence of NAD+ or the product analogue ADPR is essential for binding of the indole inhibitor (Napper et al., 2005; Gertz et al., 2013; Zhao et al., 2013). We therefore assumed that the Sirt2–ADPR crystals were perfectly suited for soaking experiments with the indole inhibitors.
Soaking with both indole inhibitors proved to be successful, and DMSO concentrations of up to 10%(v/v) with soaking durations of 90 min did not deteriorate the diffraction pattern of the crystals. Using the diffraction data obtained from the soaked crystals, we were able to solve the crystal structures of Sirt2 in complex with ADPR and either EX243 (EX243 is the S-enantiomer of EX527; Sirt2–ADPR–EX243 structure) or CHIC35 (Sirt2–ADPR–CHIC35 structure) by molecular replacement. The space group of the soaked crystals was determined to be P2₁2₁2₁, the same as that of the unsoaked Sirt2–ADPR crystals. The asymmetric unit contains two monomers that show no significant differences [r.m.s.d.s of 0.29 Å (Sirt2–ADPR–EX243) and 0.23 Å (Sirt2–ADPR–CHIC35) for all Cα atoms]. The following descriptions are therefore based on chain A. The Sirt2–ADPR–indole complexes adopt the 'closed' conformation and share a very close resemblance to the Sirt2–ADPR structure described in §3.3 [r.m.s.d.s of 0.23 Å (Sirt2–ADPR–EX243) and 0.14 Å (Sirt2–ADPR–CHIC35) for all Cα atoms; Fig. 4c]. The only significant differences can be observed at the hinge region (Fig. 4d). Here, the indole inhibitors induce a 6 Å shift of one hinge loop (amino acids 136-144; Fig. 4d). The ADPR molecules of both structures assume a position that is almost identical to that observed in the Sirt2–ADPR complex. To our surprise, each Sirt2 molecule of both Sirt2–ADPR–indole complexes contains two identical indole molecules in an S-configuration (Figs. 4e and 4f). One molecule occupies the C-pocket as well as the extended C-site, and we will refer to this molecule as the 'ECS molecule'. The other molecule is found at the hinge region in a pocket that we recently described as the selectivity pocket (Rumpf et al., 2015). We will refer to this molecule as the 'hinge molecule'.
Both indole molecules of each structure are well defined by electron density; however, the higher B factors of the hinge molecule indicate that the hinge molecule is not present in all Sirt2 molecules or that it is more mobile within the selectivity pocket of Sirt2 (e.g. B factors of 24.1 Å² for the ECS molecule of Sirt2–ADPR–EX243 and 45.6 Å² for the hinge molecule of Sirt2–ADPR–EX243). The ECS molecule in both Sirt2–ADPR–indole complexes is involved in a network of hydrophilic and hydrophobic interactions with Sirt2, ADPR and two highly coordinated water molecules (Figs. 5a and 5c). The carboxamide of both indole inhibitors binds in a similar fashion as the carboxamide moiety of the physiological inhibitor nicotinamide and hydrogen-bonds to the highly conserved residues Ile169 and Asp170 and, via one of the water molecules (W24 in Sirt2–ADPR–CHIC35 and W206 in Sirt2–ADPR–EX243), to Ala85, Ile93 and Pro94. The amide of the indole hydrogen-bonds to Gln167 and, via the other highly coordinated water (W3 in Sirt2–ADPR–CHIC35 and W203 in Sirt2–ADPR–EX243), to Asp168 and ADPR. The ECS molecule also interacts with the hydrophobic side chains of Ile93 and Phe96 as well as the hinge molecule. The binding of the hinge molecule is mainly driven by hydrophobic interactions with the side chains of Ala135, Leu138, Tyr139, Phe143 and Phe190. Additionally, the carboxamide moiety of the hinge molecule forms hydrogen bonds to the backbone carbonyl O atoms of Leu138, Tyr139 and Gly141. The interaction pattern of both indole inhibitors in the Sirt2–ADPR–indole complexes is nearly identical. Slight differences can be observed for the conformation of the side chains of Tyr139 and Phe190. Similar interaction patterns for the ECS molecule of both Sirt2–ADPR–indole complexes have also been observed in the crystal structures of Sirt1 and Sirt3 (Zhao et al., 2013).
Discussion and conclusions

In the presented work, we used an MMS approach to obtain crystals of the potential drug targets human Sirt2 and human Sirt3. With microseeds, we were able to obtain large numbers of crystals predictably in solutions of versatile compositions. Crystal formation was not limited to a specific pH and was not dependent on the presence of a specific crystallization-condition component.

Figure 5. The indole inhibitors CHIC35 and EX243 interact with the residues of the active site of Sirt2 in a similar fashion as observed in other sirtuin–indole complexes. (a) The residues that interact with the two molecules of CHIC35 (a) or EX243 (c) are shown as sticks. Asn168 and Ile169, which are located beneath the ECS molecules, are not labelled. Pro94, Phe96, Leu103, Phe119, Leu134 and Leu138 are not shown for the sake of clarity. The carboxamide moiety of the ECS molecule of both Sirt2–ADPR–indole complexes hydrogen-bonds to the highly conserved residues Asp170 and Ile169 and, via a structural water, to Ala85, Ile93 and Pro94 (not shown). For both inhibitors, the amide of the ECS molecule interacts with Gln169 and, via another structural water, with ADPR and Asn168. The aromatic chlorinated indole protrudes into the hydrophobic extended C-site. The binding of the hinge molecule is mainly driven by hydrophobic interactions with the side chains of Leu103, Phe119, Phe131, Ala135, Leu138 (not shown), Tyr139 and Phe190. Additionally, the carboxamide moiety of the hinge molecule also hydrogen-bonds to the backbone carbonyl O atoms of Leu138 (not shown), Tyr139 and Gly141 and, via a structural water, to Asp170. Waters are shown as yellow spheres and hydrogen bonds are shown as grey dashes. (b, d) The binding mode of the ECS molecule of both Sirt2–ADPR–indole complexes is very similar to that observed in the analogous complexes of Sirt1 (brown, PDB entry 4i5i) or Sirt3 (ruby, PDB entry 4bvb).
As we sought to use these crystals of Sirt2 and Sirt3 for structure-based inhibitor-development studies, the MMS approach provided diverse crystallization conditions to perform soaking experiments. This would not have been possible with such ease using conventional screening approaches. The MMS crystals also diffracted to a higher resolution or equally well in comparison to crystals obtained by conventional screening methods. Using these MMS apo Sirt3 crystals, we were able to solve the X-ray structure of apo Sirt3 at a higher resolution than in the deposited X-ray structure. Additionally, we also provide an improved crystal structure of the Sirt2-ADPR complex. To validate the suitability of the Sirt2-ADPR crystals for the investigation of protein-ligand interactions, we also solved the X-ray structures of Sirt2-ADPR in complex with two indole inhibitors termed CHIC35 and EX243. To our surprise, each Sirt2 molecule contained well defined electron density for two indole molecules. One indole mimics the physiological sirtuin inhibitor nicotinamide and occupies the C-pocket and the adjacent extended C-site. Its binding mode is similar to that observed in the crystal structures of the Sirt1-indole and Sirt3-indole complexes. The other inhibitor molecule binds to the selectivity pocket of the hinge region. Such a second indole molecule is not found in the X-ray structures of the Sirt1-NAD+-CHIC35 or the Sirt3-ADPR-EX243 complexes, even though it has to be noted that the authors used slightly lower indole concentrations during crystallization (Zhao et al., 2013). The absence of a hinge molecule in the Sirt1-indole and Sirt3-indole complexes is also supported by the fact that the hinge loops of Sirt1 or Sirt3 adopt a similar conformation to that observed in the Sirt2-ADPR complex lacking the indole inhibitors (Fig. 6a).

Figure 6: One of the hinge loops of Sirt2 exhibits a high flexibility and seems to be important for inhibitor binding. (a) Superposition of the hinge loops of Sirt1, Sirt2 and Sirt3 in complex with either EX527 or CHIC35 (Sirt1, brown cartoon; Sirt2, deep salmon; Sirt3, ruby) and of the hinge loop of Sirt2-ADPR lacking an indole (dark grey). Only the hinge loop of the Sirt2-ADPR-EX243 complex adopts a different conformation, while the hinge-loop conformation of the complexes of Sirt1, Sirt2 and Sirt3 with indole is similar to that observed in the Sirt2-ADPR complex. This loop shift is induced by the binding of the hinge molecule. (b) Superposition of the Sirt2-SirReal2-NAD+ complex structure (PDB entry 4rmg; slate blue cartoon with SirReal2 in light pink sticks and NAD+ in light orange sticks) with the crystal structure of Sirt2-ADPR-EX243 (deep salmon cartoon with ADPR in yellow sticks and EX243 in light blue sticks) reveals that the hinge molecule of the Sirt2-ADPR-EX243 complex occupies the selectivity pocket that is occupied by the dimethylpyrimidine moiety (DMP) in the Sirt2-SirReal2 complex. (c) Superposition of the Sirt2-thiomyristoylated peptide complex structure (Sirt2-Thiomyr-peptide; PDB entry 4r8m; turquoise cartoon with the thiomyristoylated peptide shown as dark blue sticks) with the crystal structure of Sirt2-ADPR-EX243 shows that the hydrophobic myristoyl moiety also occupies the selectivity pocket that is occupied by the hinge molecule in the Sirt2-ADPR-EX243 complex. (d) Comparison of the conformation of the hinge loop of Sirt2 (amino acids 136-144) in the Sirt2-ADPR complex (dark grey), the Sirt2-ADPR-EX243 structure (deep salmon), the complex of Sirt2, NAD+ and SirReal2 (slate blue) and the Sirt2-thiomyristoyl peptide structure (dark turquoise). This hinge loop adopts a different conformation in all four crystal structures. Tyr139 and Pro140 of the Sirt2-thiomyristoyl peptide complex are not defined by the electron density.
A conformational change of the hinge loop is only seen in the case of the Sirt2-ADPR-indole complexes. Occupation of the selectivity pocket has also been observed in other crystal structures of Sirt2 by the dimethylpyrimidine moiety (DMP) of the Sirt2-selective inhibitor SirReal2 ( Fig. 6b; Rumpf et al., 2015) as well as the long hydrophobic fatty-acid alkyl chain of either thiomyristoylated or myristoylated lysine-containing oligopeptides (Feldman et al., 2015;Teng et al., 2015). Both moieties also extend into the selectivity pocket and partially overlap with the hinge molecule of the Sirt2-ADPR-indole complex (exemplified by the superposition of the Sirt2-ADPR-EX243 complex with the Sirt2-SirReal2 complex or the Sirt2-thiomyristoylated peptide structure). The hinge molecule, the DMP moiety of SirReal2 or the alkyl chain of the fatty-acid acyl groups induce diverse conformations of the hinge loop (amino acids 136-144) and enlarge the selectivity pocket significantly (Fig. 6d). Such variable structural changes have not been observed for either Sirt1 or Sirt3 (Supplementary Fig. S7). The hinge molecule may be of no physiological relevance for Sirt2 inhibition, but it highlights the adaptability of the hinge region of Sirt2 and the selectivity pocket. In conclusion, this example of MMS underlines the strength of seeding techniques to obtain crystals for structure-activity studies during drug development. Further experiments are still needed to explore the hinge region of sirtuins, to verify the different characteristics within the sirtuin family and to exclude the possibility that the observations are crystallographic artefacts. However, so far, targeting the selectivity pocket with hydrophobic moieties and exploiting the flexibility of the hinge loop of Sirt2 seems to be a plausible strategy in the search for new Sirt2-selective inhibitors (Fig. 7).
Related literature

The following references are cited in the Supporting Information for this article: Clark & Labute (2007) and Szczepankiewicz et al. (2012).

Figure 7: Occupation of the selectivity pocket of Sirt2 with large hydrophobic moieties such as EX243 or the DMP group of SirReal2 induces a conformational shift of the hinge loop and consequently significantly enlarges the selectivity pocket, which leads to an isotype-selective inhibition of Sirt2. Exploiting this binding pocket and the flexibility of the hinge loop of Sirt2 with large hydrophobic moieties presents a potential strategy for the development of Sirt2-selective inhibitors.
Talent Acquisition Implementation with People Analytic Approach

Abstract: The current human resource (HR) fulfillment conditions in this company are still quite low. This can be seen from the percentage of HR fulfillment of approximately 60% of the total HR needs. The strategy of fulfilling human resources through the recruitment and selection process must be carried out quickly and optimally. The problem that arises relates to the optimization of the talent acquisition process, so that the results obtained are in accordance with the target and have the required quality. In this study, data analysis was carried out using the random forest method. The method is used to develop a model that can predict the pass level of participants in recruitment and selection quickly and precisely in accordance with the profile of each participant, and that can provide insight into the projected individual performance of each participant if accepted at the company, to assist management in making decisions about the participants accepted in the recruitment and selection process. The data population used is data on recruitment and selection participants in 2018. To carry out the process of predicting the graduation rate of prospective employees, data on the prospective employees who registered for the recruitment and selection process are used, with a total of 17,294 people. The analytical tool in this study uses a people analytics approach. The conclusion of this study is that applying people analytics to the talent acquisition process can be done using the Random Forest Classification method; this method aims to determine the class of each predicted data point. Modeling has been carried out to predict performance achievements, but the performance of the model does not yet show a level of significance in accordance with the standard level of confidence, which is still below 0.05.

Keywords: recruitment and selection; talent acquisition; people analytic; classification; random forest

Budapest International Research and Critics Institute-Journal (BIRCI-Journal), Volume 4, No 1, February 2021, Page: 204-215. e-ISSN: 2615-3076 (Online), p-ISSN: 2615-1715. www.bircu-journal.com/index.php/birci, email: birci.journal@gmail.com

I. Introduction

Based on research by Gavin and William (2018) regarding Talent Rising: People Analytics and Technology Driving Talent Acquisition Strategy, many large or advanced organizations and companies have successfully used people analytics as a tool to deal with challenges related to HR problems, such as talent acquisition, talent pipeline planning, organizational development, engagement, and learning and talent development. However, the use of technology must be handled very carefully, considering that human factors are the main factors that can guarantee long-term success. Based on Randhawa's research (2017), talent acquisition is a strategic approach to identify, attract, and obtain the best talent to meet dynamic business needs effectively and efficiently. As described in Hasibuan (2012: 46), with the implementation of a good selection, the employees who are accepted will be more qualified, so that coaching, development, and employee management will be easier.

The goal of every company is for its business activities to grow and develop over a long period of time. Competitiveness, innovation, creativity, and the quality of the products produced must be in accordance with the needs of consumers and must adapt to a dynamic environment (Rosmadi, 2018). Kuswati (2019) stated that in the world of work, employees are required to have high work effectiveness. Organizational effectiveness is usually interpreted as the success achieved by an organization in its efforts to achieve predetermined goals. According to Werdhiastutie et al. (2020), the development of human resources should focus more on increasing productivity and efficiency. This can be realized because today's competition, especially among nations, is getting tougher and demands strong human resources as managers and implementers in an organization or institution. The concept of the implementation of recruitment and selection, or what is currently more commonly referred to as talent acquisition, is not a new thing for the company. The talent acquisition process itself is carried out because there is an imbalance between supply and demand as well as the need for certain specifications from the Company. Figure 1 shows that the talent acquisition process is carried out differently depending on the quality and quantity expected to be received when carrying out the process.
In order to minimize bias at all stages of talent acquisition, the role of technology is needed so that the results obtained are faster and more accurate. With the rapid development of technology, the application of data analytics in the field of HR management, more commonly referred to as people analytics, has frequently been carried out and applied both in research and in real-world implementation. Basically, the application of people analytics can be divided into 7 (seven) main pillars, as shown in Figure 2. Figure 2 indicates that the talent acquisition process is one of the pillars in implementing people analytics, so the concept of implementing talent acquisition using a data analytics approach can be carried out. According to Isson and Harriott (2016: 177), talent acquisition is the practice of obtaining new talents after the candidate search process is complete. Talent acquisition analytics is the practice of adding predictive analytics to the process of acquiring new talents. Analytics are used to determine which candidate fits the company's needs from all the candidates accommodated in the recruitment process. With limited time, companies are required to find candidates who have the right abilities at the right time. The challenge that arises is in selecting the method that has the best level of effectiveness and accuracy. The problem that arises is related to optimizing the implementation of the talent acquisition process so that the results obtained are in accordance with the target and have the quality according to the required specifications. Speed and accuracy are the most important indicators in the process of implementing talent acquisition, so the application of data analytics through people analytics in the talent acquisition process is needed as a decision support system.
The research questions are thus what the best talent acquisition model is to obtain a candidate employee profile according to the needs of the company, and what the best model is to predict performance achievements. The problem statement of this research is how the talent acquisition model can predict the profile of prospective employees according to the company's needs and how the model can predict performance achievements for the company. To answer the problem statement, the purpose of this study is to obtain a talent acquisition model that can produce a candidate employee profile that suits the Company's needs by studying recruitment patterns using predictive techniques, and to obtain models that can predict the performance achievements of prospective employees for the Company.

II. Research Methods

In this study, the researchers used a data analytics approach in conducting the research stages. The stages of the research consist of, among others, Data Collection, Data Cleansing, Data Processing, Data Modeling, and Output Recommendations. These stages can be explained as follows: 1. Data Collection: at this stage, data collection was carried out on all required attributes; the data collection process was carried out through the recruitment and selection system. The data taken are data on the recruitment and selection process in 2018. 2. Data Cleansing: at this stage, a check is carried out on the data that will be used to ensure that there is no record missing any of the attributes. The data that will be used are data that have complete attributes as specified in the Operational Variables. 3. Data Processing: at this stage, data processing is carried out using the Random Forest method to obtain classification results in accordance with the variables and attributes described in the Operational Variables section. 4.
Analysis of Recommendations: after obtaining the model and the results from data processing, the next step is to analyze the recommendations that arise from the results of the data processing. This is done to assist the decision-making process for management by using information obtained from the results of data processing.

This study analyzed the data using the random forest method. Random forest, or random decision forest, is a machine learning method introduced by Leo Breiman and Adele Cutler. According to Breiman (2001), "Random forest is a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest". A random forest can be explained as a combination of decision trees. The number of decision trees affects the accuracy of the overall random forest. If a decision tree is used on large enough data sets, it tends to "remember" more than "learn". However, a decision tree is quite accurate if it is re-applied to the same or a relatively similar data set. A decision tree converts data into a tree (decision tree) and rules (decision rules). The decision tree learns through a set of if/then (if/else) or yes/no questions, which form a hierarchical tree. Every decision leads to another decision or forms a prediction. The technique used by the decision tree is very similar to the way humans make decisions, so the decision tree is more natural for humans than other models. The results of a decision tree can be evaluated by measuring the impurity of the decision tree modeling results. To measure impurity in a decision tree, metrics called entropy and the Gini index can be used, as follows: 1. Entropy is the amount of information needed to describe a sample accurately. If the sample is homogeneous (all elements are similar), then the entropy is 0. If the sample is not homogeneous and is equally divided, then the entropy is 1.
The maximum value of entropy is 1. 2. The Gini index is a measure of inequality in the sample. Its value is between 0 and 1. A Gini index of 0 means that the sample is perfectly homogeneous, or all elements are the same, while a Gini index of 1 means that all elements are different (inequality).

III. Results and Discussion

Table 1 shows that the final results of the participants who were accepted did not match the predetermined target: the targeted number of passes was 250, but after going through all the selection stages only 81, or 32% of the target, had been accepted. The number of participants who passed in each function is also still far from the target set for each function. The model to predict the pass rate of participants was built using the Random Forest method, while the model to predict performance was built using the Linear Regression method. The modeling process was carried out separately based on objectives, phases, approaches, and scenarios. The total number of models made in this study is given in Table 2: 18 (eighteen) types of models for the pass-rate prediction model and 2 (two) types of models for the performance prediction model. The accuracy of each model is measured so that results can be compared to determine which model is best for predicting graduation and for predicting performance, according to Table 3.
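The entropy and Gini impurity measures described in the Methods section can be sketched in a few lines of Python. This is a minimal illustration of the definitions in the text, not the paper's own code; in practice these computations happen inside Scikit-Learn's random forest implementation.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy (log base 2): 0 for a homogeneous sample,
    1 for a binary sample split evenly, as described in the text."""
    n = len(labels)
    return sum((c / n) * math.log2(n / c) for c in Counter(labels).values())

def gini(labels):
    """Gini index: 0 means the sample is perfectly homogeneous."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(entropy(["pass"] * 10))          # homogeneous sample -> 0.0
print(entropy(["pass", "fail"] * 5))   # evenly split binary sample -> 1.0
print(gini(["pass", "fail"] * 5))      # -> 0.5
```

A decision tree chooses the split that most reduces one of these impurity measures; a random forest aggregates many such trees built on random subsamples.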
Based on the results of measuring feature importance, the level of accuracy, the ROC curve, and the Area Under the Curve (AUC), the best model in the short term is the model made with approach 1 and scenario 3, which has the best level of accuracy and provides much added value compared to the other models. However, in the long term, if the data in each class (function) is sufficient, it is advisable to use the model with approach 2 and scenario 3. This is because the model with approach 2 is more in accordance with the selection process in the company, as seen from the test results of the importance level of the features used in modeling. The strengths and weaknesses of each model made in this study are as follows:

1. Pass Selection Level Prediction Model with Approach 1. a. Advantages: 1) The model can be used in the long term and is not affected by the functions opened during the recruitment and selection process; 2) The amount of data in each class is quite large, so it can produce a more stable output; 3) The computation process is faster because there are not too many class divisions. b. Weaknesses: 1) The information generated from the model is limited to the pass rate only; 2) The model does not pay attention to the specifications that have been determined, because it is carried out in general.

2. Pass Selection Level Prediction Model with Approach 2. a. Advantages: 1) The model can provide recommendations down to the probability in each function; 2) The information and insights that can be used by the management or the recruitment and selection committee are numerous and varied; 3) The model takes into account the specifications that have been set by the company, for example the participant's educational background, because each function has different specifications. b.
Weaknesses: 1) The computation process takes longer than with approach 1; 2) The model will be disturbed if there are additional functions outside of those that have been arranged in the system; 3) The amount of data in each class is small, so the model may change if new data arrives that is heterogeneous compared to the current data.

Figure 4. Confusion Matrix Pass Selection Level Model Phase 1 (Source: Generated with Scikit-Learn on Python)

Based on the calculation of the confusion matrix for the model made in phase 1, the administrative selection stage, which can be seen in Figure 4, the model has 1 data point in the False Positive category and 839 data points in the False Negative category. Data in the False Negative category means that there are 839 candidates whose profiles match the profiles of candidates who passed the administrative selection stage. It can be interpreted that there is a potential loss of superior candidates who are not included in the next selection process. Based on the measurement of feature importance, the most important feature at this stage is GPA.

Figure 5. Confusion Matrix Pass Selection Level Model Phase 2 (Source: Generated with Scikit-Learn on Python)

Based on the calculation of the confusion matrix for the model created in phase 2, the potential test selection stage, which can be seen in Figure 5, the model has 15 data points in the False Negative category but none in the False Positive category, which means that the precision of this model is 1. Data in the False Negative category means that there are 15 candidates whose profiles match the profiles of candidates who passed the administrative selection stage. It can be interpreted that there is a potential loss of superior candidates who are not included in the next selection process.
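The confusion-matrix counts and the precision figure quoted above can be illustrated with a small sketch. The labels below are toy values chosen to mirror the phase-2 situation (false negatives but no false positives), not the paper's applicant data.

```python
def confusion(actual, predicted, positive=1):
    """Count TP, FP, FN, TN for a binary classification."""
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    tn = sum(a != positive and p != positive for a, p in zip(actual, predicted))
    return tp, fp, fn, tn

# Toy labels (hypothetical, not the 17,294 applicants of the study):
actual    = [1, 1, 1, 0, 0, 0]
predicted = [1, 1, 0, 0, 0, 0]   # one false negative, no false positives
tp, fp, fn, tn = confusion(actual, predicted)
precision = tp / (tp + fp)       # no false positives -> precision = 1.0
print(tp, fp, fn, tn, precision)  # -> 2 0 1 3 1.0
```

As in the phase-2 model, zero false positives forces the precision to 1 even though some genuinely suitable candidates (false negatives) are lost.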
Based on the measurement of feature importance, the most important features at this stage are related to the salaries and personalities of the participants.

Figure 6. Confusion Matrix Pass Selection Level Model Phase 3 (Source: Generated with Scikit-Learn on Python)

Based on the calculation of the confusion matrix for the model made in phase 3, the final selection stage, which can be seen in Figure 6, no data falls into the False Negative or False Positive categories, so the precision and sensitivity of this model are very good. Based on the measurement of feature importance, the most important features at this stage are related to the results of medical check-ups and the educational background of the participants. A ROC (Receiver Operator Characteristic) curve can help in deciding the best threshold value. It is generated by plotting the True Positive Rate (y-axis) against the False Positive Rate (x-axis) as the threshold for assigning observations to a given class is varied. The ROC curve will always end at (1,1); the threshold at this point is 0, meaning that all observations are classified into class 1 (the specificity is 0 and the false-positive rate is 1). One should select the best threshold for the trade-off one wants to make: according to the criticality of the business, the cost of failing to detect positives must be compared against the cost of raising false alarms. Based on Table 4, regarding the level of accuracy of each model made for performance prediction, the model with the best level of accuracy is the model in phase 2, where the model is made using all available features from the beginning to the end of the recruitment and selection process.
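The ROC threshold trade-off described above can be made concrete with a small pure-Python sketch. The scores and labels are hypothetical; the paper generated its curves with Scikit-Learn.

```python
def roc_points(scores, labels, thresholds):
    """Compute (FPR, TPR) pairs as the decision threshold varies.
    Observations with score >= threshold are classified positive."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # hypothetical model scores
labels = [1, 1, 0, 1, 0, 0]               # hypothetical true classes
# Threshold 0 classifies everything positive and yields (FPR, TPR) = (1, 1),
# the end point of the ROC curve noted in the text.
print(roc_points(scores, labels, [0.0, 0.5, 1.0]))
```

Sweeping the threshold from 0 upward traces the curve from (1,1) back toward (0,0); the business cost of false negatives versus false alarms determines which intermediate point to operate at.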
However, based on the results of the significance test, the independent variables still do not significantly affect the dependent variable, namely the Individual Achievement Value score. To answer the research question of what model can predict performance achievements for the company: the model made in this study can already predict performance achievement, but only with an error rate above the confidence level. This happens because the Individual Achievement Value score data used is not normally distributed and tends to lean to the right. This condition occurs because the individual performance appraisal process in the company still involves too many subjective factors, so that the final results of the assessment are difficult to justify. For this reason, it is necessary to improve the prediction model by increasing the amount of data used in the model-building process. In addition, the training data used in the modeling process is still too small, so it cannot produce models and predictions with the expected level of confidence.

IV. Conclusion

From the series of data collection, data processing, and discussion of research results, the following conclusions can be drawn: 1. The selection process is a series of steps carried out to determine the suitability of the qualifications of the participants against the predetermined specifications. The success of the selection process will only be seen after the participants who pass the selection have served at the Company, so decision making in the selection process cannot be wrong; 2. In its development, the selection process can be assisted by the application of data analytics through people analytics in the talent acquisition process, which is indispensable as a decision support system. The use of data analytics improves the speed and accuracy of decision making; 3.
Applying people analytics to the talent acquisition process can be done using the Random Forest Classification method. This method aims to determine the class of each predicted data point. In this study, Random Forest Classification was used to create a prediction model for passing rates. For the short term, the best passing-rate prediction model based on the data available in this study is the model using approach 1 and scenario 3, where the accuracy produced in Phase 1 is 0.9514, in Phase 2 is 0.9887, and in Phase 3 is 1. In the long run, the prediction model for passing rates using approach 2 and scenario 3 is very suitable for the selection process, as seen from the results of the feature importance test used in the model. 4. To determine the level of success of the selection process, it is necessary to look at the performance achievements of each participant resulting from the selection process. Therefore, it is necessary to make a model that can predict performance achievements, so that the decision-making process of the management or the recruitment and selection committee is better. In this study, a model has been made to predict performance achievements, but the performance of the model has not shown a significance level in accordance with the standard level of confidence, which is still below 0.05. When viewed from the level of accuracy, the best model for predicting performance achievement is the model in phase 2, with a mean squared error of 4.6355.

Suggestions

Based on the results of this study, there are several suggestions that can be given, both for further research and for companies, as follows: 1. For further research, in the process of making a prediction model, it is hoped that data can be used over more than 1 (one) period of the recruitment and selection process.
This will be useful for comparing the accuracy of the model in each period of the recruitment and selection process, because basically the recruitment and selection process is an independent process from each implementation period. If the resulting model has a good level of accuracy even though it is used in different periods, it can be said that the model can be used in the long term and continuously. 2. For companies, historical data is one of the important factors in the modeling process so that data management is expected to be one of the factors that need to be of concern to the company. The process of individual performance appraisal needs to be re-validated using other factors, for example applying the 360 degree concept or sociometry so as to reduce the subjectivity factor.
Simulation Optimization for the Multihoist Scheduling Problem

Although the Multihoist Scheduling Problem (MHSP) can be detailed as a job-shop configuration, the MHSP has additional constraints. Such constraints increase the difficulty and complexity of the schedule. Operation conditions in chemical processes are certainly different from other types of processes. Therefore, in order to model the real-world environment of a chemical production process, a simulation model is built that emulates the feasibility requirements of such a production system. The results of the model, i.e., the makespan and the workload of the most loaded tank, are necessary for providing insights about which schedule should be implemented on the shop floor. A new biobjective optimization method is proposed, and it uses the results mentioned above in order to build new scenarios for the MHSP and to solve the aforementioned conflicting objectives. Various numerical experiments are shown to illustrate the performance of this new experimental technique, i.e., the simulation optimization approach. Based on the results, the proposed scheme tackles the inconveniences of the metaheuristics, i.e., lack of diversity of the solutions and poor ability of exploitation. In addition, the optimization approach is able to identify the best solutions by a distance-based ranking model, and the solutions located in the first Pareto-front layer contribute to improving the search process of the aforementioned scheme against the other algorithms used in the comparison.

Introduction

Some chemical-production systems use tanks containing treatment baths in order to generate finished products. Those treatment baths can contain rinsing, acid, or electroplating solutions. This kind of production system is necessary when a product needs to be treated with a specific treatment bath to enhance its mechanical, electrical, or esthetic properties. The products are soaked in each tank according to a given sequence [1].
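The two model outputs highlighted in the abstract (the makespan and the workload of the most loaded tank) can both be computed directly from a candidate schedule. A minimal sketch, assuming a hypothetical schedule of (tank, start, end) operations rather than an instance from the paper:

```python
# Hypothetical schedule: one (tank, start, end) triple per treatment operation.
schedule = [
    ("T1", 0, 3), ("T2", 3, 5),
    ("T1", 3, 6), ("T3", 6, 9),
    ("T2", 5, 9),
]

# Objective 1: makespan = completion time of the last operation.
makespan = max(end for _, _, end in schedule)

# Objective 2: workload of the most loaded tank = max total busy time per tank.
workload = {}
for tank, start, end in schedule:
    workload[tank] = workload.get(tank, 0) + (end - start)
max_workload = max(workload.values())

print(makespan, max_workload)  # -> 9 6
```

Minimizing the makespan and minimizing the peak tank workload generally pull a schedule in different directions, which is why the paper treats them as conflicting objectives in a biobjective (Pareto) framework.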
Commonly, these kinds of products are loaded on carriers or rack baskets. As in other manufacturing facilities, material handling in chemical production systems is executed by track-mounted hoists, i.e., the rack baskets are moved from facility to facility by one or more hoists. The hoists may or may not share the same track. The hoists are thus used as the means of transport between tanks [2]. The hoists move rack baskets from facility to facility according to the production schedule. This means that the hoists should be available, as an assumption, so as not to compromise the already defined production schedule. However, the hoists frequently operate on shared tracks. Therefore, the hoists must also be scheduled to avoid colliding with each other. Normally, the production schedule is considered as input data for the hoists' schedule, meaning that these schedules are usually planned separately. This can be considered a risk to the performance of the chemical production system: any production schedule should take the hoists' availability into account. The simultaneous scheduling of jobs and hoists is therefore of central relevance for surface treatments. The treatment processes can be modeled as a job-shop production. Consequently, a strong need for enhanced hoist scheduling systems emerged, representing a key enabler to maximize productivity and product quality [3]. A classic job-shop configuration can be detailed based on the aforementioned chemical processes [4]. Let a set of n jobs (products) J = {J_1, J_2, ..., J_n} be processed on a set of m machines (tanks) M = {M_1, M_2, ..., M_m}. Each job J_i requires a series of p_i operations (treatments) O = {O_{i1}, O_{i2}, ..., O_{ip_i}} with precedence constraints. As in any job-shop configuration, each machine can process only one job at a time without preemption, and each job can be processed on one and only one machine at a time.
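The job-shop structure just described can be sketched as a small data model. This is an illustrative sketch; the class and field names are our own, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    job: int                            # index of the job J_i this operation belongs to
    index: int                          # position within the job's sequence (1 = first)
    machines: frozenset = frozenset()   # tanks M_k able to process this operation

@dataclass
class Job:
    operations: list                    # ordered operations, processed one after another

# A tiny instance: 2 jobs on 3 tanks; each operation lists its eligible tanks.
jobs = [
    Job([Operation(0, 1, frozenset({0, 1})), Operation(0, 2, frozenset({2}))]),
    Job([Operation(1, 1, frozenset({1})),    Operation(1, 2, frozenset({0, 2}))]),
]

# Precedence is implicit in the list order: index p must follow index p - 1.
assert all(op.index == p + 1 for j in jobs for p, op in enumerate(j.operations))
```

The list order inside each `Job` encodes the precedence constraints directly, which is the property the permutation-based representation later in the paper has to preserve.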
However, compared with the classical job-shop configuration, the multihoist scheduling problem (MHSP) includes additional constraints. Such constraints increase the difficulty and complexity of the schedule. The constraints are outlined below:

(i) The jobs require a hoist in order to move them between facilities. Therefore, buffers are prohibited in the MHSP.

The limitations mentioned above distinguish the classical job-shop configuration from the MHSP. It is a strongly constrained problem, known to be NP-hard [5]. The problem is indeed NP-hard, and each extension increases the complexity of both the model and its resolution. This is the reason why we have decided to work on a new approach. In addition, some industries, such as the pharmaceutical, chemical, and plastics industries, have similar operational conditions, i.e., no-wait processes. Therefore, building schedules with the no-wait component should be more suitable for modeling those industries, as well as other problems such as train scheduling, aircraft landing scheduling, and surgery scheduling, among others [6]. Based on the previous constraints, the assumption of identical jobs in the simultaneous scheduling of jobs and hoists should be omitted. In addition, this research does not aim to find a repetitive hoist schedule: there exist multiple job types and diverse hoists in chemical-treatment processes. Moreover, this research does not aim to preassign a precise set of tanks to each hoist. Although this simplification tackles the collision constraints, it restricts the treatment sequence of the jobs to follow the layout of the tanks, and if demand is low, the utilization rate of the tanks would decrease. In order to illustrate the difficulty and complexity of the MHSP, an example is provided below. Assume a schedule with four jobs J = {J_1, J_2, J_3, J_4}.
Every job J_i is formed by a sequence of three operations O_{i,1}, O_{i,2}, O_{i,3} performed one after another. Table 1 details the alternative machines for every operation and additional data. Let the sequence be J_1 ≺ J_2 ≺ J_3 with O_{11} in M_1, O_{21} in M_4, and O_{31} in M_2. Assume the minimum processing-time window for all the operations. Figure 1 details the sequence carried out step by step by the hoist. Let the next set of operations be O_{12} in M_2, O_{22} in M_4, and O_{32} in M_1. According to the previous sequence, the hoist has to free M_2 from O_{31} and free M_1 from O_{11}. However, unlike in a normal job-shop configuration, a blocking occurs in the MHSP. The simultaneous scheduling of jobs and hoists is a necessity to avoid blocking throughout the treatment processes. The problem statement is to find a suitable sequence and an appropriate assignment of the operations to the machines in order to avoid potential blockings and no-waits in the processes. In order to simultaneously schedule jobs and hoists, this study proposes a solution for the MHSP. This paper details how a solution can be built from three decisions, i.e., operation scheduling, tank assignment, and hoist assignment. In particular, consider vectors such as permutations; these vectors can represent the processing sequence of operations. In this research, the processing sequence of operations is treated as a permutation. Table 2 shows a vector (ranking) of five operations as an example. In this sense, the members of the population of the proposed algorithm are permutations of elements. Table 3 depicts some permutations, where each element represents an operation, and the processing sequence of the operations is executed according to each permutation. As in previous research, this study also uses the space of permutations as the solution space. The proposal of this research is to use a probability distribution over permutations.
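To make the permutation encoding concrete, here is a minimal feasibility check. This is a sketch; the operation numbering below is hypothetical, in the style of Table 3, with operations numbered job by job:

```python
def respects_precedence(perm, job_of, pos_of):
    """True iff, reading perm left to right, the operations of every job
    appear in their original within-job order (position 1 before 2, ...)."""
    last = {}
    for op in perm:
        j = job_of[op]
        if pos_of[op] != last.get(j, 0) + 1:
            return False
        last[j] = pos_of[op]
    return True

# Hypothetical instance: 3 jobs, 10 operations numbered job by job.
job_of = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2, 7: 2, 8: 3, 9: 3, 10: 3}
pos_of = {1: 1, 2: 2, 3: 3, 4: 1, 5: 2, 6: 3, 7: 4, 8: 1, 9: 2, 10: 3}

print(respects_precedence([4, 1, 5, 8, 2, 6, 9, 3, 7, 10], job_of, pos_of))  # True
print(respects_precedence([2, 1, 4, 5, 8, 6, 9, 3, 7, 10], job_of, pos_of))  # False
```

Any permutation passing this check is a feasible processing sequence, which is why the population can be maintained entirely in permutation space.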
For the aforementioned proposal, an Estimation of Distribution Algorithm (EDA) is considered for the MHSP. According to the literature review, the EDA has scarcely been studied for solving the MHSP. The EDA is an experimental technique that belongs to the evolutionary computation field. Instead of using traditional evolutionary operators such as crossover and mutation, the EDA produces solutions through a probability model based on the previous solutions. The probability model mentioned above is built from statistical information, i.e., from the search experience. The EDA uses this probability model to describe the distribution of the solution space. In this research, the distribution of the permutations for the MHSP is the solution space. However, there is no generic probability model that can produce a feasible solution on a permutation-based representation [7]: any EDA needs to be reconfigured to tackle permutation-based problems. The hypothesis, then, is that using a probability model specific to this issue should be effective against other models. In order to produce feasible solutions from the proposed EDA, this study uses the Mallows model. Mallows [8] established the Mallows exponential model. The Mallows model assigns a probability to each permutation σ, i.e., to each solution. In order to compute this probability, the model uses the concept of distance between σ and a central permutation σ_0. A distance metric between permutations is computed via the algebra of permutations. The probability is smaller when the distance between σ and σ_0 is larger. Figure 2 details an example of such an exponential distribution with five vectors. The Mallows model is thus adopted as the probability model for the distribution of the permutations. The search process is based on the exponential model, and all traditional evolutionary operators are omitted.
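The EDA cycle described above — sample a population from a probability model, keep the best solutions, re-estimate the model from them — can be outlined as follows. This is a generic sketch of the EDA loop; the toy perturbation model is our own stand-in, not the Mallows model used in the paper:

```python
import random

def eda(sample_model, estimate_model, fitness, pop_size=20, elite_frac=0.5, gens=30):
    """Generic EDA loop: sample a population from the current probability
    model, keep the best fraction, and re-estimate the model from them."""
    model = None  # None = initial (uniform) model
    best = None
    for _ in range(gens):
        pop = [sample_model(model) for _ in range(pop_size)]
        pop.sort(key=fitness)
        elites = pop[: max(1, int(elite_frac * pop_size))]
        if best is None or fitness(elites[0]) < fitness(best):
            best = elites[0]
        model = estimate_model(elites)
    return best

# Toy demo on permutations of 5 items: the model is a 'central' permutation;
# sampling perturbs it with random adjacent swaps (a crude stand-in).
def sample(model):
    perm = list(model) if model else random.sample(range(5), 5)
    for _ in range(2):
        i = random.randrange(4)
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm

def estimate(elites):      # crude central-permutation estimate: the best elite
    return elites[0]

fitness = lambda p: sum(abs(v - i) for i, v in enumerate(p))  # target: identity
best = eda(sample, estimate, fitness)
print(best)
```

The point of the sketch is the structure of the loop: the only problem-specific pieces are `sample_model`, `estimate_model`, and `fitness`, which is where the Mallows machinery of the following section plugs in.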
Based on Figure 2, the exponential shape of the model is built using the distance between all the permutations σ_n of the solution space and the central permutation σ_0. Traditionally, the central permutation is estimated or selected from among the members of the population. The best solution found during the evolutionary progress can be taken as the central permutation. However, the best solution is not guaranteed to be the best estimate. Therefore, the best estimate should be sought in order to obtain the best scheduling solution for the MHSP. Once the central permutation is estimated or selected, the main objective is to generate new offspring using the exponential model. The main idea is thus to use the probability between permutations to generate new offspring. However, the Mallows model is not able to generate offspring by itself, because there can be many permutations at the same distance from the central permutation, which creates ambiguity when choosing the offspring. This is solved through the decomposition of the distance between each permutation and the central permutation. The Generalized Mallows Distribution (GMD) process, detailed in Fligner and Verducci [9, 10], explains the procedure for decomposing the distance between permutations and thereby generating new offspring. With this strategy, the main drawback of probability models in permutation-based problems is resolved: the GMD process permits the production of feasible solutions. Properly modeling the main variables that intervene in the performance of the process has been a priority in the solution of real-world scheduling problems [11]. Such variables or characteristics can lie inside or outside the shop floor, and they should be incorporated to solve the scheduling problem efficiently. Therefore, this research incorporates the real environment of the chemical production process mentioned above.
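The ideas above can be sketched in a few lines: Kendall's tau as the distance metric (one common choice), the exponential probability, and offspring generation via the distance decomposition in the Fligner–Verducci repeated-insertion style. This is an illustrative sketch, not the authors' implementation:

```python
import math
import random

def kendall_tau(sigma, sigma0):
    """Distance d(sigma, sigma0): number of item pairs ordered differently."""
    pos = {v: i for i, v in enumerate(sigma0)}
    s = [pos[v] for v in sigma]       # sigma re-expressed relative to sigma0
    n = len(s)
    return sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])

def mallows_weight(sigma, sigma0, theta):
    """Unnormalised Mallows probability: exp(-theta * d(sigma, sigma0))."""
    return math.exp(-theta * kendall_tau(sigma, sigma0))

def sample_mallows(sigma0, theta):
    """Draw one offspring: each insertion step samples a displacement r with
    probability proportional to exp(-theta * r); the displacements sum to the
    Kendall distance, which is the decomposition that makes sampling feasible."""
    n = len(sigma0)
    rel = []
    for item in range(n - 1, -1, -1):   # insert items n-1 .. 0
        slots = n - item                # slots currently available
        weights = [math.exp(-theta * r) for r in range(slots)]
        r = random.choices(range(slots), weights=weights)[0]
        rel.insert(r, item)             # inserting here adds exactly r inversions
    return [sigma0[i] for i in rel]     # recenter on sigma0

sigma0 = [3, 1, 4, 0, 2]
print(kendall_tau(sigma0, sigma0))         # 0: identical permutations
print(sample_mallows(sigma0, theta=50.0))  # large theta -> (almost surely) sigma0
```

Because each displacement is sampled independently, the output is always a valid permutation, which is precisely the feasibility property the GMD process provides.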
Moreover, if the real environment is considered, theoretical assumptions need not be incorporated into the MHSP. A simulation model is therefore built that emulates the aforementioned chemical production system. The different resources of the chemical production process are considered in this simulation model. Those resources perform all the operations required by any scheduled job. The simulation model, built on the Delmia-Quest® platform, includes the many details that the chemical production process presents, i.e., the process itself. The results of the simulation model, i.e., the makespan and the workload of the most loaded machine, are necessary for providing insights about which schedule should be implemented on the shop floor. The proposed optimization method uses the results mentioned above to build new scenarios for the MHSP and to resolve the aforementioned conflicting objectives. The research motivation is to show how the exponential distribution between permutations helps to reduce the deficiencies of the EDA. In addition, it is preferable that the methods used contain well-defined mathematical expressions, which helps in understanding how the solutions are generated. Therefore, the research motivation is to use new methods to explicitly establish the search process for new solutions. The main reason for doing this research was to determine what is most useful, i.e., the recent algorithms or the use of exponential distributions, to obtain the same or better solutions. Although diverse methods and strategies have been used to solve the MHSP, this paper contributes to the state of the art as follows:

Mathematical Programming.

Scheduling the movements of a hoist in electroplating facilities is tackled by Varnier et al. [1]. Based on their research, in most works for the mono-product case, the layout of the shop is generally considered as fixed data.
However, the results of their research show that the layout should be considered in order to improve the productivity of the shop. The authors propose a combination of scheduling and layout design. Other works, such as Grunder et al. [12], try to identify the interaction between the layout of the shop and the productivity of a treatment line. The authors show that a particular type of shop configuration can maximize the performance of the line; however, this does not hold for all the studied cases. In addition, the authors propose a branch-and-bound algorithm to compute the optimal layout of a saturated single-hoist production line. Another type of interaction can be found in Subai et al. [13], where environmental constraints are incorporated into the schedule of a surface-treatment process. The authors focus on maximizing the shop throughput by means of mathematical models. Such models include a nonlinear global cost function in which the environmental cost plays a key role. A mixed-integer linear programming model for a multiproduct batch plant with nonidentical parallel processing machines is used in the research of Berber et al. [14]. The authors show how to minimize the total production time without using heuristic rules. Diverse numerical instances and one industrial problem are used to test their proposed model, which is able to produce better solutions for the industrial example considered. An exact solution method, a linear optimization approach, is proposed by El Amraoui et al. [15]. This work optimizes the cycle length and the throughput rate in a cyclic hoist scheduling problem with r different part-jobs in an electroplating line. Another mixed-integer linear programming model, for multirecipe and multistage material handling processes, is developed by Zhao et al. [16]. In this research, the authors simultaneously consider the production line arrangement and the customized production ratio.
Various case studies are used to demonstrate the efficacy of this methodology. For the two-hoist cyclic scheduling problem, Chtourou and Manier [17] propose a mixed-integer linear programming model. The authors consider avoiding collisions between hoists that share a common track while the cycle time is minimized. For the dynamic hoist scheduling problem with multicapacity machines, Feng et al. [18] detail a mixed-integer programming model. The model considers jobs arriving randomly throughout the horizon as a dynamic issue. In a copper production plant, Suominen et al. [19] present a nonlinear optimization and scheduling approach to maximize the production of a smelting furnace. The proposed approach is presented by simulating the evolution of the process over the optimization horizon.

Applied Computational Intelligence and Soft Computing

For the ethylene cracking process with feedstock and energy constraints, Su et al. [20] address a scheduling problem via a hybrid mixed-integer nonlinear programming formulation. The problem mentioned above has a special feature, i.e., the facilities require periodic cleanup to restore their performance. The authors make useful suggestions for real cracking-process production by means of numerous examples, which help illustrate the utility of the model. In an ice cream shop, Wari and Zhu [21] present a mixed-integer linear programming model. The key point of this contribution is that multiweek production scheduling is considered. Furthermore, the model incorporates operational aspects to enhance the solution quality. The model is tested and compared with heuristic methods; the results report that it is able to handle the multiweek aspect efficiently and effectively. Qu et al.
[22] study the effect of integrating scheduling with the optimal design of a two-dimensional production line to maximize production efficiency in multirecipe and multistage material handling processes. Various case studies are used to demonstrate the efficacy of this integration. Finally, Jianguang et al. [23] consider cyclic job-shop hoist scheduling with multicapacity reentrant tanks and time-window constraints. As the authors explain, jobs are processed in a series of tanks with a defined processing sequence of operations for all of the jobs. A mixed-integer linear programming model is developed by addressing the time-window constraints and tank capacity constraints.

Heuristics.

For the cyclic hoist scheduling problem considering material as well as resource handling constraints, El Amraoui et al. [24] present a new heuristic in which the time windows are maintained for all soaking operations and overlapping cycles are allowed. As in almost any research, the authors compare the proposed heuristic with other existing algorithms to prove its efficiency. Another example of a heuristic for the hoist scheduling problem can be found in Kujawski and Świątek [2], where the sequence of products is analyzed by changing the order of items in electroplating production lines in order to reduce the makespan. The items are located in queues where the sequence is monitored. For the online hoist scheduling problem, Kujawski and Świątek [25] prepare a set of production scenarios before the real-time system starts. A real-life shop located in Wrocław, Poland, is used to explain the algorithm carefully. In this research, the utilization ratio is considered in order to choose the best schedule. For flexible and nonlinear electrochemical processes, Beerbühl et al. [26] propose a heuristic combining scheduling and capacity planning.
The heuristic mentioned above is able to tackle nonconvex and mixed-integer problems, such as the electrolysis of water to produce hydrogen, by reformulating these kinds of problems into convex and continuous nonlinear problems. Recently, Laajili et al. [27] detailed an adapted variable neighborhood search-based algorithm for the cyclic multihoist design and scheduling problem. The Laajili et al. study considers identical jobs, and therefore an identical processing sequence of operations for all the jobs. When jobs are identical, a cyclic multihoist schedule can be created.

Metaheuristics.

For a printed-circuit-board electroplating line, Lim [28] considers how to determine the best throughput rate. The authors propose a genetic algorithm-based approach for the cyclic hoist scheduling problem. Through an experiment, the proposed algorithm is shown to be more efficient than previous mathematical programming-based algorithms. In an aluminum casting center, Gravel et al. [29] present an ant colony optimization metaheuristic. The representation of the solution in this research considers different objectives. In addition, the authors implement the proposed method, by introducing software, in the casting center. For mixed batch and continuous processes, Wang et al. [30] develop a differential evolution algorithm. The solution representation considers capacity constraints. A key characteristic of the proposed algorithm is how the crossover probability is computed using the logistic chaotic map method. For the single-hoist cyclic scheduling problem, El Amraoui et al. [31] consider hard resource and time-window constraints in their proposed genetic algorithm. For the multihoist scheduling problem with transportation constraints, Zhang et al. [4] study how to reduce the makespan. The global limitation is that there are no buffers to store work in process. A correct assignment is therefore critical in the proposed solution.
The authors detail a modified genetic algorithm to tackle the problem mentioned above. The algorithm also uses a modified shifting bottleneck procedure in order to build a feasible schedule. The results of this research show that the proposed algorithm is able to handle several transport resources. For other electroplating shops, El Amraoui et al. [32] examine processing times confined within a time window. A genetic algorithm approach is presented to solve the multijob cyclic hoist scheduling problem with a single transportation resource. For a single robot and flexible processing times in a robotic flow shop, Lei et al. [33] aim to increase the throughput rate. A hybrid algorithm based on the quantum-inspired evolutionary algorithm and genetic operators is presented for solving the cyclic scheduling problem. The algorithm integrates three different decoding strategies to convert quantum individuals into robot move sequences. In addition, crossover and mutation operators with adaptive probabilities are used to increase the population diversity, and a repairing procedure is proposed to deal with infeasible individuals. Comparison results on both benchmark and randomly generated instances demonstrate that the proposed algorithm is more effective in solving the studied problem in terms of solution quality and computational time.

Hybrid Approaches.

In a surface treatment system with time-window constraints, Chové et al. [34] investigate how to improve the throughput rate without loss of treatment quality. The authors propose a new combined approach based on both predictive and reactive scheduling in an industrial case where only one hoist is used. Another example of a hybrid approach can be found in El Amraoui and Nait-Sidi-Moh [35]. The authors use P-Temporal Petri Net models to describe the behavior of different scenarios of the shop in a specific cyclic hoist scheduling problem.
The authors propose a linear programming model to determine the optimal plan, where the exact beginning and ending instants of each task are the output of the model. In addition, a simulation tool validates the hybrid method mentioned above. For a multicrane scheduling problem in which a set of coils must be transported from a storage location to another site, Xie et al. [36] formulate a mixed-integer linear programming model and propose a heuristic algorithm to solve the problem. Most publications belong to this category, i.e., hybrid approaches, such as the research of Yan et al. [37]. The authors focus on two objectives, i.e., the cycle time and the material handling cost, in a cyclic hoist scheduling problem where the parts have different flow patterns. A biobjective linear programming model is detailed. In addition, a Pareto front with respect to the two criteria mentioned above is formulated in this research and computed by a hybrid discrete differential evolution algorithm. Moreover, the work-in-process level is used to adjust the exploration and exploitation of the search for the best solution. Another biobjective case is found in El Amraoui and Elhafsi [38], where improving productivity and quality are studied. First, the authors formulate the problem as a mixed-integer linear programming model. In addition, the authors detail an efficient heuristic procedure to obtain the movements of the hoist. The results of the aforementioned heuristic are compared to a lower bound obtained from the model mentioned above and to the best available heuristic in the literature. For the aerospace and electroplating industries, Basán and Méndez [39] present a hybrid approach using mixed-integer linear programming, heuristics, and a simulation model. The simulation model captures multiple particularities of a multiproduct, multistage production system in which a single hoist is analyzed.
Real data from an aircraft manufacturing industry are used to minimize the operating cost and maximize the productivity of the system. Another real case is found in Mori and Mahalec [40]. The authors deal with scheduling the continuous casting stage of steelmaking. As the authors explain, a mixed-integer linear programming model is computationally intractable. The authors therefore produce a production plan by solving a relaxed mixed-integer linear model at the first stage; after that, they build schedules by simulated annealing and a shuffled frog-leaping algorithm. As in other previous studies, real data are utilized to test the proposed method. For the scheduling of multicrane operations in an iron and steel enterprise, Xie et al. [41] study how to reduce the makespan by means of a mixed-integer linear programming model and a heuristic. The authors identify properties that avoid crane conflicts. In another study, a case example from the chemical industry, Hahn and Brandenburg [42] present a linear programming model and an aggregate stochastic queuing network model in order to obtain the best solution. The proposed hybrid approach considers production-related carbon emissions and overtime working hours in the solution. For a steelmaking-continuous casting manufacturing system, Jiang et al. [43] develop a multiobjective soft scheduling approach to address the uncertain scheduling problem. Three objectives are tackled, i.e., waiting time, cast-break, and over-waiting. The authors detail a preference-inspired chemical reaction optimization algorithm, and a simulation-based t-test method is used to provide feedback on the solutions. Convergence is handled by a knowledge-based local search embedded in the aforementioned optimization algorithm. Real-world steelmaking-continuous casting instances served as the input parameters for the mentioned approach. Finally, in flexible flow shops, such as the paper mill industry, Zeng et al.
[44] construct a multiobjective optimization model in which three objectives are analyzed, i.e., makespan, electricity consumption, and material wastage. At the beginning, only two objectives are considered in the solution, i.e., electricity consumption and material wastage. After that, a hybrid nondominated sorting genetic algorithm II method is employed to solve for all the objectives. A real-world case study is used in this research. Other hybrid approaches can be found in An et al. [45], where the authors used simulation and cloud computing for collision detection and hoisting path planning in a three-dimensional hoisting system. Li et al. [46] present a simulation-based solution for a multicrane scheduling problem derived from a steelmaking shop. The problem is modeled considering different objectives for the jobs and a workload objective for the cranes. The hybrid approach solves the problem by a heuristic. Tamaki et al. [47] propose a simulation-based solution adopting metaheuristic methods to solve the crane scheduling problem in job-shop-type manufacturing systems where semi-products are picked up and delivered between the facilities by cranes. Zhang and Oliver [48] include the crane scheduling problem in the production scheduling environment and combine them to obtain an integrated schedule. A simulation-based optimization solves this integrated scheduling problem. A genetic algorithm is introduced to determine the allocation of machines and cranes. A simulation model based on a queuing network is used to evaluate the crane and machine allocation results and provides the fitness value for the genetic algorithm. Diverse gaps in the current state of the art can be noticed based on the review above. The main steps in the proposed methods contain greedy procedures to obtain promising solutions.
Therefore, the performance of the aforementioned approaches depends on those procedures. As an example, genetic algorithms build new solutions through evolutionary operators; however, those operators do not permit explicit control in characterizing the solution space. In this research, an explicit probability distribution over the MHSP is therefore offered to characterize the solution space explicitly. In addition, there are currently no publications that use an EDA for the MHSP. It is clear from the review above that EDAs still have a performance gap to close, and the exponential-distribution approach might be a useful way to improve them. It is also of interest to determine how much better EDAs can be than the recent algorithms. Finally, Table 4 depicts the state of the art on the MHSP and other related problems.

Problem Statement

The multihoist scheduling problem shares the features and characteristics of a flexible job-shop configuration. Wang et al. [49] and Yan and Wang [50] explain the problem formulation for this configuration.
The main constraints are detailed below:

(i) For each job, the corresponding operations have to be processed in the given order; that is, the starting time of an operation must not be earlier than the point at which the preceding operation in the sequence of operations of the respective job is completed.

(ii) Each operation has to be assigned to exactly one tank.

(iii) Preemption is not allowed, i.e., each operation must be completed without interruption once it starts.

(iv) The operations assigned to each tank have to be sequenced consecutively; that is, an operation is only allowed to be assigned to a position in the sequence of a tank if the preceding position in the sequence is already established.

(v) If operations i and j are assigned to the same tank k at consecutive positions p − 1 and p, then the starting time of operation j must not be earlier than the completion time of operation i, in order to prevent overlapping.

Additional constraints involved by the MHSP are the following:

(i) The processing time in each tank must respect minimum and maximum limits, i.e., each processing time is bounded, and those limits must be strictly respected to ensure the quality of the products.

(ii) A hoist can perform only one transport operation at a time, and a hoist must have enough time to move between two transport operations.

The mathematical model, based on Pérez-Rodríguez et al. [11], is detailed below. Let J(i) denote the job to which operation i belongs, and let P(i) be the position of operation i in the sequence of operations belonging to job J(i), starting with one, i.e., P(i) = 1 if operation i is the first operation of a job. Furthermore, the index set I_k, defined by I_k := {i ∈ O | k ∈ M_i}, denotes the indices of operations i ∈ O that can be processed on tank k. Consequently, there are |I_k| positions on tank k. In order to model the assignment of operations to tanks, binary assignment variables x_{i,k,p}, for all p = 1, ..., |I_k|, k = 1, ..., m, i ∈ O, are introduced, where x_{i,k,p} = 1 means that operation i is scheduled for position p on tank k. The processing time of operation i on tank k is denoted by t_{i,k}. Furthermore, S_i is defined as the starting time of operation i. For each job, the corresponding operations have to be processed in the given order; that is, the starting time of an operation must not be earlier than the point at which the preceding operation in the sequence of operations of the respective job is completed. This constraint is imposed simultaneously on all appropriate pairs of operations, aggregated in the set of conjunctions C given by C := {(i, j) | i, j ∈ O : J(i) = J(j) ∧ P(j) = P(i) + 1}. Consequently, the precedence constraints are given by

S_j ≥ S_i + Σ_{k ∈ M_i} Σ_{p = 1, ..., |I_k|} x_{i,k,p} t_{i,k}, for all (i, j) ∈ C. (1)

Moreover, each operation has to be assigned to exactly one position, which is ensured by

Σ_{k ∈ M_i} Σ_{p = 1, ..., |I_k|} x_{i,k,p} = 1, for all i ∈ O. (2)

In addition, only one operation can be assigned to each position, due to the constraints

Σ_{i ∈ O} x_{i,k,p} ≤ 1, for all p = 1, ..., |I_k|, k = 1, ..., m. (3)

The positions on each tank have to be filled consecutively; that is, an operation is only allowed to be assigned to a position on a tank if the preceding position is already filled. In order to interconnect the tank-position variables with the starting-time variables and to enforce a feasible schedule, nonoverlapping constraints are defined by

S_j + M(2 − x_{i,k,p−1} − x_{j,k,p}) ≥ S_i + t_{i,k}, for all p = 2, ..., |I_k|, i ≠ j ∈ I_k, k = 1, ..., m, (4)

where M is a sufficiently large constant. Each operation is bounded within a time window; this condition is ensured by

t_{i,k}^{min} ≤ t_{i,k} ≤ t_{i,k}^{max}, for all i ∈ I_k, k = 1, ..., m. (5)

The total time required to conclude all the scheduled operations, C_max, is defined by the constraints

C_max ≥ S_i + Σ_{k ∈ M_i} Σ_{p = 1, ..., |I_k|} x_{i,k,p} t_{i,k}, for all i ∈ O. (6)

The objective is to minimize C_max:

min C_max. (7)

To ensure compliance with these constraints for each hoist and to avoid any collision between hoists that share the same track, the Delmia-Quest® simulation language is preferred in this research. Delmia-Quest® avoids any collision between hoists in each simulation run. In addition, the simulation language is able to order each couple of hoist operations.
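The assignment-related conditions just described — each operation assigned exactly once, at most one operation per position, and consecutive filling of positions on each tank — are purely combinatorial and can be verified mechanically. The following sketch (our own notation, with 0-based positions) checks a candidate assignment against them:

```python
def check_assignment(x, ops, tanks, n_pos):
    """x[(i, k, p)] = 1 if operation i occupies position p on tank k.
    Checks: each operation gets exactly one (tank, position); each
    position holds at most one operation; positions fill consecutively."""
    for i in ops:                          # each operation assigned exactly once
        if sum(x.get((i, k, p), 0) for k in tanks for p in range(n_pos)) != 1:
            return False
    for k in tanks:
        for p in range(n_pos):             # at most one operation per position
            if sum(x.get((i, k, p), 0) for i in ops) > 1:
                return False
        occ = [sum(x.get((i, k, p), 0) for i in ops) for p in range(n_pos)]
        if any(occ[p] > occ[p - 1] for p in range(1, n_pos)):
            return False                   # a position filled before its predecessor
    return True

# Two operations queued on tank 0 at positions 0 and 1: feasible.
print(check_assignment({(0, 0, 0): 1, (1, 0, 1): 1}, [0, 1], [0], 2))     # True
# Position 1 used on tank 0 while position 0 is empty: infeasible.
print(check_assignment({(0, 0, 1): 1, (1, 1, 0): 1}, [0, 1], [0, 1], 2))  # False
```

The timing-related constraints (precedence, nonoverlap, time windows) additionally involve the starting times S_i, which in the paper's approach are produced and validated by the simulation run rather than checked in isolation.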
A controller in the simulation model establishes the order of movements in each simulation run to satisfy the aforementioned constraints. The approach taken in this study combines the key advantages of both the MHEDA and discrete-event simulation. Using the Delmia-Quest® simulation language, the main constraints related to the hoists are satisfied. Furthermore, with Delmia-Quest®, the makespan and the workload of the most loaded machine are obtained directly from the simulation model, while the MHEDA is in charge of modeling the solution-space distribution.

MHEDA for the MHSP. To understand the MHEDA methodology, a multihoist scenario is detailed below. Table 5 shows the jobs, operations, machines, and additional information about the multihoist layout configuration. Figure 3 depicts the layout. The MHEDA contains differences and similarities with respect to a recently published algorithm, the MEDA (Mallows Estimation of Distribution Algorithm). The MEDA algorithm can be consulted in Pérez-Rodríguez and Hernández-Aguirre [51]. The details of the MHEDA, as well as the differences and similarities between the MHEDA and the MEDA, are given below.

Solution Representation. In this research, three vectors are used to represent a solution: a task sequence vector, a machine assignment vector, and a hoist assignment vector. The first vector, the task sequence vector, is a permutation-based representation that models the processing sequence of operations. An example is depicted below; based on Table 3, there are three jobs and ten operations, indexed from 1 to 10. A task sequence vector example is one where operation number four should be executed at the beginning; after that, operation number one, then operation number seven, and so on.
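Since each job's operations must keep their given order inside any task sequence vector, a candidate permutation can be validated with a short check. The grouping of the ten operations into jobs below is hypothetical, standing in for the actual assignments of Table 3:

```python
# Sketch: verify that a task sequence vector preserves each job's operation
# order (precedence-preserving permutation check).

def respects_precedence(task_seq, job_ops):
    """task_seq: a permutation of operation ids.
    job_ops: {job: [op1, op2, ...]} in the required processing order."""
    rank = {op: idx for idx, op in enumerate(task_seq)}
    for ops in job_ops.values():
        for earlier, later in zip(ops, ops[1:]):
            if rank[earlier] > rank[later]:
                return False
    return True

# Three jobs, ten operations (hypothetical job grouping):
job_ops = {1: [1, 2, 3], 2: [4, 5, 6], 3: [7, 8, 9, 10]}
print(respects_precedence([4, 1, 7, 5, 2, 8, 6, 3, 9, 10], job_ops))  # True
print(respects_precedence([2, 1, 7, 5, 4, 8, 6, 3, 9, 10], job_ops))  # False
```

Only permutations passing this check correspond to feasible operation sequences, which is why the three-vector representation cannot generate infeasible individuals once this order is respected.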
Although each element in the task sequence vector shown above can be located in any position along the solution vector, as in any permutation-based representation, the precedence between the operations of each job must be kept in the vector. For example, operations 1 and 2, from job number one, must appear in the same relative order in the vector, from left to right. If the operation sequence vector satisfies the precedence mentioned above, it satisfies the corresponding original operation sequence of the job. This representation is based on Gen et al. [52]. As a fixed parameter, 1000 solution vectors are defined for the population.

The second vector, the machine assignment vector, has a length equal to the total number of operations, where each element represents the machine selected for the corresponding operation. To explain the representation, an example is provided by considering the task sequence vector shown below. A feasible machine assignment vector can then be built; based on Table 3, it means that machine number four must be used for operation number one, machine number three for operation number two, and so on. As a fixed parameter, 1000 solution vectors are defined for the population.

The third vector, the hoist assignment vector, has a length equal to the total number of operations, where each element represents the hoist selected for the corresponding operation. Based on Table 3, an example is depicted by considering the task sequence vector shown below: with ten operations and two hoists, indexed from 1 to 2, a feasible hoist assignment vector can be one where the first operation is executed with hoist number two, the second operation with hoist number one, and so on. As a fixed parameter, 1000 solution vectors are defined for the population. By using this representation, i.e., through three vectors, infeasible individuals are not generated.

Fitness Computing.
Two objectives are considered for optimization in this research: the makespan and the maximum workload. For each solution, the corresponding values are obtained from the simulation model, built on Delmia-Quest®. The main details of the simulation model are outlined below:

(i) A conveyor system is used to move rack baskets through the different hoists.
(ii) The conveyor system is unidirectional only.
(iii) When a rack has finished its production sequence, it goes through the conveyor system to exit the system.
(iv) Different hoists give service to any rack according to a predefined sequence.
(v) Racks can receive service from different hoists based on the predefined sequence.
(vi) Placing racks in tanks is only possible by means of the hoists.
(vii) Hoists are also used to put racks back on the conveyor.

With all these features, the simulation model is able to integrate operation times and workflows. Finally, the fitness is used by the MHEDA to build the Pareto-front and to obtain the best solution at the end of the execution.

Pareto-Front. All members of the population are used in order to build a Pareto-front based on Kacem et al.'s [53] research. Once a Pareto-front is built, the selection of the best candidate solutions is based on where the candidates are located on the Pareto-front: only the members located in the first Pareto-front layer are preferred. Although the MEDA also utilizes the Pareto-front for the selection process, the MEDA requires a tournament process to select the corresponding candidates. In the MHEDA, the candidates are selected without a tournament, i.e., the selected candidates must simply be located in the first Pareto-front layer. Figure 4 depicts a Pareto-optimality approach example. A nondominated set of solutions is found in each generation and used for building the probability model.

Probability Model for Task Sequence Vectors. As in the MEDA, the MHEDA builds a probability model using the Mallows model for the selected task sequence vectors.
Fligner and Verducci [9] formally establish the Mallows model as

P(σ) = (1/ψ(θ)) exp(−θ D(σ, σ0)),

where θ is a spread parameter, D(σ, σ0) is the distance from σ to the central permutation σ0, and ψ(θ) is a normalization constant. In the present work, Kendall's τ is the distance metric with which the Mallows model is coupled. The Mallows model was initially proposed by Mallows [8] and later generalized by Fligner and Verducci [9] through the generalized Mallows distribution (GMD). The GMD is given as follows:

P(σ) = (1/ψ(θ)) exp(−Σ_{j=1}^{n−1} θ_j V_j(σ, σ0)),

where the θ_j are dispersion parameters, ψ(θ) is a normalization constant, and V_j(σ, σ0) can be defined as an auxiliary vector, i.e., it represents the number of positions on the right side of position j with values smaller than the value at the current position in the permutation.

Consider a task sequence vector with four tasks (n = 4) as an example. Let the central permutation be σ0 = {1, 2, 3, 4}, i.e., task 1 is located in the first position, task 2 in the second position, and so on. Let a task sequence vector be given by σ = {4, 2, 3, 1}. Then V_j(σ, σ0) for the jth position is calculated as V_1 = 3, V_2 = 1, V_3 = 1.

Applied Computational Intelligence and Soft Computing

The corresponding Kendall's τ distance is five, i.e., D = Σ_{j=1}^{n−1} V_j(σ, σ0) = 3 + 1 + 1 = 5. The procedure is carried out for each task sequence vector, i.e., V_j(σ, σ0) should be calculated for all the vectors in the selected population. The next step consists of computing the dispersion parameters θ_j, which are given by the maximum-likelihood equation

V̄_j = 1/(e^{θ_j} − 1) − (n − j + 1)/(e^{(n−j+1)θ_j} − 1), (11)

where V̄_j = (1/N) Σ_{i=1}^{N} V_j(σ_i, σ0). Equation (11) can be solved by the Newton-Raphson method. The probability distribution of the random variables in the auxiliary vector V_j(σ, σ0) can be written as

P(V_j(σ, σ0) = r) = exp(−θ_j r)/ψ_j(θ_j), r = 0, 1, ..., n − j. (12)

This means that the possible values for V_j(σ, σ0) in the jth position lie between 0 and n − j, where n is the length of the selected task sequence vectors. For the operation scheduling decision, the offspring are obtained as follows.
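The V_j computation and the resulting Kendall's τ distance from the worked example can be sketched as follows (an illustrative snippet, not the authors' implementation):

```python
# Sketch: auxiliary vector V_j and Kendall's tau distance for the Mallows
# model, as in the four-task worked example.

def v_vector(sigma, sigma0):
    """V_j = number of positions to the right of position j holding smaller
    values, after relabeling sigma with respect to the center sigma0
    (the relabeling is the identity when sigma0 is 1..n, as in the example)."""
    inv0 = {task: pos for pos, task in enumerate(sigma0)}
    pi = [inv0[task] for task in sigma]          # relabel w.r.t. the center
    return [sum(1 for right in pi[j + 1:] if right < pi[j])
            for j in range(len(pi) - 1)]

def kendall_tau(sigma, sigma0):
    # Kendall's tau distance is the sum of the auxiliary vector entries.
    return sum(v_vector(sigma, sigma0))

print(v_vector([4, 2, 3, 1], [1, 2, 3, 4]))     # [3, 1, 1]
print(kendall_tau([4, 2, 3, 1], [1, 2, 3, 4]))  # 5
```

This reproduces the values 3 + 1 + 1 = 5 from the example, and the same routine is applied to every selected task sequence vector to obtain the V̄_j averages.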
As an example, let V_s(σ, σ0) be a sample vector obtained from equation (12). Specifically, the sampled auxiliary vector is decoded into a permutation and composed with the central permutation σ0 to produce an offspring task sequence vector.

Probability Model for the Machine Assignment Vectors. As in the MEDA, the MHEDA builds a probability model to determine an estimate of a distribution from which new offspring (machine assignment vectors) are generated using the selected members. Again as in the MEDA, the MHEDA obtains the estimate through the Univariate Marginal Distribution Algorithm (UMDA). The probability model for the previously selected machine assignment vectors can be represented by a probability matrix p, where each p_i value represents the number of times machine i is selected for a specific position. Each element of the probability matrix p represents the probability that a position is processed on a machine; the value of each element indicates the suitability of processing a position on a certain machine. Based on Table 3, and as a short example, the vectors shown below are used to build the corresponding probability matrix p for the first position, i.e., p_1 (Table 6). The offspring are obtained as follows: for each position, generate a U[0, 1] value; this value is then located in the cumulative probability matrix p to identify which machine should be selected.

Probability Model for the Hoist Assignment Vectors. The MHEDA uses the UMDA algorithm to determine an estimate of a distribution from which new offspring (hoist assignment vectors) are generated using the selected members. The probability model for the previously selected hoist assignment vectors can be represented by a probability matrix q, where each q_i value represents the number of times hoist i is observed for a specific task. Each element of the probability matrix q represents the probability that a task is processed by a hoist; the value of each element indicates the suitability of processing a task with a certain hoist.
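Both the machine matrix p and the hoist matrix q follow the same UMDA pattern: count per-position frequencies in the selected vectors, normalize, and sample by inverting the cumulative distribution. A minimal sketch (illustrative, with a made-up selected population):

```python
import random

# Sketch: UMDA-style probability matrix for assignment vectors (machines or
# hoists), built from the selected population.

def build_matrix(selected, n_choices):
    """selected: list of assignment vectors (choices indexed from 1).
    Returns one probability row per position."""
    n_pos = len(selected[0])
    matrix = []
    for pos in range(n_pos):
        counts = [0] * n_choices
        for vec in selected:
            counts[vec[pos] - 1] += 1
        total = sum(counts)
        matrix.append([c / total for c in counts])
    return matrix

def sample_vector(matrix):
    """Draw one offspring by inverse-CDF sampling per position."""
    offspring = []
    for row in matrix:
        u, acc = random.random(), 0.0
        for choice, prob in enumerate(row, start=1):
            acc += prob
            if u <= acc:
                offspring.append(choice)
                break
        else:
            offspring.append(len(row))       # guard against rounding drift
    return offspring

selected = [[2, 1, 2], [2, 1, 1], [1, 1, 2]]   # e.g. hoist vectors, 2 hoists
q = build_matrix(selected, n_choices=2)
print(q[0])   # hoist 1 chosen once, hoist 2 twice, for the first task
```

The same two functions serve for the machine matrix p by passing machine assignment vectors and the number of machines instead.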
Again, as a short example, the vectors shown below are used to build the corresponding probability matrix q for the first task, i.e., q_1 (Table 7). The offspring are therefore obtained with the same procedure as for the machine assignment vectors: for each task, generate a U[0, 1] value; this value is then located in the cumulative probability matrix q to identify which hoist should be selected. In each generation, the new candidate solutions are used for building the Pareto-front, and the probability model is updated with the information from the new members located in the first Pareto-front layer.

Multihoist Simulation Model. The simulation model is built on Delmia-Quest®. This model includes the several types of detail that the multihoist process presents: set elements, set hoists and tracks, set jobs, set processes, load and unload processing, and transfer of jobs between tanks. These situations are present in the given process. The model is able to handle any operation of each job that is scheduled, and it is able to identify the sequence of operations for each job. Figure 5 shows a global 3D layout of the production process. The reason for utilizing a discrete-event simulation platform is the stochasticity of the underlying process. The main procedure for building the simulation model, with the features mentioned above, is detailed below; the model is elaborated using batch commands provided by Delmia-Quest®.

Conveyor Structure. The aforementioned process contains a conveyor system, used to transport any basket (job) through the shop floor. The conveyor system is built using 3D linear and arc segments provided by the platform. Each conveyor segment is positioned and connected according to the flow and the real distribution of the shop floor.

Hoists. Each hoist is used to execute load and unload operations between tanks (machines). Each hoist attends to a specific group of tanks.
If any basket requires a specific operation on a specific machine, then the hoist assigned to that machine executes the transport task. The hoist's movements, such as forward, return, park, load, and unload, are executed and controlled by logic commands provided by the platform. These logic commands are also able to avoid collisions between hoists.

Source and Sink. A source is built in the model; it enables the baskets to enter the model. A sink is also built in the model to destroy all the baskets after the production process.

Buffers. The buffers are used as load and unload locations. Once a basket enters the shop floor, it goes through the conveyor system and waits at the corresponding buffer until the hoist executes the movement.

Tanks. The tanks are actually the machines. They are built and positioned on the shop floor according to the layout of the process.

Baskets. The baskets are actually the jobs. They are created by the source and follow different routes through the production process according to the requirements.

Setting Processes. A process is an operation in the MHSP. A process is executed by a previously established tank. However, the process must be detailed, i.e., it must indicate which basket should be processed and the processing time window of the process. Once a process is established, it should be linked to the corresponding tank.

Setting Process Sequence. A process sequence must be established for each basket. This allows the model to verify whether the basket has finished all its operations in the production process. If that is not the case, the basket continues on the shop floor via the conveyor system until all its operations have been concluded.

Verification and Validation. The simulation model is run under different conditions to determine whether its computer programming and implementation are correct, applying a verification technique known as the fixed values test [55].
The throughput, as a model result, is verified against data provided by the managers of the process. Figure 6 depicts previous descriptions of three months of real production. Furthermore, the validation of the simulation model is performed statistically: a comparison of the results derived from the simulation model with real production is done under the same initial conditions, satisfying the statistical assumptions of the validation. Figure 7 depicts this information. Finally, Table 8 depicts the main differences and similarities between the MHEDA and the MEDA for clarity.

Results and Comparison. In order to validate the relevance of this paper, a comparison of the MHEDA results with others is carried out. A set of standard benchmarking datasets is used for the comparison: the Adams et al. [56] instances; the Fisher and Thompson [57] instances; the Lawrence [58] instances; the Applegate and Cook [59] instances; the Storer et al. [60] instances; and the Yamada and Nakano [61] instances. For each instance, 30 trials are executed to account for the stochastic nature of the MHEDA. Three metrics are used to compare the performance of the algorithms. First, the mean absolute error (MAE) is

MAE = (1/T) Σ_{i=1}^{T} |c_i − c+|, (13)

where c_i is the best hypervolume, from the Pareto-front, obtained after running each trial, and c+ is the best hypervolume found. Second, the mean squared error (MSE) is

MSE = (1/T) Σ_{i=1}^{T} (c_i − c+)^2. (14)

The MSE measures the amount of error between two datasets, that is, between the values that the algorithm returns and the values that it should obtain. Third, the relative percentage increase (RPI) is

RPI = 100 (c_i − c+)/c+. (15)

The RPI is used to compare two quantities while taking into account the "sizes" of the quantities being compared; the comparison is expressed as a ratio.

Comparison with Other Estimation of Distribution Algorithms. As in other previous works, some EDAs have been included as a benchmark for comparison with the MHEDA scheme: the MIMIC by De Bonet et al. [62]; the COMIT by Baluja and Davies [63]; and the BOA by Pelikan et al. [64]. Figure 8 indicates the output of the algorithms using equation (13).
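The three comparison metrics (MAE, MSE, and RPI over T trials against a best-known value c+) can be computed directly; the standard textbook forms are assumed here:

```python
# Sketch: per-trial comparison metrics against the best known hypervolume c+.
# Standard forms are assumed for the paper's equations (13)-(15).

def mae(values, best):
    """Mean absolute error of trial results against the best value."""
    return sum(abs(c - best) for c in values) / len(values)

def mse(values, best):
    """Mean squared error of trial results against the best value."""
    return sum((c - best) ** 2 for c in values) / len(values)

def rpi(value, best):
    """Relative percentage increase of one trial over the best value."""
    return 100.0 * (value - best) / best

trials = [98.0, 99.0, 100.0]   # hypervolumes from three runs (illustrative)
best = 100.0
print(mae(trials, best))       # 1.0
print(rpi(98.0, best))         # -2.0
```

Lower MAE and MSE mean the trial results are tighter around the best value, which is exactly the dispersion behavior discussed in the box-and-whisker comparisons below.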
Through box-and-whisker charts, the dispersion of the values obtained using the MAE metric can be appreciated. The dispersion of the values, using the MAE, shows the results over the instances and over different runs. As can be seen, the MHEDA obtains better results than the other algorithms; based on the MAE, the MHEDA is more accurate than the other algorithms. Figure 9 shows the output of the algorithms using equation (14). The dispersion of the results is similar to Figure 8. Based on the MSE metric, the MHEDA obtains the interval with the smallest error with respect to the other algorithms for the MHSP. Figure 10 depicts the results obtained by the algorithms using equation (15). The MHEDA obtains better results than the other algorithms; the MHEDA scheme outperforms all the previous results. Based on the results, the members located in the first Pareto-front layer contribute to improving the search process of the MHEDA scheme against the other algorithms. Although the performance of all the algorithms used in the comparison is outstanding, the MHEDA scheme can find the best value in all the trials. In addition, the MHEDA scheme is able to find the best hypervolume for all the instances used in the comparison; it consistently finds the best value in all the trials.

Algorithm 1: Pseudocode of the MHEDA framework.
Repeat
  FitD_{t−1} ← Evaluate individuals (fitness) through Delmia-Quest®
  Pareto_{t−1} ← Select best individuals from FitD_{t−1}
  σ0 ← Central permutation computed from Pareto_{t−1}
  V_{t−1} ← Distance computation from D_{t−1} and σ0
  θ ← Spread parameter computation from V_{t−1}
  Ds_t ← Sampling from V_{t−1}
  p_{t−1} ← p matrix computation from Pareto_{t−1}
  Dp_t ← Sampling from p_{t−1}
  q_{t−1} ← q matrix computation from Pareto_{t−1}
  Dq_t ← Sampling from q_{t−1}
  D_t ← Replace all old members with the new offspring
  t := t + 1
Until the stopping criterion is met
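Algorithm 1 maps onto a compact driver loop. The sketch below is structural only: the helpers are placeholders for the steps described in the text (in particular, the real fitness evaluation runs inside Delmia-Quest®), and the population is represented as (task sequence, machine vector, hoist vector) tuples:

```python
# Structural sketch of the MHEDA loop (Algorithm 1). All helper callables are
# placeholders standing in for the Mallows/UMDA steps described in the text.

def mheda(init_population, evaluate, pareto_first_layer,
          fit_mallows, sample_mallows, fit_umda, sample_umda,
          generations=100):
    # population: list of (task_seq, machine_vec, hoist_vec) tuples
    population = init_population()
    for _ in range(generations):
        fitness = evaluate(population)                 # makespan + max workload
        elite = pareto_first_layer(population, fitness)
        # Fit the three probability models on the first Pareto-front layer.
        mallows = fit_mallows([ts for ts, _, _ in elite])
        p = fit_umda([mv for _, mv, _ in elite])
        q = fit_umda([hv for _, _, hv in elite])
        # Replacement: the whole old population is replaced by offspring.
        population = [(sample_mallows(mallows), sample_umda(p), sample_umda(q))
                      for _ in population]
    return pareto_first_layer(population, evaluate(population))
```

The full-replacement step mirrors the text's remark that only the offspring continue the evolutionary progress, and the elite selection takes exactly the first Pareto-front layer, with no tournament.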
The dispersion of the MHEDA is smaller than that of the other algorithms; this means that the solutions found by the MHEDA are more concentrated around the best value than those of the other algorithms, i.e., the average of the MHEDA's solutions converges better to the best found value than the other approaches.

Comparison with Other Multiobjective Algorithms. Furthermore, as in other previous works, two multiobjective algorithms have been considered to evaluate the MHEDA performance: the NSGA by Srinivas and Deb [65] and the NSGA-II by Deb et al. [66]. The experiments were executed on the same computer and with the same language specification. Figure 11 details the output of the algorithms using equation (13). The dispersion of the values, using the MAE metric, shows the results over the instances and over different runs. Through box-and-whisker charts, it is possible to identify that the behavior of the algorithms is similar to the output detailed above. Based on the MAE metric, the MHEDA is more accurate than the other algorithms. Figure 12 indicates the output of the algorithms using equation (14). The dispersion of the results is similar to Figure 11. Based on the MSE metric, the MHEDA obtains the smallest median error with respect to the other algorithms for the MHSP. Figure 13 shows the results obtained by the algorithms using equation (15). The behavior is practically the same among the algorithms, i.e., the MHEDA scheme again outperforms all the previous results. Figure 13 includes the performance of the three algorithms, the NSGA, the NSGA-II, and the MHEDA, after running all the instances. Based on equation (15), the MHEDA scheme outperforms all the algorithms used in the comparison. As can be seen, the MHEDA is competitive in identifying the best candidate solutions. The performances of the algorithms used in the comparison are very similar to those in the previous comparison. The medians are 0.25% above the best found value. The MHEDA scheme again found the best value in all the trials.
The MHEDA scheme is consistently able to find the best hypervolume for all the instances used in the comparison; again, it consistently finds the best value in all the trials. The dispersion of the MHEDA is much smaller than that of the other algorithms; this means that the solutions found by the MHEDA are more concentrated around the best value, i.e., the average of the MHEDA's solutions converges better to the best found value than the other approaches. Based on the results, the MHEDA scheme is able to identify the best solutions through a distance-based ranking model, i.e., the Mallows model.

Comparison with Recent Algorithms for the MHSP. Based on the previous results, recent algorithms are proposed as a benchmark for comparison with the MHEDA scheme: the mixed-integer programming model with a heuristic presented by El Amraoui and Elhafsi [38]; the algorithm proposed by Xie et al. [41]; and the mathematical model with a genetic algorithm detailed by Zeng et al. [44]. All the algorithms mentioned are considered recent algorithms for the MHSP and have been implemented by the authors. The experiments were executed with the same parameters and specifications detailed above. Figure 14 shows the output of the algorithms using equation (13). The dispersion of the values, using the MAE metric, depicts the results over the instances and over different runs. Through box-and-whisker charts, it is possible to identify that the behavior of all the algorithms is similar. Based on the MAE metric, the MHEDA is more accurate than, and outperforms, all the algorithms used in the comparison. Figure 15 indicates the output of the algorithms using equation (14). The dispersion of the results is similar to Figure 14. Based on the MSE metric, the MHEDA obtains the smallest median error with respect to the other algorithms for the MHSP.
Figure 16 presents the results obtained by the algorithms using equation (15). In this case, the MHEDA outperforms all the other recent algorithms; in practice, there exists a significant difference, as shown in Figure 17. The performance of the recent algorithms and that of the MHEDA scheme are different. Based on Figure 16, the medians are 0.10% above the best found value. The MHEDA scheme can again find the best value in all the trials, and it is consistently able to find the best hypervolume for all the instances used in the comparison. The dispersion of the MHEDA is much smaller than that of the other algorithms; this means that the solutions found by the MHEDA are more concentrated around the best value, i.e., the average of the MHEDA's solutions converges better to the best found value than the other approaches. Based on the results, the MHEDA scheme tackles the known inconveniences of EDAs, i.e., the lack of diversity of the solutions and a poor exploitation ability. The proposed algorithm does not require evolutionary operators, such as crossover and mutation, to obtain offspring; the MHEDA makes use of the GMD process to establish a search direction.

The EDA scheme considers the population size, the replacement (also known as the generation gap), and the selection strategy as key parameters, which is consistent with Grefenstette [67]. (i) The population size: in the current experiments, the population size ranged from 500 to 1000 solutions in increments of 500. (ii) The replacement: the current experiments allowed the percentage of the population to be replaced during each generation to vary between 50% and 100%, in increments of 50%. (iii) The selection: the experiments compared two sorting criteria, i.e., sorting by the makespan and sorting by the maximum workload. A design of experiments was built to identify the best value of each parameter. The parameter tuning is detailed in Table 9.
Finally, the results of the parameter tuning are shown in Figure 18. There is no statistically significant difference for any of the three controlled parameters (number of generations, initial population size, and selected population size). Therefore, the same parameter values are used for all the algorithms.

Conclusions and Future Research. This paper considers the MHSP, which seeks to determine the sequence of operations for each hoist to perform so that a given performance metric is optimized. The MHEDA scheme is proposed for tackling the problem and simulating a solution. The aforementioned instances were used as input and test parameters in order to validate the Mallows model as a probability model for the MHSP. The hybridization between the Mallows model and the proposed EDA helps to identify an explicit distribution over a set of permutations. Traditional operators are not considered for building suitable sequences, i.e., operation sequence vectors; these are obtained from the Mallows model. The proposed exponential model, i.e., the Mallows model, is able to tackle the inconvenience any EDA has with permutation-based problems. The MHEDA scheme does not need to be reconfigured for solving the MHSP; the Mallows model is considered a specific probability model for this issue. Based on the results, the exponential model is more effective than the other algorithms. In addition, the MHEDA scheme considers three probabilistic models instead of only one, as in almost all the EDAs reported in the literature. The results of the MHEDA are more concentrated around the best Pareto-front obtained in all the trials, whereas the rest of the algorithms produce more dispersed results; this is consistent across all the experiments detailed above. Considering only the best solutions in the selection process, through a Pareto-front approach, is suitable for obtaining a better estimate of the central permutation, and a better estimate of the central permutation helps to improve the performance of the MHEDA scheme.
Based on the results detailed above, a simulation model is useful for modeling the critical variables that influence the performance of the process, and it should be considered in almost any proposed solution. Moreover, simulation optimization is an enabling tool for handling diverse manufacturing limitations such as those of the MHSP. The simulation approach helps to model the difference between classical job-shop configurations and the MHSP. The simulation language is able to order each pair of hoist operations, and to avoid any collision between hoists, in order to satisfy the aforementioned constraints related to the hoist operations. In addition, the replacement step in the MHEDA scheme utilizes only the offspring to continue the evolutionary progress, which helps to reduce the dispersion of the results around the best solution. Finally, as in other previous works, this research contributes by using the MHEDA as an optimization method capable of working with any simulation language.

Future research could consider dynamic aspects such as hoist failures, shutdowns, jobs with priority, types of jobs, and types of hoists; these dynamic issues should be integrated into any proposed algorithm. Other comparisons should also be investigated, as future research, in order to determine in which other permutation-based problems the MHEDA has a competitive performance.

Data Availability. All data are included in the manuscript.
Scanning Electron Microscopy Observation of Adhesion Properties of Bifidobacterium longum W11 and Chromatographic Analysis of Its Exopolysaccharide

Bifidobacterium spp. can produce cell-bound or released exopolysaccharide (EPS), a beneficial trait mediating commensal-host interactions. However, differences in the physico-chemical characteristics of the EPS produced by different strains of Bifidobacterium spp. are determinant for adhesion ability and modulation of the immune response. The aim of this study was to investigate the in vitro adhesion characteristics of Bifidobacterium longum W11 to the intestinal epithelial cell line HT-29, by Scanning Electron Microscopy (SEM), and the chemical characteristics of its exopolysaccharide, using Thin-Layer Chromatography (TLC) analysis. SEM observation showed a good adhesion of B. longum W11 to the HT-29 monolayer, which could be increased by the formation of exocellular polymers. TLC analysis of the purified and hydrolyzed EPS showed that the cell-surface and extracellular polysaccharide were composed mainly of fructose and glucose; other sugars were present in smaller quantities. The information from this study on the physico-chemical characteristics of the EPS of B. longum W11 could contribute to understanding the physiology of bifidobacteria and their interaction with the host.
Introduction. For probiotic bacteria selected for commercial use in food and in therapeutics, adhesion to human intestinal cells is considered a fundamental property. Adhesion enables probiotic strains to persist longer in the intestinal tract and, thereby, to stabilize the intestinal mucosal barrier, to provide competitive exclusion of pathogenic bacteria, and to better develop their metabolic and immunomodulatory activity [1]. In vitro evaluation tests indicated that adhesion ability and modulation of host immunity depend mainly on the strain [2]-[4]. Medina et al. (2007) compared the immunological properties of strains of Bifidobacterium longum and demonstrated that different strains are potentially able to drive the immune response in opposite directions in vitro, suggesting different immune potentials in clinical practice in vivo [2].

Several studies showed that some strains of Bifidobacterium spp. can produce cell-bound or released exopolysaccharides (EPSs) and that exopolysaccharide production is a beneficial trait mediating commensal-host interaction through immune modulation [5] [6]. However, differences in the physico-chemical characteristics of the EPS produced by Bifidobacterium spp. could be determinant for probiotic functionality and immunomodulation capability [7] [8]. Medina et al. (2007), studying the immunological properties of the structural cell components and the secreted molecules of strains of Bifidobacterium longum, did not investigate the production of exopolysaccharides [2]. The composition of the EPS of Bifidobacterium spp. is important for understanding their interaction with the host, but the polymers have been characterized for only a few of the commercialized probiotic strains.

The strain Bifidobacterium longum W11 is widely employed in the formulation of popular probiotic products and has received special attention for its probiotic effects and its long history of safe use in food supplements.
The aim of this study was to investigate the in vitro adhesion characteristics of B. longum W11 to the human intestinal epithelial cell line HT-29 by Scanning Electron Microscopy (SEM) and to characterize the chemical composition of its exopolysaccharide using chromatographic methods.

Bacterial Strain and Culture Conditions. Bifidobacterium longum W11, a probiotic strain commercialized by Alfa Wasserman (Italy) and produced by Probiotical (Italy), was tested, using a pack purchased in December 2013. The strain was routinely grown in MRSC broth, composed of MRS (de Man, Rogosa & Sharpe; Sigma Aldrich, Italy) plus 0.25% L-cysteine (Sigma Aldrich, Italy), at 37°C under anaerobic conditions (80% N2, 10% CO2, and 10% H2) using the AnaeroGen sachet (Oxoid, Italy) for 24 h. As a standard procedure, the strain was cultivated on MRSC agar and incubated at 37°C in an anaerobic atmosphere for 48 h.

Cell Line and Culture Conditions. The human epithelial intestinal cell line HT-29 (ATCC® HTB38™) was used in this study. The HT-29 cells were routinely grown in Dulbecco's modified Eagle's medium (DMEM, Sigma Aldrich, Italy), supplemented with 10% (v/v) heat-inactivated fetal bovine serum and 100 µg/ml of streptomycin (Sigma Aldrich, Italy), at 37°C in a humidified atmosphere of 5% CO2.
Adhesion of Bifidobacterium longum W11 Using Scanning Electron Microscopy. For the adhesion assays, HT-29 cells (about 2 × 10^5 cells/ml) were seeded in 24-well tissue culture plates (Sigma Aldrich, Italy) on microscopy cover glasses in DMEM and incubated at 37°C in a humidified atmosphere of 5% CO2 until reaching 95 to 100% confluence (14 ± 1 days). The cell monolayers were washed twice with antibiotic-free DMEM before the bacterial cells were added. An overnight culture of Bifidobacterium longum W11 in MRSC broth was centrifuged for 10 min at 3000 rpm, and the bacterial pellet was resuspended in antibiotic-free DMEM medium to a final concentration of about 1.5 × 10^8 CFU/ml, determined photometrically (OD600) (Bio-Rad model 680). The wells with HT-29 cells and bacteria were incubated for different times (30, 60, and 120 min) at 37°C in a humidified atmosphere of 5% CO2. The adhesion to HT-29 cells and the biopolymer formation of B. longum W11 were observed by Scanning Electron Microscopy (SEM) using the method described by Ali et al.
(2009) [9]. After incubation, the HT-29 monolayer was washed three times for 5 min with 0.1 M sodium cacodylate buffer, pH 7.4, previously filtered (0.22 μm). Then, the cells were fixed with 4% (w/v) glutaraldehyde (Sigma Aldrich, Italy) in 0.1 M PBS (pH 7.2) for 12 h at 4˚C. The HT-29 monolayer, after washing three times with sodium cacodylate buffer, was treated with 1% osmium tetroxide (OsO₄) in 0.1 M sodium cacodylate buffer (pH 7.4) for 1 h at 4˚C, washed again twice with 0.1 M sodium cacodylate buffer for 10 min, and dehydrated in a graded ethanol series (35% v/v, 50% v/v, 75% v/v, 100% v/v). Microscope cover glasses were then prepared for Critical Point Drying (CPD). The cells were dried and coated with gold. The microscope cover glasses were then examined with a Scanning Electron Microscope (SEM, Hitachi S-4000). During the experiment, two wells containing only HT-29 cells were used as controls. Each assay was performed in duplicate to determine inter-assay variation.

EPS Extraction
The exopolysaccharide (EPS) produced by Bifidobacterium longum W11 was extracted by the method described by Ruas-Madiedo et al. (2006) [10]. Cellular biomass was collected from two Petri dishes (90 mm) with MRS agar using 2 ml of ultrapure water and a plastic L-shaped spreader. To release the polymer from the cell surface, 1 vol of 2 M NaOH was added to the cellular suspension and stirred overnight at room temperature. Afterwards, the cells were removed by centrifugation (8400 g for 30 min) and EPS from the supernatant was precipitated for 3 days at 4˚C using 2 vol of absolute ethanol. The precipitated EPS fraction obtained after centrifugation was resuspended in ultrapure water and dialyzed (3 days at 4˚C) against the same daily-changed water using dialysis tubes (Sigma Aldrich) of 12 kDa molecular mass cutoff. The dialyzed EPS fractions were freeze-dried.

Quantitative Analysis of EPS
Quantitative analysis of the recovered EPS was carried out according to the method of Dubois et al.
(1956) [11]. Briefly, two ml of EPS solution were pipetted into a colorimetric tube, and 0.05 ml of 80% phenol were added. Then, 5 ml of concentrated sulfuric acid were rapidly added. After 10 min the tubes were shaken and placed for 10 to 20 min in a water bath at 25˚C-30˚C. Two ml of the solution were placed in a cuvette and measured by spectrophotometry (Agilent 8453). The absorbance of the characteristic yellow-orange color was measured at 490 nm for hexoses and 480 nm for pentoses. The amount of EPS was determined by reference to a standard curve constructed using different concentrations (from 10 to 100 μg) of glucose solutions. All solutions were prepared in triplicate.

Hydrolysis of EPS
The monosaccharide composition of the EPS was determined by modifying the method described by Yang et al. (2010) [12]. Briefly, the purified EPS sample (2 mg) was hydrolyzed with 1 ml of 2 M trifluoroacetic acid (TFA) at 100˚C for 1 h. Then the TFA was evaporated under a stream of nitrogen at 60˚C and the hydrolyzed EPS was solubilized in methanol.

Thin-Layer Chromatography (TLC) Analysis
To identify the monosaccharides, the hydrolyzed EPS was analyzed by TLC using silica and cellulose as two different stationary-phase supports. The silica plate was eluted using ethyl acetate-acetic acid-methanol-water (60:15:15:10). The sugar spots were visualized using, as chromogenic reagent, a solution of p-anisaldehyde and sulfuric acid (2:1) in acetic acid, heating the plate for 10 min at 100˚C. The cellulose plate was eluted using ethyl acetate-pyridine-water (40:20:30) (upper phase). In this case the sugar spots were visualized using, as chromogenic reagent, 0.1 M p-anisidine-phthalic acid in 96% ethanol, heating the plate for 10 min at 100˚C. Arabinose, fructose, fucose, galactose, glucose, mannose and xylose were used as standard sugars.
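The phenol-sulfuric acid quantification described above reads the EPS amount off a glucose standard curve. A minimal sketch of that back-calculation is shown below; the calibration absorbances are hypothetical illustrations, not the study's actual readings.

```python
# Least-squares glucose standard curve for the Dubois phenol-sulfuric method,
# then back-calculation of sugar content from a sample absorbance.
# All numeric values here are hypothetical, for illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: glucose (ug) vs absorbance at 490 nm
glucose_ug = [10, 25, 50, 75, 100]
absorbance = [0.08, 0.20, 0.41, 0.62, 0.80]

slope, intercept = fit_line(glucose_ug, absorbance)

def eps_amount(a490):
    """Sugar content in ug glucose equivalents for a sample absorbance."""
    return (a490 - intercept) / slope

print(round(eps_amount(0.58), 1))  # ug glucose equivalents for A490 = 0.58
```

The same pattern applies to a pentose curve read at 480 nm; only the standard sugar and wavelength change.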
Results
Scanning electron microscopy showed that adhesion of the strain to the HT-29 monolayer was already present after 30 min. Bacterial adhesion further increased after 60 min and 120 min. Figure 1 shows the adhesion of Bifidobacterium longum W11 on the HT-29 monolayer at 60 min and 120 min (magnification 5000×).

The SEM observation also showed that B. longum W11, adhering to the HT-29 monolayer, was able to produce biopolymers. Figure 2 shows that B. longum W11 and biopolymers formed a complex, biofilm-like 3D structure at 60 min and at 120 min (magnification 5000× and 4000×, respectively).

The specific methodology used for EPS extraction confirmed the production of the biopolymers observed by SEM. Using spectrophotometric methods, the absorbance values showed that the recovered EPS amounted to about 72 µg, in comparison with the standard. Using silica plates, TLC analysis of the hydrolyzed and purified EPS revealed, by the retardation factor (Rf) values, the presence of fructose and glucose. These results were confirmed using cellulose plates. With both methods, the TLC analysis showed the presence of other sugars, which are under examination with different methodologies. In addition, non-carbohydrate constituents could be present.

Discussion and Conclusion
Some of the benefits attributed to bifidobacteria have been correlated with their capacity to mediate commensal-host interaction through immune modulation and pathogen protection [6]. However, strains of some species of Bifidobacterium can elicit different in vitro responses upon interaction with human cells [7]. Bifidobacterium longum strains can divert immune responses in vitro either towards a pro-inflammatory or a regulatory profile, suggesting that different strains may have different functional roles and applications in different pathological conditions [2]. Medina et al.
(2007) [2] demonstrated that live cells of Bifidobacterium longum W11 strongly stimulated the production of T helper 1 cytokines (IL-2 and IFN-γ) and induced low levels of IL-10; moreover, low production of IL-10 was also induced by cell-surface components. On the basis of their results, the authors suggested that B. longum W11 could provide protection against the early stage of infection via Th1 production. However, the authors did not characterize the structural cell components and the secreted molecules investigated for immunological properties. Many studies have demonstrated the ability of the EPS of Bifidobacterium spp. to mediate communication processes with the host and to trigger both innate and adaptive immune responses [1] [5]-[10] [13]-[15]. Some of the EPSs produced by Bifidobacterium spp. and other probiotic strains can contribute to the adhesion capacity to intestinal mucus and to the species-specific effects on immunocompetent cells [10]. EPS consists of secreted and extracellular polymerized glycans which can be covalently linked to the bacterial surface forming a capsule, non-covalently associated with the surface, or totally secreted into the surrounding environment as slime [2] [5] [6].

The results obtained with SEM observation indicated that B. longum W11 was able to adhere to the HT-29 cell line and that the production of the exocellular polymers could be one of the factors contributing to its adhesion properties. Moreover, other SEM observations of B. longum W11 did not show a capsule layer but rather an EPS non-covalently associated with the surface.

Using silica and cellulose plates, TLC analysis indicated by Rf values that the hydrolyzed EPS was mainly composed of fructose and glucose. The results suggested that the EPS produced by B.
longum W11 could be a heteropolysaccharide formed by repeating units containing fructose and glucose. Further studies are underway, using new methodologies, to better characterize the chemical composition of the EPS of B. longum W11.

As suggested by Hidalgo-Cantabrana et al. (2014) [15], studies connecting the physicochemical characteristics of the EPSs of different strains of Bifidobacterium spp. with genetic information could give further insight into the physiology of bifidobacteria and their interaction with the host. Our continuing research is trying to determine which of the chemical and physical characteristics of the EPS of B. longum W11 are responsible for the ability of the strain to interact with the host and for the immunological properties suggested by Medina et al. (2007) [2].
Low body temperature and mortality in critically ill patients with coronary heart disease: a retrospective analysis from MIMIC-IV database
Background: This study aimed to investigate the correlation between low body temperature and outcomes in critically ill patients with coronary heart disease (CHD). Methods: Participants from the Medical Information Mart for Intensive Care (MIMIC)-IV were divided into three groups (≤ 36.5 ℃, 36.6-37.4 ℃, ≥ 37.5 ℃) according to body temperature measured orally in the ICU. In-hospital, 28-day and 90-day mortality were the major outcomes. Multivariable Cox regression, decision curve analysis (DCA), restricted cubic splines (RCS), Kaplan-Meier curves (with or without propensity score matching), and subgroup analyses were used to investigate the association between body temperature and outcomes. Results: A total of 8577 patients (65% men) were included. The in-hospital, 28-day, 90-day, and 1-year overall mortality rates were 10.9%, 16.7%, 21.5%, and 30.4%, respectively. Multivariable Cox proportional hazards regression analyses indicated that patients with hypothermia, compared to patients with normothermia, were at higher risk of in-hospital [adjusted hazard ratio (HR) 1.23, 95% confidence interval (CI) 1.01-1.49], 28-day (1.38, 1.19-1.61), and 90-day (1.36, 1.19-1.56) overall mortality. For every 1 ℃ decrease in body temperature, the adjusted survival rate was estimated to fall by 14.6% during the 1-year follow-up. The DCA suggested the applicability of model 3 in clinical practice, and the RCS revealed a consistently higher mortality in the hypothermia group. Conclusions: Low body temperature was associated with increased mortality in critically ill patients with coronary heart disease. Supplementary Information: The online version contains supplementary material available at 10.1186/s40001-023-01584-8.
cardiovascular disease within 1 year after admission was found to be 16.1%, making it the second highest cause of death after malignant tumors [2]. Therefore, the role of predictive factors, biomarkers, and scores in the development and prognosis of CHD has been widely acknowledged [3][4][5][6][7]. Although these factors predict well, some of them are too complicated to be used routinely and are delayed awaiting laboratory tests.

Body temperature, a crucial physiological parameter frequently measured in the ICU, affects inflammation and immune function and is relevant to various diagnoses of infectious or non-infectious origin [8,9]. In the operating room setting, rectal, bladder, esophageal, and nasopharyngeal probes are preferred for monitoring core temperature, while the invasive measurement of temperature in the pulmonary artery, considered the gold standard, is rarely employed. Skin temperature detected by infrared thermometers is highly susceptible to changes in ambient air temperature and is therefore not commonly used in the ICU [10]. By comparison, alternative peripheral temperature measurements, such as oral temperature, demonstrate precision comparable to the nasopharyngeal one (P = 1.00) with better acceptance in the ICU [11]. An evidence-based guideline [12] also judged as good quality a study of patients with CHD that used peripheral temperature to predict morbid cardiac events by multivariate analysis. In addition, another guideline [13] from the American Society of PeriAnesthesia Nurses (ASPAN) showed strong evidence for oral temperature measurements, with a recommendation of Class I, Level B.
Whether temperature abnormalities influence CHD patients remains unknown, so we hypothesized that low body temperature was linked with worse outcomes, based on similar investigations of patients undergoing coronary artery bypass grafting (CABG) [14,15] and other cardiac surgeries [16]. We tested this hypothesis in the Medical Information Mart for Intensive Care (MIMIC)-IV database in a pre-specified manner.

Study population
The Medical Information Mart for Intensive Care (MIMIC)-IV database (version 2.2) was the data source of the present retrospective observational study, containing 73,181 ICU admission records from critically ill patients at the Beth Israel Deaconess Medical Center (BIDMC) from 2008 to 2019 [17][18][19]. Access to the database was obtained after passing the training 'CITI Data or Specimens Only Research' (record ID: 57,385,572) and approval of the application for credentialed access to PhysioNet Clinical Databases. Moreover, the waiver of informed consent was granted by the Institutional Review Board at the Beth Israel Deaconess Medical Center, and the data in the database were de-identified.

We enrolled 8577 ICU patients diagnosed with coronary heart disease (CHD) based on the International Classification of Diseases (ICD)-10 codes I20 to I25. Patients with censored body temperature records were excluded, and the overall study population was divided into three groups in accordance with a prior investigation [20] and the a-priori distribution of body temperature.

Definitions of body temperatures and outcomes
Body temperature was defined as the average body temperature (derived from oral thermometer readings) within 24 h after ICU admission, and the follow-up for mortality began on the date of discharge.
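The exposure definition above (mean oral temperature in the first 24 h, binned into three a-priori groups) can be sketched as a small function. The cut-offs follow the text; the example readings are hypothetical.

```python
# Assign the mean oral temperature of the first 24 h in ICU to one of the
# three groups used in the study: <= 36.5, 36.6-37.4, >= 37.5 deg C.
# The readings below are hypothetical, for illustration only.

def temperature_group(mean_temp_c: float) -> str:
    if mean_temp_c <= 36.5:
        return "hypothermia"
    if mean_temp_c <= 37.4:
        return "normothermia"
    return "hyperthermia"

def mean_first_24h(readings):
    """Average of oral readings recorded within 24 h of ICU admission."""
    return sum(readings) / len(readings)

readings = [36.1, 36.3, 36.4]  # hypothetical oral readings, deg C
print(temperature_group(mean_first_24h(readings)))
```

Patients with no recorded temperature would simply not reach this step, matching the exclusion of censored records described above.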
Statistical analysis
Categorical variables were compared using the chi-square test and are reported as number and percentage; continuous variables were compared using one-way analysis of variance (ANOVA) and are reported as mean ± standard deviation. To evaluate the independent association of body temperature with in-hospital, 28-day and 90-day mortality, multiple models were adjusted for confounding factors that had P < 0.05 in univariate analysis: Model 1: adjusted for age, gender, ethnicity; Model 2: adjusted for model 1 plus type of coronary heart disease, admission type, white blood cell count, hemoglobin, hematocrit, serum creatinine, glucose, blood urea nitrogen, INR, potassium, systolic blood pressure, diastolic blood pressure, mean blood pressure, heart rate, respiratory rate, SOFA and SAPS II; Model 3: adjusted for model 2 plus atrial fibrillation, chronic kidney disease, chronic obstructive pulmonary disease or pulmonary hypertension, heart arrest, cardiogenic shock and vasoactive drugs (dobutamine, dopamine, epinephrine, milrinone, norepinephrine, vasopressin and phenylephrine). Decision curve analysis (DCA) was performed to compare the predictive performance of the three multivariable models. Kaplan-Meier survival curves were compared among the three body temperature groups using the log-rank test, and body temperature was also treated as a continuous variable to explore a possible non-linear relationship with mortality using restricted cubic splines (RCS). Subgroup analysis was performed by age, gender, ethnicity, heart arrest and cardiogenic shock to determine whether interactions existed between body temperature as a continuous variable and these variables.
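The Kaplan-Meier curves compared above with the log-rank test are built from the product-limit estimator. A minimal pure-Python sketch on hypothetical follow-up data (not the study's data) follows; tied deaths and censoring at the same time are handled with the usual deaths-before-censoring convention.

```python
# Product-limit (Kaplan-Meier) estimator: censored patients leave the risk
# set without counting as events. Times and event flags are hypothetical.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event (death) time.

    times  : follow-up time for each patient (e.g. days)
    events : 1 = death observed, 0 = censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        removed = sum(1 for tt, _ in data if tt == t)
        n_at_risk -= removed
        i += removed
    return curve

times = [5, 8, 8, 12, 15, 20]   # hypothetical follow-up in days
events = [1, 1, 0, 1, 0, 1]     # 1 = death, 0 = censored
print(kaplan_meier(times, events))
```

In practice a library routine (e.g. lifelines' KaplanMeierFitter) would be used for curves of this cohort's size and for the accompanying log-rank test; this sketch only shows the quantity being plotted.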
Results
After screening critically ill patients diagnosed with CHD who had complete records of body temperature during the ICU stay and other admission data, 8577 patients were eventually enrolled in the present analysis cohort, with a mean age of 69 years. Of these, 65% were men. Enrolled patients were distributed into three groups (℃): hypothermia (31.8-36.5), normothermia (36.6-37.4), and hyperthermia (37.5-39.6). The distribution of temperature is illustrated in Additional File 1: Figure S1. The in-hospital, 28-day, 90-day, and 1-year overall mortality rates were 10.9%, 16.7%, 21.5%, and 30.4%, respectively.

Baseline characteristics
Baseline characteristics of the study population are shown in Table 1. Compared with the other two groups, patients with hypothermia included a higher proportion of older individuals, whites, and patients with atrial fibrillation, chronic kidney disease, cardiogenic shock, acute coronary syndrome (ACS, as defined by ICD-10 codes [22]), and several vasoactive drugs (dobutamine, dopamine, milrinone and phenylephrine). The mortality of this group remained the highest throughout the 1-year follow-up, with a steady increase in comparison with the normothermia group.

Association of body temperature with clinical outcomes and evaluation of predictive models
The Cox proportional hazards regression model was adopted to analyze the association between low body temperature and clinical outcomes among critically ill patients with CHD (Table 2). Overall, patients with hypothermia were at a statistically higher risk than the normothermia group, from in-hospital to 90-day mortality, in every multivariable model. The full-variable model 3 also indicated that there was no statistically significant difference between hyperthermia and normothermia.
The full model (model 3) demonstrated a higher net benefit than model 1 and model 2 in assessing the prognosis of CHD patients across risk thresholds ranging from less than 0.05 to more than 0.75 for in-hospital, 28-day and 90-day mortality (Fig. 1). This is clinically useful because it covers a wide range of critically ill populations with widely varying mortality rates.

To better understand how body temperature relates to outcomes, we treated it as a continuous variable and used restricted cubic splines (RCS) to explore whether the relationship was linear (Fig. 2). There is a "U-type" relationship between body temperature and in-hospital mortality (Fig. 2A), 28-day mortality (Fig. 2B) and 90-day mortality (Fig. 2C). After further adjusting for the confounding factors in model 3 (Fig. 2D-2F), the right-hand side of the curve flattened toward the baseline, showing no statistically increased mortality, while the left-hand side retained a statistically higher mortality, albeit slight. The risk of death rose in the hypothermia group as body temperature decreased, further elucidating the result of the Cox regression analysis.

Study outcomes
Kaplan-Meier (KM) curves were generated before and after propensity score matching (PSM) (Additional File 1: Table S2). The KM curves portrayed a significantly higher risk of death over 1 year in the hypothermia and hyperthermia groups (log-rank test P < 0.0001, Fig. 3A). A dramatic drop occurred in the first 28 days (mostly during the hospital stay) in all groups, followed by a smooth decline afterward. After PSM, there was no statistically significant difference among the three groups (P = 0.053, Fig. 3B).
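The "net benefit" compared across models in the decision curve analysis weighs true positives against false positives at each risk threshold p_t: net benefit = TP/N − (FP/N)·p_t/(1 − p_t). A minimal sketch with hypothetical predictions and outcomes (the standard DCA formula, not code from the study):

```python
# Net benefit at a risk threshold: patients whose predicted risk meets the
# threshold are treated as "high risk". All data here are hypothetical.

def net_benefit(pred_risk, outcome, threshold):
    """Net benefit of a risk model at a given threshold probability."""
    n = len(outcome)
    tp = sum(1 for p, y in zip(pred_risk, outcome) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(pred_risk, outcome) if p >= threshold and y == 0)
    # False positives are discounted by the odds of the threshold probability.
    return tp / n - fp / n * threshold / (1 - threshold)

pred = [0.9, 0.7, 0.4, 0.2, 0.1]  # hypothetical predicted mortality risks
died = [1, 1, 0, 0, 1]            # hypothetical observed outcomes

print(round(net_benefit(pred, died, 0.5), 3))
```

Sweeping the threshold over a range (here, roughly 0.05 to 0.75) and plotting net benefit for each model against "treat all" and "treat none" reference lines yields the decision curves in Fig. 1.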
Subgroup analysis
Further assessment of the risk-stratification value of body temperature as a continuous variable was performed in subgroups defined by age, gender, ethnicity, heart arrest and cardiogenic shock (Table 3). Our results showed a negative association between body temperature and 28-day or 90-day mortality in the subgroups aged ≥ 65 years (adjusted HR ranging from 0.75 to 0.77) and aged < 65 years (adjusted HR ranging from 0.66 to 0.72). The same held for 90-day mortality among males, whites, those without heart arrest, and those with cardiogenic shock. However, the association between body temperature and mortality was affected by cardiogenic shock (P value for interaction < 0.01).

Discussion
To our knowledge, this is the first study to report the association of low body temperature with worse mortality in critically ill patients with coronary heart disease. In this retrospective cohort study, we analyzed 8577 patients and divided them into three groups: hypothermia (31.8-36.5 ℃), normothermia (36.6-37.4 ℃), and hyperthermia (37.5-39.6 ℃). The methods employed in this study to predict disease outcomes are based on previous articles [23,24]. Observing that the hypothermia group had a higher mortality risk than the other two groups, we further treated body temperature as a continuous variable and found a negative relationship between temperature and worse outcomes. The predictive power of the full model was examined with DCA, and it was found that body temperature could serve as a convenient outcome predictor for critically ill patients.
Given the intergroup heterogeneity shown in the baseline characteristics table, which may introduce confounding bias into the KM curves, we further adjusted for it through PSM. Although the post-PSM results correct, to some extent, the probably fallacious result in the hyperthermia group (a protective effect of hyperthermia for patients admitted to the ICU has been described in a former paper [25]), they did not show a statistically higher mortality in the hypothermia group.

Intriguingly, in subgroup analysis we found interactions between body temperature and age, heart arrest or cardiogenic shock. However, only those with cardiogenic shock had both a statistically significant simple effect and a statistically significant interaction effect in terms of 90-day mortality. Thus, a higher proportion of patients with a history of cardiogenic shock was one of the crucial reasons why the hypothermia group experienced increased mortality, especially in the long term (the same result is illustrated in the adjusted KM curves).

Fig. 1 The decision curve analysis (DCA) to evaluate the predictive power of multiple models. A The DCA for in-hospital mortality; B the DCA for 28-day mortality; C the DCA for 90-day mortality.
Fig. 2 Association between body temperature and outcomes of critically ill patients with CHD. Restricted cubic spline for unadjusted in-hospital mortality (A), 28-day mortality (B), 90-day mortality (C) and adjusted in-hospital mortality (D), 28-day mortality (E), 90-day mortality (F). CI, confidence interval; HR, hazard ratio.
Studies have demonstrated that hypothermia occurs because of changes in the thermoregulation system under anesthesia (including abolished behavioral responses, compromised homeostasis and reduced thresholds for vasoconstriction and shivering) [26]. Apart from this, refrigerated liquid drugs, excessive blood loss during operations, and poor physical condition at an older age can lead to hypothermia as well [27]. Systemic hemodynamic depression caused by catecholamine use in the ICU works together with the thermoregulatory system to lower temperature [28].

Previous work has shown that body temperature at admission is related to outcome, and hypothermia appeared to be a significant and independent indicator of increased mortality rates both during ICU stays and over the long term [28]. In addition, numerous previous clinical studies have investigated the association of low body temperature with mortality and morbidity in cardiac patients. According to DeFoe et al. [14], regardless of the site of core body temperature measurement (nasopharyngeal, esophageal, bladder or rectal), patients undergoing isolated on-pump coronary artery bypass grafting (CABG) surgery had consistently higher in-hospital mortality rates with increasingly colder temperatures; moreover, lower-temperature groups exhibited greater myocardial injury as assessed by myocardial markers. Likewise, Nam et al.
[20] reported that all-cause mortality with moderate-to-severe hypothermia was more than twice that of normothermia in off-pump CABG patients, and even mild hypothermia (no less than 35.5 ℃) was associated with an unsatisfactory outcome during a follow-up of 47 months. However, another study of isolated off-pump CABG patients showed no statistically significant difference in in-hospital mortality between the hypothermia group and the normothermia group, either before or after propensity score matching, although there were differences in postoperative transfusion of red cell concentrates, duration of intubation and ICU stay [15]. Unexpectedly, there was a higher rate of in-hospital mortality in the normothermia group than in the hypothermia group after matching, though this difference was not statistically significant (P = 0.975 vs. P = 0.244). Extending to all sorts of cardiac surgeries, multivariable regression analysis showed higher mortality in the hypothermia group (body temperature < 36 ℃) during a 1-year follow-up [16]. To explain the unstable results for in-hospital mortality, Karalapillai et al.
[29] differentiated hypothermia into transient and persistent types in a multi-center observational study, finding that in-hospital mortality was statistically associated with persistent but not transient hypothermia. This may explain the variation in results through different proportions of persistent hypothermia in those studies. Moreover, low body temperature has been recognized as an independent marker of poor cardiovascular mortality and rehospitalization in patients admitted with worsening heart failure and reduced ejection fraction [30]. Therapeutic hypothermia, induced by various cooling strategies that lower the temperature to between 32 ℃ and 35 ℃ for at least 24 h, has been increasingly used in post-resuscitation care for post-cardiac-arrest patients [31]. Nevertheless, studies showed no improvement in mortality or neurologic outcome in the therapeutic hypothermia group over the normothermia group [32], and even higher mortality was noticed in patients without cardiac arrest [31]. This suggests that distinct causes of low body temperature (whether anesthesia-induced or cooling-strategy-induced) may share the same tendency toward worse outcomes. Similarly, lower ambient temperature, including during cold spells, can elevate the mortality and morbidity of cardiovascular disease [33].

Current data on the association between body temperature and patients with coronary heart disease, especially critically ill ones, are limited. In this specific cohort of ICU patients with CHD, we found that low body temperature was an efficient and convenient independent predictor of greater mortality in these patients.
However, this study has several limitations. First, selection bias cannot be avoided owing to its retrospective nature. Since this is a single-center study of a confined region and population, external validation should be undertaken in prospective cohort studies in the future. Second, important indicators such as LVEF and cholesterol levels were insufficiently recorded (less than 15%) in the database. Third, the possible implementation of therapeutic hypothermia in this study may decrease the hazard ratios of mortality for patients at lower temperatures and conceal probable harm in patients with abnormally high temperature.

Conclusions
The study showed that critically ill CHD patients with hypothermia after ICU admission had a higher risk of mortality. Measuring body temperature may provide practical evidence for risk stratification, and further research is required to verify this.

Table 1 Baseline characteristics according to different groups of body temperature among critically ill patients with coronary heart disease: hypothermia (N = 1064), normothermia (N = 7076), hyperthermia (N = 437).
Table 2 Cox proportional hazard ratios (HR) for outcomes of critically ill patients with coronary heart disease.
Table 3 Adjusted analysis of association with in-hospital, 28- and 90-day mortality for body temperature.
IL-27 promotes NK cell effector functions via Maf-Nrf2 pathway during influenza infection
Influenza virus targets epithelial cells in the upper respiratory tract. Natural Killer (NK) cell-mediated early innate defense responses to influenza infection include the killing of infected epithelial cells and generation of anti-viral cytokines including interferon gamma (IFN-γ). To date, it is unclear how the underlying cytokine milieu during infection regulates NK cell effector functions. Our data show that during influenza infection, myeloid cell-derived IL-27 regulates the early-phase effector functions of NK cells in the bronchoalveolar space and lung tissue. Lack of IL-27R (Il27ra−/−) or IL-27 (Ebi3−/−) resulted in impaired NK cell effector functions, including the generation of anti-viral IFN-γ responses. We identify CD27+CD11b+ NK cells as the primary subset that expresses IL-27R and predominantly produces IFN-γ within the upper respiratory tract of infected mice. IL-27 alone was incapable of altering the effector functions of NK cells. However, IL-27 sensitizes NK cells to augment both in vitro and in vivo responses mediated via the NKG2D receptor. This 'priming' function of IL-27 is mediated partly via transcriptional pathways regulated by Mafs and Nrf2, which transcriptionally regulate TFAM and CPT1. Our data for the first time establish a novel role for IL-27 in regulating early-phase effector functions of NK cells during influenza infection.

Each year thousands of people are hospitalized due to complications related to influenza virus infections. Innate and adaptive immune cells mediate the host immune responses to influenza virus infections. NK cells provide the first line of innate defense against influenza virus by killing infected epithelial cells and by producing the anti-viral cytokine interferon (IFN)-γ 1,2. NK cells express multiple activating and inhibitory receptors to execute anti-viral or anti-tumor effector functions 3.
Virally-infected cells express H60, Rae, and Mult1, or Hemagglutinin (HA), the ligands for the NK cell activating receptors NKG2D and NCR1, respectively 4. Recognition of ligands by NKG2D or NCR1 results in lysis of infected/tumor cells and the generation of IFN-γ by NK cells 5,6. NK cells constitutively express or upregulate activating receptors to mount anti-viral responses; however, virally-infected/tumor cells evade NK cell-mediated recognition through various mechanisms. Viruses downregulate ligands for NK cell activating receptors or enhance engagement of inhibitory receptors 4,7,8. The effect of cytokines in modulating NK cell responses has been an area of intense research. The common gamma receptor (γcR)-interacting cytokines IL-2, IL-7, IL-15, and IL-21 have been used to expand NK cells for adoptive transfer experiments in the clinical setting 9. Unique α-chains define the receptors for these cytokines. IL-2 and IL-15 share a β-chain and the γcR along with the cytokine-specific IL-2Rα and IL-15Rα, respectively 10,11. Historically, IL-2 has been extensively used to expand murine and human NK cells 12,13. IL-15 activates the PI(3)K-mediated mTORC1 pathway 14,15. IL-12 is a heterodimeric cytokine consisting of p35 and p40 subunits, and it binds to the IL-12 receptor (IL-12Rβ1 and IL-12Rβ2) 16,17. IL-18 belongs to the IL-1 family and interacts with a heterodimeric receptor composed of IL-18Rα and IL-18Rβ 18,19. IL-12 and IL-18 enhance NK cell effector functions including IFN-γ production 20,21. However, IL-12 and IL-18 responses are acute and independent of NK cell activating and inhibitory receptors 22. IL-23 is another heterodimeric cytokine, composed of p19 and p40 subunits, and its receptor is made up of IL-23Rα and IL-12Rβ1 23. IL-23 activates NK cells to produce IL-22 24,25. IL-35 contains p35 and EBI3 subunits, and its recently defined receptor consists of IL-12Rβ2 and gp130 [26][27][28].
gp130 is the shared receptor subunit of the IL-6 family of cytokine receptors 29 . IL-27 is another heterodimeric cytokine that belongs to the IL-12 family and consists of p28 and Epstein-Barr virus-induced gene 3 (EBI3) 30 . The receptor for IL-27 is composed of gp130 and WSX1 31 . IL-27 has been shown to modulate NK cell anti-tumor cytotoxicity responses [32][33][34][35] . These studies demonstrate that IL-27 augments NK cell cytotoxic responses to a variety of tumor cell lines via perforin-, granzyme-, TRAIL-, and Fc-γR-III-dependent mechanisms 32,33,[36][37][38][39] . The role of IL-27 in NK cell-mediated anti-tumor immunity has been defined 39 . However, the underlying molecular mechanism is not well-defined. Notably, the mechanism by which IL-27 regulates NK cell effector functions during viral infections is yet to be fully understood. In this study, we determined the role of IL-27 signaling in regulating NK cell effector responses during influenza infection and dissected the molecular mechanism of its action. Our data show that NK cells upregulate IL-27R following influenza infection. IL-27, but not IL-12 or IL-35, is obligatory for promoting the early NK cell-mediated responses. Ebi3 −/− and Il27ra −/− mice exhibited significantly reduced NK cell effector functions (IFN-γ and cytotoxicity) during influenza infection. Our in vivo and in vitro findings strongly suggest that the defects in effector responses were NK cell-intrinsic and involved the CD27 + CD11b + subset. Mechanistically, IL-27 regulates NK cell effector functions via the small Maf protein MafF and Nrf2. Expression of γ-glutamylcysteine ligase catalytic subunit (GCLC), mitochondrial transcription factor A (TFAM), and carnitine palmitoyltransferase 1 (CPT1) was significantly reduced in NK cells derived from Ebi3 −/− mice, demonstrating a unique and exclusive functional role for IL-27.
These findings provide a novel insight into how IL-27 plays a central role in containing viral infections by sensitizing NK cells to recognize infected cells and provide protective immunity to the host.

Influenza infection leads to IL-27 generation and induction of IL-27R.

Activation of innate and adaptive immune responses is the natural host defense against influenza infection. The innate immune response includes infiltration of myeloid cells and NK cells in the upper respiratory tract. To determine the functional relevance of NK cells during influenza infection, we intranasally infected C57BL/6 (wild-type, WT) mice with 500 PFU of mouse-adapted human A/PR/8/34 H1N1 (PR8) influenza virus. The lumen side of the trachea consists of a thick layer of columnar epithelial cells that are exposed to and are the targets of influenza virus. Our earlier study has shown that effector lymphocytes including NK cells infiltrate into the epithelial layer via the basement membrane 40 . We found that only a few NK cells were present on day post-infection (DPI) 0, and their number steadily increased on DPI 4 and 7. Our earlier work has shown that NK cells are abundantly present within the alveolar space and conducting airways of the lungs during influenza infection 5,[40][41][42] . To define the functional relevance, we analyzed the production of IFN-γ by NK cells on DPI 0, 2, 4, 7, and 10. Percentages of IFN-γ + NK cells considerably increased on DPI 2 and DPI 4 (Fig. 1A). Expression of CD107a (LAMP1), which is a surrogate marker for cytotoxicity, peaked on DPI 4 (Fig. 1B). The inflammatory cytokine milieu within the infected trachea and alveolar space regulates the production of IFN-γ from NK cells during influenza infection. To identify the relative contribution of IL-12, IL-27, and IL-23, we infected WT mice with influenza virus. Single cell suspensions from bronchoalveolar lavage (BAL) and the lung tissues were used to quantify the transcript levels of Il12p35, Il12p40, Il23p19, and Il27p28.
Cells from both BAL and lung tissues contained transcripts encoding these cytokines. We found that Il27p28 transcripts consistently appeared earlier (DPI 4) than Il12p35 and Il12p40 or Il23p19 (DPI 7) in both BAL and lung tissues (Fig. 1C). We next analyzed the expression of the IL-27 receptor (IL-27R) temporally during influenza infection. The IL-27R is composed of IL-27Rα (Wsx1) and the shared IL-6 family receptor subunit, gp130. To determine its expression, we used an antibody specific for IL-27Rα (Wsx1). We performed confocal analyses of NK cells from lung tissues (Fig. 2A) and flow cytometry analyses (Fig. 2B) of NK cells from BAL, lung tissues, and spleen of infected mice from different DPIs. Our data reveal that expression of IL-27Rα in NK cells peaked on DPI 4, coinciding with the production of IL-27. Next, we characterized the cell type that produces IL-27. Cells from both BAL and lung tissues were stained, defining the Ly6G + myeloid population as the predominant producer of IL-27 during this early period of influenza infection (Supplementary Fig. 1A and B). Consistent with published data, production of IL-27 by Ly6G-positive cells temporally aligned with the expression of IL-27R on NK cells during influenza infection 43 .

IL-27 augments NKG2D- but not IL-12-mediated IFN-γ production.

The proinflammatory cytokines IL-12 and IL-18 can stimulate the production of IFN-γ. IL-23 is primarily responsible for the production of IL-22 from effector lymphocytes including NK cells 40,44,45 . Generation of IL-12 during the early phase of influenza infection indicates that these cytokines may play a role in regulating IFN-γ gene transcription in NK cells during the early phase of infection. Influenza infection also leads to the expression of inducible stress proteins such as H60, Rae-1, and Mult1 in infected cells, which are the cognate ligands for the activation receptor NKG2D 46 .
Infected epithelial cells also express viral haemagglutinin, which is one of the defined ligands of the activating NK cell receptor NCR1 (Nkp46). To distinguish the role of IL-27 on activation receptors (such as NKG2D or Ly49D) and cytokine receptors (such as IL-12 or IL-18), we stimulated IL-2-cultured NK cells with combinations of stimuli (Fig. 3). Recombinant IL-27 (rIL-27) augmented only NKG2D- but not IL-12- or IL-18-mediated activation (Fig. 3A). Although IL-12 is a potent stimulator of IFN-γ, the presence of exogenous IL-27 did not augment IFN-γ production. In contrast, the presence of IL-18 along with IL-12 resulted in the maximal production of intracellular IFN-γ (Fig. 3B). These observations were further validated for other cytokines and chemokines including GM-CSF, RANTES, and MIP-1α by testing the culture supernatants via multiplex assays (Fig. 3C). To further corroborate these findings, we stimulated IL-2-cultured splenic NK cells from WT and Il27ra −/− mice with anti-NKG2D or anti-Ly49D mAbs in the presence or absence of recombinant IL-27. IL-27 alone was not able to induce the generation of IFN-γ (Fig. 3D). However, IL-27 significantly augmented anti-NKG2D mAb-mediated production of cytokines and chemokines in NK cells from WT mice. The role of exogenous IL-27 in anti-Ly49D mAb-mediated generation of GM-CSF and RANTES was significant; however, its effect on IFN-γ and MIP-1α was only moderate in NK cells from the WT. Importantly, the addition of rIL-27 did not have any effect on NK cells from Il27ra −/− mice, confirming the specific role of IL-27 in activation receptor-mediated cytokine and chemokine production (Fig. 3D). IL-12 and IL-18 together promote optimal production of IFN-γ. rIL-27 along with IL-12 did not significantly augment the production of IFN-γ. Therefore, we next tested whether the combination of IL-12, IL-18, and IL-27 has an additive effect on NK cells.
There were no significant differences between IL-12 and IL-18 or the combination of all three cytokines in the production of IFN-γ (Fig. 3E). These observations confirm that IL-27 plays a central role in the co-stimulation of activation receptor (such as NKG2D)-mediated NK cell effector functions. IL-27R and IL-35R belong to the IL-6R family, and they utilize a common receptor chain, gp130 (Ref). To distinguish the unique role of IL-27 and its receptor IL-27R (IL-27Rα/gp130) from IL-6R in NK cells, we stimulated IL-2-cultured NK cells from WT mice with anti-NKG2D mAb in the presence of increasing concentrations of recombinant IL-6 protein (rIL-6). We found that, in contrast to rIL-27, rIL-6 was unable to augment anti-NKG2D mAb-mediated activation and production of IFN-γ even at a higher concentration of 10 ng/ml (Fig. 3F).

www.nature.com/scientificreports

To eliminate the possibility of an NK cell-extrinsic defect in the observed effector functions in mice lacking IL-27R (Il27ra −/− ), we performed adoptive transfer experiments. Splenocytes from WT (B6.SJL, CD45.1 + ) and Il27ra −/− (CD45.2 + ) mice were isolated, and a mixture (1:1) of these cells was adoptively transferred into lymphocyte-deficient Rag2 −/− γc −/− mice (Fig. 4C). Host mice were infected with influenza. On DPI 2, we were able to detect NCR1 + WT and Il27ra −/− NK cells in the lung tissue of Rag2 −/− γc −/− mice (Fig. 4D). The percentages of NK cells transferred from Il27ra −/− mice that produced IFN-γ were significantly lower compared to that of B6.SJL (Fig. 4E,F). However, the overall percentages of LAMP1 + NK cells did not vary between the B6.SJL (CD45.1) and Il27ra −/− (CD45.2) mice. Collectively, these data suggest IL-27 plays a primary role in the production of IFN-γ from NK cells during the early phase of influenza infection in vivo.

IL-27 regulates the CD27 + CD11b + effector NK cell population in vivo.
Expression of CD27 on T cells along with CD44 marks the onset of the early effector memory phenotype 47 . The absence of CD27 on CD44 + CD62L − T cells defines the late effector memory and effector T cell subsets. In the case of NK cells, expression of CD27 and CD11b defines functional subsets of NK cells 48,49 . CD27 + CD11b + and CD27 + CD11b − NK cells are specialized to secrete cytokines, whereas CD27 − CD11b + NK cells are more cytotoxic 48,49 . To identify the primary subset of NK cells that produces IFN-γ during the early phase of influenza infection, we analyzed the lungs of the WT mice on DPI 2. Lung-derived NK cells were gated based on their CD27/CD11b positivity (Fig. 5A). We found that the majority of NK cells that produced IFN-γ were CD27 + CD11b + compared to the other two subsets. Next, we gated the NK cells from the lung of the infected mice (DPI 2) based on the expression of IL-27R into NCR1 + IL-27R + or NCR1 + IL-27R − cells. These two subsets were further divided into CD27 − CD11b + , CD27 + CD11b + , and CD27 + CD11b − NK cells, and the production of IFN-γ was examined. Among these, IL-27R + CD27 + CD11b + NK cells were proportionately the most positive for intracellular IFN-γ compared to any other subset (Fig. 5B). Also, among the IL-27R − NK cells, CD27 + CD11b + NK cells were predominant in producing IFN-γ (Fig. 5B). Our data suggest that the CD27 + CD11b + NK cell subset may have an inherent maturation or recruitment defect in Il27ra −/− mice. Distinct stages of development and acquisition of unique functional receptors characterize NK cell maturation in the BM. The CD27 + CD11b − subset represents the earliest stage of functional maturation in NK cell development in the BM 50,51 . This single-positive subset transitions into the CD27 + CD11b + intermediate stage and eventually into CD27 − CD11b + NK cells.
CD27 − CD11b + NK cells are predominant in circulation as well as in the lung tissue 49 . Furthermore, CD27 + CD11b + as well as CD27 + CD11b − NK cells have greater proliferative capacity 49,52 . Despite this knowledge, it is not clear whether NK cell subset specification changes under inflammatory conditions. To define this, we analyzed the BAL and the lung tissues of influenza-infected WT and Il27ra −/− mice on DPI 0, 4, and 7 (Fig. 5C,D). We found a significant reduction in the percentages of the CD27 + CD11b + effector NK cell population in the alveolar space of influenza-infected Il27ra −/− but not WT mice. Also, there was a proportionate and concomitant increase in the percentages of the CD27 − CD11b + NK cell subset at an earlier stage of influenza infection. The changes in the percentages of these subsets were not due to a change in the absolute number of NK cells in the BAL of the infected mice. The absolute number of NK cells per million lymphocytes in the BAL ranged from 17,400 to 35,550 in the WT on DPI 4; in mice that lacked IL-27Rα, it ranged from 11,500 to 20,000. On DPI 7, it was between 62,400 and 123,250 in the WT mice, while in Il27ra −/− mice it was between 128,250 and 179,550. Thus, the absolute number of NK cells in the BAL during influenza infection was either comparable between the WT and the Il27ra −/− mice or moderately increased in the Il27ra −/− mice. We next examined the bone marrow (BM) of naïve mice for these different NK cell subsets. Interestingly, Il27ra −/− mice have significantly fewer CD27 + CD11b + NK cells in the BM, suggesting IL-27 may have a role in NK cell development, maturation, or proliferation (Supplementary Fig. 4A and B).
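The absolute-count comparison above can be cross-checked with simple arithmetic. The sketch below is our own illustration (the midpoint-ratio comparison is not from the paper); only the quoted per-million-lymphocyte ranges come from the text:

```python
def fold_change(wt_range, ko_range):
    """Compare NK cell counts (per million lymphocytes in BAL) between
    genotypes using the midpoint of each reported range. Returns the
    knockout-to-wild-type ratio (illustrative summary, not the paper's
    statistical method)."""
    wt_mid = sum(wt_range) / 2
    ko_mid = sum(ko_range) / 2
    return ko_mid / wt_mid

# Ranges quoted in the text.
dpi4 = fold_change((17400, 35550), (11500, 20000))    # WT vs Il27ra-/- on DPI 4
dpi7 = fold_change((62400, 123250), (128250, 179550)) # WT vs Il27ra-/- on DPI 7
print(f"DPI 4 midpoint ratio (Il27ra-/- / WT): {dpi4:.2f}")
print(f"DPI 7 midpoint ratio (Il27ra-/- / WT): {dpi7:.2f}")
```

Consistent with the text, the midpoint ratio is below 1 on DPI 4 (moderately fewer NK cells in Il27ra−/− BAL) and above 1 on DPI 7 (moderately more), supporting the conclusion that subset-percentage shifts are not explained by a collapse in absolute NK cell numbers.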
Our data further show a similar level of various inhibitory and activating NK cell receptors in BM-derived NK cells from naïve WT and Il27ra −/− mice, suggesting a more specific role of IL-27 in regulating CD27 + NK cell subsets (Supplementary Fig. 4C-E).

Lack of EBI3 reduces the ability of NK cells to produce IFN-γ during influenza infection.

The IL-27 cytokine consists of two subunits, EBI3 and IL-27p28. To further confirm the role of IL-27 in regulating NK cell effector functions in vivo, we infected WT and Ebi3 −/− mice (EBI3 is the critical subunit of both the IL-27 and IL-35 cytokines) with influenza virus. NK cells upregulate IL-27R in WT mice in response to influenza infection (Fig. 2A,B). Our data suggest Ebi3 −/− mice are more susceptible to influenza infection, as revealed by weight loss (Fig. 6A), the mortality curve (Fig. 6B), and increased collagen deposition in the lung tissue as detected by Masson's trichrome staining (Fig. 6C). Interestingly, recruitment of lymphocytes (Fig. 6D), and specifically NK and T cells (Fig. 6E), to the bronchoalveolar space (BAL), but not the lung tissue/parenchyma, was significantly reduced in Ebi3 −/− mice at an early point of infection. In line with these observations, NK cells in the bronchoalveolar space were not capable of mounting early-phase anti-viral effector functions, as revealed by reduced intracellular IFN-γ in Ebi3 −/− mice at the early time point of infection (Fig. 6F). Ebi3 −/− mice displayed reduced IFN-γ production at the transcriptional level (Fig. 6G), suggesting IL-27 regulates NK cell responses at an early phase of influenza infection.

IL-27 regulates NKG2D-dependent cytotoxicity.

Our data suggest IL-27 enhances NKG2D-mediated cytokine generation. Earlier work suggested that IL-27-augmented cytotoxicity of NK cells is dependent on the upregulation of NKG2D ligands on epithelial tumors and via ADCC 53 .
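Killing in NK cytotoxicity assays of this kind is conventionally quantified as percent specific lysis. The paper does not state its exact readout, so the formula below is the standard convention (e.g., for release-based assays) and the release values are hypothetical:

```python
def percent_specific_lysis(experimental, spontaneous, maximum):
    """Conventional specific-lysis formula used in NK/CTL cytotoxicity
    assays: (experimental - spontaneous) / (maximum - spontaneous) * 100.
    'spontaneous' is release from targets alone; 'maximum' is release
    from fully lysed (e.g., detergent-treated) targets."""
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# Hypothetical release counts for one effector:target ratio.
lysis = percent_specific_lysis(experimental=1800, spontaneous=400, maximum=3600)
print(f"{lysis:.2f}% specific lysis")
```

Comparing such curves across effector:target ratios for WT, Ebi3−/−, and Il27ra−/− NK cells, with and without rIL-27, is how the genotype comparisons described in the next section are typically expressed.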
We next examined the requirement of IL-27 in regulating NKG2D-mediated NK cell cytotoxicity. We used two stable EL4 cell lines (induced-self), RMA-S (missing-self), and YAC1 (non-self) in cytotoxicity assays. IL-2-cultured NK cells from Ebi3 −/− or Il27ra −/− mice showed cytotoxicity towards EL4-H60 low , EL4-H60 high , RMA-S, and YAC1 cell lines similar to that of WT mice, suggesting there is no inherent defect in the cytotoxic potential of these NK cells (Supplementary Fig. 5A). We found that recombinant IL-27 enhances NKG2D-mediated cytotoxicity of IL-2-cultured NK cells from WT and Ebi3 −/− but not Il27ra −/− mice (Supplementary Fig. 5B).

IL-27-dependent activation of NF-κB, T-bet, MafF, and Nrf2 regulates NK cell effector function.

Our data strongly suggest a critical role of IL-27 in regulating NK cell effector functions in vivo and in vitro. We next investigated the molecular mechanisms by which IL-27 regulates the effector functions of NK cells. Towards this, we purified NK cells from WT and Ebi3 −/− mice on DPI 4 following influenza infection, purified mRNA, and performed gene array analyses for a panel of 48 transcription factors using Biomark chips. We found transcript levels of Ahr (Aryl hydrocarbon receptor), Ccnd1 (Cyclin D1), Elk1 (ETS domain-containing protein), and cKit (CD117) were considerably increased (Fig. 7A). In contrast, Foxo1 (Forkhead box 1), Foxo4 (Forkhead box 4), Irf4 (Interferon regulatory factor 4), Myc (bHLH transcription factor), Nfatc1 (NF-AT), cRel (NF-κB subunit), Rela (NF-κB subunit), and Tbx21 (T-bet) were substantially reduced (Fig. 7A). Among these, transcription factors such as c-Rel, RelA, and T-bet are known to play a direct role in the transcription of IFN-γ, thus providing a mechanistic explanation of how IL-27 regulates cytokine production in NK cells.
Also, our data showed a reduction in the expression of v-Maf musculoaponeurotic fibrosarcoma oncogene (Maf) homolog F (MafF) but not c-Maf in NK cells from Ebi3 −/− mice (Fig. 7A). Nuclear factor E2-related factor 2 (Nrf2) binds to the ARE sequence as a heterodimer with one of the small bZIP proteins, the Mafs, and activates specific gene transcription. We confirmed the gene array data using RT-qPCR analyses of c-Maf, MafF, and MafK expression in sorted NK cells from influenza-infected WT and Ebi3 −/− mice (Fig. 7B). The gene array data also show that the expression of Nrf1 and Nrf2 was significantly reduced in sorted NK cells from Ebi3 −/− mice (Fig. 7B), suggesting IL-27 stimulates NK cells through MafF/MafG-Nrf2 pathways. Nrf2 plays an indispensable role in augmenting the expression of Phase-II detoxifying and anti-oxidant enzymes. Therefore, we investigated the expression of quinone oxidoreductase (NQO1) and γ-glutamylcysteine ligase catalytic subunit (GCLC). Although the transcript levels of Nqo1 did not change, levels of Gclc were significantly reduced in NK cells from Ebi3 −/− mice (Fig. 7C). In addition, we found that transcripts encoding both mitochondrial transcription factor A (TFAM) and carnitine palmitoyltransferase 1 (CPT1) were also significantly reduced in NK cells from Ebi3 −/− mice compared to that of WT (Fig. 7C). Our study demonstrates that IL-27 has an augmenting effect on NKG2D-mediated IFN-γ production from NK cells (Fig. 3). To define whether the Nrf2 gene was a direct target of this combined activation, we stimulated NK cells with anti-NKG2D mAb in the presence or absence of IL-27. The presence of IL-27 significantly augmented the expression of Nrf2 (Fig. 7D). Collectively, these findings provide a novel insight into the stimulatory functions of IL-27 on NK cells in the trachea and alveolar space of the lungs during influenza infections.
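RT-qPCR comparisons like those above (e.g., Gclc, Tfam, and Cpt1 in Ebi3−/− versus WT NK cells) are conventionally reported as relative expression by the 2^-ΔΔCt method. The paper does not specify its exact analysis pipeline, so the sketch below uses the standard method with hypothetical Ct values:

```python
def ddct_fold_change(ct_gene_ko, ct_ref_ko, ct_gene_wt, ct_ref_wt):
    """Relative expression (knockout vs wild-type) by the 2^-ΔΔCt method.
    Each sample's target-gene Ct is first normalized to a reference
    gene (ΔCt), then the two ΔCt values are compared (ΔΔCt)."""
    dct_ko = ct_gene_ko - ct_ref_ko
    dct_wt = ct_gene_wt - ct_ref_wt
    return 2 ** -(dct_ko - dct_wt)

# Hypothetical Ct values: the target amplifies ~2 cycles later in the
# knockout, i.e., roughly four-fold less transcript.
fc = ddct_fold_change(ct_gene_ko=27.0, ct_ref_ko=18.0,
                      ct_gene_wt=25.0, ct_ref_wt=18.0)
print(fc)  # 0.25 -> four-fold reduction relative to WT
```

A fold change below 1 here corresponds to the "significantly reduced" transcript levels described for the Ebi3−/− NK cells.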
The role of these changes should be further explored; they could be partially responsible for the reduction in the production of IFN-γ in NK cells from Ebi3 −/− mice. Consistent with the Ebi3 −/− mice data, NK cells from Il27ra −/− mice (Fig. 8A) but not Il12a −/− mice (Fig. 8B) also have reduced expression of MafF. Nrf2 can form a heterodimer with small Mafs, specifically MafF and MafG 54 . To further explore the relevance of the reductions in Nrf2 and MafF, we stimulated NK cells using plate-bound anti-NKG2D mAb along with the small-molecule compounds AI-1 (which activates Nrf2 by covalently modifying Keap1, a negative regulator of Nrf2) or Oltipraz (which activates the anti-oxidant response element via Nrf2) 55 . The presence of either AI-1 or Oltipraz enhanced IFN-γ generation following NKG2D stimulation (Fig. 8C), validating the ability of IL-27 to activate a proinflammatory cascade via MafF/MafG-Nrf2 pathways.

Discussion

NK cells provide the early line of defense against viral infections [56][57][58] . NK cells execute effector functions through their non-clonotypic activating receptors such as NKG2D, NCR1, and Ly49D. Cytokines are required to regulate the initiation, amplification, and maintenance of transcriptional memory of an immune response 59 . Cytokines also play a crucial role in establishing 'immunological priming'. This phenomenon has been well-documented for over two decades for B and T cells, which form the basis for adaptive immunity 60,61 . However, the occurrence of similar functional adaptations in NK cells is not yet fully understood. Myeloid cell-derived cytokines such as IL-12, IL-15, IL-18, and IL-27 coordinate the receptor-mediated activation of NK cells [62][63][64] . However, the intricate temporal relationship between these cytokines and activation receptor-mediated stimulation of NK cells in vivo remains unclear. Whether and how the IL-12 family of cytokines plays an essential role in transcriptionally priming NK cells is under active investigation 39 .
IL-12 transcriptionally primes both T and NK cells 65,66 . In T cells, cytokines function as a 'third signal' along with the activation receptor (primary signal) and co-stimulation (second signal) 67 . However, how cytokines coordinate the activation of NK cells via receptors such as NKG2D or NCR1 is not clear. In this study, we utilized an influenza infection model and examined the role of the IL-12 cytokine family in coordinating the activation of NK cells, which trafficked into the alveolar space (BAL) and lung tissue within 2-4 days post-infection. Entry of NK cells into the lumen side of the trachea coincided with the destruction of the epithelial cell layer 40,45 . Importantly, a significant percentage of NK cells in the lung tissue produced IFN-γ compared to that of spleens between DPIs 2-4 within the same mice. However, expression of CD107a (LAMP1), a surrogate marker for the release of the enzyme granzyme B from cytotoxic vesicles, peaked between DPIs 4-7. Through our earlier work, we showed that this increase in cytotoxic granules coincided with significant damage to the epithelial cell layer between DPIs 4-7 40 . Damage to the ciliary structure of the epithelial cells and injury to the epithelial layer can make patients susceptible to secondary bacterial infections, leading to pneumonia and death. Therefore, an efficient immune response to clear the pathogens and rapid regeneration of the epithelial layer are essential to re-establish this protective barrier. Analyses of transcripts encoding Il12p35, Il12p40, Il23p19, and Il27p28 indicated that IL-27 was produced during the early phase of influenza infection between DPIs 4-7 both in the alveolar space and lung tissues. IL-27 is one of the IL-12 family members and a proinflammatory cytokine [70][71][72] . Further analyses demonstrated that a Ly6G + myeloid population was the predominant cell type that produced IL-27, as described earlier 43 .
We predict that these Ly6G + myeloid cells are neutrophils; support for this notion comes from findings that human neutrophils, along with monocytes/macrophages, produce IL-27 in patients with melioidosis, a severe form of septicemia caused by the gram-negative bacterium Burkholderia pseudomallei 73 . Production of IL-27 concurred with the expression of IL-27Rα (Wsx1) in NK cells on DPI 4 both in the alveolar space (BAL) and within the lung tissues. Earlier studies have demonstrated that IL-27 reduced lung inflammation by dampening neutrophil recruitment and T H 1 and T H 17 functions during influenza infection 43 . However, compared to our study, which focuses on the early phase of infection, that study explored the role of IL-27-induced IL-10 on T cells at DPI 8 43 . Myeloid populations including dendritic cells produce IL-12, IL-18, and IL-27, which can function as the 'third' signal to prime effector lymphocytes 74,75 . In CD8 + T cells, IL-12 can open the Ifng gene, along with a select few hundred genes, by chromatin remodeling, relieving gene repression, and allowing continued transcription by promoting augmented histone acetylation 74 . IL-12 along with IL-18 can promote the production of IFN-γ in T and NK cells [76][77][78] . However, the role of these cytokines in transcriptionally priming NK cells and in augmenting NKG2D-dependent effector functions has not been investigated. Our data strongly suggest that the presence of IL-27 during the activation of NK cells via NKG2D or Ly49D significantly augmented the production of IFN-γ, GM-CSF, RANTES, and MIP-1α. However, a similar additive effect was not observed when NK cells were stimulated with IL-27 along with IL-12 or IL-18, demonstrating a unique transcriptional and functional role of IL-27. Similarly, IL-6, another proinflammatory cytokine that shares its receptor subunit with the IL-27 receptor, did not augment IFN-γ production either alone or along with anti-NKG2D mAb-mediated activation of NK cells.
Thus, it is possible that during an acute viral infection, neutrophils and monocytes/macrophages may provide the 'third' signal to NK cells at the site of infection through the production of IL-27. This function may be distinct from the role of dendritic cells that produce both IL-12 and IL-18 within the secondary lymphoid organs (draining lymph nodes), which is to provide full-fledged activation of NK cells to mediate effector functions. Detailed transcriptomic and genomic analyses are required to further define the temporal and independent roles of these cytokines and their producers. Earlier reports have suggested that IL-27 can selectively regulate NK cell subsets in humans 33 . Through this study, we have found that IL-27 regulates the development of the CD27 + CD11b + NK cell subset. Based on CD27 and CD11b expression, different organs of the body have unique NK cell subset repertoires 49 . Lung NK cells are largely CD27 − CD11b + , whereas BM NK cells are mainly CD27 + CD11b − . These subsets define maturation stages of NK cells 49,79,80 . CD27 high CD11b low cells define the earliest stage of NK cell maturation. Our data show that after influenza infection, CD27 + CD11b + effector NK cells appear in the alveolar space and lung tissue of WT mice but not in Il27ra −/− mice, suggesting a critical role for IL-27 in regulating this NK subset. We found a defect in the CD27 + CD11b + population; however, no changes were seen in the expression of NK cell activating or inhibitory receptors in the BM of naïve Il27ra −/− mice. It is possible that myeloid cells in the BM constitutively produce a low level of IL-27 to regulate NK cell maturation, thereby modulating NK effector function. It could also be possible that the CD27 + CD11b + subset is recruited to lung tissue from the BM after influenza infection.
It has also been shown that CD27 expression on NK cells determines migratory capacity, and thus IL-27 may play a selective role in promoting the trafficking of a subset of NK cells to the site of infection 49,81 . Lack of IL-27Rα (Wsx1) or the IL-27 subunit EBI3 resulted in impaired NK cell-mediated effector functions and ineffective clearance of influenza infection. In line with earlier work using influenza infection in Il27ra −/− mice 43 , we found a similar weight loss and mortality trend in Ebi3 −/− mice. Mixed chimera experiments using splenocytes from WT (CD45.1) and Il27ra −/− (CD45.2) mice further provide strong support that the reduced production of IFN-γ in NK cells from Il27ra −/− mice is cell-intrinsic. Our study strongly suggests that IL-27 plays a crucial role during the early phase of influenza infection (DPI 1-4). After DPI 5, the requirement for IL-27 decreased. On DPI 7, the absolute numbers of NK cells and their effector responses were comparable in WT and Ebi3 −/− or Il27ra −/− mice, suggesting a possible role of other cytokines/factors in regulating NK cell functions. It is also important to note that the adaptive immune response by T cells against infected epithelial cells and by B cells producing neutralizing antibodies starts to occur after DPI 7. Indeed, IL-27 is known to regulate cytoprotective IL-10 generation from T cells 43,82 . NK cells from CD27 −/− mice develop phenotypically normally, with similar or enhanced expression of activating or inhibitory receptors 81 . This observation is in line with our Il27ra −/− mice data, where there is no defect in NK cell activating or inhibitory receptor expression. Thus, IL-27 plays an essential role in the early NK cell-mediated functions as well as regulating the adaptive immune response, determining the pathophysiological outcome of influenza infection. Transcriptional priming of effector lymphocytes by cytokines leads to genomic imprinting, resulting in unique cell fate decisions and distinct functional outcomes.
IL-12 mediates such prototypical chromatin modifications on signature cytokine genes such as Ifng and Il4 in T cells primarily via signal transducer and activator of transcription 4 (STAT4)-dependent chromatin remodeling 83,84 . Earlier studies have shown that the interaction of IL-27 with the IL-27Rα/gp130 heterodimer leads to the recruitment of Jak1, Jak2, and Tyk2, triggering the phosphorylation of STAT1, STAT2, STAT3, and STAT5 in CD4 + T cells 85,86 . STAT1/STAT3 heterodimers translocate into the nucleus, bind to the Tbx21 (T-bet) promoter, and increase T-bet levels, which in turn initiates the transcription of the Ifng gene. The Ifng gene promoter is known to contain at least three T-bet, two NF-AT, two NF-κB, one AP-1, and one CREB/ATF-2 binding sites. Thus, a substantial reduction in NF-AT, NF-κB, and T-bet can account for the significant reduction in IFN-γ production in NK cells from Ebi3 −/− or Il27ra −/− mice. Given that these premier roles of STAT proteins in the transcription of signature cytokines are well-established, we next focused on the role of IL-27 on Maf proteins. We found that NK cells from Ebi3 −/− as well as Il27ra −/− mice have reduced MafF expression, with the highest fold change during influenza infection. MafF belongs to the small Maf family of basic-region leucine zipper (bZIP)-type transcription factors. Nuclear factor E2-related factor 2 (Nrf2) heterodimerizes with the small bZIP protein MafF and activates specific gene transcription. This notion is confirmed by our finding that the expression of Nrf2 is also reduced in Ebi3 −/− mice, suggesting IL-27 also functions through these transcription factors. These findings are further validated by the earlier observation that Nrf2 indeed interacts with MafF and MafG 54 . In summary, our findings demonstrate the crucial role played by IL-27 during the early phase of influenza infection. IL-27 plays a central role in augmenting the stimulation via NKG2D and NCR1.
Future studies are warranted to define the temporal and functional relationship of IL-27 with IL-12, IL-18, and IL-15.

Methods

Mice and stable cell lines. C57BL/6, B6.SJL (H-2 b CD45.1 + ), Il12a −/− mice, and Ebi3 tm1Rsb (Ebi3 −/− ) mice were purchased from Jackson Laboratory (Bar Harbor, ME) and maintained in pathogen-free conditions at the Biological Resource Center at the Medical College of Wisconsin. Rag2 −/− γc −/− mice were purchased from Jackson Laboratory (Bar Harbor, ME). Generation of Il27ra −/− gene knockout mice has been described earlier 87 . Female and male mice between the ages of 6 and 12 weeks were used. EL4 (ATCC, Manassas, VA), EL4 H60 (a derivative of EL4 generated by our laboratory), RMA/S (a kind gift from Dr Nilabh Shastri, UC Berkeley), and YAC1 (ATCC, Manassas, VA) cells were maintained in RPMI-1640 medium containing 10% heat-inactivated FBS (Life Technologies, Carlsbad, CA). These cell lines were periodically tested to exclude the possibility of mycoplasma contamination. The generation of stable H60-expressing EL4 cell lines has been described 88 .

Ethics Statement. All animal experiments in this study were conducted in accordance with the guidelines of the US Government Animal Welfare Act (AWA) 7 U.S.C. § 2131. All animal protocols were approved by the Institutional Animal Care and Use Committee (IACUC) at the Medical College of Wisconsin, Milwaukee, WI. The Medical College of Wisconsin is formally accredited by AAALAC, and all the animal care and use protocols used in this study fully adhere to the specified guidelines of AAALAC. The animal protocols approved by the IACUC and used in this study are AUA1500 and AUA1512.

In vivo infection.
WT mice 6-8 weeks of age were deeply anesthetized and were intranasally challenged with 500 plaque-forming units of PR8 virus in sterile phosphate-buffered saline in a total volume of 30 μl through one nostril as described 40 . Mock infections were carried out using only sterile phosphate-buffered saline without the virus. After infection, mice were observed for weight loss and mortality for two weeks.

BAL fluid collection. Mice were euthanized, the thoracic cavity was cut open, and a 1 cm incision was made parallel to the trachea through the fur of the mouse to expose it. A midline incision was made on the ventral aspect of the trachea slightly above the thoracic inlet. 0.3 ml of phosphate-buffered saline-1% bovine serum albumin was infused into the lung through the thoracic inlet using a sterile 1 ml syringe. Lavage fluid was aspirated, aliquoted, and frozen until use.

Adoptive transfer. Single-cell suspensions from the splenocytes of B6.SJL (H-2 b CD45.1 + ) and Il27ra −/− (H-2 b , CD45.2 + ) mice were mixed 1:1 and adoptively transferred intravenously into Rag2 −/− γc −/− (CD45.2 + ) mice as described earlier 6 . Donor CD45.1 + and CD45.2 + NK cells were detected, and the generation of IFN-γ was analyzed by flow cytometry on day two post adoptive transfer.
The Prevalence of Nephrolithiasis and Associated Risk Factors Among the Population of the Riyadh Province, Saudi Arabia

Background and objective

Kidney stones, also referred to as nephrolithiasis or renal calculi, is a condition where crystal depositions are formed within the kidney and ideally excreted from the body via the urethra with no pain; however, larger calculi may cause significant pain and require further medical assistance. The vast majority of patients who develop renal calculi form calcium stones, which are either a composition of calcium oxalate or calcium phosphate. Other types include uric acid, struvite, and cysteine. While kidney stones are one of the most significant diseases among the Saudi population, which require an acute emergency intervention to prevent serious long-term complications, there are limited studies published regarding this condition in Saudi communities. In light of this, we performed this study to assess the prevalence, incidence, and risk factors of kidney stones among the population of Riyadh, Saudi Arabia.

Methods

This was a cross-sectional study conducted in Riyadh, Saudi Arabia between August and October 2023, aiming to estimate the prevalence and risk factors of nephrolithiasis among residents of the Riyadh province. Data were collected through an electronic questionnaire in both Arabic and English and distributed via social media in addition to barcode handouts in various selected venues in Riyadh. The questionnaire involved 12 questions categorized into three sections. The first section obtained demographical information while the second section collected data about the past medical history of the participants. Lastly, the third section aimed to assess the prevalence of nephrolithiasis among participants or any history of the condition among their families.

Results

A total of 1,043 participants were surveyed, of whom 533 were males (51.1%). The prevalence of kidney stones was reported in 98 individuals (9.4%) overall.
Individuals in the age groups of 36-50, 51-60, and >60 years showed significantly more renal stone prevalence than those in younger age groups (p<0.001). The prevalence was found to be higher in participants who were smokers, diabetic, hypertensive, and those who suffered from inflammatory bowel disease (IBD), gout, chronic kidney disease (CKD), hyperthyroidism, and hyperparathyroidism. Participants who took calcium supplements or had a positive family history of renal stones were found to have a higher prevalence of renal stones as well. However, only hypertension, gout, and family history showed any statistical significance (p<0.05).

Conclusions

A direct correlation was observed between hypertension, gout, positive family history, and aging and an increased prevalence of kidney stones among the inhabitants of the Riyadh province. Therefore, we encourage the local authorities to raise awareness of kidney stones and their related risk factors among the general public. Moreover, further local studies need to be conducted to gain deeper insights into kidney stone prevalence, especially pertaining to associated comorbidities and the pattern of the disease itself.

Introduction

Kidney stones, also known as nephrolithiasis or renal calculi, is a condition characterized by the formation of crystal depositions within the kidney, which are ideally excreted from the body via the urethra without causing any pain; however, larger calculi may cause significant pain and necessitate medical treatment [1].
The vast majority of patients who develop renal calculi form calcium stones (80%). Furthermore, calcium stones are either a composition of calcium oxalate or calcium phosphate [1]. Other types of kidney stones include uric acid, struvite, and cysteine stones [1]. Stones form within the kidney in the setting of urine supersaturation, during which solutes begin to precipitate in the urinary tract, leading to crystal formation [2]. In addition, many other factors such as the pH level of the urine affect the formation of crystals within the kidney [2]. Patients with nephrolithiasis may not report any symptoms early in the course of the disease; however, later on, they may present with flank pain, hematuria, urinary tract infection (UTI), blockage of urine flow, obstructive uropathy, and hydronephrosis [1].

A study from the United States involving 10,521 participants and focusing on the prevalence and incidence of kidney stones reported a prevalence rate of 1,157 (11%) and a 12-month incidence rate of 2.1% (n=221) [3]. Moreover, another US study assessing the prevalence of kidney stones among 12,110 participants reported them in 1,066 (8.8%) participants [4]. Additionally, the study found that kidney stone prevalence was higher in obese men compared to other participants [4]. Locally, in Saudi Arabia, a retrospective study assessing the characteristics and types of kidney stones in the Eastern Region found that among 235 reviewed patients with a mean age of 48.52 years, 175 (74.5%) had renal calculi, with calcium oxalate being the most common type (n=133, 76%) [5]. The study reported a male predominance in the patient population.
Risk factors related to kidney stones vary among subsets of populations, and environmental factors play a crucial role in the pathogenesis of the condition. Furthermore, extensive research has revealed that the incidence of nephrolithiasis can be correlated with gender, ethnicity, geographical location, occupation, hot climate, and an unhealthy diet, such as those involving excessive consumption of caffeine, salt, dairy products, animal proteins, and fat [6]. In addition, individuals with a prior history of kidney stones are at a higher risk of developing new kidney stones [7], by 15% during the first year and 50% in the next 10 years [8]. Also, low fluid intake is highly associated with the incidence and recurrence of kidney stones [9], with studies showing that high fluid intake minimizes the risk of developing kidney stones [9]. Comorbidities such as metabolic syndrome have been strongly linked to the formation of kidney stones [10]. Thus, for instance, a patient who has three or more metabolic syndrome traits is more prone to develop kidney stones [10].

Kidney stones are one of the most significant diseases among the Saudi population, which require an acute emergency intervention to prevent serious long-term complications; however, there are limited studies published regarding this disease in the Saudi population. Hence, this study aimed to examine the prevalence, incidence, and risk factors of kidney stones among the population of Riyadh, Saudi Arabia.
Study design and setting

This was a cross-sectional study conducted in Riyadh, Saudi Arabia after obtaining IRB approval from the King Abdullah International Medical Center (approval no. IRB-1878-23). The study was conducted between August 2023 and October 2023. Data were collected through an electronic questionnaire in both Arabic and English languages distributed via social media in addition to barcode handouts in various selected venues in Riyadh. Furthermore, information was kept private per Google's privacy policy. Participation was strictly voluntary.

Sample size

The sample size was estimated to be 385 with a confidence interval of 95% and a margin of error of 5% based on calculations performed on the Raosoft online calculator (Raosoft, Inc., Seattle, WA), where the population of the Riyadh province was estimated to be 8.5 million according to data extracted from the General Authority for Statistics (GASTAT) [11]. The inclusion criteria were any resident in the Riyadh province aged 18 years or more. Individuals not residing in the Riyadh province and those aged less than 18 years were excluded from the study. A total of 1,208 respondents completed the survey, of which 164 were excluded for not meeting the inclusion criteria.

Development and application of the questionnaire

The research team contacted the corresponding author of a study that was conducted in Hail City about obtaining and using the questionnaire in our research [12]. The questionnaire involved 12 questions categorized into three sections. The first section obtained demographical information such as age, gender, height, weight, education level, and occupation. The second section collected the past medical history of the participants, including chronic diseases, use of medication, and whether they smoked or not. The third section assessed the prevalence of nephrolithiasis among participants or any pertinent history among their families.
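For reference, the minimum sample size quoted above can be reproduced with Cochran's formula for large populations, n = z²·p·(1−p)/e². This is a sketch assuming the Raosoft calculator applies the conventional defaults (z = 1.96 for 95% confidence and p = 0.5 for maximum variability); with a source population of 8.5 million, the finite-population correction is negligible.

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Minimum sample size for estimating a proportion in a large population.

    z: z-score for the chosen confidence level (1.96 for 95%)
    p: assumed population proportion (0.5 maximizes the required n)
    e: margin of error
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(cochran_sample_size())  # 385, matching the estimate reported above
```

Note that p = 0.5 is the worst-case assumption; plugging in a smaller anticipated prevalence would shrink the required sample.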
Data management and statistical analysis

Data collected were entered into a Microsoft Excel sheet and tabulated. Data analysis was performed on SPSS Statistics version 23 (IBM Corp., Armonk, NY) by an independent biostatistician. Descriptive statistics were presented in the form of frequencies and percentages. Mean and standard deviation (SD) were used for representing the continuous variables. Pearson's chi-square test was used to assess the relationship between categorical variables. A multivariate regression model was performed to analyze the risk factors for renal stones. A p-value less than 0.05 was considered statistically significant.

Results

The prevalence of renal stones based on different sociodemographic characteristics is presented in Table 2. No significant differences were seen between male and female participants regarding renal stone prevalence (p=0.209). The age groups of 36-50, 51-60, and >60 years showed significantly more renal stone prevalence than younger age groups (p<0.001). Nationality, educational level, occupation, and BMI did not show statistically significant differences regarding renal stone prevalence (p>0.05). The prevalence of renal stones was found to be higher in participants who were smokers; those with IBD, hypertension, diabetes mellitus, gout, CKD, hyperthyroidism, and hyperparathyroidism; those who took calcium supplements; and those with a family history of renal stones. However, only hypertension, gout, and family history showed statistical significance (p<0.05) (Table 3).
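As an illustration of the chi-square comparisons described above, the snippet below computes Pearson's chi-square statistic by hand for a 2×2 contingency table (risk factor exposure vs. renal stones). The counts are invented for demonstration only and are not the study's data.

```python
def pearson_chi2(table):
    """Pearson's chi-square statistic for a 2D contingency table.

    Expected counts are (row total * column total) / grand total;
    the statistic sums (observed - expected)^2 / expected over all cells.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = exposed yes/no, columns = stones yes/no
table = [[30, 170], [68, 775]]
chi2 = pearson_chi2(table)
# For a 2x2 table (1 degree of freedom), chi2 > 3.841 corresponds to p < 0.05
print(round(chi2, 2), chi2 > 3.841)
```

In practice SPSS (or `scipy.stats.chi2_contingency`) would also return the p-value directly; the hand computation is shown to make the test's mechanics explicit.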
Discussion

Kidney stones remain one of the most common conditions encountered in acute care settings [13]. Recent studies have shown an increase in the prevalence of kidney stones globally [14]. Similarly, recent research done in Saudi Arabia to estimate the prevalence of kidney stones in Jeddah and Riyadh has shown that the disease prevalence is on the rise [15]. Therefore, gaining deeper insights into this topic is important not only for treatment guidance but also for identifying the associated risk factors and further modifying them accordingly. Moreover, an assessment of the current prevalence of this disease in our region is pivotal as it reflects the burden on the community, quality of life, and financial costs. In this study, renal stones were reported in 98 participants (9.4%) in the Riyadh province. Another study conducted in Saudi Arabia reported renal stones among 64 patients (9.1%), which is slightly lower than in our study [16]. Another study from Riyadh and Jeddah found that the prevalence in Riyadh city alone was 14.8% (n=56), which is higher than the outcome observed in our study; however, this might be attributed to the lower sample size in the study from Riyadh province [15].
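To put the reported 9.4% point estimate (98/1,043) in context, a quick sketch of a 95% Wald confidence interval for the prevalence is shown below. This calculation is not from the paper; it is an illustrative aside using the standard normal-approximation formula p ± z·sqrt(p(1−p)/n).

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """95% Wald confidence interval for a proportion (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

lo, hi = wald_ci(98, 1043)
print(f"prevalence 95% CI: {lo:.1%} to {hi:.1%}")  # roughly 7.6% to 11.2%
```

With n above 1,000 and p far from 0 or 1, the Wald interval is a reasonable approximation; for small samples a Wilson or exact interval would be preferable.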
Regarding age, the highest occurrence of kidney stones was among those aged 36 and older, and the prevalence was noted to increase with advancing age. These findings are similar to another study conducted in Bisha, which found that those aged 51 and older are more likely to have kidney stones [17]. These findings can be attributed to the continuous defects in urine ammoniagenesis, which is considered the main factor behind low urine pH that leads to the formation of kidney stones [18]. Concerning gender, our study did not show any statistically significant difference. However, the results revealed a slightly higher male predominance (n=533, 10.5%) when compared to females (n=510, 8.2%). These findings align with another study from Saudi Arabia [16]. However, another study found that the male predominance was much higher when compared to our results [5]. These observations can be linked to the role of estrogen, which plays a protective role in decreasing the concentration of urinary calcium and calcium oxalate [19]. On the other hand, studies done by Safdar et al. and Bokhari et al. showed that the female gender is at a higher risk of developing kidney stones [12,15]. Furthermore, recent studies have pointed towards a noticeable narrowing in the gap between the gender ratio [20].
As for BMI, a BMI of over 30 kg/m² was found to be independently associated with urolithiasis in this study. Furthermore, a study that evaluated the relationship between BMI and the risk of kidney stone formation found a significant trend between high BMI and the formation of kidney stones [21]. These findings can be explained by the role of insulin resistance in patients with a higher BMI. Subsequently, insulin resistance leads to a decrease in urinary pH, which in turn influences the formation of uric acid stones [21]. With regard to smoking, this study showed that the prevalence of kidney stones is higher in smokers. This is consistent with a meta-analysis published in 2023, which showed that smokers were at almost nine times higher risk of developing kidney stones when compared to those who had never smoked [22]. This can be attributed to the increase in vasopressin levels, which can lead to urinary retention, further contributing to a higher risk of stone formation [22].

Concerning diabetes, our study found that the prevalence of nephrolithiasis was higher among diabetic patients; however, diabetes as an independent risk factor did not show any statistical significance regarding stone formation. Nevertheless, a meta-analysis conducted in 2015 involving seven studies showed that diabetes correlates with a significant increase in the risk of kidney stone formation compared to those without the disease [23]. The study further reported that BMI, hypertension, and cigarette smoking are factors that affect the association between diabetes and kidney stone formation [23]. Moreover, diabetes is significantly associated with uric acid stones; this can be explained by the fact that diabetics have increased insulin resistance, leading to a lower urine pH [23].
Regarding comorbidities, this study showed a strong correlation between hypertension and gout and the incidence of urolithiasis. These results are in line with another study done in Japan to assess the prevalence of kidney stones in patients with gout [24]. Furthermore, another study focusing on the population of renal stone patients found a positive association between hypertension and the formation of renal stones [25]. A possible hypothesis for this correlation is hypercalciuria, given that most hypertensive patients have an increased urinary calcium excretion, which leads to the formation of calcium-containing stones such as calcium oxalate and calcium phosphate [26]. Regarding gout patients, the predisposition toward renal stones could be attributed to low urinary pH and decreased fractional excretion of uric acid [27].

Other parameters in our study, such as the use of vitamin D, calcium tablets, and diuretics, were not significantly associated with urolithiasis. In general, diuretics play a protective role against the formation of renal stones regardless of whether loop or thiazide diuretics are used [28]. Concerning family history, this study found that a positive family history of kidney stones increases the risk of developing nephrolithiasis. These observations align with other studies that investigated the relationship between family history and the risk of developing kidney stones, which have found that a positive family history increases the risk of kidney stone formation [29]. This can be explained by several genetic factors that are believed to contribute to the formation of calcium oxalate stones and consequently lead to abnormal excretion of calcium oxalate, citrate, and uric acid promotors or suppressors [29].
As for occupation, this study compared healthcare workers with non-healthcare workers concerning renal stone prevalence. We concluded that there is no significant difference in the prevalence of renal stones between healthcare workers and non-healthcare workers. In contrast, another study conducted in 2016 found that physicians had lower rates of nephrolithiasis than the general population and other healthcare workers [30]. The same study found that pharmacists, nurses, and other healthcare workers showed no significant difference regarding this condition when compared to the general population [30]. Therefore, we can establish that these discrepancies are due to certain limitations in our study, such as the failure to specify the precise role of healthcare workers, and may also be attributed to the difference in sample size between the two populations.

Limitations

Since the survey was distributed via social media, most respondents were from the younger age groups, and this may have affected the outcomes in terms of the true prevalence of the disease in the community. Additionally, we relied on self-reporting for data collection, which also may have affected the results regarding the actual prevalence of kidney stones. Moreover, our sample size was insufficient to assess the potential effects of comorbidities such as IBD, CKD, hypothyroidism, hyperthyroidism, and hyperparathyroidism.

Conclusions

This study revealed that the prevalence of kidney stones among the residents of the Riyadh province is high, with a clear association with increasing age, positive family history, hypertension, and gout. We urge local authorities to spread awareness about kidney stones and their related risk factors. Moreover, more local studies with larger sample sizes need to be conducted to determine the prevalence of kidney stones in Saudi Arabia on a broader scale, especially by factoring in other elements such as associated comorbidities and the pattern of the disease itself.
TABLE 1: Sociodemographic characteristics of the study population. BMI: body mass index

TABLE 2: Prevalence of renal stones based on the sociodemographic characteristics of the participants. *Statistically significant (p<0.05). BMI: body mass index

TABLE 4: Logistic regression model to assess the risk factors for renal stones. *Statistically significant (p<0.05). CI: confidence interval; BMI: body mass index; SE: standard error; df: degrees of freedom
Epigenetic Basis of Cellular Senescence and Its Implications in Aging

Cellular senescence is a tumor suppressive response that has become recognized as a major contributor of tissue aging. Senescent cells undergo a stable proliferative arrest that protects against neoplastic transformation, but acquire a secretory phenotype that has long-term deleterious effects. Studies are still unraveling the effector mechanisms that underlie these senescence responses with the goal to identify therapeutic interventions. Such effector mechanisms have been linked to the dramatic remodeling in the epigenetic and chromatin landscape that accompany cellular senescence. We discuss these senescence-associated epigenetic changes and their impact on the senescence phenotypes, notably the proliferative arrest and senescence-associated secretory phenotype (SASP). We also explore possible epigenetic targets to suppress the deleterious effects of senescent cells that contribute towards aging.

Cellular Senescence: A Double-Edged Sword against Cancer

Cellular senescence was originally identified as a cellular aging phenomenon, but is now recognized as an intrinsic tumor suppressive mechanism that is accompanied by distinct phenotypic changes, such as epigenetic and chromatin remodeling of nuclear architecture. Hayflick and Moorhead first identified cellular senescence when they found that primary human fibroblasts undergo a finite number of divisions before entering a stable proliferative arrest in culture, now termed replicative senescence [1,2]. Replicative senescence was eventually identified as a consequence of telomere attrition following repeated cell divisions, which is thought to reflect aging at the cellular level [3][4][5]. Cellular senescence was shown to apply in vivo with tissue aging after senescent cells were identified in aged tissue and at sites of age-related pathologies [6].
Cellular senescence can also be induced by a variety of potentially tumorigenic stimuli, including genotoxic stress, oncogene activation, and oxidative stress. These stressors cause persistent DNA damage and activate the DNA damage response [7]. Besides DNA damage, studies have found other stressors can initiate cellular senescence, such as in the case of metabolic stress-induced senescence [8]. Studies have indicated cellular senescence as a barrier for tumorigenesis in vivo by the presence of senescent cells at pre-malignant lesions, but not at malignant tumors [9]. Collectively, these findings established the hypothesis that cellular senescence serves a tumor suppressive role and prevents the proliferation of stressed cells harboring oncogenic potential. Cellular senescence has long-term deleterious effects that mediate tissue aging, despite mounting a protective tumor suppressive response. Senescent cells are thought to exert these deleterious effects through cell-autonomous and non-autonomous mechanisms. To avoid neoplastic transformation, senescent cells undergo a cell-autonomous proliferative arrest, which is maintained by the tumor suppressive p53 and cyclin dependent kinases 4 and 6 (CDK4/CDK6) inhibitor p16 (p16 INK4A ) pathways. The p53 pathway is predominantly activated through the DNA damage response following genomic damage, including double strand DNA breaks and telomere dysfunction. DNA damage is sensed through the ataxia-telangiectasia mutated (ATM) kinase that signals to stabilize and activate p53. p53 relays the signal of genomic stress by transcriptionally upregulating p21 CIP1/WAF1 [10,11]. p21 CIP1/WAF1 functions in concert with p16 INK4A to promote hypophosphorylation of retinoblastoma and stably arrest the cell cycle at the G1 phase [12]. Unlike p21, studies have not identified a principal activator of p16 INK4A but found that it is activated by several stress-related pathways, including the p38 MAP kinase pathway [13]. 
Under the growth-arrested state, senescent cells resist cell death and persist for prolonged periods of time [14]. Consequently, senescent cells accumulate in tissue and are suspected of exhausting tissue of proliferation-competent cells and renewable stem cells over time, diminishing homeostasis and regenerative capacity of the tissue [15][16][17]. This idea is supported by evidence that p16 INK4A expression is elevated and associated with reduced regeneration in multiple stem cell compartments in mice, including in the bone marrow, pancreas, and brain [18][19][20]. Senescent cells can also disrupt tissue homeostasis in a cell-non-autonomous fashion by acquiring a senescence-associated secretory phenotype (SASP). The SASP encompasses a wide range of factors that promote different aspects of the senescence phenotype. These factors include pro-inflammatory cytokines and chemokines, proangiogenic factors such as matrix metalloproteinases and vascular endothelial growth factor (VEGF), and other soluble factors that assist the senescence phenotype, including plasminogen activator inhibitor and prostaglandin E2 [21]. The SASP has been shown to have autocrine and paracrine effects. For instance, interleukin (IL)-6 and 8 have been shown to function in an autocrine manner to promote the DNA damage response and proliferative arrest of senescent cells [22][23][24]. Additionally, transforming growth factor beta has been recently shown to promote cellular senescence of neighboring cells [25]. However, the most profound cell non-autonomous effects are mediated by paracrine signaling that stimulates chronic inflammation [26]. Chronic tissue inflammation is not only a major contributor to age-related degeneration, but also to cancer phenotypes by creating a pro-tumorigenic microenvironment [7,26,27]. Thus, targeting the proinflammatory effects of senescent cells is a strategy to suppress the aging process and the development of a myriad of pathologies.
The deleterious role senescent cells play during aging and disease was demonstrated using transgenic INK-ATTAC mice that allow for the selective elimination of senescent cells, which entailed apoptosis induction of high p16 INK4A expressing cells. Using this system, studies showed that selectively eliminating senescent cells improved the healthspan and suppressed the pathologic features of progeroid and naturally-aged mice [15,28]. The success of these studies led many to research pharmacological drugs that induce apoptosis of senescent cells, which have been termed senolytic compounds [16]. Such senolytic compounds have been shown to eliminate senescent cells by inhibiting the resistance to apoptosis that is integral to the program of cellular senescence [29,30]. However, not all senescent cell types respond to senolytic compounds, which has been found in the case of senescent preadipocytes [31]. Although the selectivity for senescent cells is still being improved, studies using senolytic compounds have shown overall positive effects on attenuating disease pathology and restoring tissue function [32,33]. Senescent cells undergo distinct phenotypic changes, including an enlargement in cellular morphology and elevated lysosomal β-galactosidase activity, which is detected by histochemical staining [34,35]. It is important to note that cellular senescence is not identified by a single characteristic, but a combination of markers, with the most common being proliferation arrest and elevated p16 expression and β-galactosidase activity [16]. Thus, the lack of a single universal marker of cellular senescence has posed an obstacle for the detection of senescent cells in vivo. Senescent cells also undergo nuclear phenotypic changes wherein the epigenetic and chromatin landscape undergo widespread alterations.
Such changes have been linked to the altered gene expression and effector mechanisms that play a role in the proliferative arrest and acquisition of the SASP during cellular senescence. Recent studies have indicated the prospect of therapeutically targeting epigenetic changes to suppress the deleterious effects of senescent cells. Here, we will review these possibilities and the impact of chromatin and epigenetic changes in regulating cellular senescence and susceptibility to aging.

Heterochromatin Changes Accompany Cellular Senescence

The most striking nuclear phenotype of cellular senescence is the formation of facultative heterochromatin domains, termed senescence-associated heterochromatic foci (SAHF). The SAHF are easily visible following diamidino-2-phenylindole (DAPI) staining as distinct DNA foci [36]. The foci reflect compacted chromatin and are enriched in a variety of heterochromatic markers, such as hypoacetylated histones, histone H3 lysine 9 and 27 trimethylation (H3K9me3 and H3K27me3), heterochromatin protein 1 (HP1) family proteins, and the histone variant macroH2A. These epigenetic marks repress the transcription of key proliferation-related genes, linking the SAHF to the tumor suppressive proliferative arrest of senescent cells [36,37]. The SAHF can also recruit effectors to aid in the senescence arrest, which has been found to be the case in the recruitment of the chromatin remodeling enzyme ATRX by H3K9me3 and HP1γ following therapy-induced senescence [38]. Several lines of evidence indicate the appearance of these epigenetic marks that reflect the SAHF during tissue aging and cellular senescence in vivo [17,[39][40][41][42]. The SAHF have additional roles beyond supporting cell cycle arrest during cellular senescence. For instance, the SAHF have been shown to protect oncogene-induced senescent cells from apoptosis by dampening the DNA damage response.
Consequently, apoptosis can be activated following disruption of the SAHF by treatment with a histone deacetylase inhibitor [43]. However, the outcome of inhibiting the SAHF may be context specific. For example, disrupting a key histone methyltransferase, Suv39h1, that contributes to the SAHF promotes tumor progression, particularly T cell lymphomagenesis, in a neuroblastoma Ras viral oncogene homolog (N-ras) transgenic mouse model [44]. Formation of the SAHF does not merely depend on the redistribution of repressive epigenetic marks, but requires key molecular and chromatin changes. One such molecular change that promotes SAHF formation is activation of the p16 INK4A -retinoblastoma pathway, which therefore strongly correlates with cell cycle arrest [36,45,46]. However, several studies have challenged the requirement of the SAHF for cellular senescence and demonstrated that it is not universal among all senescent cell types and senescence-inducing stimuli. For instance, the SAHF are most prominently observed during oncogene-induced senescence, but also form during replicative senescence [36,47]. However, fibroblasts derived from patients afflicted with Hutchinson-Gilford progeria syndrome (HGPS), which are intrinsically prone to cellular senescence, do not develop features of the SAHF [47,48]. Studies have used high-throughput whole-genome conformation capture methods to interrogate formation of the SAHF during cellular senescence [49,50]. Following senescence induction, chromatin becomes reorganized, causing heterochromatin to dissociate from the nuclear periphery. Therefore, the heterochromatin that is incorporated into the SAHF is not newly formed, but a result of heterochromatin redistribution [36]. The dissociation of heterochromatin from the nuclear periphery is mediated by loss of lamin B1, a nuclear lamin protein that associates with lamina-associated domains within heterochromatin [46,51].
Interestingly, lamin B1 abundance is decreased in multiple senescent cell types and in mouse tissues following irradiation-induced senescence [52]. Lamin B1 appears to be downregulated at the transcriptional level during the induction of cellular senescence, and silencing lamin B1 is also sufficient to induce cellular senescence [53]. In addition to being downregulated, lamin B1 preferentially binds H3K27me3-enriched sites that are associated with transcriptional repression and the SAHF during cellular senescence [46]. Loss of lamin B1 also appears to cause a redistribution of histone marks, including an enrichment of H3K27me3 and H3K4me marks within lamin B1-associated domains and a depletion of H3K27me3 marks in enhancers and genes. Interestingly, the upregulated genes lacking H3K27me3 marks were found to be senescence-related genes, including SASP genes [51]. Following loss of lamin B1, heterochromatin undergoes the subsequent steps of decondensation and spatial clustering to form the SAHF [50]. Although the mechanism behind spatial heterochromatin clustering to form the SAHF remains unknown, the identification of heterochromatin decondensation during cellular senescence is consistent with the progressive loss of heterochromatin that is observed during aging and in diseases of premature aging, including HGPS and Werner syndrome [54][55][56]. It is also unknown whether the progressive loss of heterochromatin is linked to the reduction in histone biosynthesis that appears to influence the redistribution of epigenetic marks during replicative senescence [57]. Such deregulation of heterochromatin structure may allow for expression of retrotransposable elements that have been shown to drive genomic instability during cellular senescence and tissue aging [58][59][60].
Senescence-Associated Distention of Satellites Is an SAHF-Independent Epigenetic Change

Epigenetic changes are not limited to the facultative heterochromatin regions that form the SAHF. The SAHF are in fact distinct from the constitutive heterochromatin present in telomeres and pericentromeres [61]. However, pericentric satellite DNA has been shown to undergo a dramatic decondensation during cellular senescence, independently of SAHF formation [58]. This nuclear phenotype has been termed senescence-associated distention of satellites (SADS) and appears to form exclusively in senescent cells and not in non-senescent cells or cancer cell lines [62]. Unlike SAHF formation, SADS formation is conserved among senescent cell types and senescence-inducing stimuli, including HGPS fibroblasts. Additionally, SADS formation has been identified in vivo and is suspected to promote tissue aging [47,63]. Attempts to identify the role of SADS formation showed that it is an early event during senescence induction and precedes other nuclear changes, including nuclear enlargement and SAHF formation [62,64]. Additionally, SADS formation may be linked to the hypomethylation and expression of pericentric satellite DNA that has been observed during cellular senescence [65,66]. However, the exact function of SADS during cellular senescence remains unknown [62]. Interestingly, pericentric satellite transcripts can promote mitotic errors and genomic instability, leading to the induction of cellular senescence. This particular study showed that sirtuin-6 (SIRT6) prevents these genomic stressors by silencing pericentric satellite transcripts and protects against cellular senescence [67]. It is possible that the expression of pericentric satellite transcripts is an early event during senescence induction that promotes genomic instability to help activate cell cycle arrest.
The Senescence-Associated Heterochromatic Foci and High Mobility Group Proteins Cooperate for the Senescence Phenotype

The SAHF appears to function in concert with and, in some cases, rely on other epigenetic effectors during cellular senescence. Key examples are proteins of the high mobility group (HMG) family, particularly HMGA1 and HMGB2. The HMG proteins are non-histone chromatin-binding proteins that remodel chromatin architecture, resulting in altered gene expression [68]. The HMGA proteins, which comprise HMGA1 and HMGA2, have been shown to accumulate in the chromatin of senescent cells, binding the same sites as and displacing linker histone H1, and structurally supporting the SAHF [69]. In this manner, HMGA proteins cooperate with p16INK4A to maintain the proliferative arrest of senescent cells [70]. This tumor-suppressive function of the HMGA proteins was surprising, considering that the HMGA proteins were previously associated with gene activation under proliferative states, such as embryogenesis and cancer [68,71]. Unlike the HMGA proteins, members of the HMGB family have been found to act through dissimilar mechanisms during cellular senescence. The HMGB1 protein is secreted in a p53-dependent manner and functions as an extracellular alarmin that activates nuclear factor-κB (NF-κB) and proinflammatory signaling pathways [72]. The HMGB2 protein also has a proinflammatory role during cellular senescence, but through a mechanism that is distinct from that of HMGB1. The HMGB2 protein binds the loci of key SASP genes and prevents their incorporation into transcriptionally repressive SAHF regions, providing a chromatin landscape that is conducive to SASP gene expression. Interestingly, inhibition of HMGB2 limits the SASP without affecting the senescence proliferative arrest.
These results indicate that the deleterious, pro-tumorigenic SASP can be uncoupled from the SAHF, which is associated with the beneficial, tumor-suppressive proliferative arrest of senescent cells [73].

Epigenetic Regulators of the Senescence-Associated Secretory Phenotype

Similar to what has been found following inhibition of HMGB2, several studies have found it possible to therapeutically target epigenetic mechanisms that specifically drive the SASP [73]. It has become clear that senescent cells can transcriptionally regulate the SASP through other epigenetic mechanisms and effectors in addition to HMGB2. For instance, super-enhancers are formed adjacent to key SASP genes following remodeling of the enhancer chromatin landscape during cellular senescence. These super-enhancers are enriched in H3K27 acetylation and recruit the bromodomain and extra-terminal domain (BET) protein BRD4 to promote SASP gene expression. An important aspect of this study is that inhibition of BRD4 suppressed SASP gene expression without affecting the proliferative arrest of the senescent cells. Moreover, BRD4 inhibition was also shown to have therapeutic efficacy against senescent cells in vivo by suppressing the SASP along with its subsequent immune surveillance response [74]. Expression of the SASP is also directly regulated by the histone variant macroH2A1, a component of the SAHF. Interestingly, macroH2A1 was found to be required not only for SASP gene expression, but also for DNA damage response signaling during cellular senescence. This led to the identification of a negative feedback loop whereby macroH2A1 activates DNA damage response signaling that leads to the removal of macroH2A1 from SASP gene loci, resulting in a dampened SASP [75]. Expression of the SASP was also found to be directly and negatively regulated by sirtuin-1 (SIRT1).
Upon SIRT1 knockdown or the decreased SIRT1 expression observed during cellular senescence, this study found that acetylation of H3K9 and H4K16 is increased at the promoters of IL-6 and IL-8, resulting in the transcriptional activation of these cytokines. Therefore, SIRT1 was proposed to deacetylate H3K9 and H4K16 in the promoter regions of the SASP factors IL-6 and IL-8, causing their transcriptional repression under normal, non-senescent conditions [76]. Epigenetic factors can also activate the pro-inflammatory signaling that underlies activation of the SASP, as opposed to directly modifying SASP gene loci as discussed above. This was found in the case of the methyltransferase mixed-lineage leukemia 1 (MLL1) during cellular senescence. The MLL1 protein activates the expression of proliferation-related cell cycle genes during senescence induction, causing hyper-replicative stress that triggers the DNA damage response. This results in activation of the NF-κB pro-inflammatory signaling pathway that drives SASP gene expression. Importantly, this study showed that inhibiting MLL1 suppresses SASP gene expression without causing senescent cells to escape the proliferative arrest, indicating the therapeutic potential of intervening with MLL1. This point was further highlighted by the fact that MLL1 inhibition was able to suppress the SASP and inflammation associated with cancer in vivo [77]. Separate from activating proinflammatory signaling, the DNA damage response has been shown to induce epigenetic changes that activate SASP gene expression during cellular senescence. Mechanistically, the DNA damage response activates proteasomal degradation of the histone methyltransferases G9a and G9a-like protein (GLP). This results in a decrease in transcriptionally repressive H3K9 dimethylation marks at key SASP gene promoters, leading to enhanced gene expression [78].
Interestingly, the DNA damage response can also become activated and promote the SASP following chromatin remodeling, independent of physical breaks in the DNA. In particular, inhibition of histone deacetylase 1 (HDAC1), which leads to hyperacetylation of histone and non-histone proteins, has been shown to increase the expression of a key proinflammatory SASP factor, osteopontin. Consequently, HDAC1 inhibition promotes a protumorigenic microenvironment and tumor growth in vivo [79].

Conclusions

Senescent cells undergo distinct epigenetic changes that serve several effector functions, which are summarized in Figure 1 and Table 1. Examples are the SAHF and SADS, which are formed following remodeling of facultative and constitutive heterochromatin, respectively. The SAHF are formed following exit from the cell cycle and aid in senescence induction by stabilizing the proliferative arrest [36,45]. The SAHF may also ensure cell survival during senescence induction by suppressing apoptosis [43]. Senescent cells appear to develop SADS prior to SAHF formation, although the link between these two epigenetic events is not completely understood. A possible role for the SADS may be to upregulate the expression of pericentric satellite transcripts that promote genomic instability and activate the DNA damage response to arrest proliferation [62]. Thus, SADS and SAHF formation appear to serve tumor-suppressive roles in senescent cells. However, it may be beneficial to augment the function of the SAHF during cellular senescence. For instance, the SAHF may be augmented to silence SASP genes in addition to proliferation-related genes following inhibition of HMGB2 [73]. It is also possible that inhibition of HMGB1 along with HMGB2 may yield a greater suppression of the SASP, considering the distinct pro-inflammatory role of HMGB1 [72].
In support of this notion, a small molecule inhibitor of HMGB1 and HMGB2 was shown to have anti-inflammatory effects in the context of microglia-mediated neuroinflammation [80]. However, the effects of such a compound remain to be determined in different contexts.

Figure 1. Overview of the epigenetic events and effectors during cellular senescence. Key epigenetic changes are the development of the SADS and SAHF. The formation of SADS is an early epigenetic change that may aid in the growth arrest by promoting genomic instability. The SAHF collaborates with other epigenetic effectors and has several functions. The HMGA proteins structurally support the SAHF and aid in the repression of proliferation-promoting genes, resulting in the proliferative arrest. The HMGB2 protein prevents the incorporation of SASP gene loci into the transcriptionally repressive SAHF, thereby promoting the SASP. Transcriptionally repressive marks at SASP gene loci can be made by SIRT1 and G9a/GLP. The expression of the SASP is promoted by proinflammatory signaling mediated by MLL1, macroH2A1, HDAC inhibition, and HMGB1. Senescent cells also undergo remodeling of the enhancer landscape that promotes the expression of the SASP, which is mediated by BRD4. These epigenetic mechanisms support the proliferative arrest of senescent cells, which accumulate and impair tissue function, leading to aging. Another long-term deleterious effect of senescent cells is the SASP, which activates chronic inflammation and promotes both aging and cancer. SAHF: Senescence-associated heterochromatic foci; SADS: Senescence-associated distention of satellites; SASP: Senescence-associated secretory phenotype; HMGA1/2: High mobility group A 1/2; HMGB1/2: High mobility group B 1/2; MLL1: Mixed-lineage leukemia 1; BRD4: Bromodomain-containing 4; SIRT1: Sirtuin-1; HDAC1: Histone deacetylase 1; GLP: G9a-like protein; DDR: DNA damage response.

Several epigenetic mechanisms mediate the SASP during cellular senescence.
Interestingly, many of the epigenetic effectors associated with these mechanisms have previously been shown to play a role in tumorigenesis and may lie at the interface between cancer and aging. For instance, many of the HMG proteins are overexpressed and support transcriptional reprogramming in different cancer cell types, and are associated with a poor prognosis [68,81,82]. The MLL1 protein undergoes chromosomal translocations and aberrantly upregulates genes related to development and the cell cycle, promoting tumorigenesis in several leukemias [83,84]. The BRD4 protein plays several roles in cancer by upregulating oncogenes, such as c-myc, and genes related to proliferation, apoptosis suppression, and inflammation [85,86]. Clinical trials using inhibitors against BRD4 and MLL1 in cancer are still underway; if they are successful, it will be interesting to determine whether attenuation of the protumorigenic effects of senescent cells, such as inhibition of the SASP, contributes to their success. Additionally, the targeting of senescent cells may be enhanced by combining epigenetic inhibitors with senolytic compounds, which has yet to be explored. Moreover, these types of studies raise the possibility of targeting epigenetic mechanisms of cellular senescence not only to treat cancer, but also pathologic states associated with tissue aging.

Acknowledgments: This work was supported by grants from the National Institutes of Health (NIH)/National Cancer Institute (NCI) (T32CA009171 to T. Nacarelli; R01CA160331 to R. Zhang).

Conflicts of Interest: The authors declare no conflicts of interest.
Towards a Better Understanding of Cohesin Mutations in AML

Classical driver mutations in acute myeloid leukemia (AML) typically affect regulators of cell proliferation, differentiation, and survival. The selective advantage of increased proliferation, improved survival, and reduced differentiation for leukemia progression is immediately obvious. Recent large-scale sequencing efforts have uncovered numerous novel AML-associated mutations. Interestingly, a substantial fraction of the most frequently mutated genes encode general regulators of transcription and chromatin state. Understanding the selective advantage conferred by these mutations remains a major challenge. A striking example is mutations in genes of the cohesin complex, a major regulator of three-dimensional genome organization. Several landmark studies have shown that cohesin mutations perturb the balance between self-renewal and differentiation of hematopoietic stem and progenitor cells (HSPC). Emerging data now begin to uncover the molecular mechanisms that underpin this phenotype. Among these mechanisms is a role for cohesin in the control of inflammatory responses in HSPCs and myeloid cells. Inflammatory signals limit HSPC self-renewal and drive HSPC differentiation. Consistent with this, cohesin mutations promote resistance to inflammatory signals, and may provide a selective advantage for AML progression. In this review, we discuss recent progress in understanding cohesin mutations in AML, and speculate whether vulnerabilities associated with these mutations could be exploited therapeutically.

INTRODUCTION

Hematopoietic homeostasis requires tight regulation to ensure production of sufficient numbers of blood cells at all stages of differentiation. This is achieved by a complex network of signaling pathways and gene regulatory mechanisms that control cell proliferation, differentiation, and survival of hematopoietic stem and progenitor cells (HSPC) and their progeny.
Skewing of this balance in favor of excessive differentiation results in stem cell depletion, exhaustion, and, eventually, an inability to replenish mature blood cells. In contrast, uncontrolled self-renewal, increased survival, and failure to differentiate are hallmarks of leukemia. The homeostatic balance between self-renewal and differentiation of HSPCs is sensitive to a broad range of perturbations. Mutations that disrupt it not only provide classifiers of clinical disease, but also offer insights into the molecular control of self-renewal, differentiation, and cell proliferation. Many AML-associated mutations are clearly linked to one of these categories, such as constitutive activation of RAS proteins or FLT3, which drives uncontrolled proliferation (1)(2)(3), mutations that prevent cell cycle arrest and apoptosis, such as in TP53 (4), and mutations that hinder differentiation, such as in the transcription factors RUNX1 or C/EBPα (5,6). The clonal advantage conferred by such mutations is immediately obvious. Recent large-scale sequencing studies have shown that the mutational landscape of AML is highly enriched for mutations in general transcriptional regulators and chromatin modifiers, which are found in ∼70% of patients (7). Examples of this group include mutations in proteins involved in chromatin modification (ASXL1, EZH2), DNA methylation (DNMT3A, TET2), or RNA splicing (SRSF2, U2AF1) (7)(8)(9). Although we understand the biological functions of many of these molecules and pathways in exquisite detail, their selective advantage for AML cells remains largely unknown (10). Understanding the link between these novel AML mutations and the molecular mechanisms of self-renewal, differentiation, and cell survival is critical for understanding the pathophysiology of AML, and for the identification of new therapeutic approaches to cancer (Figure 1). A striking example is mutations in the subunits of the cohesin protein complex (SMC1, SMC3, RAD21, and STAG1/2).
Cohesin forms a ring-shaped structure that can encircle DNA and hold sister chromatids together. This function of cohesin is essential for DNA replication (11)(12)(13), DNA repair (14)(15)(16)(17), and chromosome segregation in mitosis (18)(19)(20). Despite this essential role in cell cycle progression, heterozygous or hypomorphic cohesin mutations are compatible with cell proliferation (21). This explains how leukemic cells can tolerate cohesin mutations, but fails to explain why cohesin mutations occur with high frequency in AML. In addition to its essential functions in the cell cycle, cohesin has a major role in three-dimensional genome organization (22). Cohesin cooperates with the DNA-binding protein CTCF in the formation of topologically associated domains (TADs), which facilitate preferential interactions between genes and enhancers within the same CTCF-demarcated domain (23)(24)(25)(26). Impaired formation of these structures randomizes the three-dimensional topology across single cells (27), thus exposing genes and enhancers to illegitimate interactions (28). Here we review recent progress that links impaired cohesin function to the regulation of inflammatory gene expression, self-renewal, and differentiation of hematopoietic progenitors (29)(30)(31)(32)(33)(34), revealing potential explanations for why cohesin is recurrently mutated in AML. We speculate about the role of inflammatory gene expression in AML and its potential therapeutic implications.

The majority of RAD21 and STAG2 mutations cause nonsense, frame-shift, or splice-site changes, presumably leading to protein truncation or exon skipping. On the other hand, mutations in SMC1A and SMC3 are missense, causing amino acid substitutions in different protein domains (38). The effect of each of these mutations on the formation of the cohesin complex is still largely unexplored.
Some of the mutant transcripts can give rise to dominant-negative proteins in cord blood progenitors (32), while others result in degradation of the mutant transcript (38). Most cohesin mutations are heterozygous, consistent with the idea that complete loss of the complex is incompatible with cell cycle progression. This has been confirmed by studies showing that partial cohesin loss in AML cells is not linked to increased aneuploidy (29,36,38,40). However, since the Stag2 and Smc1a genes are on the X chromosome, male cases with mutations in these genes are hemizygous rather than heterozygous. In the case of STAG2-mutant cells, it has been shown that STAG1 becomes essential (41), suggesting that loss of STAG2 can be at least partially compensated for by STAG1. Cohesin mutations in patients appear to be mutually exclusive, indicating that a mutation in just one member of the complex is sufficient to reduce cohesin activity to the point where it provides a clonal advantage. Cohesin mutations often co-occur with mutations in other genes, such as NPM1, TET2, ASXL1, and EZH2 (36,38). Nonetheless, it is thought that the majority of cohesin mutations are early events in leukemogenesis (38,42). The prognostic significance of cohesin mutations in myeloid malignancies is not yet fully clear. In MDS, STAG2 mutations are associated with significantly reduced survival (37). However, no significant association between cohesin mutations and survival in AML was found in an early study (36), while a more recent study reports a significant association with increased overall survival and disease-free survival (43).

THE ROLE OF COHESIN IN EARLY HEMATOPOIESIS

The frequency of cohesin mutations in AML prompted several groups to investigate the contribution of cohesin to early hematopoiesis and myeloid differentiation (Table 1). A mouse model of conditional Smc3 heterozygosity (31) presented an altered composition of the hematopoietic stem cell (HSC) compartment.
Short-term HSCs and multipotent progenitor populations (MPP) were increased, while long-term HSCs were decreased (31). In competitive repopulation assays, cohesin-deficient cells outcompeted wild-type cells. An important aspect of this study was the demonstration that Smc3 heterozygosity on its own is not sufficient to trigger leukemic transformation. However, the combination of Smc3 heterozygosity and an internal duplication in the FLT3 receptor (one of the most common mutations in AML) induced acute myeloid leukemia in mice. This indicates that SMC3 mutations must cooperate with other mutations to cause leukemia in this model. In contrast, a mouse model of conditional Stag2 deletion presented features of myeloid dysplasia (44). These mice also had increased frequencies of both long-term and short-term HSCs, indicating that mutations in different cohesin subunits do not always cause the same phenotypes. Mouse models of shRNA-mediated knock-down of different cohesin subunits developed similar, but not identical, alterations in stem cell compartments (30). In the bone marrow there was a marked increase in granulocyte-macrophage progenitors (GMP), accompanied by a decrease in long-term and short-term HSCs. These models of cohesin deficiency did not develop acute myeloid leukemia. However, the mice displayed several features resembling a myeloid disorder, including splenomegaly and myeloid hyperplasia. In addition, cohesin-mutated mouse cells acquire increased self-renewal capacity in in vitro methylcellulose colony formation assays. Importantly, similar results were obtained with human cells (32,33). Cohesin-deficient cord blood progenitors or AML cell lines displayed reduced sensitivity to the differentiation-inducing effects of cytokines.
The same effect was observed by over-expressing cohesin genes carrying mutations identified in AML, indicating that these can act as dominant-negative mutants. These cells were also characterized by a higher frequency of CD34+ progenitors and increased self-renewal capacity in methylcellulose (32). In line with these findings, cohesin-deficient human blood progenitors have increased in vivo reconstitution capacity after transplantation into immunodeficient mice (33).

TRANSCRIPTIONAL CONSEQUENCES OF COHESIN MUTATIONS IN HSPCS

Given that AML-associated cohesin mutations do not affect genome integrity (38), the observed resistance to differentiation has been ascribed to the gene regulatory role of cohesin. In all models tested, transcriptional changes were mild (31,32). This is expected, as even complete removal of cohesin only changes the expression of 10% of genes (23). Previous reports suggested a role for cohesin in facilitating chromatin remodeling in human and mouse cells (46,47). Consistent with this, cohesin-deficient HSPCs present genome-wide alterations in chromatin accessibility (30)(31)(32)(33)(45). These changes broadly correlate with altered gene expression. Therefore, it has been proposed that defective chromatin accessibility impacts the normal dynamics of transcription factor binding, which leads to transcriptional deregulation and abnormal differentiation. An extreme case of chromatin alterations was observed in human cord blood cells, where dominant-negative cohesin mutations reduced chromatin accessibility genome-wide (32). Interestingly, a minority of sites, specifically binding sites for the transcription factors GATA2 and RUNX1, displayed increased accessibility. This has been proposed to result in an upregulation of HSPC transcriptional programs and obstruct differentiation. A role for cohesin in regulating RUNX1 expression has also been described in model organisms (48). How is cohesin linked to chromatin accessibility?
Cohesin binding sites are highly accessible (49). In yeast, cohesin cooperates with the chromatin structure remodeling complex (RSC) to actively evict nucleosomes and generate nucleosome-free DNA (50)(51)(52), which is required for cohesin loading (53). In mouse embryonic stem cells, depletion of a member of the PBAF complex (a vertebrate ortholog of RSC) results in sister-chromatid cohesion defects (54). In humans, cohesin is found in a complex with the ATP-dependent chromatin remodeling enzyme SNF2H (55). SNF2H, but not cohesin, is required for the establishment of arrays of phased nucleosomes around CTCF binding sites (56). Reduced cohesin dosage can also alter the frequency of chromatin interactions near transcription factor genes. One example is the transcriptional regulation of the lymphoid transcription factor Ebf1, which has four STAG2 binding sites in hematopoietic progenitors. In Stag2−/− mice, cis-interactions at this locus are lost, leading to abrogation of Ebf1 expression and failure to differentiate into lymphoid progenitors (44). Another mechanism that has been proposed to explain the increased self-renewal capacity of cohesin-deficient HSPCs is the derepression of the self-renewal transcription factor HOXA9 (29,57). HOXA9 is normally silenced by the Polycomb complex, which represses Hox loci in HSPCs through H3K27 trimethylation. In cohesin-deficient mouse HSPCs, this repressive chromatin mark is lost and Hoxa9 is upregulated, leading to increased self-renewal. This finding suggests that cohesin cooperates with Polycomb to silence Hox genes in HSPCs. Consistently, in mouse embryonic stem cells, cohesin complexes containing STAG2 (but not STAG1) contribute to the maintenance of chromatin interactions within Polycomb domains (58).

COHESIN IN THE CONTROL OF THE INFLAMMATORY RESPONSE

As discussed in the previous section, all studies that have compared wild-type and cohesin-deficient HSPCs found clear changes in chromatin accessibility and gene expression.
These changes may indicate a direct effect of cohesin, or, alternatively, they may reflect the less mature state of cohesin-deficient progenitor populations. In order to rule out this possibility and determine which genes are directly controlled by cohesin in myeloid cells, a recent study used terminally differentiated macrophages to allow a like-for-like comparison of transcriptional and chromatin state between wild-type and cohesin-deficient cells (34). This strategy uncovered a key role for cohesin in the regulation of inflammatory gene expression. Consistent with a body of knowledge demonstrating that cohesin is required for interactions within topological domains (23)(24)(25)(26), interactions between key upstream transcriptional regulators of the inflammatory response and their surrounding enhancers were decreased after acute cohesin depletion. As the organization of the inflammatory response is hierarchical, reduced levels of upstream regulators impact the network, and deregulation spreads to the majority of inducible genes. Importantly, reanalysis of HSPC gene expression data showed that cohesin also controls inflammatory gene expression in progenitor cells (34). Inflammatory signals not only mediate cross-talk between immune cells to coordinate the immune response, but also regulate the balance between HSPC self-renewal and differentiation. This function, known as emergency hematopoiesis, is normally activated during infection in order to regenerate mature myeloid cell populations (59). Several inflammatory cytokines and ligands, including interferons, are involved in the activation of emergency hematopoiesis. Type I interferon induces HSC exit from quiescence, entry into the cell cycle, and differentiation. Importantly, chronic exposure to type I interferon is detrimental to HSCs (60,61). Type II interferon, or IFNγ, also regulates HSC activity both in homeostasis and during infection (62).
IFNγ acts on a subset of HSCs to induce myeloid differentiation by activating transcription factors such as C/EBPβ (63). The interleukin IL-1 brings about myeloid differentiation through activation of an NF-κB-PU.1 axis (64). Activation of Toll-like receptor (TLR) signaling in HSPCs, which activates both the NF-κB and the interferon pathways, also promotes myeloid differentiation (65)(66)(67)(68)(69). As in the case of chronic interferon exposure, sustained TLR activation becomes detrimental and impairs the repopulating capacity of HSPCs. As cohesin is required to induce expression of inflammatory response genes, cohesin-deficient HSPCs are less prone to differentiate under inflammatory conditions [(34,45); Figure 2]. This acquired resistance to differentiation allows increased proliferation of immature progenitors, providing a possible explanation for some of the phenotypes displayed by cohesin-deficient mice. Cohesin mutations in AML therefore illustrate a mechanistic connection between the control of transcriptional regulation and the responsiveness to differentiation-inducing stimuli in myeloid cells. The selective advantage of mutations in other transcriptional regulators may potentially be explained by similar mechanisms involving the control of inflammatory signaling.

INFLAMMATORY GENE EXPRESSION IN AML

Consistent with the finding that cohesin regulates inflammatory gene expression in hematopoietic progenitors and myeloid cells, AML patient cells with cohesin mutations show a striking reduction in the expression of inflammatory and interferon pathway genes. This is the case when comparing AML with and without cohesin mutations across all samples in The Cancer Genome Atlas (TCGA), as well as within a specific histological subtype (34). These data suggest that the same mechanism that favors self-renewal in cohesin-deficient HSPCs through impaired sensitivity to inflammatory signals may operate in cohesin-deficient AMLs.
The implication is that in settings with increased inflammatory signaling, cohesin mutations could confer resistance to inflammatory signals and increased self-renewal and clonal expansion. Constitutively increased inflammatory signals are a hallmark of aging. Basal levels of pro-inflammatory cytokines such as IL6 or TNFα increase with age in healthy individuals (70). This leads to alterations in hematopoietic differentiation, which are reminiscent of emergency hematopoiesis: myelopoiesis-biased differentiation and reduced HSC self-renewal (71, 72). Consistent with its role in conferring resistance to inflammatory signals, competitive assays show that cohesin mutant HSCs become dominant over wild-type HSCs in aged mice (45). As clonal hematopoiesis is a feature of aging (73), it has been suggested that cohesin mutations could be positively selected during aging, eventually promoting a pre-leukemic state (45). This is in line with a report showing that cohesin mutations are early events, considered to be pre-leukemic (42). However, cohesin subunits are not among the top frequently mutated genes in cases where clones of hematopoietic cells carrying somatic mutations are found in the absence of any hematologic dysplasia, known as clonal hematopoiesis of indeterminate potential (CHIP) (73). Therefore, the emergence of cohesin mutations associated with aging may immediately lead to pre-leukemic dysplasias rather than CHIP. Further investigation is required to understand the role of cohesin mutations during aging.

FIGURE 2 | Cohesin regulates the balance between self-renewal and differentiation. Cohesin controls expression of pro-inflammatory genes that promote HSPC differentiation. In cohesin-mutant AML, inflammatory gene expression is downregulated, increasing resistance to differentiation and favoring HSPC self-renewal.

A number of previous studies have reported altered expression of cytokines and other inflammatory mediators in myeloid disorders (74-76). For example, FLT3-ITD + AML show increased expression of microRNA miR-155, which is known for its anti-inflammatory effects, its ability to inhibit interferon signaling, and to increase HSPC self-renewal in mouse models (77). Sensitivity of human AML cells to IFNγ is inversely related to RIP1/3 signaling. High levels of RIP1/3 signaling stabilize SOCS1, and SOCS1 antagonizes IFNγ signaling, effectively protecting AML cells from the differentiation-inducing effects of IFNγ (78). On the other hand, reduced RIPK3 expression is thought to reduce differentiation and TNFR-driven death of AML cells (79). Interleukin-1 (IL-1) inhibits growth of normal hematopoietic progenitors, but promotes expansion of AML cells by increasing p38 MAPK phosphorylation. This effect can be reversed by blocking IL-1 with p38 MAPK inhibitors (80). As an illustration of the complexity of the pathways involved in the regulation of inflammation and differentiation, TNF activates NF-κB via JNK in AML cells (81) and can dampen interferon signaling via SOCS1 (78). NF-κB is constitutively activated in CD34 + CD38 − AML cells (82), promoting leukemia stem cell survival and proliferation (77). Although exogenous interferon reduces in vitro self-renewal induced by RUNX1-ETO and RUNX1-ETO9a, interferon and interferon-stimulated genes are elevated by RUNX1-ETO in human and in murine models (83). Finally, the chromatin modifier TET2 is required for emergency myelopoiesis (84), and TET2 mutants show greater fitness in inflammatory environments partially due to increased resistance to TNF, which triggers IL6 overproduction and activation of an anti-apoptotic lncRNA (85, 86). Taken together, these studies link myeloid disorders with inflammation and indicate that AML may use a spectrum of different strategies for managing inflammatory signals.
INTERFERON TREATMENT IN AML

The interferon pathway is central to the inflammatory gene expression network, and it is heavily deregulated in cohesin-deficient macrophages (34). The deregulation of upstream interferon regulators like STAT1 and IRF7 disrupts basal interferon secretion, which maintains anti-viral transcriptional responsiveness by auto- and paracrine feed-forward signaling (87, 88). In the absence of cohesin, STAT- and IRF-dependent enhancers fail to be induced, and consequently most interferon-induced genes are deregulated. Importantly, both enhancer activation and constitutive interferon gene expression can be partially rescued with exogenous interferon (34). These findings provide grounds to speculate that cohesin-mutated AMLs could be particularly vulnerable to interferon treatment. Mechanistically, supplying exogenous interferon could partially rescue expression of upstream transcription factors and regulators of the pathway, enabling normal enhancer activation and transcription of downstream effectors. This would in turn increase the inflammatory responsiveness of cohesin-mutant AML cells, potentially restoring the balance between self-renewal and differentiation and restricting their selective advantage. There is a long history of using type I interferons to treat hematological malignancies, with varying degrees of success. While early studies were hampered by treatment-limiting side-effects, more recent recombinant and pegylated preparations are tolerated much better and can be used with feasible dosing regimens. There are currently established roles for interferon in the myeloproliferative disorders (89, 90), hypereosinophilic syndromes (91), and chronic myeloid leukemia (CML) (92). Intriguingly, in CML, interferon treatment appears to preferentially target the leukemic stem cell population, and can induce cytogenetic remissions, some of which are durable upon treatment withdrawal, suggesting that it can cure some patients (92).
Similar effects are observed in the JAK2 myeloproliferative disorders, with reduction or clearance of the mutant clones in up to 50% of patients (93). In acute leukemia, interferon treatment impairs proliferation of AML cell lines in vitro, and has anti-leukemic effects in patient-derived xenograft (PDX) models in a dose-dependent manner (94). This has been explained by cell-intrinsic effects of interferon on leukemic blasts (reduced proliferation, increased apoptosis, and reduced secretion of growth-promoting cytokines), increased immunogenicity of interferon-treated leukemic blasts, as well as immunomodulatory effects on the residual normal hematopoietic cells, and increased clearance by the host immune system. However, despite the encouraging preclinical data, the clinical outcomes in interferon trials in AML have been disappointing, with durable responses seen in only a small percentage of patients. While patients with secondary AML arising from a myeloproliferative disorder seem most susceptible, this is not exclusively the case. However, much of the clinical experience pre-dates the availability of current sequencing technologies, and so stratification of AML by mutation may reveal genetic susceptibilities to interferon treatment.

CONCLUSIONS

Many recently identified AML mutations are in genes encoding regulators of transcription and chromatin state. Understanding how these mutations are beneficial to cancer cell fitness is a major challenge (10). Regulators of transcription and chromatin state usually regulate the expression of hundreds or thousands of genes, which complicates the task of pinpointing the target genes that are responsible for the increased fitness of mutated cells. Mutations in subunits of the cohesin complex result in clear alterations in the hematopoietic stem cell compartment and in HSPC function (29-33, 45).
However, the specificity of cohesin control on HSPC gene expression has been difficult to accommodate with current models of cohesin function. The transcriptional control of inducible gene expression provides a possible explanation for the high frequency of cohesin mutations in myeloid malignancies. Inflammatory signaling promotes the differentiation of HSPCs toward a myeloid fate (59), and cohesin-deficient cells show increased resistance to these differentiation-inducing stimuli (34, 45). In bone marrow microenvironments with alterations in cytokine levels, such as those found in aging (70), myelodysplastic syndrome (MDS) (95) or leukemia (75, 76), mutations that confer reduced responsiveness to differentiation-inducing signals are likely to be positively selected and clonally expanded (45). This is consistent with observations that cohesin mutations appear early in the history of AML (42), and that cohesin mutations by themselves alter the composition of the HSPC compartment but are insufficient to trigger AML (31). AML has an inherently poor prognosis, and even with intensive chemotherapy or hematopoietic stem cell transplantation, the risk of relapse remains high. AML is a highly heterogeneous disease by morphological, clinical, and genetic criteria, underlining the need for targeted approaches. For a subset of recurrent mutations, such as FLT3, specific inhibitors are in clinical use (2). For others, like cohesin mutations, greater understanding of the molecular circuitries involved in the increased fitness of mutated cells is necessary to find vulnerabilities and new therapeutic approaches.
The poplar Phi class glutathione transferase: expression, activity and structure of GSTF1

Glutathione transferases (GSTs) constitute a superfamily of enzymes with essential roles in cellular detoxification and secondary metabolism in plants as in other organisms. Several plant GSTs, including those of the Phi class (GSTFs), require a conserved catalytic serine residue to perform glutathione (GSH)-conjugation reactions. Genomic analyses revealed that terrestrial plants have around ten GSTFs, eight in the Populus trichocarpa genome, but their physiological functions and substrates are mostly unknown. Transcript expression analyses showed a predominant expression of all genes both in reproductive (female flowers, fruits, floral buds) and vegetative organs (leaves, petioles). Here, we show that the recombinant poplar GSTF1 (PttGSTF1) possesses peroxidase activity toward cumene hydroperoxide and GSH-conjugation activity toward model substrates such as 2,4-dinitrochlorobenzene, benzyl and phenethyl isothiocyanate, 4-nitrophenyl butyrate and 4-hydroxy-2-nonenal, but interestingly not on previously identified GSTF-class substrates. In accordance with analytical gel filtration data, the crystal structure of PttGSTF1 showed a canonical dimeric organization with bound GSH or 2-(N-morpholino)ethanesulfonic acid molecules. The structure of these protein-substrate complexes allowed delineating the residues contributing to both the G and H sites that form the active site cavity. In sum, the presence of GSTF1 transcripts and proteins in most poplar organs, especially those rich in secondary metabolites such as flowers and fruits, together with its GSH-conjugation activity and its documented stress-responsive expression, suggests that its function is associated with the catalytic transformation of metabolites and/or peroxide removal rather than with ligandin properties as previously reported for other GSTFs.
Along with GSTUs, plant GSTFs have been extensively studied for their involvement in herbicide detoxification, and for this reason they could be considered the counterparts of the mammalian drug-metabolizing GSTs. By catalyzing GSH-conjugation reactions of electrophilic molecules that are subsequently recognized by vacuolar ABC transporters, GSTFs participate in the vacuolar sequestration and thus the detoxification of exogenous compounds. However, other biochemical activities can account for the observed increased herbicide resistance. For instance, it was shown that the GSTF1 from the black grass Alopecurus myosuroides, a weed of cereals, possesses a glutathione peroxidase activity which lowers the levels of hydroperoxides produced in response to herbicides (Cummins et al., 1999). Arabidopsis thaliana transgenic plants expressing this GSTF1 acquire multiple herbicide resistance and accumulate protective flavonoids as initially observed in the black grass (Cummins et al., 2009, 2013). Another facet of GSTs is their involvement in secondary metabolism, in stress response and in the associated signaling. For instance, A. thaliana GSTF6 is required for the synthesis of the defense compound camalexin, by catalyzing the conjugation of glutathione onto indole-3-acetonitrile (Su et al., 2011), whereas A. thaliana GSTF2 binds tightly to camalexin and might be required for its transport (Dixon et al., 2011). On the other hand, A. thaliana GSTF8 catalyzes glutathione conjugation to prostaglandin 12-oxophytodienoic acids and A1-phytoprostanes, two stress signaling molecules (Mueller et al., 2008). Consistent with these functions, the expression of GST genes belonging to all classes is often highly induced in response to biotic and abiotic stresses or to hormone treatments, and this is often correlated with an increase in the protein amount.
For instance, the expression of several GSTF genes is enhanced in response to plant hormones such as ethylene, methyl jasmonate, salicylic acid and auxin, to herbicides and to herbicide safeners, to pathogen infection, and more generally to treatments leading to oxidative stress (DeRidder et al., 2002; Wagner et al., 2002; Lieberherr et al., 2003; Smith et al., 2003, 2004; Sappl et al., 2004, 2009). Interestingly, previous biochemical analyses have shown that GSTFs can bind to metabolites for non-catalytic functions. The best characterized example of carrier/transport functions for a Phi GST concerns the requirement of A. thaliana transparent testa 19 (tt19)/AtGSTF12 and of the petunia ortholog AN9 for the correct vacuolar localization of anthocyanins and proanthocyanidins (Alfenito et al., 1998; Kitamura et al., 2004). While it was initially thought that these GSTFs could catalyze GSH-conjugation reactions, it was determined that they serve as flavonoid carrier proteins (Mueller et al., 2000). Moreover, photoaffinity-labeling experiments or competition activity assays pointed to the capacity of GSTFs to bind plant hormones such as gibberellic acid (Axarli et al., 2004), cytokinin and auxin (Bilang et al., 1993; Bilang and Sturm, 1995; Gonneau et al., 2001). A screen for metabolites able to bind to A. thaliana GSTF2, either from pure molecules or from plant or Escherichia coli extracts, also identified other interacting molecules. Besides camalexin, flavonoids (quercetin, quercetin-3-O-rhamnoside and kaempferol) and other heterocyclic compounds structurally close to flavonoids (harmane, norharmane, indole-3-aldehyde, and lumichrome) have been shown to bind to AtGSTF2 (Smith et al., 2003; Dixon et al., 2011). The absence of GSH-conjugation activity with these compounds indicated that AtGSTF2 functions as a carrier protein.
Moreover, competition binding experiments or activity assays in the presence of several of these binding molecules showed that they either did not alter AtGSTF2 conjugating activity or even increased it, hinting at the existence of multiple ligand/substrate binding sites. At the structural level, GSTFs exist as homodimers, the dimerization interface involving mainly hydrophobic surface patches (Armstrong, 1997). Each monomer comprises an active site region formed by a glutathione binding pocket (G-site), primarily involving residues from the conserved N-terminal thioredoxin domain, and a hydrophobic pocket (H-site), primarily involving residues from the less conserved C-terminal domain (Prade et al., 1998). In their active sites, most GSTFs present a serine residue located in the N-terminal end of the α1 helix, which promotes the formation of the active thiolate anion on the sulfhydryl group of the cysteine of GSH that is required for catalysis. However, the non-catalytic functions observed for some GSTFs suggested the existence of a ligandin site (L-site), but structural details of the latter are still lacking. From mutagenesis experiments performed on Zea mays GST-I, the L-site likely overlaps with the G- and H-sites (Axarli et al., 2004). In this study, the transcript levels of the eight poplar GSTFs have been analyzed in various organs. Then, the biochemical and structural properties of the stress-responsive GSTF1 have been further characterized, examining the enzymatic properties of recombinant proteins (WT protein and variants mutated for the catalytic serine) and solving the 3D structure of the protein in complex with substrates/ligands.

GENOMIC AND PHYLOGENETIC ANALYSES

In order to identify all poplar GSTF genes, homology searches with the BLAST algorithm have been performed on the different versions of the P. trichocarpa genome, including the version 3.0 available on the phytozome v10 portal (http://phytozome.jgi.doe.gov/pz/portal.html).
Genome analyses for other terrestrial plants have also been performed on the phytozome v10 portal, whereas cyanobacterial and algal genomes have been analyzed from cyanobase (http://genome.microbedb.jp/cyanobase) and the JGI genome portal (http://genome.jgi.doe.gov), respectively. The protein sequences and corresponding accession numbers can be found as Supplementary Table 1. When possible, GSTF sequences were corrected on the basis of available ESTs.

RT-PCR EXPERIMENTS

Total RNAs were extracted from 150 mg of P. trichocarpa stamens, male flowers, female flowers, fruits, petioles, leaves, buds, and roots using the RNeasy Plant Mini Kit (Qiagen) according to the manufacturer's instructions with minor modifications described before (Lallement et al., 2014b). Then, mRNAs were reverse-transcribed to obtain cDNAs by using the iScript cDNA Synthesis kit (Bio-Rad) following the manufacturer's instructions. PCR amplifications were performed for 25, 30, or 35 cycles using Go-Taq polymerase (Promega). Specific forward and reverse primers (Supplementary Table 2) have been designed to amplify ca 300 bp fragments of each GSTF gene. The ubiquitin gene (Potri.015G013600) was used as a control of the cDNA concentration used for PCR amplification and incidentally of cDNA integrity (Lallement et al., 2014b). The PCR products have been separated by electrophoresis on 1% agarose gel and visualized by ethidium bromide staining.

PROTEIN EXTRACTION AND WESTERN-BLOT ANALYSIS

Extraction of soluble proteins from leaves, petioles, stems, roots, fruits, stamens, and buds or from rust-infected leaves was performed as previously described (Vieira Dos Santos et al., 2005). The proteins were separated by 15% SDS-PAGE and electro-transferred onto nitrocellulose membranes (LI-COR Biosciences).
After rinsing in 137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, and 2 mM KH2PO4 buffer (phosphate buffered saline: PBS), membranes were blocked during 45 min at room temperature using the Odyssey blocking buffer (LI-COR Biosciences). Then, membranes were incubated with rabbit polyclonal antibodies (diluted 1:1000, synthesis by Genecust) raised against PttGSTF1 for 30 min in the presence of 0.05% of Tween 20. After several washing steps with a PBS buffer supplemented with 0.05% Tween 20 (PBST), membranes were incubated for 30 min with IRDye 800 CW goat or donkey anti-rabbit secondary antibodies (LI-COR Biosciences) diluted 1:5000 in the Odyssey blocking buffer supplemented with 0.05% Tween 20 and 0.01% SDS. After extensive washes with PBST and PBS, immunodetection of proteins on the membrane was performed by exciting the IRDye with an Odyssey Infrared Imager (LI-COR Biosciences).

PCR CLONING AND SITE-DIRECTED MUTAGENESIS

The sequence coding for GSTF1 was amplified by PCR from Populus tremula × P. tremuloides leaf cDNAs using specific forward and reverse primers (Supplementary Table 2) and cloned into pET-3d between NcoI and BamHI restriction sites. Hence, the sequence is subsequently referred to as PttGSTF1. PttGSTF1 S13C and PttGSTF1 S13A variants, in which the serine found at position 13 is substituted by cysteine or alanine, were generated by site-directed mutagenesis using two complementary mutagenic primers (Supplementary Table 2). Two overlapping mutated fragments were generated in a first PCR reaction and were subsequently used in a second PCR to generate the full-length mutated sequences, which were then cloned into pET-3d.

HETEROLOGOUS EXPRESSION IN E. COLI AND PURIFICATION

PttGSTF1 expression was performed in an E. coli BL21 (DE3) strain (Novagen) containing the pSBET plasmid upon transformation with the recombinant pET-3d plasmids. Bacteria were cultivated at 37 °C in LB medium containing kanamycin (50 μg/ml) and ampicillin (50 μg/ml).
When the cell culture reached an OD600nm of 0.7, PttGSTF1 expression was induced by the addition of 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) and cells were further grown for 4 h. Cells were harvested by centrifugation, resuspended in a 30 mM Tris-HCl pH 8.0, 1 mM EDTA, 200 mM NaCl buffer and stored at −80 °C. Cell lysis was achieved by two rounds of 1 min sonication. The cell extract was then centrifuged at 40,000 g for 30 min at 4 °C to remove cellular debris and aggregated proteins. The fraction precipitating between 40 and 80% of the saturation in ammonium sulfate was subjected to a size-exclusion chromatography by loading the protein extract on an Ultrogel® ACA44 (5 × 75 cm, Biosepra) column equilibrated with 30 mM Tris-HCl pH 8.0, 200 mM NaCl buffer. The fractions containing the recombinant protein were then pooled, dialyzed by ultrafiltration in Amicon cells using a YM10 membrane (Millipore) and loaded onto a DEAE-cellulose column (Sigma Aldrich) equilibrated in 30 mM Tris-HCl pH 8.0. The proteins were eluted using a 0-400 mM NaCl gradient, concentrated by ultrafiltration and stored in 30 mM Tris-HCl pH 8.0, 200 mM NaCl buffer. The protein purity was then analyzed by 15% SDS-PAGE and the protein concentration was determined after measuring the absorbance at 280 nm, using a theoretical molar absorption coefficient of 33,982 M−1 cm−1 for PttGSTF1, PttGSTF1 S13C, and PttGSTF1 S13A.

DETERMINATION OF THE MOLECULAR MASS AND OLIGOMERIZATION STATE OF PURIFIED RECOMBINANT PROTEINS

The molecular masses of purified recombinant proteins were analyzed using a Bruker microTOF-Q spectrometer (Bruker Daltonics, Bremen, Germany) equipped with an Apollo II electrospray ionization source as described previously (Couturier et al., 2011).
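The A280-based concentration determination described above is a direct application of the Beer-Lambert law (A = ε·c·l). A minimal sketch of that arithmetic, using the molar absorption coefficient quoted in the text for PttGSTF1; the absorbance reading and the 1 cm path length below are illustrative assumptions:

```python
# Beer-Lambert law: A = epsilon * c * l, hence c = A / (epsilon * l).
# EPSILON is the theoretical molar absorption coefficient quoted in the
# text for PttGSTF1; the A280 value used below is purely illustrative.

EPSILON = 33982.0    # M^-1 cm^-1, for PttGSTF1 (from the text)
PATH_CM = 1.0        # assumed 1 cm cuvette path length

def molar_concentration(a280, epsilon=EPSILON, path_cm=PATH_CM):
    """Protein concentration in mol/L from the absorbance at 280 nm."""
    return a280 / (epsilon * path_cm)

def micromolar(a280):
    """The same value expressed in micromolar, as commonly reported."""
    return molar_concentration(a280) * 1e6
```

For example, an A280 reading of 0.68 corresponds to roughly 20 µM protein with this coefficient.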
The oligomerization state of purified recombinant proteins was analyzed on a Superdex 200 10/300 column equilibrated in 30 mM Tris-HCl pH 8.0, 200 mM NaCl and connected to an Akta purifier system (GE Healthcare) by injecting 100 μg of purified recombinant proteins at a flow rate of 0.5 ml/min. The column was calibrated using the molecular weight standards (6500-700,000 Da) from Sigma.

For all these assays, reactions were started by the addition of the enzyme, and the protein concentrations used were within the linear response range. The measured velocities were corrected by subtracting the rate of the spontaneous non-enzymatic reaction, and three independent experiments were performed at each substrate concentration. Changes in absorbance were followed with a Cary 50 spectrophotometer (Agilent Technologies). The kinetic parameters (kcat and apparent Km) were obtained by fitting the data to the non-linear regression Michaelis-Menten model in GraphPad Prism 5 software. The kcat values are expressed as μmol of substrate oxidized per second per μmol of enzyme (i.e., the turnover number in s−1), using specific molar absorption coefficients of 6220 M−1 cm−1 at 340 nm for NADPH, 8890 M−1 cm−1 at 274 nm for PITC, 9250 M−1 cm−1 at 274 nm for BITC, 9600 M−1 cm−1 at 340 nm for CDNB, 17,700 M−1 cm−1 at 412 nm for PNP-butyrate and 13,750 M−1 cm−1 at 224 nm for HNE.

CRYSTALLIZATION AND STRUCTURE DETERMINATION OF PttGSTF1 AND PttGSTF1 S13C

Initial screening of crystallization conditions was carried out by the microbatch-under-oil method. Sitting drops were set up using 1 μl of a 1:1 mixture of protein and crystallization solutions (672 different commercially available conditions) in Terasaki microbatch multiwell plates (Molecular Dimensions). The crystallization plates were stored at 4 °C. Single crystals of sufficient size were obtained using the Jena Bioscience 2D1 condition (30% w/v PEG 4000, 100 mM 2-(N-morpholino)ethanesulfonic acid (MES) sodium salt, pH 6.5).
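The Michaelis-Menten fitting step described in the activity assay paragraph above can be sketched as follows. The paper used GraphPad Prism 5; this illustration uses scipy.optimize.curve_fit instead, and the data points are synthetic (generated from assumed Vmax = 10 s−1 and Km = 0.5 mM), purely to show the non-linear regression step:

```python
# Non-linear regression of initial velocities to the Michaelis-Menten
# model v = Vmax * [S] / (Km + [S]).  The substrate concentrations and
# rates below are synthetic, not data from the paper.

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])  # substrate, mM
v = michaelis_menten(s, 10.0, 0.5)                    # synthetic rates, s^-1

# Fit returns the optimal (Vmax, Km) and their covariance matrix.
(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, s, v, p0=(1.0, 1.0))
```

With noiseless synthetic data the fit recovers the generating parameters; with real triplicate velocities, the covariance matrix gives the standard errors on kcat and Km.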
Best crystals were obtained with a protein concentration of 14 mg/ml for PttGSTF1 and of 10 mg/ml for PttGSTF1 S13C. The single crystals were flash-cooled in liquid nitrogen using a mixture of the crystallization solution and 20% glycerol as cryoprotectant. For PttGSTF1, before crystallization, the protein (ca 1 ml at 600 μM) was treated with 10 mM GSH for 30 min, desalted on G25 columns and concentrated to the indicated concentration using Amicon Ultra centrifugal filters (Ultracel 10K membrane, Millipore). PttGSTF1 X-ray diffraction data were collected on beamline EMBL-X11 at the DORIS storage ring (DESY, Hamburg, Germany) and PttGSTF1 S13C X-ray diffraction data were collected on beamline BM30A at the ESRF synchrotron (Grenoble, France). PttGSTF1 and PttGSTF1 S13C diffraction images were integrated with the program HKL2000 (Otwinowski and Minor, 1997) and the program XDS (Kabsch, 2010), respectively. Crystallographic calculations were carried out with programs from the CCP4 program suite (Winn et al., 2011). The structure of PttGSTF1 was solved by the molecular replacement method with the program Molrep (Vagin and Teplyakov, 2010) using A. thaliana GSTF2 as a template (PDB code: 1GNW). PttGSTF1 and PttGSTF1 S13C structures were refined by alternate cycles of restrained maximum-likelihood refinement with the program Phenix (Adams et al., 2010), and manual adjustments were made to the models with Coot (Emsley et al., 2010). The crystal parameters, data statistics, and final refinement parameters are shown in Table 1. All structural figures were generated with the PyMOL Molecular Graphics System (Schrödinger, LLC). The atomic coordinates and structure factors (codes 4RI6 and 4RI7 for PttGSTF1 and PttGSTF1 S13C, respectively) have been deposited in the Protein Data Bank, Research Collaboratory for Structural Bioinformatics, Rutgers University, New Brunswick, NJ (http://www.rcsb.org/).

PHYLOGENETIC AND SEQUENCE ANALYSES OF P. TRICHOCARPA GSTFs

In silico analysis of the various versions of the P. trichocarpa genome led to the identification of eight genes coding for GSTFs. All the P. trichocarpa GSTF genes encode predicted proteins with a size ranging from 213 to 218 amino acids (Figure 1). None of these sequences exhibits a targeting sequence, suggesting a cytosolic localization. Based on sequence similarities and phylogenetic analysis, four subgroups can be distinguished in poplar: PtGSTF1/2, PtGSTF3/7, PtGSTF4/5/6, and PtGSTF8 (Figures 1, 2). In terms of sequence similarity, the percentage identity within a subgroup ranges from 65 to 98%, whereas it ranges between 40 and 48% between subgroups. The protein similarity somehow reflects the gene arrangement, since the PtGSTF1 and PtGSTF2 genes are present in tandem on chromosome 2, the PtGSTF4, 5, 6, and 7 genes cluster on scaffold 36, whereas PtGSTF3 and PtGSTF8 are found at isolated loci on chromosomes 14 and 17, respectively. Hence, the only peculiarity is the genomic association of PtGSTF7 with PtGSTF4, 5, 6, whereas its sequence proximity to PtGSTF3 suggests a common origin. The sequence differences between members of each group are also visible by looking at the four-amino-acid signature typical of proteins of the thioredoxin superfamily, which contains the catalytic serine. Indeed, PtGSTF1 and PtGSTF2 display a STAV active site motif, PtGSTF3 and 7 a STCT motif, and PtGSTF4, 5, and 6 display STAA or STNT motifs (Figure 1). PtGSTF8 is clearly atypical since it has an alanine (AVCP motif) instead of the catalytic serine. This is not specific to the poplar isoform, as this feature is found in several plant orthologs, including the petunia AN9 protein for example. An exhaustive search of GSTF homologs in available genomes from photosynthetic organisms indicated that GSTF genes are absent in cyanobacteria and green algae.
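The within- and between-subgroup identity figures quoted above are pairwise percent identities computed over an alignment. A hypothetical helper illustrating how such a value is typically computed (identical positions over aligned, non gap-gap positions); this is an illustration, not the paper's exact procedure:

```python
# Pairwise percent identity between two sequences taken from the same
# alignment: identical columns / aligned columns, ignoring columns that
# are gaps in both sequences.  Hypothetical helper for illustration only.

def percent_identity(aln_a, aln_b):
    if len(aln_a) != len(aln_b):
        raise ValueError("sequences must come from the same alignment")
    identical = aligned = 0
    for a, b in zip(aln_a, aln_b):
        if a == "-" and b == "-":
            continue  # gap in both sequences: column not counted
        aligned += 1
        if a == b:
            identical += 1
    return 100.0 * identical / aligned
```

For two short aligned fragments such as "MSTLK-LG" and "MSTIKQLG", six of the eight aligned positions are identical, giving 75% identity; conventions differ (e.g. whether gapped columns are counted), so reported values depend on the chosen definition.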
On the other hand, there are considerable variations in the number of genes in terrestrial plants, since there is only one gene in Selaginella moellendorffii but 27 predicted genes in Aquilegia coerulea (Supplementary Table 1). However, the average number of genes is close to 10. A phylogenetic tree constructed using the 400 retrieved sequences (Figure 2) shows several distinct clades that can be distinguished according to the protein active site signature, even though some groups can also be differentiated on the basis of the presence of C-terminal or N-terminal extensions. The sequences identified in P. patens and S. moellendorffii, which are supposed to represent the ancestral versions, form an isolated clade and do not display a clear consensus active site motif. The four subgroups observed for poplar GSTFs are found again in the phylogenetic tree and fall within separate clades. It is worth noting that PtGSTF8 stands out within a clade containing proteins lacking the catalytic serine but displaying a conserved cysteine residue two residues away (AxC motif). Interestingly, this cysteine is also found in PtGSTF3 and PtGSTF7 and in all orthologs of the same clade, whereas the catalytic serine is present. Overall, this indicates that numerous species-specific duplication events occurred during evolution, and this raises the question of the origin of the GSTF group in photosynthetic organisms, since it appears to be an innovation specifically found in terrestrial plants. Moreover, the divergences observed in the active site signatures suggest that the proteins may have different properties.

TRANSCRIPT EXPRESSION OF GSTFs IN POPLAR ORGANS

In order to determine whether the expression territories could allow discriminating GSTF genes, RT-PCR experiments were performed from different tissues of an adult, naturally-growing P. trichocarpa individual.
Experiments were performed with 25, 30, or 35 amplification cycles in order to examine gene expression in the linear range of PCR amplification. The best detection was obtained after 30 cycles, as transcripts were barely detected at 25 cycles whereas the signal for some genes was saturated at 35 cycles. All GSTF transcripts were weakly detected in roots and in the male reproductive organ, either as a whole (male flower) or in stamens, whereas they were all detected in female flowers, fruits, petioles, leaves, and buds (Figure 3). Moreover, the GSTF1, F2, F5, F6, and F7 genes are globally more expressed than the GSTF3, F4, and F8 genes. Comparing the expression of duplicated genes, we observed that they generally have the same expression profiles, although variations in transcript abundance can sometimes be observed. A difference between PtGSTF1 and PtGSTF2 transcripts is the presence of PtGSTF2 in male flowers. In the PtGSTF4/F5/F6 subgroup, and incidentally among all GSTFs tested, PtGSTF5 is the most expressed in male flowers/stamens and, together with PtGSTF7, in roots.

FIGURE 2 | Unrooted phylogenetic tree of GSTFs from terrestrial plants. The alignment was performed with PROMALS3D using the 1BYE, 1AXD, 1AW9, 1GNW, and 1BX9 protein structure models as templates. The alignment was subsequently manually adjusted using Seaview software. The phylogenetic tree was built with BioNJ and edited with Figtree software (http://tree.bio.ed.ac.uk/software/figtree/). Five hundred bootstrap replicates were performed in order to test the robustness of the tree. The scale marker represents 0.05 substitutions per residue. Sequence names have been removed for clarity, but all sequences used are available as Supplementary Table S1. For each major branch, the consensus active site signature containing the catalytic residue is indicated; x is used when the variability is too high. P. trichocarpa isoforms are indicated by an arrow on the tree (F1-F8).
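The bootstrap replicates mentioned in the Figure 2 legend are generated by resampling alignment columns with replacement; a tree is then rebuilt from each replicate and the support of a branch is the fraction of replicate trees containing it. A minimal sketch of the resampling step, on a toy alignment rather than the 400-sequence dataset:

```python
import random

def bootstrap_alignment(alignment, rng):
    """Resample alignment columns with replacement (one bootstrap replicate).

    `alignment` is a list of equal-length aligned sequences; each replicate
    keeps the same number of columns, so a tree can be rebuilt from it.
    """
    n_cols = len(alignment[0])
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

# Toy alignment, for illustration only
aln = ["STAVLG", "STCTLG", "STNTLG"]
rng = random.Random(0)
replicates = [bootstrap_alignment(aln, rng) for _ in range(500)]
# Each replicate preserves the sequence count and the alignment length
print(len(replicates), len(replicates[0]), len(replicates[0][0]))
```

Tree reconstruction per replicate (BioNJ in the authors' case) is omitted here; only the column-resampling principle is shown.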
PtGSTF1 PROTEIN ACCUMULATES IN ALL ORGANS ANALYZED BUT ITS LEVEL IS NOT AFFECTED IN LEAVES INFECTED BY THE RUST FUNGAL PATHOGEN MELAMPSORA LARICI-POPULINA

In the subsequent parts, we focused our analysis on poplar GSTF1, since several studies showed that it is regulated under many stress conditions. For instance, it is up-regulated in poplar leaves exposed to the tent caterpillar Malacosoma disstria (Ralph et al., 2006), in root apices of drought-sensitive (Soligo) and drought-tolerant (Carpaccio) poplar cultivars and in leaves of the Carpaccio cultivar subjected to a water deficit (Cohen et al., 2010), and in leaves of 2-month-old P. trichocarpa cuttings treated with CDNB or H2O2 (Lan et al., 2009). Contrasting results have been obtained in the case of poplar infection by rust fungi: GSTF1 was found to be up-regulated in some (Miranda et al., 2007) but not all studies (Rinaldi et al., 2007; Azaiez et al., 2009). Besides, proteomic studies pointed to an increased GSTF1 protein level in roots of Populus tremula exposed to cadmium stress (Kieffer et al., 2009) and in leaves of Populus cathayana male cuttings exposed to chilling or salt stresses (Chen et al., 2011; Zhang et al., 2012). Hence, taking advantage of the production of the recombinant protein (see below), we raised an antibody against GSTF1, first to investigate its protein level in several poplar organs, i.e., leaves, petioles, stems, roots, fruits, stamens, and buds, by western blotting (Figure 4A). A major band around 25 kDa, likely corresponding to GSTF1, was detected in protein extracts from various organs, indicating that the protein is present in many tissues, though higher protein amounts were found in leaves, petioles, stems, roots, and stamens. Considering that GSTF2 is a close paralog, it is possible that the detected signal represents the sum of both GSTFs.
Next, considering the discrepancies observed at the transcript level as detailed above, we sought to evaluate GSTF1 protein abundance in a poplar-rust pathosystem. The model used was P. trichocarpa × P. deltoides leaves either untreated or inoculated with two M. larici-populina isolates, virulent or avirulent, leading to compatible and incompatible reactions, respectively (Figure 4B). However, no significant variation in protein abundance was detected over a 7-day time-course infection, which represents a whole asexual cycle from spore germination to urediniospore formation. This result suggests that GSTF1 protein levels are not affected by M. larici-populina infections.

POPLAR GSTF1 IS A HOMODIMERIC PROTEIN WITH GSH-CONJUGATING ACTIVITIES

In order to investigate the biochemical and structural properties of GSTF1, the mature form was expressed in E. coli, as well as single-mutation protein variants in which the catalytic serine was replaced by a cysteine or an alanine residue. Having used a P. tremula × P. tremuloides leaf cDNA library, the amplified coding sequence, which is identical to the DN500362 EST sequence, is slightly different from the GSTF1 version found in the P. trichocarpa reference genome. Hence, the sequence will be referred to as PttGSTF1 in the following parts, for P. tremula × P. tremuloides GSTF1. At the protein level, two very conservative changes are present: Ile33 is replaced by a Val and Lys86 by an Arg. After purification, around 30 mg of protein was obtained per liter of culture. The purified proteins were first analyzed by mass spectrometry. A single species was detected for each protein, with molecular masses of 24192, 24511, and 24172 Da for PttGSTF1, PttGSTF1 S13C, and PttGSTF1 S13A, respectively (Supplementary Table 3). Compared to theoretical masses, these values are compatible with proteins in which the N-terminal methionine is cleaved, which was expected from the presence of an alanine as the second residue.
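The mass bookkeeping behind such assignments can be checked from standard average masses: N-terminal Met cleavage removes one methionine residue mass, and a glutathione attached through a disulfide bridge adds the mass of GSH minus two hydrogens. A minimal sketch of this arithmetic (illustrative, not the authors' calculation):

```python
# Standard average masses in Da
GSH = 307.32          # free glutathione
H = 1.008             # hydrogen atom
MET_RESIDUE = 131.20  # mass removed by N-terminal Met cleavage

# A glutathione bound via a disulfide bridge loses two hydrogens,
# so the expected mass increment of the adduct is:
gsh_adduct = GSH - 2 * H
print(round(gsh_adduct, 1))   # close to the 305 Da increment seen for S13C

# Met cleavage lowers every construct's mass by the same amount:
print(MET_RESIDUE)
```

This is why a +305 Da shift is diagnostic of a disulfide-linked glutathione rather than, for example, a non-covalently bound one (which would not survive denaturing mass spectrometry).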
www.frontiersin.org December 2014 | Volume 5 | Article 712 | 7

A mass increment of 305 Da was specifically present in PttGSTF1 S13C, which suggested that a glutathione molecule is covalently bound to the newly introduced cysteine residue via a disulfide bridge. Accordingly, PttGSTF1 S13C is not retained on GSH Sepharose columns, contrary to PttGSTF1 and PttGSTF1 S13A. Then, the oligomeric state of the wild-type and mutated proteins was estimated using calibrated size exclusion chromatography. All purified proteins eluted as a single peak whose estimated mass (45-47 kDa) is consistent with a dimeric arrangement (Supplementary Table 3), as reported for example for Arabidopsis GSTF2 or maize GST-I proteins (Reinemer et al., 1996; Neuefeind et al., 1997a). Next, in order to characterize the enzymatic properties of PttGSTF1, its activity was measured toward various model substrates (CDNB, BITC, PITC, PNP-butyrate, and HNE) usually employed to measure the activities of GSTs catalyzing GSH-conjugation reactions (Table 2). An activity was detected toward all these substrates, with catalytic efficiencies (kcat/Km) ranging from 6.6 × 10^2 M^-1 s^-1 for PNP-butyrate to 3.1 × 10^3 M^-1 s^-1 for HNE. The slightly better catalytic efficiency obtained for HNE compared to the other substrates is due to a better affinity of PttGSTF1 for this substrate. On the other hand, the lower efficiency observed with PNP-butyrate is due to a weak turnover number (kcat). The kinetic parameters for the two tested isothiocyanate derivatives were in the same range. The roughly twofold difference in apparent Km values indicates that variations in the aromatic group (benzyl vs. phenethyl) do not much affect substrate recognition. Comparing all substrates, the highest Km value was obtained with CDNB, but this is compensated by a better turnover number, around 6-20 fold higher than for the other substrates tested.
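Catalytic efficiencies like those above are simply kcat divided by Km, with Km converted from micromolar to molar. A small helper illustrating the unit conversion (the numbers are illustrative, not taken from Table 2):

```python
def catalytic_efficiency(kcat_per_s: float, km_uM: float) -> float:
    """Return kcat/Km in M^-1 s^-1, with Km supplied in micromolar."""
    km_M = km_uM * 1e-6   # 1 uM = 1e-6 M
    return kcat_per_s / km_M

# Illustrative numbers: kcat = 0.33 s^-1 and Km = 100 uM give
# 0.33 / 1e-4 = 3.3e3 M^-1 s^-1, the order of magnitude reported for HNE.
print(catalytic_efficiency(0.33, 100.0))
```

The conversion explains why efficiencies land in the 10^2-10^4 M^-1 s^-1 range even though both kcat (around 1 s^-1) and Km (tens to hundreds of μM) look like small numbers.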
Using PNP-butyrate as the second substrate, an apparent affinity of GSTF1 for GSH was determined; the Km value is 97.6 ± 6.0 μM. Contrary to Tau GSTs, GSTFs have often proved to have peroxidase activities. For this reason, we also tested cumene hydroperoxide (CuOOH) and tert-butyl hydroperoxide (t-BOOH). Whereas no activity was detected with t-BOOH, the catalytic efficiency obtained under steady-state conditions for the reduction of CuOOH into the corresponding alcohol is 3.2 × 10^3 M^-1 s^-1. This is in fact quite close to the value obtained, for example, with a mitochondrial Prx IIF from poplar, which is assumed to contribute significantly to peroxide detoxification or signaling (Gama et al., 2007). With most substrates, the substitution of the catalytic serine by alanine (PttGSTF1 S13A variant) led to a completely inactive enzyme. However, a residual activity was still observed with CDNB and PNP-butyrate, the catalytic efficiency being decreased by factors of 40 and 20, respectively, compared to the results obtained with PttGSTF1. Whereas this suggested that one or several residues other than the serine contribute to the decrease of the pKa of the thiol group of GSH, the PttGSTF1 S13C variant had no or negligible activity toward all these substrates. According to the mass spectrometry results, the reason may be the formation of a covalent adduct. Hence, this prompted us to investigate whether PttGSTF1 S13C had acquired properties similar to those of GSTs naturally having a cysteine residue in their active site signature, by testing the thioltransferase activity using DHA and HED, two substrates usually employed for characterizing Grxs and cysteine-containing GSTs. As expected, PttGSTF1 had no activity with either HED or DHA. Concerning PttGSTF1 S13C, whereas no activity was detected with DHA, a reasonably good catalytic efficiency (1.1 × 10^3 M^-1 s^-1) was obtained with HED, essentially because of a good apparent affinity (Km value of 33.7 μM).
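Apparent Km values like the 97.6 μM above come from fitting initial rates to the Michaelis-Menten equation v = Vmax·[S]/(Km + [S]). A minimal double-reciprocal (Lineweaver-Burk) fit on noise-free synthetic data; the Vmax and substrate concentrations are assumed for illustration, not the measured dataset:

```python
def lineweaver_burk_fit(s_conc, rates):
    """Estimate (Km, Vmax) by least squares on 1/v = (Km/Vmax)(1/[S]) + 1/Vmax."""
    xs = [1.0 / s for s in s_conc]
    ys = [1.0 / v for v in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

# Synthetic, noise-free data generated with Km = 97.6 uM, Vmax = 1.0 (arbitrary units)
KM_TRUE, VMAX_TRUE = 97.6, 1.0
substrate = [25.0, 50.0, 100.0, 200.0, 400.0]   # uM
rates = [VMAX_TRUE * s / (KM_TRUE + s) for s in substrate]
km, vmax = lineweaver_burk_fit(substrate, rates)
print(round(km, 1), round(vmax, 2))   # → 97.6 1.0
```

With real, noisy data, direct nonlinear regression on the Michaelis-Menten equation is preferred, since the double-reciprocal transform inflates the weight of low-[S] points; the sketch only shows the principle.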
Besides these classical assays, we sought to examine more unusual substrates/ligands that have been isolated with orthologous GSTF members, i.e., auxin/indole-3-acetic acid (IAA) or a synthetic analog, 2,4-dichlorophenoxyacetic acid (2,4-D) (Bilang et al., 1993; Bilang and Sturm, 1995), and norharmane, indole-3-aldehyde, and quercetin (Smith et al., 2003; Dixon et al., 2011). Hence, we investigated whether these compounds could constitute poplar GSTF1 substrates, first by simply analyzing changes in the UV-visible spectrum of each compound as a function of time upon successive additions of GSH and PttGSTF1. However, we did not detect any significant spectral shifts (data not shown). Considering that glutathionylation might not modify the absorption spectra of these molecules, the product of a reaction of several hours was analyzed by reverse-phase HPLC on a C18 column. However, no glutathionylated species could be separated and identified using this approach. Considering that some of these molecules may represent ligands, and that the ligandin and catalytic sites in GSTs generally overlap at least partially, we examined whether the addition of these molecules modulated PttGSTF1 activity. Despite using concentrations in the millimolar range, no effect was observed in either the CDNB or the PNP-butyrate assay. We concluded that these compounds do not bind to PttGSTF1.

THE STRUCTURES OF PttGSTF1 AND PttGSTF1 S13C IN COMPLEX WITH GSH AND MES REVEAL THE RESIDUES PARTICIPATING IN SUBSTRATE BINDING

The crystallographic structures of PttGSTF1 and PttGSTF1 S13C, bound with ligands, have been obtained and refined to 1.5 and 1.8 Å resolution, respectively (Table 1). The crystals belonged to space group P2₁2₁2₁, and the asymmetric unit consisted of one biological dimer (residues Ala2-Ala215 in both monomers, root mean square deviation of 0.18 Å for 175 superimposed Cα atoms).
The analysis of the Fourier difference maps of PttGSTF1 revealed the presence of two ligands in the active site of each monomer: a glutathione molecule originating from the pre-treatment performed with an excess of GSH, and a MES molecule present in the crystallization buffer. They are located in the G and H sites, respectively (Figure 5A). Unless covalently linked, the two ligands cannot occupy the active site simultaneously. Currently, we have no evidence for a GSH-conjugation reaction with MES, nor data for any non-catalytic binding. Both glutathione and MES molecules were therefore refined with complementary occupancies. In monomer A, the refined occupancies of the glutathione and MES molecules were 58 and 42%, respectively; in monomer B, the corresponding refined occupancies were 71 and 29%. Therefore, the PttGSTF1 structure can be described as two structures: PttGSTF1 in complex with glutathione and PttGSTF1 in complex with a MES molecule. Concerning PttGSTF1 S13C, the structure refinement confirmed that Cys13 is glutathionylated, but this modification did not induce significant conformational changes in comparison to PttGSTF1 (RMSD of 0.33 Å based on alignments of 350 Cα positions). In order to understand possible differences among GSTF isoforms, a detailed comparison was performed with the three other GSTFs (AtGSTF2, maize GST-I and GST-III) whose structures are known (Reinemer et al., 1996; Neuefeind et al., 1997a,b; Prade et al., 1998). The AtGSTF2 structure was solved in complex with S-hexylglutathione or with the glutathione conjugate of an acetamide-herbicide-like molecule, ZmGST-I was in complex with lactoylglutathione or an atrazine-glutathione conjugate, and ZmGST-III was in an apo form. Interestingly, PttGSTF1 belongs to a distinct, structurally uncharacterized GSTF subgroup (Figure 2). A PttGSTF1 monomer consists of an N-terminal domain (β1α1β2α2β3β4α3) and a C-terminal domain composed of α-helices (α4α5α6α6′α7α8) (Figure 5A), as classically observed in most GST classes.
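RMSD values such as the 0.18 Å and 0.33 Å quoted above are computed after optimal rigid-body superposition of Cα coordinates, typically with the Kabsch algorithm. A minimal numpy sketch on toy coordinates (not the deposited structures):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)           # center both coordinate sets
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                      # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against improper rotation
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T               # optimal rotation mapping P onto Q
    diff = (R @ P.T).T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Toy check: a rotated and translated copy of the same 175 points
# should superimpose to an RMSD of essentially zero.
rng = np.random.default_rng(0)
P = rng.normal(size=(175, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = (Rz @ P.T).T + np.array([1.0, -2.0, 3.0])
print(kabsch_rmsd(P, Q) < 1e-8)   # → True
```

In practice the structural-biology tooling (e.g., superposition utilities in crystallography suites) performs the same computation, usually after first deciding which Cα pairs to include.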
As expected, the structures of plant GSTFs superimpose relatively well, with a mean RMSD of 0.92 Å. Prominent differences are nevertheless observed in three regions (Figure 5B). In PttGSTF1, an additional α-helix is observed in the segment between strands β2 and β3, while the others exhibit 1-3 short 3₁₀-helices. This segment is involved in substrate binding and contains a conserved phenylalanine (Phe53 in PttGSTF1), which is assumed to be essential for dimerization (Prade et al., 1998). This phenylalanine represents the major inter-monomer contact, its side chain being buried in a hydrophobic pocket composed, in PttGSTF1, of Trp102, Thr105, Thr109, Val143, Ile146, and Tyr147, located between α4 and α5 of the other subunit.

FIGURE 5 | Overall structure of PttGSTF1 and comparison with other plant GSTFs. (A) Architecture of the PttGSTF1 dimer. The MES molecule (black) is located in the H site, as shown in monomer A, whereas the GSH molecule (gray) bound to the G site is depicted in monomer B. Monomers A and B are colored in light cyan and dark green, respectively. Secondary structures are labeled only for monomer A, for clarity. The GSH and MES molecules are shown as sticks and colored according to atom type (nitrogen, blue; oxygen, red; sulfur, yellow; carbon, gray/black). An omit map (colored in purple) at a contour level of 0.8 σ is shown around each bound ligand. It was built using the Composite omit map command of the Phenix software suite. (B) Superimposition of monomers of PttGSTF1 (dark green), A. thaliana GSTF2 (pink), and Z. mays GST-I (blue) and GST-III (yellow). Only noticeable secondary structural differences between GSTFs are shown, for clarity. The GSH (gray) and MES (black) molecules highlight the putative positions of the G and H sites, respectively. GSH and MES molecules are colored according to atom type and shown as sticks.
Interestingly, among the residues involved in the dimer interface, the hydrophobic ones are the most conserved in PtGSTFs (Figure 1). Another variation concerns the length and conformation of the linker found between helices α3 and α4, which connects the N- and C-terminal domains. Considering the variable length of the linker, such conformational differences were expected. However, the central residue of this connecting region, Leu88 in PttGSTF1, is highly conserved and adopts a superimposable position in all plant GSTF structures. Its side chain, wedging between helices α3 and α6, connects the two domains. The last noticeable difference is likely a class-specific feature of PttGSTF1, in which the absence of three residues found in other poplar GSTFs (Figure 1) shortens the α4 helix. In PttGSTF1, a glutathione molecule is positioned in the G-site groove, which is mainly populated by polar residues from the N-terminal domain (Figure 6A). The Glu68, Ser69, and Arg70 residues, situated in the β4-α3 loop and in α3, stabilize the glutamyl group of GSH through hydrogen bonds and Coulomb interactions. The NH and carbonyl groups of the cysteinyl moiety are hydrogen-bonded to the backbone amino group of Val56, which precedes the invariant cis-Pro57 found in all GSTs and in all Trx superfamily members. The carboxylate of the glycinyl residue interacts with the side chains of Gln42, Lys43, and Gln55, found in the loops connecting β2-α2 and α2-β3. The thiol group of the cysteine of the GSH moiety is quasi-equidistant from the hydroxyl groups of Ser13 and Thr14 (3.2 and 3.4 Å, respectively). According to the mass spectrometry data, in the PttGSTF1 S13C variant GSH is covalently bound to the modified residue (Cys13). Apart from this difference, the same GSH-protein interactions are observed in both crystal structures (Figure 6B).
FIGURE 6 | … the G- and H-sites. (A,B) Glutathione binding site of PttGSTF1 and PttGSTF1 S13C. (C) Electrophilic substrate binding site of PttGSTF1. The PttGSTF1 (dark green) and PttGSTF1 S13C (orange) monomers are shown in cartoon with a transparent molecular surface in (C). Residues involved in the binding of GSH (gray) and MES (black) molecules are shown as sticks, labeled, and colored according to atom type. Putative hydrophilic interactions between the substrates and the enzyme are shown as black dashed lines.

With regard to the electrophilic substrate site, a MES molecule occupies the position adopted by other substrates in known GSTF structures. Hence, the H site is delimited by residues from three regions: residues 12-14, found at the end of the β1-α1 loop and in α1; residues 36-40, which are part of the β2-α2 loop; and residues 119-123, located at the C-terminal end of α4 (Figure 6C). The MES molecule is surrounded by the hydrophobic residues Leu12, Leu37, and Phe123. Moreover, the oxygen atom of the morpholino ring is hydrogen-bonded to the NH and OH groups of Thr14, and the sulfonic group forms a salt bridge with His119. However, the latter two residues are less conserved than the other three, suggesting that they might confer substrate specificities to PttGSTF1.

DISCUSSION

The existence of multigenic families is frequently explained by functional divergence, i.e., the acquisition of new or specific functions following gene duplication. With the complete sequencing of several plant genomes, it appeared that many species-specific duplication events occurred, leading to the expansion of the GSTF gene family. The maintenance of so many GSTF genes in the genomes (eight in poplar but up to ca. 27 in some terrestrial plants) might be attributed, for example, (i) to specific cellular/tissue expression associated with certain developmental stages or stress conditions, (ii) to specific subcellular localizations, or (iii) to specific biochemical and structural characteristics. An additional layer of complexity and possible redundancy is the existence of several tens of GSTUs in plants, which have quite similar enzymatic and biochemical properties. Indeed, owing to the presence of the same conserved serine residue, GSTUs also possess glutathionylation activities toward herbicides, safeners, and several other cyclic/aromatic compounds. An intriguing example illustrating the possible redundancy between GSTUs and GSTFs is the fact that petunia AN9, a GSTF gene, and maize Bz2, a GSTU gene, can complement mutants for the other gene (Alfenito et al., 1998). Some differences can, however, sometimes be noticed. For instance, contrary to most GSTFs, GSTUs usually do not have peroxidase activity. However, redundancy could exist with other GST classes, notably the Theta GSTs, which do have such a peroxidase activity. In this study, we provide the first elements exploring the question of redundancy among GSTF members and functions in poplar. Focusing on the particularities among poplar GSTFs that could explain the presence of eight genes, their putative subcellular localizations were first examined through bioinformatic analysis of the primary sequences. Given the absence of clear N- or C-terminal targeting sequences, all poplar GSTFs are predicted to be cytosolic proteins. This is generally in accordance with data obtained in other organisms, either from translational GFP fusions, as for several GSTFs of Physcomitrella patens (Liu et al., 2013), or from the absence of GSTF detection in studies of organellar proteomes. A plasma membrane localization was suggested for AtGSTF2 (Murphy et al., 2002), and dual targeting to the cytosol and chloroplast was demonstrated for AtGSTF8, owing to the presence of an alternative transcription start site (Thatcher et al., 2007).
However, only a few AtGSTF8 orthologs in other plant species have a similar extension. With regard to expression profiles, all poplar GSTF genes are redundantly expressed in some organs, such as leaves or reproductive organs. Moreover, the transcript levels are not necessarily correlated with protein levels. For instance, we did not detect GSTF1 transcripts in roots, whereas quite substantial protein amounts were detected by western blot. This likely illustrates the variations inherent to plant developmental stages or to fluctuating environmental constraints, as we harvested our samples from a naturally growing tree and at different periods. Considering that many GSTFs could have similar cellular and subcellular expression territories, the difference should come from specific biochemical and/or structural properties. This parameter was examined by producing PttGSTF1 as a recombinant protein and assessing its activity toward model substrates representing various types of biochemical activities, as well as by solving the 3D structure of the first GSTF representative from a tree. Indeed, structures of only three GSTFs had been solved, all in the late 1990s, and none since. As with other GSTF members bearing a conserved serine in the active site motif, enzymatic analysis showed that GSTF1 possesses glutathione-conjugating activity toward structurally diverse substrates and glutathione peroxidase activity. The kinetic parameters of GSTF1 activity toward the model substrate CDNB (kcat/Km of 1.3 × 10^3 M^-1 s^-1) are within the range of values reported for some GSTFs, such as P. patens GSTF1 (kcat/Km of 1.5 × 10^3 M^-1 s^-1) (Liu et al., 2013), although substantial variations can sometimes be observed: Triticum aestivum GSTF1 exhibits a 50-fold higher catalytic efficiency (kcat/Km of 7.2 × 10^4 M^-1 s^-1) (Cummins et al., 2003).
While CDNB is an artificial substrate that may somehow mimic the structure of some herbicides and is usually modified by all GSTFs, the other substrates used may be more physiologically relevant. BITC and PITC are representatives of a family of natural compounds found in Brassicaceae, produced by the enzymatic degradation of glucosinolates. Surprisingly, although glucosinolates are found in Arabidopsis, only a few of the 13 Arabidopsis GSTF isoforms are able to catalyze conjugation reactions on BITC (Wagner et al., 2002; Nutricati et al., 2006; Dixon et al., 2009). The relatively high turnover number obtained for the GSH-conjugation of BITC by PttGSTF1 (kcat of 0.70 s^-1) indicates that poplar GSTF1 may have a particular ability to recognize related molecules. By comparison, higher turnover numbers, around 25 s^-1, have been reported for Homo sapiens GST M1-1 or P1-1 with both BITC and PITC (Kolm et al., 1995). CuOOH is used as a molecule representative of bulky peroxides such as peroxidized lipids, whereas HNE is a toxic aldehyde formed as a major end product of lipid peroxidation (Esterbauer et al., 1991). Both types of molecules have a dual function: they are deleterious, promoting DNA damage or membrane protein inactivation, but at the same time they represent signaling molecules. Whereas peroxidase activity is systematically tested for GSTFs, the GSH-conjugation of HNE has rarely been evaluated. One example is the demonstration that a Sorghum bicolor B1/B2 GSTF heterodimer purified from shoots of fluxofenim-treated plants exhibits a catalytic efficiency about 15-fold higher (calculated kcat/Km around 2 × 10^4 M^-1 s^-1) than that of PttGSTF1 (kcat/Km of 1.3 × 10^3 M^-1 s^-1) (Gronwald and Plaisance, 1998). With regard to peroxides, most GSTFs tested so far, whatever their origin, exhibit a glutathione peroxidase activity.
Compared to other characterized GSTFs, poplar GSTF1 possesses a rather high peroxidase activity toward cumene hydroperoxide, with a turnover number of 1.92 s^-1 (Dixon et al., 2009). It is, for instance, in the same range as those reported for Lolium rigidum and Alopecurus myosuroides GSTF1, which are considered highly active peroxidases (kcat of 2.64 and 1.3 s^-1, respectively) (Cummins et al., 2013). From a physiological perspective, it is worth noting that pathogen attacks are often accompanied by an oxidative stress that triggers, among other symptoms, lipid peroxidation. Accordingly, substantial amounts of HNE accumulate in Phaseolus vulgaris upon fungal infection by Botrytis cinerea (Muckenschnabel et al., 2002). Given the known induction of GSTF genes by defense hormones or biotic stresses (Wagner et al., 2002), their involvement in the synthesis of defense compounds such as camalexin (Su et al., 2011), and the peroxidase and HNE GSH-conjugating activities, it is conceivable that GSTF1 is involved in oxidative stress tolerance and/or oxidative signaling occurring in particular during pathogen or insect attacks. In fact, whereas GSTF1 expression is induced in poplar attacked by the tent caterpillar Malacosoma disstria (Ralph et al., 2006), contrasting results have been obtained for GSTF1 in the case of rust-infected poplars. Indeed, the GSTF1 gene was found to be up-regulated at six dpi in a former study investigating gene expression in Populus trichocarpa × P. deltoides leaves infected by Melampsora medusae, which represents a compatible interaction (Miranda et al., 2007). On the other hand, no regulation was detected when Populus nigra × P. maximowiczii leaves were infected by M. medusae or M. larici-populina (Azaiez et al., 2009), or when P. trichocarpa × P. deltoides leaves were infected by M. larici-populina, either by a virulent (compatible) or an avirulent (incompatible) isolate (Rinaldi et al., 2007).
Consistent with these transcript measurements, no variation of the GSTF1 protein level was detected in P. trichocarpa × P. deltoides leaves during either compatible or incompatible reactions with M. larici-populina. Here, the observed differences might simply be explained by differences in the poplar cultivars, rust isolates, and time points used in these independent studies, which altogether generate some specificity in these interactions. It would be informative to systematically analyze transcript and protein variations for all poplar GSTFs in different biotic interactions, as done previously, for example, for the Arabidopsis or wheat GSTF families (Wagner et al., 2002; Cummins et al., 2003). To summarize this part, the biochemical and expression analyses demonstrated that, through its peroxidase and GSH-conjugating activities, GSTF1 may have multiple roles, notably related to xenobiotic detoxification or to oxidative stress tolerance under both biotic and abiotic constraints. Contrary to other GSTFs, we did not observe an interaction or an activity with auxin and other heterocyclic compounds such as norharmane, indole-3-aldehyde, and quercetin, which were previously found to interact with other GSTFs, and AtGSTF2 in particular (Bilang and Sturm, 1995; Smith et al., 2003; Dixon et al., 2011). This may indicate that PttGSTF1 has no ligandin function. In fact, several GSTFs for which a ligandin function has been demonstrated, such as AN9 or Bz2, do not have the catalytic serine but an alanine instead, in an AAxP motif. This does not mean, however, that these GSTFs have no catalytic functions. In fact, when the catalytic serine of PttGSTF1 was replaced by an alanine, the glutathionylation activity was not totally abolished as one would expect, and a weak activity toward certain substrates was still measurable.
This suggests that the catalytic serine is important but not mandatory for GSH-conjugating reactions and that residues other than the catalytic serine could be involved in glutathione activation. In support of this view, it has been reported that human GSTO1-1, a Cys-GST, loses deglutathionylation activity and acquires glutathionylation activity when the catalytic cysteine is replaced by an alanine (Whitbread et al., 2005). While Ser13 likely corresponds to the catalytic residue found in most GSTFs and is the primary candidate for GSH activation, the hydroxyl group of the adjacent Thr14 is found at approximately the same distance in the PttGSTF1 structure. Hence, it is tempting to conclude that it might substitute for Ser13, at least in its absence. It is worth noting that, with the exception of some GSTF clades whose members have a proline, the catalytic serine is often followed by another Ser or Thr in most members of other clades (Figure 2). Interestingly, neither the serine nor the threonine is conserved in the clade containing poplar GSTF8, which harbors aliphatic residues at these positions (AACP signature). In this specific case, we could speculate that the cysteine found after the threonine position acts as the catalytic residue. Although this will have to be confirmed experimentally, it is interesting to note that poplar GSTF3 and F7 and their close orthologs also have a cysteine at this position, and that fungal Ure2p-like enzymes have an asparagine that was recently proposed to be important for catalysis. Overall, this suggests that all residues forming the active site signature and present around the N-terminal end of α1 could substitute for each other. Further support for this assumption is that the PttGSTF1 S13C mutant lost its glutathione peroxidase and glutathionylation activities but acquired the capacity to perform deglutathionylation reactions toward HED, an activity typical of Cys-GSTs.
Although the detected activity is weaker than that obtained with naturally occurring Cys-GSTs (Lallement et al., 2014a), it shows that changing the nature of the catalytic residue is sufficient to determine the type of GST activity. Accordingly, when the catalytic cysteine of poplar Lambda GSTs was mutated into a serine, a shift from the original deglutathionylation to glutathionylation activity was observed (Lallement et al., 2014b). Complementary to the biochemical and enzymatic analyses, the structural analysis should help in understanding why GSTFs accept such diverse substrates and, at the same time, what fine differences generate substrate specificity. A comparison of poplar PttGSTF1 with the AtGSTF2 structure does not point to dramatic structural changes. In fact, the glutathione binding site is in general not very different within a GST class, or even among diverse GST classes, and this is what we observed by superimposing GSTF structures. Rather, structural differences, if any, should come from variations in the H-site. However, owing to the lack of structures of GSTs in complex with their ligands, this H-site is often not very well defined. In the PttGSTF1 structure, the MES molecule, which likely mimics an H-site substrate (although it does not seem to be catalytically glutathionylated), is stabilized by five residues: Leu12, Thr14, Leu37, His119, and Phe123. Thr14, which is present in all poplar GSTFs except GSTF8, is in fact not found in the other proteins whose structures are known: AtGSTF2 (SIAT signature), ZmGST-I (SWNL signature), and ZmGST-III (SPNV signature). Similarly, the His119 position is variable, being occupied by an aromatic residue (Phe or Trp) in other poplar GSTFs as well as in AtGSTF2, ZmGST-I, and ZmGST-III. Hence, it is possible that these residues contribute to the recognition of specific substrates by PttGSTF1. In particular, the presence of His119 may be responsible for the binding of the MES molecule.
On the contrary, the residues found at positions equivalent to Leu12, Leu37, and Phe123 in PttGSTF1 are also hydrophobic in most plant GSTFs, and they are involved in the stabilization of the substrate in known structures of plant GSTFs in complex with herbicides. Thus, they seem to be critical for electrophilic substrate recognition, and they could constitute the core residues required for the general recognition of substrates. Supporting this view, it was shown that the Phe123-to-Ile substitution in AtGSTF2 altered its ligand affinity and specificity (Dixon et al., 2011). Leu37 is found between β2 and β3, a region that does not superimpose well from one structure to another. For instance, five residues from this region are not visible in the electron density of the crystal structure of apo ZmGST-III (Neuefeind et al., 1997b). A modification of Phe35 in ZmGST-I (the residue equivalent to Leu37 in PttGSTF1) affects the enzyme's affinity for its ligand (Axarli et al., 2004). Overall, this indicates that the β2-β3 region could be the protein area used by GSTFs to accommodate such a large spectrum of ligands/substrates. In GSTUs, the end of the α4 helix and the C-terminal part are other regions that contribute to the correct positioning of the substrate in the H-site (Axarli et al., 2009). Similarly, the residues found at the end of the α4 helix are also used by Lambda and Omega GSTs for substrate recognition, which also involves the α4-α5 loop and a C-terminal helix (α9) that is specific to these two classes (Lallement et al., 2014b). In contrast, in GHR/Xi GSTs, proteins specialized in the reduction of glutathionylated quinones, no large conformational change occurs upon substrate binding (Lallement et al., 2014c). To conclude on these biochemical and structural analyses, it is conceivable that most GSTFs display a common set of enzymatic activities on typical substrates that is linked to the conservation of core residues.
The persistence of closely related genes in single species may be explained by subtle sequence changes that confer on the enzymes the ability to accommodate specific substrates and thus to acquire specific functions. Hence, to address this question of enzyme divergence and substrate specificity, isolating and identifying physiological GSTF substrates should become a priority, as should accumulating more 3D structures of GSTFs from poplar and other plants, alone or, more importantly, in complex with their physiological substrates. AUTHOR CONTRIBUTIONS Henri Pégeot, Cha San Koh, Benjamin Petre, and Sandrine Mathiot performed the experiments under the supervision of Sébastien Duplessis, Arnaud Hecker, Claude Didierjean, and Nicolas Rouhier. All authors contributed to the writing of the manuscript and have read and approved the final manuscript.
Creating national care standards for neonatal intensive care units in 2007 BACKGROUND: The infant mortality rate in Iran is reported to be 3.18 per 1000 births. International organizations such as the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF) consider applicable standards essential for providing effective health services in hospitals and health centers. Therefore, it is essential to create national care standards for neonatal intensive care units (NICUs) in Iran. METHODS: This is a multiple triangulation study conducted in 2007. In the first step, international standards were extracted from appropriate sources. Then, using the Delphi method and the viewpoints of 15 experts in clinical medical sciences, a set of suggested standards for the intensive care unit was prepared. In the third step, 42 clinical science experts from Iran were selected, and their viewpoints on the applicability of the suggested standards were investigated through a descriptive survey method. Data obtained in this step were analyzed using descriptive statistics. RESULTS: First, intensive care standards were extracted; then, clinical science experts reviewed the suitability and applicability of the suggested set of standards for Iran and finalized them. Finally, 386 standards for intensive care were drafted and approved, with desirability rates of 77.5% to 100%, for the NICUs of Iran. CONCLUSIONS: The findings of the study showed that most standards were either appropriate or fairly appropriate. Necessary changes to the final standards were made based on the subjects' viewpoints and suggestions as well as the results of consultation. Giving importance to health care would have obvious effects on the improvement of regional health and existing economic resources. Nowadays, the development of technologies and changes in service needs have increased the demand for in-service education.
Efforts to improve supervision and evaluation through appropriate organization can lead to the creation of standards for assessing performance. Standards include regulations, guidelines and specifications for activities or results, and they are created for use in a specific profession through the consensus of experts in that profession. In the nursing profession, the essential factor for managing nursing services is the evaluation system, which involves following performance standards, setting goals, planning operational programs, and monitoring the quantity and quality of health care services, working hours and costs.2 Recent developments in science and technology have improved the quality of nursing programs, and as a result, nurses have had to master advanced techniques and professional skills. Nurses also cooperate with other health care team members as coordinators. Therefore, evaluating the quality of nursing care is very important.3 The nursing profession needs laws and regulations so that the care provided within similar situations matches that provided by good nurses. As a result, the provision of high-quality care should be based on determined standards.4 Developing neonatal care standards and following them, especially for infants who need intensive care, has decreased infant mortality. For example, in the United States the infant mortality rate fell from 10 per 1000 births in 1987 to 6.9 in 2000.5 The goal of standards is to establish quality levels and desirable services so that nursing care facilities protect society. Standards show society that nurses are directly responsible for the quality of nursing services and that they adjust the quality of their services to reach the desirable level.6 Since infant mortality in Iran is reported at 3.18 per 1000 births,7 which is much higher than the rate in developed countries, it is essential to create health care standards for Iran to reduce the long-term problems facing high-risk infants and to eliminate taste-based (arbitrary) practices among neonatal nurses.
International organizations such as the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF) consider applicable standards essential for providing effective health services in hospitals and health centers, and neonatal care standards, which are a component of children's rights, can prevent infant mortality.8 Since standards for neonatal intensive care units had not yet been created in Iran, since the authorities had felt the necessity of standards for NICUs more than ever, and since the researchers had also come to appreciate their importance through clinical experience, this study was conducted to develop national care standards for neonatal intensive care units in Iran based on the international standards of 2006. The specific aims of the study included determining criteria for admission, care during hospitalization, infection control and hospital policies, and discharge criteria for NICUs. Methods This study was based on a multiple triangulation approach. Sampling was purposive, and data were collected by a questionnaire that included two sections: demographic data and standards. Answers were given on a three-point Likert scale of suitable, relatively suitable and unsuitable. The study population included experts in nursing and medical sciences. Entry criteria included having a master's degree in nursing, or working for at least 2 years in a NICU for nurses with an undergraduate degree; having a pediatric specialty for doctors; and being willing to participate in the study. Participants were free to leave the study at any time. The reliability and validity of the questionnaire were established using professionals' opinions and consensus.9 The study was conducted in three steps. In the first step, the neonatal care standards of 10 countries and states were extracted from credible sources via the internet, databases and other texts, and the questionnaire items were developed using these standards. In the second step, the classic Delphi technique was used to assess the reliability of the questionnaire.
In applying the Delphi technique, the sample in the second step included 15 experts who met the entry criteria: 5 nursing faculty members (specialized in infants), one midwifery faculty member (specialized in mother and infant care), 5 pediatricians who were faculty members of the Isfahan University of Medical Sciences, and 3 nurses with a master's degree, or a bachelor's degree with working experience in a NICU. After the questionnaires were completed, the experts' viewpoints were incorporated into the first draft, which was then returned to the same 15 experts. Standards were accepted with a consensus of 90%, and the final standards were created. Finally, in the third step of the study, a survey method with a descriptive design was followed for the national poll on NICU standards. In this step, the questionnaire prepared in the second step was sent to 60 experts from nursing schools and hospitals who met the inclusion criteria. The number 60 was chosen to allow for the probable withdrawal of participants. In the third step, the researcher traveled to different cities, including Isfahan, Ahvaz, Tehran (the health centers of Shahid Beheshti University, Tehran University and Iran University), Shiraz, Tabriz, Urmia and Yazd, to deliver the questionnaire to experts meeting the inclusion criteria, gave them enough time (two weeks to one month) to complete the questionnaire, and made a second trip to collect the completed questionnaires. In total, 42 experts completed the questionnaires (30% did not complete and return the questionnaire). After revision, national standards in accordance with the executive, cultural, social and economic situation of Iran were prepared with a consensus higher than 70%. Variables from the two sections of the questionnaire were analyzed using SPSS and frequency distributions.
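The consensus rule described above (a standard is retained when at least 90% of the panel rates it "suitable" or "relatively suitable" on the three-point scale) can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical panel data, not code from the study:

```python
def consensus_rate(ratings):
    """Fraction of experts rating a standard favorably.

    ratings: list of strings, each one of
    'suitable', 'relatively suitable', 'unsuitable'.
    """
    favorable = sum(r in ("suitable", "relatively suitable") for r in ratings)
    return favorable / len(ratings)

def retain(ratings, threshold=0.90):
    """Apply the Delphi acceptance rule: keep the standard at >= threshold."""
    return consensus_rate(ratings) >= threshold

# Hypothetical panel of 15 experts: 14 favorable, 1 unfavorable -> retained.
panel = ["suitable"] * 10 + ["relatively suitable"] * 4 + ["unsuitable"]
print(round(consensus_rate(panel), 3))  # 0.933
print(retain(panel))                    # True
```

The same function with `threshold=0.70` mirrors the lower consensus bar used for the national poll in the third step.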
Results Most participants in the study (40.5%) had a master's degree, were in the age range of 36-45 years (45.2%) and had 2 to 10 years of work experience (52.34%); 64.82% of participants had 6 to 10 years of work experience in the ICU. By extracting care standards in the different steps of the study, 386 standards for NICUs in Iran were created (Table 1). Most of these standards were favorable or relatively favorable (95% to 100%), but some were less favorable (70% to 94%). Among the standards of the first study objective, "admission criteria": at level II (intermediate admission), hospitalization of infants with an Apgar score of 4-6 at the fifth minute was 82.5% favorable, and hospitalization of infants who had undergone minor surgery in the past 24 hours was 85% favorable. At level III (intensive admission), hospitalization of infants with life-threatening congenital defects was 87.5% favorable. Among nurses' duties at the time of admission, providing a care plan within 24 hours of admission was 92.5% favorable.
Less favorable standards related to the second objective of the study, "care during hospitalization", included: diagnosis and treatment of apnea, 92.5%; educating parents in taking care of their infants, 92.5%; nurses' responsibility for providing prescribed care or transfusion of blood or blood products by two nurses (supervising nurse and responsible nurse), 92.5%; mouth care every 2 hours after feeding for infants receiving mechanical ventilation or NPO, 80%; placing all infants under warmers or in incubators, 90%; daily weighing of infants in special or severe conditions, 85%; physical checkup every 12 hours for patients with special conditions such as pulmonary hypertension, 77.5%; monitoring the growth process every week, 90.5%; controlling and recording heart rate, respiration and oxygen saturation in the NICU at least once per hour, 90%; controlling peripheral blood pressure in the NICU at least once per working shift, 92.5%; controlling temperature under the arm at the time of admission and then every 4 hours, 90%; encouraging mothers to use electric breast pumps if necessary, 90%; changing catheters every 3 days, 87.5%; checking the following items every 8 hours: bowel sounds, belly circumference, intestinal torsion, and the existence of tachypnea, apnea and bradycardia, 87.5%; giving medicine within one hour before or after the prescribed time, 87.5%; recording and setting the time for using the transducer and calibrating air in each working shift, 92.5%; checking the mouth and lips every 4 hours for possible harm and pressure due to the techniques used, 92.5%; maintaining arterial oxygen saturation within 88 to 96 percent in infants, 92.5%; using usual pain relief such as the mother's nipple, breastfeeding and topical ointments (EMLA cream), 92.5%; bathing premature infants with sterile water, 77.5%; giving painkillers or tranquilizers according to the physician's prescription to reduce stress and energy consumption, 87.5%; determining ABG (arterial blood gases) at admission and in case of respiratory changes with the physician's prescription, 87.5%; using glycerin rectally if the baby has no bowel movement for 24 hours, 80%; washing the skin only if necessary, 90%; and comforting the baby and using painkillers if the baby is restless, 90%. Standards related to the third objective of the study, "infection control and hospital policies", included cooperation of the leader of the medical team (a pediatrician meeting the required criteria) and the nursing manager in planning, executing and controlling the budget of the ward, which was 92.5% favorable. The standards of the fourth study objective, "discharge criteria", were all 95% to 100% favorable. Discussion Based on the results of the study, standards were revised according to the suggestions and their favorability, and the final version of the national care standards was created. Standards that were 95 to 100 percent favorable were used as they were, except for copyediting. Standards that were 70 to 90 percent favorable were revised based on the suggestions of participants and scholars, and standards that were less than 70% favorable were omitted. In general, one reason for consensus of less than 90% on some standards is the shortage of nurses in health centers, which keeps nurses from being involved in the decision-making process and in reviewing patients' problems. Regarding standards for staffing levels in intensive care units, Aiken et al (2003) reported that reducing the number of nurses increases mortality and morbidity; also, a 10% increase in the number of nurses with a bachelor's degree reduces the patient mortality rate by 5%.10 dence of hospital infections.12 Other important factors for nurses who work in the ICU are skill and experience. Using skillful nurses reduces health care costs. In addition, Duke et al in 2000 mentioned the necessity of ICU standards to reduce infant mortality and of recruiting experienced staff for the ICU to decrease the incidence of septicemia and pneumonia in babies.
13 Considering the development of technologies in the medical sciences, the necessity of recruiting professional nurses, especially in ICUs, is very obvious. Finally, by conducting a survey, the final standards were prepared (Table 1). The final standards included 21 admission criteria, which are in accordance with the standards of Chicago, England and Lebanon; 17 standards for nurses' duties at the time of admission, which are in agreement with those of Chicago; 20 standards for the abilities of health care personnel in following these standards, which are in agreement with those of Chicago and Lebanon; 17 standards for nurses' duties in providing these cares, which were in accordance with those of Chicago, England and the United States; 280 standards of care during hospitalization, which were in accordance with those of Chicago, North Carolina hospitals and the nursing committee; 5 discharge criteria, which were in accordance with those of the American Academy of Pediatrics; 16 standards of hospital policies, along with 3 standards for transfer criteria, which were in accordance with those of Lebanon and England; and 10 standards of infection control policies, which were in accordance with those of the Nursing Committee of North Carolina.14 Most countries, regardless of their wealth and size, consider health and health services a major issue, and most developing countries are trying to develop a health care system that can target the major needs of their societies.10 Therefore, extracting and developing national care standards for NICUs in Iran can provide guidelines for related organizations, such as the Ministry of Health, the Deputy of Treatment and hospitals, to solve neonatal problems, improve neonatal health and, in general, improve the quality of health care in Iran.
It is recommended that related organizations take advantage of the results of this study14 to revise clinical nursing based on standards and to improve the quality of nursing performance and the quality of health services in hospitals all around the country. The authors declare that they have no conflict of interest in this study, and the ethical committee approved the study.
Challenges and Prospects of New Plant Breeding Techniques for GABA Improvement in Crops: Tomato as an Example Over the last seven decades, γ-aminobutyric acid (GABA) has attracted great attention from scientists for its ubiquity in plants, animals and microorganisms and for its physiological implications as a signaling molecule involved in multiple pathways and processes. Recently, the food and pharmaceutical industries have also shown significantly increased interest in GABA, because of its great potential benefits for human health and the consumer demand for health-promoting functional compounds, resulting in the release of a plethora of GABA-enriched products. Nevertheless, many crop species accumulate appreciable GABA levels in their edible parts and could help to meet the daily recommended intake of GABA for promoting positive health effects. Therefore, plant breeders are devoting much effort into breeding elite varieties with improved GABA contents. In this regard, tomato (Solanum lycopersicum), the most produced and consumed vegetable worldwide and a fruit-bearing model crop, has received much consideration for its accumulation of remarkable GABA levels. Although many different strategies have been implemented, from classical crossbreeding to induced mutagenesis, new plant breeding techniques (NPBTs) have achieved the best GABA accumulation results in red ripe tomato fruits along with shedding light on GABA metabolism and gene functions. In this review, we summarize, analyze and compare all the studies that have substantially contributed to tomato GABA breeding with further discussion and proposals regarding the most recent NPBTs that could bring this process to the next level of precision and efficiency. This document also provides guidelines with which researchers of other crops might take advantage of the progress achieved in tomato for more efficient GABA breeding programs. 
INTRODUCTION Consumer preferences are shifting towards health-promoting and functionally enriched food products that could translate into healthier lifestyles. Therefore, industry and producers are encouraged to develop and design new products to target this growing market and to promote studies testing their dietary functional claims (Diana et al., 2014; Ramos-Ruiz et al., 2018). γ-Aminobutyric acid (GABA) is widely recognized as a bioactive and functional compound thanks to a plethora of in vitro and in vivo studies reporting its beneficial effects in treating many metabolic disorders (Yoshimura et al., 2010; Yang et al., 2012). In humans, GABA functions as an inhibitory neurotransmitter (Owens and Kriegstein, 2002), and it has been reported that the intake of GABA is effective in lowering the blood pressure of hypertensive patients (Inoue et al., 2003), inducing relaxation (Abdou et al., 2006), reducing psychological stress (Nakamura et al., 2009), and shortening sleep latency (Yamatsu et al., 2016), among other health benefits. Consequently, in recent decades, the food industry has focused on releasing and developing new GABA-enriched products such as tea, yogurt, bread, cheese and fermented foods (Park and Oh, 2007; Poojary et al., 2017; Quílez and Diana, 2017). However, the GABA concentration of these products is often insufficient to confer health-promoting effects and prevent lifestyle-related disorders. Many crop plants, including fresh vegetables, have high GABA levels. Hence, plant breeders have devoted considerable efforts to developing new and improved varieties of vegetables with increased GABA contents (Lee et al., 2018). However, classical breeding is often limited by the capacity to either identify or produce (i.e., through processes such as random mutagenesis) suitable parental germplasms, apart from being time- and resource-consuming.
New plant breeding techniques (NPBTs) can overcome this barrier and achieve remarkable results in a safer and faster way. On the other hand, GABA and the genes of its metabolism are involved in many important plant processes, and their modulation could be exploited to develop more well-adapted and resilient varieties. Here, we provide an overview of the main aspects of GABA metabolism and how NPBTs could help to increase plant GABA contents, using tomato as a model fruit-bearing crop. GABA CONTENTS IN CROP SPECIES After its first detection in potato tubers in 1949 (Steward et al., 1949), GABA has been found and measured in almost all economically important and model crop species, as well as in bacteria, fungi and animals (Ramos-Ruiz et al., 2018). Some crops, at the harvest stage, accumulate considerable GABA levels that can substantially contribute to the daily recommended intake of GABA (10-20 mg) for generating positive health effects (Fukuwatari et al., 2001; Kazami et al., 2002; Inoue et al., 2003; Nishimura et al., 2016). The GABA content in crops varies across species and varieties and depends on a multitude of factors, such as the plant developmental stage, environmental conditions, responses to biotic and abiotic stresses, and postharvest treatments (Ham et al., 2012; Kim et al., 2013; Chalorcharoenying et al., 2017) (Table 1). Among fresh vegetables, tomato is one of the crops that accumulates a significant amount of GABA in its edible parts, even though its content increases until the mature green stage and then rapidly decreases during ripening (Akihiro et al., 2008; Saito et al., 2008; Sánchez Pérez et al., 2011). The GABA content in tomato also varies greatly depending on the genotype or cultivar assessed. Saito et al. (2008) reported a range of 39.6 to 102.5 mg/100 g fresh weight (FW) (0.39-1.02 mg g−1) of GABA in 11 fresh market cultivars and a range of 35.4 to 93.3 mg/100 g FW (0.35-0.93 mg g−1) in 38 processing cultivars.
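The parallel units quoted above (mg/100 g FW alongside mg g−1) and the 10-20 mg daily intake figure are related by simple arithmetic; a minimal sketch (function names are ours, not from the review):

```python
def per_100g_to_per_g(mg_per_100g):
    """Convert a content from mg per 100 g fresh weight to mg per g FW."""
    return mg_per_100g / 100.0

def grams_for_intake(target_mg, content_mg_per_100g):
    """Grams of fresh produce needed to supply target_mg of GABA."""
    return target_mg * 100.0 / content_mg_per_100g

# Saito et al. (2008), fresh market cultivars: 39.6-102.5 mg/100 g FW.
print(per_100g_to_per_g(39.6))               # 0.396 (mg per g)
print(round(grams_for_intake(10, 39.6), 1))  # 25.3 g covers the 10 mg lower bound
```

So even the lowest-GABA cultivar in that range would meet the 10 mg lower bound of the recommended intake with roughly 25 g of fresh fruit, which is the sense in which tomato "substantially contributes" to the daily intake.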
Higher values for processing tomatoes were reported previously by Loiudice et al. (1995), where 11 lines of San Marzano cultivars showed a range of 132 to 201 mg/100 g FW (1.32-2.01 mg g−1) of GABA. On the other hand, even though few genotypes have been examined, tomato wild relatives generally seem to accumulate less GABA than modern cultivars (Anan et al., 1996; Saito et al., 2008; Deewatthanawong and Watkins, 2010). This is probably due to positive selection for the "umami" flavor, which is linked to the glutamate content. GABA PATHWAY IN PLANTS GABA is a ubiquitous four-carbon nonproteinogenic amino acid that is widely distributed throughout the animal, plant and bacterial kingdoms. In plants, GABA is mainly metabolized via a short pathway called the GABA shunt, which is linked with several other pathways, such as the TCA cycle (Bouché and Fromm, 2004; Shelp et al., 2017; Figure 1). In the cytosol, GABA is irreversibly synthesized from L-glutamate via the H+-dependent glutamate decarboxylase (GAD) enzyme, or alternatively by polyamine (putrescine and spermidine) degradation or a nonenzymatic reaction from proline under oxidative stress (Shelp et al., 2012; Signorelli et al., 2015; Yang et al., 2015). A wide range of GAD copies, which are differentially expressed according to organ type, growth stage and environmental conditions, has been identified in different plant species (Bouché and Fromm, 2004; Ramos-Ruiz et al., 2019). GAD usually presents a calcium/calmodulin (Ca2+/CaM) binding domain (CaMBD) at the C-terminus (30-50 amino acids) that modulates its activity in the presence of Ca2+ at acidic pH (Snedden et al., 1996; Gut et al., 2009). Under physiological cell conditions (pH 7.0), the CaMBD inhibits GAD activity by folding over its active site. Increases in cytosolic Ca2+ and H+ ion concentrations, usually as a stress response, unfold and bind the CaMBD, releasing the GAD active site and stimulating its activity (Snedden et al., 1995).
The GABA shunt and the TCA cycle are connected by a transmembrane protein, GABA permease (GABA-P), that allows GABA flux from the cytosol into mitochondria (Michaeli et al., 2011). Subsequently, GABA is catabolized to succinic semialdehyde (SSA) by the transaminase enzyme GABA-T. Depending on the substrate affinity, two GABA-T enzymes can catalyze the reaction: α-ketoglutarate-dependent GABA-TK or pyruvate-dependent GABA-TP. GABA-TK receives an amino group from α-ketoglutarate and generates SSA and glutamate, while GABA-TP requires pyruvate or glyoxylate, which are converted into alanine or glycine (Shimajiri et al., 2013; Trobacher et al., 2013). The latter has been found exclusively in plants, usually showing a higher activity than GABA-TK, while other organisms primarily use GABA-TK (Narayan and Nair, 1990; Van Cauwenberghe et al., 2002; Bartyzel et al., 2003). Finally, SSA is catabolized to succinate, a TCA component, and NADH plus a hydrogen ion is generated from NAD+ by the succinic semialdehyde dehydrogenase (SSADH) enzyme (Bouché et al., 2003). Succinate and NADH are electron donors to the mitochondrial electron transport chain, which produces ATP as a final outcome (Ramos-Ruiz et al., 2019). Alternatively, SSA can be converted to γ-hydroxybutyric acid (GHB) and NAD(P)+ in the presence of NAD(P)H and a hydrogen ion by succinic semialdehyde reductase (SSR) in the cytosol or chloroplast, usually as a response to stresses (Simpson et al., 2008; Hildebrandt et al., 2015). When the NAD+:NADH ratio is low, the SSADH route is inhibited, as it depends on the energy balance in mitochondria, resulting in the accumulation of SSA and consequent GABA-T inhibition (Van Cauwenberghe and Shelp, 1999; Podlešáková et al., 2019).
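The shunt steps described above can be summarized as a small lookup table. This is an illustrative sketch of the reaction steps named in the text (enzyme, substrate, product, location), not a kinetic or stoichiometric model:

```python
# Each entry: (enzyme, substrate, product, compartment), as described in the text.
GABA_SHUNT = [
    ("GAD",    "L-glutamate",           "GABA",                  "cytosol"),
    ("GABA-P", "GABA (cytosolic)",      "GABA (mitochondrial)",  "membrane transport"),
    ("GABA-T", "GABA",                  "succinic semialdehyde", "mitochondrion"),
    ("SSADH",  "succinic semialdehyde", "succinate",             "mitochondrion"),
    ("SSR",    "succinic semialdehyde", "gamma-hydroxybutyrate", "cytosol/chloroplast"),
]

# SSA sits at a branch point: SSADH feeds succinate into the TCA cycle,
# while SSR diverts SSA to GHB, typically under stress.
ssa_branches = [enzyme for enzyme, substrate, _, _ in GABA_SHUNT
                if substrate == "succinic semialdehyde"]
print(ssa_branches)  # ['SSADH', 'SSR']
```

Laying the steps out this way makes the branch point explicit: the NAD+:NADH-dependent inhibition of SSADH discussed above shifts flux toward the SSR/GHB route.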
GABA GENES IN TOMATO In the latest version of the tomato reference genome (Heinz 1706, version SL4.0; Hosmani et al., 2019), five GAD genes have been annotated (Solyc11g011920, Solyc04g025530, Solyc05g054050, Solyc01g005000 and Solyc03g098240). The first GAD gene, ERT D1 (Solyc03g098240), was predicted in 1995 from the screening of an "Ailsa Craig" tomato fruit cDNA library, showing peak expression at the fruit breaker stage and slowly declining expression during ripening (Gallego et al., 1995). Subsequently, three GAD genes were characterized during fruit development in immature fruits of "Micro-Tom": SlGAD1 (Solyc03g098240, allelic to ERT D1), which reaches its highest expression at mature fruit stages, and SlGAD2 (Solyc11g011920) and SlGAD3 (Solyc01g005000), which increase their expression during fruit development and rapidly decline during ripening (Akihiro et al., 2008). However, while SlGAD1 did not exhibit a clear correlation with the GABA content during fruit ripening, SlGAD2 and SlGAD3 showed a positive correlation, suggesting a main role of the latter two in GABA biosynthesis. This was confirmed by Takayama et al. (2015) by suppressing the three SlGAD genes through an RNAi approach, which resulted in a significant GABA reduction in mature green fruit for the SlGAD2- and SlGAD3-suppressed lines, while SlGAD1-suppressed lines showed GABA levels similar to those of the wild type (WT). In tomato, three GABA-TP genes were suggested to catabolize GABA to SSA: SlGABA-T1 (Solyc07g043310), found in mitochondria; SlGABA-T2 (Solyc12g006470), in the cytosol; and SlGABA-T3 (Solyc12g006450), in plastids (Akihiro et al., 2008; Clark et al., 2009). By RNAi loss-of-function analyses, Koike et al. (2013) suppressed the three SlGABA-T genes and observed an increase in GABA of up to 9-fold in red mature fruits of SlGABA-T1-suppressed lines, while no significant correlation was observed between the GABA content and SlGABA-T2 or SlGABA-T3 expression.
In light of these results, and given that SlGABA-T1 is expressed at a higher level than SlGABA-T2 and SlGABA-T3 during fruit ripening (Clark et al., 2009), SlGABA-T1 was suggested to be the major GABA-T gene responsible for GABA catabolism. These results were confirmed by Li et al. (2018) using a multiplex CRISPR/Cas9 system targeting the three SlGABA-Ts. The last steps of the GABA shunt in tomato are not yet well characterized. To date, one SSADH gene (SlSSADH, probably Solyc09g090700) has been identified in tomato as responsible for SSA catabolism to succinate, and it was expressed in fruits across all developmental stages even though it showed a low correlation with the GABA content (Akihiro et al., 2008). On the other hand, two SSR genes have been isolated in tomato (SlSSR1 and SlSSR2, probably Solyc09g018790 and Solyc03g121720) (Akihiro et al., 2008). The expression of SlSSR1 has been found to be slightly higher in mature red fruits than in breaker fruits, while SlSSR2 showed the opposite expression pattern (Deewatthanawong et al., 2010a). Further studies are required to fully characterize the two alternative routes of SSA catabolism in the tomato GABA shunt and to determine how they are linked with GABA biosynthesis.

FIGURE 1 | γ-Aminobutyric acid (GABA) shunt metabolism and related pathways in plant species. TCA, tricarboxylic acid cycle; GS/GOGAT, glutamine synthetase/glutamate synthase cycle; GAD, glutamate decarboxylase; GABA-TK, α-ketoglutarate-dependent GABA transaminase; GABA-TP, pyruvate-dependent GABA transaminase; SSADH, succinic semialdehyde dehydrogenase; SSR, succinic semialdehyde reductase; GDH, glutamate dehydrogenase; SSA, succinic semialdehyde; Suc, succinate; GHB, γ-hydroxybutyric acid; αKG, alpha-ketoglutarate; Glu, glutamate; Ala, alanine; Gly, glycine; PYR, pyruvate; GOA, glyoxylic acid.
ROLES OF GABA METABOLISM Across the kingdoms, a plethora of processes, functions, and pathways that are directly or indirectly involved in GABA metabolism have been identified (Seifikalhor et al., 2019). In plants, GABA plays important roles in pH and redox regulation, energy production, carbon/nitrogen balance maintenance, plant growth regulation and development, senescence, pollen germination, and fruit ripening, among other functions (Kinnersley and Turano, 2000; Bouché and Fromm, 2004; Fait et al., 2008; Renault et al., 2011; Yu et al., 2014; Carillo, 2018; Podlešáková et al., 2019). Similar to many other plant signaling molecules, such as calcium, jasmonates, and abscisic or salicylic acid, GABA rapidly accumulates in response to environmental stress (Kinnersley and Turano, 2000; Gill et al., 2016). Plants have developed highly dynamic mechanisms to face unfavorable and stressful conditions and maximize their chances of survival, modulating their responses according to the stress severity and growth stage (Podlešáková et al., 2019). Many studies on different species have reported the involvement of GABA in responses to abiotic stresses, such as salinity, drought, hypoxia, high/low temperature or light, and nutrient deficiency/excess (Kinnersley and Turano, 2000; Renault et al., 2010; Al-Quraan et al., 2013; Espasandin et al., 2014; Wang et al., 2014; Daş et al., 2016). GABA also has inhibitory effects in biotic defenses, especially against insects and fungi (Sánchez-López et al., 2016a; Sánchez-López et al., 2016b; Scholz et al., 2017). Undoubtedly, any change affecting the genes involved in GABA metabolism, at either the sequence or expression level, could either potentiate or disrupt these functions (Seifikalhor et al., 2019). In tomato, the potential roles of GABA metabolism have been investigated using, among other approaches, reverse genetics.
The three SlGAD genes were suppressed individually and simultaneously by RNAi, demonstrating that SlGADs are key genes for GABA production in tomato (Takayama et al., 2015). However, the downregulation of GABA biosynthesis through SlGAD knockdown did not show a significant effect on plant growth and fruit development under stress-free conditions. In contrast, the loss of function of SlGABA-T genes resulted in drastic changes and abnormal phenotypes (Koike et al., 2013). Two of the three SlGABA-T mutants, SlGABA-T1 and SlGABA-T3, showed severe dwarfism, with a plant height half or less than that of the Micro-Tom wild type, probably due to defects in cell elongation. Additionally, SlGABA-T1 also exhibited infertility and flower abscission, while no remarkable changes were observed in SlGABA-T2. Similarly, the suppression of SlSSADH by VIGS led to a dwarf phenotype with curled leaves, probably due to enhanced ROS accumulation (Bao et al., 2015). However, under 200 mM NaCl treatment, SlSSADH-suppressed plants exhibited superior salt resistance compared to that of the WT, showing a higher shoot biomass level and significantly higher chlorophyll content and photosynthesis rate. In contrast, under the same stress conditions, SlGAD- and SlGABA-T-suppressed plants showed severe salt sensitivity (Bao et al., 2015). These results suggested that tomato GABA metabolism is involved in salt stress tolerance. More recently, Li et al. (2018), using a multiplex CRISPR/Cas9 system, targeted four GABA genes (SlGABA-T1, SlGABA-T2, SlGABA-T3 and SlSSADH) and CAT9, a protein that transports GABA from vacuoles to mitochondria for catabolism (Snowden et al., 2015), constructing a pYLCRISPR/Cas9 plasmid with six sgRNA cassettes. Almost all edited plants with multiple mutations exhibited severe dwarfism, with heights 32.5 to 95.6% shorter than the WT.
Moreover, the plants that presented higher GABA contents also showed pale green and curled compound leaves, and some of them developed a secondary axis and more leaflets with visible leaf necrosis. When analyzed by scanning electron microscopy, the leaf tissues of those mutants exhibited smaller and more compressed cells than those of the WT, which were larger and stretched, confirming previous results suggesting that GABA may be involved in plant cell elongation in vegetative tissues and leaf development regulation (Renault et al., 2011). Leaf necrosis was especially severe in SlSSADH-knockout lines, probably due to the increase in H2O2 levels and GHB accumulation, similar to Arabidopsis SSADH-deficient mutants (Bouché et al., 2003). Furthermore, some mutants also exhibited bud necrosis and fewer flowers, and only a few mutants set fruit, some of which were teratogenic (Li et al., 2018). As in other plant species, GABA is involved in stress tolerance in tomato (Seifikalhor et al., 2019). Seifi et al. (2013) observed that the overactivation of the GABA shunt in the sitiens tomato mutant played a vital role in resistance to Botrytis cinerea, maintaining cell viability and slowing senescence at the site of primary invasion. The authors hypothesized that the H2O2-mediated defense via GABA in response to B. cinerea might have restricted the extent of cell death in the vicinity of the pathogen penetration sites. More recently, it has been reported that GABA is highly involved in plant-pathogen interactions under Ralstonia solanacearum attack. Proteomic and transcriptomic profiles revealed that SlGAD2 was downregulated, whereas SlGABA-Ts and SlSSADH were upregulated during infection. The silencing of SlSSADH by VIGS did not result in significant changes under R. solanacearum infection, while hypersusceptibility was observed for SlGAD2-suppressed lines, suggesting that SlGAD2 participates in the plant defense response.
However, further studies are required to understand the molecular mechanisms underlying GABA metabolism in response to pathogen attack. Thus far, it is clear that GABA levels rapidly increase under stress conditions (Ramesh et al., 2017). Recently, GABA has gained attention as a priming agent (Jalil and Ansari, 2020). Plant priming or conditioning is a promising strategy in which the plant's physiological state is intentionally altered by natural products or synthetic chemicals to promote a more effective defense response in the case of subsequent harsher stresses (Kerchev et al., 2020). When the plant reaches the "primed" state, broad-spectrum defense is partially induced by activating defense genes, changing the proteome profile and accumulating defense compounds, among other processes, minimizing the corresponding negative effects on crop productivity (Vijayakumari et al., 2016). GABA and its synthetic isomer β-aminobutyric acid (BABA) have been reported as effective priming agents (Kerchev et al., 2020). Their application prior to pathogen infection or abiotic stress, by foliar spraying or seed soaking, triggers a response similar to that of endogenous GABA, modulating the latter and significantly improving the plant immune system, defense response and vigor (Malekzadeh et al., 2014; Li et al., 2017). Yang et al. (2017) reported remarkable resistance to alternaria rot (Alternaria alternata) after applying 100 mg ml−1 of exogenous GABA to tomato fruits. Although GABA did not show direct antifungal activity, it induced host-mediated resistance at the right time by activating antioxidant enzymes such as peroxidase, superoxide dismutase and catalase and reducing cell death caused by reactive oxygen species. During GABA treatment, SlGABA-T and SlSSADH were found to be upregulated.
Applications of exogenous GABA at 500 and 750 mmol L−1 to tomato seedlings under cold stress significantly reduced electrolyte leakage, an indirect indicator of chilling injury (Malekzadeh et al., 2014). GABA improved germination under chilling stress by increasing the activity of antioxidant enzymes and the concentration of osmolytes such as proline and soluble sugars compared to those of untreated seedlings; additionally, the malondialdehyde content, an indicator of plant oxidative stress, was reduced, resulting in the maintenance of membrane integrity. Similarly, 0.5 mM exogenous GABA added to a solution of 200 mM NaCl improved seedling tolerance to salt stress, enhancing the contents of antioxidant compounds such as phenolics and endogenous GABA (Çekic, 2018). In addition to those on tomato, a plethora of studies have reported the successful use of GABA as a priming agent in other crops, demonstrating its suitability for enhancing the innate plant immune system after stress exposure without deploying excessive energy expenses (Priya et al., 2019; Sheteiwy et al., 2019; Tarkowski et al., 2019). However, even though the success of GABA as a priming agent is compelling, the exact mechanisms of action are still puzzling.

PROS AND CONS OF INCREASING GABA CONTENTS

Undoubtedly, the most important advantage of increasing the GABA content in crops and food matrices is the great potential beneficial effect on human health, especially the antihypertensive effects (Inoue et al., 2003). Increasing the daily dietary intake of GABA might, in the long run, prevent and alleviate the effects of high blood pressure. In this regard, tomato is of special interest since it is the most produced and consumed vegetable worldwide and is consumed daily by a large part of the human population.
Tomato accumulates one of the highest GABA contents among fruits and vegetables, and unlike processed food products, its taste and chemical composition are not manipulated by additional ingredients such as sugar, salt or fat, whose excess promotes side effects. Thus, a tomato with an enhanced GABA content would benefit many people through daily dietary consumption. However, this is not an easy task since endogenous GABA intensively accumulates from flowering to the mature green stage, being the predominant free amino acid, but rapidly declines during fruit ripening (Sorrequieta et al., 2010; Sánchez Pérez et al., 2011). Most importantly, a high accumulation of GABA could provoke a severe imbalance of amino acids in cells that leads to aberrant phenotypes. Koike et al. (2013) successfully increased GABA levels by suppressing SlGABA-T1 via RNA interference. However, the transgenic plants showed dwarf phenotypes, with heights less than half of that of the WT, and infertility coupled with severe flower abscission. Similarly, the Arabidopsis GABA-T-deficient mutant pop2 showed defective pollen tube growth and cell elongation in hypocotyls and primary roots (Palanivelu et al., 2003; Renault et al., 2011). Recently, Li et al. (2018) succeeded in increasing GABA levels up to 19-fold using a multiplex CRISPR/Cas9 system targeting the three SlGABA-Ts and SlSSADH. However, the edited plants barely set fruit, some of which were teratogenic, and showed severe dwarfism, pale green and curled compound leaves and necrosis on leaves and buds. Interestingly, Bao et al. (2015) suppressed the main genes involved in GABA metabolism by VIGS and observed a 40% reduction in GABA contents for the SlGADs-suppressed lines and increases of 1.5- and 2.0-fold for the SlGABA-Ts- and SlSSADH-suppressed lines, respectively.
However, only SlSSADH-edited plants showed defective phenotypes with curled leaves and severe dwarfism, probably due to GHB accumulation, as in the Arabidopsis ssadh-deficient mutant (Bouché et al., 2003). On the other hand, attempts to increase GABA contents by manipulating SlGADs have resulted in less disruptive phenotypes. Takayama et al. (2015) overexpressed SlGAD3 and observed an increase in its mRNA levels of more than 20-fold in mature green and 200-fold in red fruits, which translated into an increased GABA content. However, those plants did not show abnormal fruits or vegetative organs. The same authors succeeded in further increasing GABA levels in red-ripe fruits compared to their previous SlGAD3-overexpression line by overexpressing a coding sequence of SlGAD3 lacking 87 nucleotides from the end of the C-terminal domain under a fruit-specific promoter. Similar to other species, the removal of the GAD C-terminal domain, which is autoinhibitory, led to a significant increase in the GABA content (Akama and Takaiwa, 2007). Despite no morphological abnormalities being observed in the new transgenic lines, defects were observed during fruit ripening. The fruits never turned red and remained orange even 30 days after the breaker stage, along with a reduction in lycopene contents and lower mRNA levels of carotenoid genes. The authors hypothesized that the removal of the C-terminal domain provoked a metabolic disturbance due to the overexpression of SlGAD3. At 10 days after the breaker stage, GABA accounted for up to 81% of the total free amino acids in the overexpression lines compared to 6.2% in the WT, whereas low levels of aspartate and glutamate were recorded. Aberrant phenotypes were also observed in similar studies where the GAD C-terminal autoinhibitory domain was removed and significantly higher levels of GABA were accompanied by low levels of glutamate.
Transgenic tobacco plants expressing a truncated GAD from petunia exhibited plant growth abnormalities coupled with reduced cell elongation in the stem. Similarly, dwarfism, sterility and etiolated leaves were observed in transgenic rice plants expressing a truncated OsGAD2 (Akama and Takaiwa, 2007). These authors suggested that the abnormalities could be the result of an amino acid imbalance in cells, especially low glutamate levels, due to its direct or indirect involvement in many fundamental pathways, such as gibberellin biosynthesis or posttranslational modification of cell wall proteins. Similarly, transgenic tomato fruits obtained by Bemer et al. (2012), in which the FUL1 and FUL2 transcription factors were simultaneously suppressed, showed orange-ripe phenotypes and two-fold higher GABA and eight-fold lower glutamate levels than the WT. The enhanced-GABA fruits obtained by Takayama et al. (2017) also showed a delay in ethylene production, which peaked 10 days after the breaker stage compared with three days in the WT, and possibly a reduced ethylene sensitivity when the fruits were exposed to high levels of ethylene equivalent to those of the WT fruits. The authors also reported altered expression levels of transcription factors, such as TAGL1, FUL1, FUL2, RIN and ERF6, which play a major role in regulating ethylene biosynthesis, and of lycopene accumulation genes, such as ACS2, PSY1 and CRTISO (Bemer et al., 2012; Shima et al., 2013; Fujisawa et al., 2014). Finally, overaccumulation of GABA in tomato fruit could lead to a reduction in glutamate, which is linked to the umami flavor (Yamaguchi, 1998), since glutamate is a GABA precursor. Indeed, significantly lower levels of glutamate were observed in the high-GABA tomato lines, such as the SlGABA-T1-suppressed plants (Koike et al., 2013) and SlGAD3ΔC-overexpression plants.
Compared to the WT, the former showed an up to 11.7 times higher GABA content (13.59 vs 1.16 µmol/g FW of the WT) and 31.2% less glutamate (1.26 vs 1.83 µmol/g FW of the WT), while the latter showed an up to 18 times higher GABA content (28.56 vs 1.58 µmol/g FW of the WT) and 92.5% less glutamate (0.15 vs 2.00 µmol/g FW of the WT) in red-stage fruit. However, we have confirmed that a mild increase in GABA of 2-4 times does not dramatically affect the glutamate content, even though the effects vary among varieties (Lee et al., 2018). In addition, other molecules, such as some ribonucleotides (e.g., inosine and guanosine monophosphates), can synergistically potentiate, by a factor of 100, the umami taste in the presence of glutamate (Ninomiya, 1998; Yamaguchi, 1998; Chew et al., 2017). However, tomato flavor is the sum of complex interactions among sugars, acids and volatiles perceived through taste and olfaction, and preferences change among consumers, countries and cultures (Tieman et al., 2017).

STRATEGIES DEPLOYED FOR IMPROVING GABA LEVELS

In recent years, many strategies have been implemented to identify or breed materials to enhance GABA levels in crop species. In tomato, one of the first approaches was classical breeding through the identification of promising materials with higher GABA contents by screening the natural diversity of tomatoes for the subsequent development of new improved lines by crossing those materials with elite cultivars. Saito et al. (2008) screened 61 tomato varieties, including 38 processing and 11 fresh-market cultivars, six wild species and six wild derivatives, for two years to identify naturally occurring GABA-enriched materials. However, even though an interesting variation in GABA content was found among the accessions, the results were poorly reproducible between the tested years, suggesting that GABA levels were highly influenced by the cultivation conditions.
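The GABA/glutamate trade-offs quantified earlier (e.g., 13.59 vs 1.16 µmol/g FW GABA and 1.26 vs 1.83 µmol/g FW glutamate for the SlGABA-T1-suppressed line) reduce to simple fold-change and percent-reduction arithmetic. A minimal sketch reproducing those figures from the reported values (the helper names are ours, not from the cited studies):

```python
# Sanity-check the quoted GABA fold-changes and glutamate reductions.
# All values (µmol/g FW) are those reported in the text for red-stage fruit.

def fold_change(line_value, wt_value):
    """How many times higher the line's content is vs the wild type."""
    return line_value / wt_value

def percent_reduction(line_value, wt_value):
    """Percent decrease in the line relative to the wild type."""
    return (1 - line_value / wt_value) * 100

# SlGABA-T1-suppressed line: GABA 13.59 vs 1.16; glutamate 1.26 vs 1.83
print(round(fold_change(13.59, 1.16), 1))       # 11.7-fold higher GABA
print(round(percent_reduction(1.26, 1.83)))     # ~31% less glutamate

# SlGAD3ΔC-overexpression line: GABA 28.56 vs 1.58; glutamate 0.15 vs 2.00
print(round(fold_change(28.56, 1.58)))          # ~18-fold higher GABA
print(round(percent_reduction(0.15, 2.00), 1))  # 92.5% less glutamate
```

Laid out this way, the trade-off is explicit: the ΔC overexpression line gains its roughly 18-fold GABA enrichment at the cost of nearly all (92.5%) of its umami-linked glutamate.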
In fact, the average GABA content in red mature fruit was 50.3 mg/100 g FW in 2005, when the cultivars were screened in an open field, and 66.8 mg/100 g FW in 2006, when they were tested in a plastic house. Nevertheless, the cultivar 'DG03-9' exhibited high GABA levels in both years and under saline stress (104.4 mg/100 g FW), showing its suitability to be used to breed new GABA-rich cultivars. Notably, the six wild accessions from three tomato-related species (S. pimpinellifolium [25.2 mg/100 g FW], S. cheesmanii [8.8-53.0 mg/100 g FW] and S. lycopersicum var. cerasiforme [11.0 mg/100 g FW]) exhibited low average GABA contents. Similar results were observed by Anan et al. (1996) screening accessions from S. peruvianum (9.8-10.1 mg/100 g FW), S. pimpinellifolium (34.2-49.7 mg/100 g FW) and S. hirsutum (25.5 mg/100 g FW), whereas accessions from S. esculentum showed appreciable GABA levels (52.4-107.7 mg/100 g FW). On the other hand, three of the six wild derivatives screened by Saito et al. (2008) that reported high average GABA contents (106.4-114.7 mg/100 g FW) were bred from S. chmielewskii. However, so far, the highest GABA content reported for a tomato wild relative was found in S. pennellii, at approximately 200 mg/100 g FW (Schauer et al., 2005; Takayama et al., 2017). Despite the few studies that have screened for natural variation in GABA contents in tomato germplasms, which are often difficult to compare due to the tissue or stage analyzed and/or the protocol used for the GABA measurements, a wide diversity can be found across cultivated, heirloom and wild materials that can be used for GABA breeding. Classical breeding has achieved quantum leaps in crop improvement but entails some drawbacks. The development of a new variety can take up to 10 years with a classical breeding program, although this can be shortened when coupled with molecular marker selection. This makes the process time- and resource-consuming.
In addition, the result is not always guaranteed when the aim is limited to the introgression of specific alleles or traits while maintaining the genetic background and fitness of the elite variety. Furthermore, when the donor parent is a wild relative, additional challenges must be overcome, such as linkage drag of undesirable traits, crossing barriers or offspring infertility (Prohens et al., 2017). For these reasons, alternative approaches were explored to overcome these difficulties. One of them was screening a TILLING population generated in the background of Micro-Tom (Saito et al., 2011). However, even though approximately 4,500 EMS lines were evaluated to isolate SlGAD3 mutant alleles, no significant mutations were found that translated into lines with enhanced GABA levels (Ezura et al., unpublished results). Fortunately, a wide spectrum of new technologies has emerged to sort out some of the disadvantages of conventional crossbreeding and induced mutagenesis by radiation or chemical agents. These are labeled NPBTs, which include the development of genetically modified organisms (GMOs) and genome editing. Even though there is no consensus or ultimate definition of what NPBTs are, it is an umbrella term for several techniques that make use of a genetic modification step where the final result does not include the presence of a foreign gene (i.e., a gene that is not present in the species or cannot be obtained by traditional crossbreeding from related species) (Schaart et al., 2016). Nevertheless, every country has a different regulatory definition of NPBTs, and the list of identified NPBT technologies varies greatly from one legislation to another, being constantly under revision and discussion by policymakers.
For the sake of brevity and because of the complexity and dynamic evolution of this topic, we recommend the following reviews for an overview of the wide range of NPBTs available and their regulation in several countries (Schaart et al., 2016; Purnhagen et al., 2018; Kleter et al., 2019; Hartung, 2020; Schiemann et al., 2020; Zhang et al., 2020). Although GMO technologies have been used since the 1980s, first-generation GMO approaches could not control the position of transgene insertion in recipient plants. Thus, multiple transgenic events must be performed and screened to select transgenic plants with the pursued trait and without undesired off-target effects (Qaim, 2020). Second- and third-generation sequencing platforms have allowed unprecedented knowledge of plant genomes and of the genomic regions controlling QTLs and major genes, fostering precision and speed in genome editing and breeding (Hickey et al., 2019). Since then, a new generation of mutation breeding techniques has started to target specific genomic regions, increasing specificity and reducing potential off-target effects along with saving time and resources. In fact, the greater specificity and precision of NPBTs are their main advantage over classical breeding and random mutagenesis, which do not allow for specific targeting or cannot control potential additional genetic changes that might be introduced due to the linkage associated with the latter. NPBTs allow for targeting the individual genes controlling traits, allowing preservation of the genetic background of an elite variety and avoiding the undesirable effects of linkage drag. In tomato, NPBTs have been used for understanding GABA metabolism and gene actions and for investigating how to enhance GABA levels in red ripe fruits. The latter was achieved by applying two different strategies: increasing SlGAD activities and consequently GABA biosynthesis, and decreasing or silencing SlGABA-T activity and thus reducing GABA degradation.
Although three copies of SlGADs and three of SlGABA-Ts were found in tomato, it is currently clear that SlGAD3 and SlGABA-T1 are mainly involved in GABA accumulation during fruit ripening. The first attempt to increase the GABA content using genetic engineering was made by Koike et al. (2013), who targeted SlGABA-T1 by an RNA interference approach under the control of the constitutive CaMV 35S promoter in Micro-Tom. The 35S::SlGABA-T1-RNAi-suppressed lines exhibited 1.3-fold (118.6 mg/100 g FW) higher GABA content than the WT at the mature green stage, 2.0-fold (126.8 mg/100 g FW) higher at the yellow stage and 6.8-fold (106.2 mg/100 g FW) higher at the mature red stage. SlGABA-T1 suppression reduced the catabolism of GABA to SSA in the ripening stage, limiting its degradation from the peak reached at the breaker stage. While in the WT the GABA content dropped by 83.1% from the mature green to the red stage, the decrease in the 35S::SlGABA-T1-RNAi lines was only 3% on average. However, as reported above, those plants exhibited severe abnormalities. Nevertheless, when the 35S promoter was replaced with E8, a strong inducible promoter specific to tomato fruit ripening, to avoid the systemic suppression of SlGABA-Ts by the CaMV 35S promoter, the E8::SlGABA-T1-RNAi-suppressed lines showed a WT-like phenotype with no evidence of dwarfism or infertility. Although the GABA levels of the E8::SlGABA-T1-RNAi lines were similar to those of the WT at the mature green stage (71.1-87.6 mg/100 g FW), their content was 2.5-fold higher than that of the WT (45.3-59.8 mg/100 g FW) at the red stage, dropping by 29.0% versus 72.4% in the WT, although their GABA content was almost half of that of the 35S::SlGABA-T1-RNAi lines. These results suggested that the systemic knockout of SlGABA-Ts, especially SlGABA-T1, leads to abnormal phenotypes, whereas a fruit-specific knockout, knockdown or less severe gene expression reduction produces normal phenotypes with enhanced GABA levels.
The opposite approach was attempted by Takayama et al. (2015), who overexpressed SlGAD3 by generating transgenic plants under the control of the CaMV 35S promoter in Micro-Tom. The OX-SlGAD3 overexpression lines exhibited 2.7- to 3.3-fold higher GABA contents than the WT at the mature green stage and 4.0- to 5.2-fold higher GABA contents at the red stage. Despite the much higher GABA content, the OX-SlGAD3 lines did not show significant visible phenotypic changes. To further increase GABA levels in red-ripe fruits, the same authors overexpressed the full-length coding sequence of SlGAD3 (SlGAD3-OX) and a version missing the same 87 nucleotides from the end of the C-terminal domain (SlGAD3ΔC-OX), replacing CaMV35S with the fruit-specific E8 promoter and the NOS terminator with the Arabidopsis heat shock protein 18.2 (HSP) terminator. At 10 days after the breaker stage, the SlGAD3ΔC-OX lines exhibited 11- to 12-fold higher GABA levels than the WT (237.1-268 mg/100 g FW) and almost double those of the SlGAD3-OX lines (123.7-154.6 mg/100 g FW). Although the latter displayed substantially higher mRNA levels than the previously generated OX-SlGAD3 lines (Takayama et al., 2015), probably due to the Arabidopsis HSP terminator, which is more effective than the NOS terminator in enhancing mRNA expression (Kurokawa et al., 2013), their GABA levels were similar. This result suggested that even though increased mRNA expression translates into a higher GABA content, the GABA content does not increase linearly with the mRNA level, and other factors, such as the C-terminal domain, are involved in regulating GABA levels. In fact, the SlGAD3-OX and SlGAD3ΔC-OX lines exhibited similar mRNA levels, but the GABA content was almost doubled in the latter, implying that the C-terminus also acts as an autoinhibitory domain in tomato.
However, even though the lines of both constructs did not display any morphological abnormalities, probably by virtue of replacing CaMV35S with the fruit-specific E8 promoter, the SlGAD3ΔC-OX lines exhibited a delay in ethylene production and ethylene sensitivity along with an orange-ripe phenotype that never turned completely red. In light of these results, Nonaka et al. (2017) further investigated the effects of the C-terminal region of SlGAD3 to breed an enhanced-GABA line without defects, targeting the autoinhibitory domain with the CRISPR/Cas9 system. After comparing the amino acid sequences of SlGAD orthologs from five species for the conserved tryptophan, lysine and glutamate residues involved in CaM binding, the 37th amino acid of the C-terminal domain was selected as the target to induce a premature stop codon in SlGAD3. This target was different from the 29th amino acid (87 nucleotides) selected by Takayama et al. (2017). The T1 regenerated plants of TG3C37 had a stop codon at 34, 36 or 40 amino acids in the C-terminal region, thus upstream of the autoinhibitory domain and close to the target at the 37th amino acid, and exhibited a GABA content at the red mature stage up to 15 times higher (125.73 mg/100 g FW) than that of the WT. Although some lines were slightly smaller than the WT, flowering and fruit yield were not affected, and no significant phenotypic defects were observed. Considering the absence of visible morphological and physiological abnormalities and the range of GABA levels obtained, these results were similar to those obtained by overexpressing the full-length coding sequence of SlGAD3 (SlGAD3-OX). The authors stated that even though targeted mutagenesis by CRISPR/Cas9 is as effective a strategy as the transgenic approach in increasing GABA contents, CRISPR/Cas9 could be more publicly accepted than conventional transgenesis in the near future. The latest attempt to develop improved GABA tomato lines was made by Li et al.
(2018) using a multiplex CRISPR/Cas9 vector with six gRNA cassettes to target the three SlGABA-Ts along with SlSSADH and CAT9, which was transformed into the Ailsa Craig cultivar. The edited plants were divided into six groups based on their mutation patterns, from single to quadruple mutated targets. However, only the single SlGABA-T1 and the double SlGABA-T1/SlGABA-T3 mutants set fruit, due to the severe infertility of the other combinations. The SlGABA-T1-edited lines exhibited 1.43-fold higher GABA levels (102.80 mg/100 g FW) than the WT at the mature green stage and 2.95-fold higher levels (73.83 mg/100 g FW) at the red stage. Even though those fruits did not differ in size, shape or color from the WT, the plants experienced abnormalities in leaf development and plant growth. Once again, these results demonstrated that SlGABA-T1 is deeply involved in GABA metabolism, but a severe reduction of its mRNA expression by suppression or mutation leads to deficient plants unsuitable for breeding.

ADVANTAGES OF THE USE OF NPBTs IN CROP BREEDING AND THEIR SOCIETAL ACCEPTANCE

Genome editing is a technology that enables precise mutagenesis only in targeted genes. Therefore, using this technology, we can significantly reduce the labor and time required to introduce desirable mutations, which is a challenge for conventional mutagenesis techniques using chemical or irradiation treatments. This property is a major advantage in breeding crops that have strong consumer preferences, require a large number of varieties within a single crop and have rapidly changing needs, because breeding more varieties requires more labor and time. Tomatoes are one such example. They are consumed all over the world, and the preferred shapes, colors, flavors and uses vary by region. Since genome editing technology can directly reproduce useful genetic mutations in breeding parents, it allows the traits of familiar varieties to be improved more rapidly and efficiently.
In the case of improving the GABA content in tomato, genome editing would be effective not only in experimental cultivars but also in commercial cultivars. Indeed, we succeeded in reproducing the mutation that confers an increased GABA content in fruit in several commercial cultivars by applying the CRISPR/Cas9 system as used in experimental cultivars (Nonaka et al., 2017). It took only half a year to obtain null-segregant (T-DNA-free) plants with homozygous mutations. Even though an additional selfing or backcross step would be needed to select elite lines, as in conventional breeding, this can considerably reduce the time to develop breeding material compared to conventional crossbreeding, which takes 3-5 years. However, NPBTs are frequently confused or associated with the first generation of modified crops, in which foreign DNA was used to develop new and improved varieties, leading society and policymakers to perceive health and environmental risks associated with their cultivation and consumption. Even though three decades of GMO cultivation and dozens of studies have reported no more human health risks than those posed by conventional agriculture (Nicolia et al., 2014; Zaidi et al., 2019), the general perception of GMO technologies remains negative, with large differences among countries. The emergence of genome editing technologies such as CRISPR/Cas has generated great expectations among scientists, breeding companies and the food industry regarding the possibility of changing consumer perceptions of food biotechnology. However, even though the societal acceptance of NPBTs is higher than that of the first GMO crops, "food technology neophobia" currently still affects many consumers, and not only in high-income countries (Farid et al., 2020; Siegrist and Hartmann, 2020).
Often, this confusion is promoted by the same lawmakers that, as in the case of the European Court of Justice, legislatively equate genome-edited crops with the first generation of GMOs, negatively influencing the public perception of NPBT products (Callaway, 2018; Zaidi et al., 2019). Nevertheless, there are grounds for cautious optimism due to the policy relaxation of some countries that are considering opening their markets to NPBT products, such as China and Russia. Currently, in seven countries (Argentina, Brazil, Chile, Colombia, Israel, Paraguay and the US, plus India, Uruguay, Honduras, Guatemala and El Salvador, which are under consideration), genome-edited crops that do not incorporate foreign DNA are regulated as conventional plants with no additional restrictions, while Japan, Canada and Australia regulate genome-edited products more strictly than conventional ones but less stringently than GMOs that incorporate DNA from other species (https://crispr-gene-editing-regs-tracker.geneticliteracyproject.org/). In contrast, other countries such as New Zealand, Mexico or the EU strictly regulate NPBT products, essentially banning their development and introduction.

ALTERNATIVE PROSPECTS FOR GABA IMPROVEMENT USING NPBTs

New NPBT approaches and techniques have been launched at a frantic pace in the last two decades, especially for genome editing. However, the first generation of gene-editing methods, such as zinc-finger nucleases or TALENs, has been totally eclipsed by the CRISPR/Cas9 system. Since the first evidence of its powerful applications in 2012 (Jinek et al., 2012), the CRISPR/Cas9 system and its subsequent variants have become the most widely used NPBTs due to their simplicity, reliability and versatility.
Many of these variants are focused on improving the genome editing process, from the first monoguide CRISPR/Cas9 versions towards multiplexed and multilocus strategies (Vad-Nielsen et al., 2016; Zetsche et al., 2017), enhanced Cas enzymes and new ribonucleoprotein complexes (Bernabe-Orts et al., 2019; McCarty et al., 2020) and cleavage patterns (Komor et al., 2016; Shimatani et al., 2017), among others. Apart from the performance of the genome editing system per se, interesting CRISPR/Cas9 applications and approaches have been proposed and validated in plants. One of the most promising is cis-regulatory region engineering (cis-engineering), which targets the noncoding sequences controlling gene transcription. Many examples have reported that mutations in cis-regulatory elements (CREs) produce significant phenotypic and morphological changes that have been selected during domestication (Meyer and Purugganan, 2013; Swinnen et al., 2016). In tomato, changes in CREs and promoter regions translated into elongated (SlSUN) and larger fruits (FW2.2, FW3.2, SlWUS) or fruits with improved β-carotene contents (Slcys-B) (van der Knaap et al., 2014; Li et al., 2020), to cite just a few relevant examples. However, despite the great potential of this approach, the vast majority of CRISPR/Cas9 studies currently focus on targeting coding sequences for null-allele editing or on controlling transcription by activation (CRISPRa) or inhibition (CRISPRi), and only 15 studies so far have reported successful applications of cis-engineering (Li et al., 2020). One of them was successfully conducted in tomato by cis-engineering the promoters of genes controlling fruit size, inflorescence branching and plant architecture, achieving multiple cis-regulatory alleles with a continuum of quantitative variation (Rodríguez-Leal et al., 2017). The scarcity of cis-engineering studies might be due to the lack of information on CRE functions and the regulatory complexity and redundancy of transcriptional control.
Additionally, in contrast to targeting the coding regions that could produce substantial and pleiotropic effects, the effects of cis-regulatory allele targeting are often more subtle and discrete (Wittkopp and Kalay, 2012). Perfect examples of detrimental pleiotropic effects in GABA studies were reported by Koike et al. (2013) and Li et al. (2018) who suppressed and edited, respectively, the coding sequences of SlGABA-Ts. In both studies, the regenerated plants showed multiple abnormalities, making them useless for breeding despite the higher GABA levels achieved. To overcome these drawbacks, the authors of this review are currently cis-engineering the promoter region of SlGABA-T1 (Figure 2). The aim of this study was to modulate SlGABA-T1 gene expression to achieve the best balance between reduced enzyme activity and a higher GABA content without detrimental pleiotropic effects that cause plant abnormalities. This can be theoretically achieved by producing many cis-engineering promoter alleles that exhibit a continuum of gene expression and CRE combinations. However, like most plant species, almost no knowledge has been generated regarding the structure of promoters and other regulatory sequences in tomato, and no prior information is currently available for CREs and their interactions in the promoter of SlGABA-T1. Even though prior knowledge would facilitate and speed the process, cis-engineering could be implemented if a multiplexed CRISPR/Cas9 system was developed (Li et al., 2020). For this, we assembled a vector containing four gRNAs that was designed within a region 2 kbp upstream of the SlGABA-T1 start codon and transformed in Micro-Tom ( Figure 2A). Furthermore, the same vector was also used to transform the high GABA line SlGAD3DC37 developed by Nonaka et al. (2017) to exploit the potential synergistic effects between the cis-engineering promoters in SlGABA-T1 alleles and SlGAD3 without the autoinhibitory domain ( Figure 2B). 
A complementary approach could also involve cis-engineering the promoter of SlGAD3DC37, combining the effects of removing the autoinhibitory domain and modulating the gene expression level by cis-engineering (Figure 2C). Apart from achieving improved GABA lines, this study may provide useful information on GABA gene promoters that will open the path to future strategies that are currently unavailable. For example, if CREs and their interactions with regulatory elements are sufficiently characterized, the multiplex unspecific gRNA approach could be replaced by site-specific promoter editing. A step further in this direction would be the upgraded precision from genome editing to base editing or the groundbreaking prime editing strategy that has recently been demonstrated to be feasible in higher plants (Rees and Liu, 2018; Anzalone et al., 2019; Lin et al., 2020). However, other NPBT strategies could be more challenging to implement for GABA breeding until their technical limitations and low efficiency are overcome. Among those, a fascinating approach is promoter insertion or promoter swapping, which allows the introduction or replacement of CREs or entire promoters. However, these approaches rely on the homology-directed repair (HDR) pathway, which has exhibited low efficiency in plants (Chen et al., 2019). Nevertheless, in tomato, Čermák et al. (2015) succeeded in inserting a 35S promoter upstream of the gene controlling anthocyanin biosynthesis (ANT1), resulting in enhanced anthocyanin accumulation.

FINAL REMARKS

The ubiquity of GABA across the kingdoms and its involvement in many fundamental pathways has captivated the interest of scientists for several decades. More recently, after uncovering its potential benefit for human health, the food industry and plant breeders are devoting efforts and resources to improving the GABA levels in food matrices and crops.
In plant breeding, NPBTs have demonstrated a higher efficiency and precision than classical crossbreeding for achieving this goal, and studies using induced mutagenesis unraveled the genetic basis of GABA regulatory pathways, some of which also led to an increased GABA content. Nevertheless, large room for improvement remains that can be harnessed via the new generation of NPBTs that are emerging at an incredible speed. The progress achieved in tomato could potentially transfer to other crops, taking advantage of the knowledge generated to shorten GABA breeding programs.
Public Perceptions of the Emerging Human Monkeypox Disease and Vaccination in Riyadh, Saudi Arabia: A Cross-Sectional Study

The human monkeypox disease is caused by the monkeypox virus (MPXV), which is a zoonotic disease. In the year 2022, the prevalence of monkeypox cases swiftly increased worldwide and the disease has now been declared a global public health emergency. The present study aimed to assess the public’s perceptions and knowledge of and attitudes toward monkeypox in Riyadh, Saudi Arabia. This questionnaire-based cross-sectional study was conducted from 15 May to 15 July 2022. The participants’ perceptions, knowledge, and attitudes were collected via a 28-item-based questionnaire survey. The survey was based on 1020 participants (554 (54.3%) were females, and 466 (45.7%) were males). The results reveal that out of 1020 participants, 799 (78.3%) respondents believed that monkeypox disease has developed into a pandemic situation, and 798 (78.2%) suggested that the disease is most common in Western and Central Africa. Further analysis shows that 692 (67.8%) respondents agreed that monkeypox cases are increasing worldwide, 798 (21.8%) believed that monkeypox is commonly transmitted through direct contact, and 545 (53.4%) of respondents reported that it is easily transmitted from human to human. Moreover, 693 (67.9%) participants mentioned that monkeypox disease is spreading more widely as people travel from one country to another, while 807 (79.1%) participants were aware that smallpox and monkeypox have similar clinical features. Furthermore, the majority of participants (p = 0.033) agreed that health officials should start a vaccination campaign to combat monkeypox. Regarding preventive measures and vaccination campaigns, 641 (62.8%) participants suggested that health officials should take public preventive measures and 446 (43.7%) recommended that health officials start vaccination campaigns against monkeypox.
The knowledge of human monkeypox among the general population in Riyadh, Saudi Arabia was satisfactory for all ages, genders, levels of education, and economic groups. Moreover, the majority of participants proposed adopting preventive measures and starting a vaccination campaign to combat monkeypox disease. The knowledge of monkeypox in the public domain is a key factor to improve the public's capacity to minimize the disease burden and fight against viral infectious diseases at regional and global levels.

Introduction

The human monkeypox disease, known simply as monkeypox disease, is caused by the monkeypox virus (MPXV) and is a zoonotic infectious disease, frequently found in African countries [1,2]. The MPXV belongs to the "genus Orthopoxvirus, subfamily Chordopoxvirinae and family Poxviridae". The genomes of these viruses are ≈200 kb long, with replication characteristics, and are involved in host range determination and pathogenesis [3][4][5]. The possible routes of MPXV transmission are animal-animal, animal-human, and human-human. Direct or indirect contact with bodily fluids, respiratory droplets, the skin lesions of an infected person, and patients' contaminated possessions, bedding, clothing, and environment have been associated with inter-human transmission [4,6]. The transmission of disease may be due to close physical, skin-to-skin, or face-to-face interaction [7,8]. MPXV infection can also be transmitted through bites or scratches from infected animals, and the contamination of raw meat [9]. Moreover, rodents and squirrels may also play a role in the transmission of MPXV to humans [9]. The MPXV was first found in 1958 among a group of monkeys housed in a research institute in Copenhagen, Denmark [10]. About 12 years later, in September 1970, the MPXV was identified for the first time in humans in the Democratic Republic of Congo [11,12].
In the new millennium, in the year 2003, the first cases of monkeypox disease were reported from endemic to non-endemic countries [13,14]. The World Health Organization has stated that MPXV disease is a global emergency [15]. This year, from 1 January 2022 to 19 August 2022, MPXV has swiftly spread from endemic to non-endemic regions [16], involving 94 countries and infecting 41,358 people; 387 cases were reported from seven endemic African countries and 40,971 cases in 87 non-endemic countries in Europe, America, Australia, and the Asian continent [14]. Currently, there is no specific approved vaccine for MPXV. However, vaccination against the smallpox virus provided cross-protection against MPXV [17]. There are three generations of vaccines used against the smallpox virus [18]. The first-generation vaccine was used against smallpox until 2008. This vaccine was highly effective in preventing smallpox and played a vital role in the eradication of smallpox all around the world. In 1980, the WHO declared the eradication of smallpox, and this vaccine was discontinued [7]. The second-generation vaccine, the live attenuated tissue culture-derived vaccinia virus vaccine, has been used for populations who might be at high risk for orthopoxvirus. The third-generation vaccine, the modified vaccinia Ankara-Bavarian Nordic (MVA-BN), was approved for human use in Canada and Europe [7]. In the present situation of global emergency, public awareness about monkeypox disease is vital for educating people to fight against such infectious diseases. Therefore, the present study aimed to assess the public perceptions and knowledge of monkeypox in Riyadh, Saudi Arabia.

Study Design and Settings

This questionnaire-based cross-sectional survey was conducted in the "Department of Physiology, College of Medicine, King Saud University, Riyadh, Saudi Arabia", from 15 May to 15 July 2022.
Study Area Demographics The total population of Saudi Arabia is around 35.8 million; the male population is 20.70 million and the female population is 15.14 million, with a median age of 32.4 years. Riyadh is the largest city and the capital of Saudi Arabia. The city consists of five regions with a total population of about 7.54 million people, comprising 57% males and 43% females. More Saudi women are studying in universities than men; there are about 551,000 women and 513,000 men studying for bachelor's degrees in universities. Study Sample Size The targeted study participants were Saudi and non-Saudi male and female residents in the capital city of Riyadh, Saudi Arabia. A power formula was used to calculate the sample size, based on a 50% population proportion, a 95% confidence interval, and a 5% margin of error. For this study, a sample size of about 800 people was required, but the number of participants who responded and were included in the study analysis was 1020. Study Survey Procedure and Instrument The study participants were invited to join the questionnaire survey using social media platforms (via WhatsApp and emails). The study objectives and a polite request for their consent and voluntary participation were presented at the beginning of the questionnaire. The study variables were socio-demographic characteristics, age, gender, occupation, level of education, socioeconomic status, and allied questions about knowledge, attitude, and perceptions regarding monkeypox disease. The study was based on a well-designed questionnaire [19] issued to the targeted population of Riyadh, Saudi Arabia. The survey participants represent the entire Riyadh region of Saudi Arabia. After receiving permission, the questionnaire [19] was slightly modified and used for the data collection. The reliability and technical issues of the questionnaire were tested among 10 participants, and their feedback was taken in the pilot survey test. 
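The sample-size calculation described above is consistent with the standard Cochran formula for estimating a single proportion; a minimal sketch (assuming z = 1.96 for a 95% confidence interval) is:

```python
import math

def cochran_sample_size(p: float, margin_of_error: float, z: float = 1.96) -> int:
    """Minimum sample size for estimating a population proportion p with the
    given margin of error; z = 1.96 corresponds to a 95% confidence interval."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Parameters reported in the study: 50% proportion, 95% CI, 5% margin of error.
print(cochran_sample_size(p=0.50, margin_of_error=0.05))  # 385
```

Note that the formula alone yields 385 for these inputs; the study's larger target of about 800 presumably reflects an inflation for expected non-response or design effect (an assumption, not stated in the text).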
After ethical approval, the questionnaire was distributed online through email, Google, and social media platforms in Riyadh, Saudi Arabia. The questionnaire was distributed to 1300 participants; 1020 (78.46%) participants responded to the survey, and 280 (21.53%) did not respond. Among the 1020 participants, 554 (54.3%) were females, and 466 (45.7%) were males. The names of the participants were not collected to maintain confidentiality. The invitations to participate in the online survey were distributed by the social media program WhatsApp and e-mails. To achieve a better response, one reminder was also sent as a follow-up after the initial message. An introductory page consisted of information on the research objectives, demographic information, and the expected benefits. The survey was estimated to take about 10 min to complete. The distribution of the questionnaire was checked for duplicates, and the raw data were extracted and imported for analysis. The questionnaire consisted of 28 questions to assess the knowledge, attitudes, and perceptions regarding monkeypox. The questionnaire was developed in the national language of Arabic, and also in the English language.

Ethical Considerations

An introductory page was provided, informing the participants that they could exit the survey at any point, and before enrolling, they were asked to provide their consent to participate. This study was approved by the Ethics Committee Institutional Review Board.

Statistical Analysis

The results were examined using the SPSS software, version 26.0 for Mac. The demographic variables of age, gender, occupation, education, and socioeconomic status were reported using frequency and percentage. The response score was reported using mean and standard deviation. The comparisons between the variables were analyzed using independent sample t-tests, ANOVA, and chi-squared tests. A p-value of <0.05 was considered significant.
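The chi-squared tests of association used in the analysis can be illustrated with a minimal pure-Python sketch; the 2×2 counts below are invented for illustration and are not taken from the study's tables:

```python
def chi_square_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table
    [[a, b], [c, d]] of observed counts (no continuity correction)."""
    (a, b), (c, d) = table
    total = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, observed in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = afraid of monkeypox (yes/no),
# columns = recommends a vaccination campaign (yes/no).
stat = chi_square_2x2([[260, 152], [186, 422]])
print(round(stat, 2))  # 105.51 -> far above the 3.84 critical value (df = 1, alpha = 0.05)
```

In practice the paper's analysis was done in SPSS; the sketch only shows what a chi-squared test of independence computes before the p-value lookup.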
Table 1 represents the demographic profiles of the respondents. The results indicate that the majority of respondents were in the age group of 21-30 years old (558, 54.7%), followed by 15-20 years old (388, 38.0%); 554 (54.3%) were female and 466 (45.7%) were male. Regarding the education level of the respondents, the largest group, 608 (59.6%), had bachelor's degrees, while 341 (33.4%) had middle- or high-school qualifications, 38 (3.7%) had master's degrees, and 6 (0.6%) had a Ph.D. Additionally, the results indicate that most of the respondents (483, 47.4%) were from the central region and 117 (11.5%) were from the eastern region of the capital city of Riyadh. In terms of the professional level of the participants, most of them were students (644, 63.1%), followed by 79 (7.7%) who were working in the private sector, 47 (4.6%) who had government jobs, and 33 (3.2%) who were health practitioners. The majority of the respondents were single (932, 91.4%), while 69 (6.8%) were married and 18 (1.8%) were divorced. Their socio-economic status shows that 596 (58.4%) respondents had a monthly income of less than SAR 3000, followed by 81 (7.9%) who were earning SAR 9000 or more (Table 1).

The Respondents' Awareness of Monkeypox

The respondents were presented with different statements to assess their level of understanding and information about monkeypox. The results reveal that 799 (78.3%) respondents believed that monkeypox disease has become a pandemic, 798 (78.2%) believed that this disease is most common in Western and Central Africa, and 692 (67.8%) agreed that monkeypox cases are increasing worldwide. Upon further inquiry, 798 (21.8%) believed that monkeypox disease is commonly transmitted through direct contact, and 545 (53.4%) respondents mentioned that monkeypox disease is easily transmitted from human to human.
As per the viewpoint of 693 (67.9%) people, monkeypox disease is spreading widely as people travel from one country to another. Moreover, 807 (79.1%) respondents were aware that smallpox and monkeypox have similar symptoms, while about half of the respondents (491, 48.1%) were aware of the clinical symptoms of monkeypox disease and a large number of respondents (889, 87.2%) knew that a skin rash is the main symptom of the disease. There is a distinction between smallpox and monkeypox, and 585 (57.4%) respondents were aware of it (Figure 1).

Preventive and Recommendation Measures

This section reports the precautions taken by the respondents to avoid getting infected with monkeypox (Table 4, Figure 2). First of all, the respondents were asked if they were afraid of contracting monkeypox; in response to this query, 412 (40.4%) answered yes. Moreover, 141 (13.8%) respondents were afraid of visiting family or friends due to the chance of contracting monkeypox, while the majority (879, 86.2%) would not stop visiting anyone. Furthermore, 399 (39.1%) respondents would not travel to other countries as they were afraid of contracting monkeypox disease, and 228 (22.4%) respondents suggested taking hygienic preventive measures to prevent this disease (Table 4).

Perceptions of a Vaccination Campaign against Monkeypox

The chi-squared test was applied to check the association between the demographic variables and the respondents' recommendation that health officials should start a vaccination campaign. The results reveal that there was no relationship between this recommendation and the respondents' age (p = 0.110), gender (p = 0.216), occupation (p = 0.560), or socioeconomic status (p = 0.672). However, there is a statistically significant association between the recommendation and education level (p = 0.033) (Table 5).

Preventive Measures and Recommendations for Monkeypox Vaccination

The data were further analyzed for the relationship between the preventive measures taken and the recommendation for monkeypox vaccination. The results indicate that respondents who were afraid of the risk of monkeypox disease were more likely to recommend vaccination, as they did not want to suffer from the virus and considered the vaccine essential for the prevention of the disease (p < 0.01). The results reveal that 63.3% of respondents were afraid of the virus and would recommend that health officials start a vaccination program for the disease.
Moreover, if people are taking precautions for the hygienic prevention of monkeypox, they would prefer to receive the vaccine as well, as hygiene is important but is not the only preventive measure against the virus. Furthermore, 68.9% of respondents took preventive measures and recommended that health officials start a vaccination program; the two conditions were associated with each other (p < 0.01). Additionally, monkeypox disease has caused a fear of visiting family and friends, as people want to protect themselves from the virus; it was observed that 77.3% of respondents had limited their visits to family and friends as a result and recommended vaccination against MPXV; these two situations were linked with each other (p < 0.01). Monkeypox fears have also limited travel to other countries; 57.6% of respondents agreed that they were not travelling to other countries because they were fearful of catching the virus and recommended that health officials start a vaccination program against monkeypox disease; these two variables have an association with each other (p < 0.01) (Table 6).

Discussion

Since early May 2022, the human monkeypox outbreak across many countries has raised concerns about a possible change in the pattern of monkeypox transmission, and that the disease now poses a greater global threat [15]. The transmission of monkeypox disease is not limited to close contact; it can also occur through respiratory droplets, direct or indirect contact with bodily fluids, certain possessions, the skin lesions of an infected person, and a contaminated patient's environment [4,6]. The present study findings suggest that the general population in Riyadh, Saudi Arabia has a satisfactory level of knowledge about the human monkeypox disease. The majority of participants proposed adopting preventive measures and initiating a vaccination campaign to eradicate monkeypox disease at regional and global levels.
The present study findings reflect high levels of endorsement of public perceptions about the emerging monkeypox threat. Monkeypox disease was declared a global emergency on 23 July 2022 by the WHO [16], and alarm bells began to ring worldwide, as a reminder of the similar situation that arose immediately before the spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19, which was declared a global pandemic in March 2020 [20]. Meo et al. (2022) [15] reported that the number of cases of monkeypox is now surging drastically. From what started as an endemic zoonotic disease restricted to Central and West African countries, it was not until earlier this year that the number of cases began to climb in countries where monkeypox had never existed before, indicating that the disease is now becoming a global health concern. Monkeypox has now spread to around 94 countries across the world. With an emergency situation declared, the international community must immediately act to prepare for a possible pandemic, so that this time the global healthcare system does not lack sufficient supplies and is not taken by surprise, as it was in 2020 when the COVID-19 pandemic first spread [20]. Harapan et al. (2020) [19] reported that knowledge regarding monkeypox among a group of general practitioners in Indonesia was low, although knowledge about monkeypox is essential to enhance the profession's capacity to respond to human monkeypox prevalence. In another study, Sallam et al. (2022) [21] reported that knowledge regarding the emerging monkeypox disease among 615 university students in Jordanian health schools was unsatisfactory. About 26.0% of the respondents knew that vaccination could help to prevent monkeypox. Age was associated with better human monkeypox (HMPX) knowledge. Our study, with a sample size of 1020, had satisfactory results, with more than half of the respondents answering most questions appropriately.
About 71.4% of participants knew that smallpox vaccinations can be used to protect against monkeypox. The knowledge of human monkeypox among the general population in Riyadh, Saudi Arabia, was satisfactory for all ages, genders, levels of education, and economic groups. Riccò et al. (2022) [22] conducted a study in Italy that involved 566 participants and reported that the knowledge status of its participants was quite unsatisfactory, with substantial knowledge gaps on all aspects of monkeypox. In our study, 71.4% of respondents were aware of the smallpox vaccine's role in protecting against monkeypox, and only 58.6% of respondents in the Italian study were somewhat in favour of implementing variola vaccinations to prevent monkeypox. Nonetheless, in terms of knowledge about monkeypox and a positive attitude toward future monkeypox vaccines, it is essential to educate communities about such emerging viral diseases. An important tool for dealing with a health emergency crisis is to ensure the education of the masses regarding how to approach, deal with, and protect oneself if nearby people become infected. Providing essential knowledge ensures that the public takes the necessary precautions themselves and relieves the burden on the healthcare authorities by limiting the spread and surging of the disease. These steps must be taken in adequate time before the onset of its general spread; analyzing the pre-existing knowledge level of the public is the most important way to combat infectious diseases. Study Strengths and Limitations The strength of this study is that this is among the first studies to correlate the public's knowledge of the emerging monkeypox outbreak in Riyadh, Saudi Arabia. The results of this study may be helpful in providing awareness programs to further improve the public's knowledge of virus emergence and diseases. The sample size was suitable for representing the general public's perceptions. 
The limitation of this study was that the data were collected only from the capital city of the country; it would be more appropriate to collect additional data from other cities in the country.

Recommendations

It is essential to educate the public and provide timely advice to increase public awareness of monkeypox disease through lectures, seminars, and the involvement of electronic and print media. We also recommend the implementation of a "One Health approach", a multidisciplinary, multi-sectoral, collaborative, and transdisciplinary approach at the inland, regional, national, and global levels, with the single objective of achieving ideal health outcomes and recognizing the interconnection between people, animals, plants, and the environment. Moreover, the early diagnosis of patients, rapid notification to health authorities about suspected cases of MPXV, and the implementation of public intervention measures are decisive in eradicating the disease. Furthermore, a smallpox vaccine campaign and the development of antiviral drugs to treat this neglected tropical disease are highly recommended.

Conclusions

The knowledge of human monkeypox among the general population in Riyadh, Saudi Arabia was satisfactory for all ages, genders, levels of education, and economic groups. The majority of participants showed a positive attitude and proposed adopting preventive measures and starting a vaccination campaign to combat the spread of monkeypox disease. The knowledge of monkeypox disease among the public is a key factor to improve the public's capacity to minimize the disease burden and fight against viral infectious diseases at regional and global levels.
Host specificity of Enterocytozoon bieneusi genotypes in Bactrian camels (Camelus bactrianus) in China

Background

Enterocytozoon bieneusi is an obligate, intracellular fungus and is commonly reported in humans and animals. To date, there have been no reports of E. bieneusi infections in Bactrian camels (Camelus bactrianus). The present study was conducted to understand the occurrence and molecular characteristics of E. bieneusi in Bactrian camels in China.

Results

Of 407 individual Bactrian camel fecal specimens, 30.0% (122) were E. bieneusi-positive by nested polymerase chain reaction (PCR) based on internal transcribed spacer (ITS) sequence analysis. A total of 14 distinct E. bieneusi ITS genotypes were obtained: eight known genotypes (EbpC, EbpA, Henan-IV, BEB6, CM8, CHG16, O and WL17) and six novel genotypes (named CAM1 to CAM6). Genotype CAM1 (59.0%, 72/122) was the most predominant genotype in Bactrian camels in Xinjiang, and genotype EbpC (18.9%, 23/122) was the second-most predominant genotype. Phylogenetic analysis revealed that six known genotypes (EbpC, EbpA, WL17, Henan-IV, CM8 and O) and three novel genotypes (CAM3, CAM5 and CAM6) fell into the human-pathogenic Group 1. Two known genotypes (CHG16 and BEB6) fell into the cattle host-specific Group 2. The novel genotypes CAM1, CAM2 and CAM4 clustered into Group 8.

Conclusions

To our knowledge, this is the first report of E. bieneusi in Bactrian camels. The host-specific genotype CAM1 was the predominant genotype, which plays a negligible role in the zoonotic transmission of E. bieneusi. However, the second-most predominant genotype, EbpC, has greater zoonotic potential.

Background

Microsporidia are a diverse group of emerging obligate intracellular eukaryotic fungi, and there are approximately 1300 microsporidian species in 160 genera [1]. To date, there are at least 14 microsporidian species reported to be infectious to humans [2].
Enterocytozoon bieneusi is the most frequently detected species in humans [3], as well as in domestic animals and wildlife [4], and even in environmental water samples [5]. More than 200 E. bieneusi genotypes have been identified in humans and animals by polymerase chain reaction (PCR) based on ribosomal internal transcribed spacer (ITS) gene sequence analysis [2,6]. Molecular phylogenetic analysis has shown that all E. bieneusi ITS genotypes are clustered into nine large groups, including the potentially zoonotic group 1, and some host-specific groups (Group 2 to Group 9) [7]. The Bactrian camel (Camelus bactrianus) was the major means of transportation on the ancient Silk Road. Today, the population of Bactrian camels in China has been estimated at 242,000, most of which are domesticated in desert and semi-desert areas of northwestern China and play an important role in the livelihood of pastoralists through providing milk and meat [8]. There are some reports of intestinal pathogen infections in camels and Bactrian camels in the Middle East countries and China, such as Eimeria spp. and Cryptosporidium spp. [9,10]. However, E. bieneusi infection has not been previously reported in Bactrian camels. This study was undertaken to better understand the prevalence of E. bieneusi in Bactrian camels and assess the host specificity of E. bieneusi infections in Bactrian camels in China. Specimen collection A total of 407 individual fresh fecal specimens from Bactrian camels were collected from 18 different grazing Bactrian camel groups in 11 collection sites of Xinjiang Uygur Autonomous Region (hereinafter referred to as Xinjiang) of northwestern China. Only one specimen was collected per animal. These specimens were collected during August and September of 2013 and from July 2016 to July 2017 ( Table 1). The grazing Bactrian camel groups were kept outdoors and shared pastures with cattle, sheep, goats and wild animals, and each group had approximately 30-300 animals. 
After animal defecation, about 50-100 g of each fresh specimen was collected immediately from the ground using sterile gloves. Each specimen was collected in a plastic container and marked with the specimen number and site. The specimens were transported to the laboratory and stored in 2.5% (w/v) potassium dichromate solution at 4°C before DNA extraction.

DNA extraction and PCR amplification
Approximately 200 mg of each fecal specimen was washed at least three times with distilled water by centrifugation at 5000× g for 5 min to remove the potassium dichromate. DNA was extracted using the E.Z.N.A.® Stool DNA Kit (Omega Bio-tek Inc., Norcross, GA, USA) according to the manufacturer's instructions. For E. bieneusi screening, nested PCR assays were used to amplify an rRNA gene fragment containing the entire internal transcribed spacer (ITS) [6]. Each specimen was analyzed in duplicate using positive and negative controls. The secondary PCR products were examined by electrophoresis in a 1.5% agarose gel and visualized after staining with GelRed™ (Biotium Inc., Hayward, CA, USA).

Sequencing and phylogenetic analysis
The positive secondary PCR amplicons were sent to a commercial company (GENEWIZ, Suzhou, China) for sequencing. Sequence accuracy was confirmed by bidirectional sequencing, and the sequences obtained were aligned with reference sequences downloaded from GenBank to determine the genotypes, using the program ClustalX 2.0 (http://www.clustal.org/). The genotypes of E. bieneusi isolated in this study were compared with known E. bieneusi ITS genotypes by a neighbor-joining analysis in the Mega 5 program [6]. A bootstrap analysis with 1000 replicates was used to assess the robustness of the clusters. The established nomenclature system was used in naming the E. bieneusi ITS genotypes [11].

Table 1 The infection status of E. bieneusi and genotypes in Bactrian camels in Xinjiang, China
Nucleotide sequence accession numbers
The nucleotide sequences reported in this paper have been submitted to the GenBank database at the National Center for Biotechnology Information under the accession numbers MG602791-MG602796.

Statistical analysis
The chi-square test was used to compare the prevalence of E. bieneusi infections and the distributions of the predominant genotypes. Differences were considered significant at P < 0.05.

To the best of our knowledge, this is the first report of E. bieneusi in Bactrian camels, and the pathogen is widespread in Xinjiang, northwestern China. In China, the average prevalence of E. bieneusi in animals ranges from 0.9% (4/426) in rabbits [12] to 45.6% (426/934) in pigs [13]. However, E. bieneusi infection has only been reported in some animals in northwestern China (Table 2), with the average prevalence ranging from 1.1% (4/353) in white yaks [14] to 47.8% (22/46) in sheep [2]. In Xinjiang, only dairy calves [15] and grazing horses [16] have previously been reported to have E. bieneusi infections, with prevalences of 16.5% (85/514) and 30.9% (81/262), respectively. The high prevalence in Bactrian camels found in this study may be the result of free feeding and drinking water, mixed grazing with cattle, sheep, goats and other animals in the same pastures, and poor veterinary services.

The phylogenetic analysis of the ITS genotypes revealed the following clusters: Group 1, Group 2 and Group 8. The six known genotypes (EbpC, EbpA, WL17, Henan-IV, CM8 and O) and three novel genotypes (CAM3, CAM5 and CAM6) identified in this study fell into the human-pathogenic Group 1 (Fig. 1), the group of major zoonotic potential, suggesting that Bactrian camels play a potential role in E. bieneusi transmission to humans [11]. In contrast, the two known genotypes CHG16 and BEB6 fell into the cattle host-specific Group 2. The three novel genotypes CAM1, CAM2 and CAM4 clustered into Group 8 (69.7%, 85/122) (Fig. 1), suggesting that the host-specific genotype CAM1 in Bactrian camels exhibits less zoonotic potential than the genotypes clustered into the human-pathogenic group.

Fig. 1 Phylogenetic relationships of the E. bieneusi genotypes identified in this study and other reported genotypes. The phylogeny was inferred with a neighbor-joining analysis of the internal transcribed spacer (ITS) sequences based on distances calculated with the Kimura two-parameter model. Bootstrap values > 50% from 1000 replicates are shown at the nodes. The genotypes detected in this study are shown with triangles; previously known genotypes observed in this study are marked with open triangles and the new genotypes are indicated by filled triangles.

In previous studies, E. bieneusi genotypes EbpC and EbpA were reported in humans and various animals and were also the predominant genotypes in reports of humans and pigs in China [13,17]. Genotypes EbpC and EbpA were the most common E. bieneusi genotypes in grazing horses in Xinjiang [16], and genotype EbpC was also identified in dairy calves in Xinjiang [15]. Similarly, the zoonotic E. bieneusi genotypes EbpC and EbpA were identified in Bactrian camels in the present study. However, there are no published reports of genotypes EbpC and EbpA in animals in Gansu, Ningxia, Qinghai and Shaanxi, northwestern China (Table 2). Further investigations of the epidemiology and host specificity of E. bieneusi in humans and other animals in Xinjiang might be informative.

Conclusions
The present study demonstrated a widespread occurrence of E. bieneusi in Bactrian camels in Xinjiang, China. The host-specific genotype CAM1 was the most predominant genotype, which plays a negligible role in the zoonotic transmission of E. bieneusi. The second-most predominant genotype, EbpC, in addition to other genotypes of zoonotic potential, was also commonly identified in Bactrian camels in this study. Bactrian camels could serve as a vector for E. bieneusi transmission to humans and other animals, and vice versa.
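The chi-square comparisons described in the statistical analysis can be reproduced directly from the counts reported in the text. The sketch below is illustrative, not the authors' analysis script: it computes the Pearson chi-square statistic (without continuity correction) for a 2×2 table comparing the prevalence found here (122/407) with that previously reported for dairy calves in Xinjiang (85/514), using only the Python standard library.

```python
def chi_square_2x2(pos_a, n_a, pos_b, n_b):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table built from two positive/total counts."""
    a, b = pos_a, n_a - pos_a          # group A: positive, negative
    c, d = pos_b, n_b - pos_b          # group B: positive, negative
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Bactrian camels (this study, 122/407) vs dairy calves in Xinjiang (85/514)
chi2 = chi_square_2x2(122, 407, 85, 514)
significant = chi2 > 3.841  # critical value for P < 0.05 with df = 1
print(f"chi2 = {chi2:.2f}, significant at P < 0.05: {significant}")
```

In practice one would use a statistics package (e.g. `scipy.stats.chi2_contingency`) to obtain an exact P-value; the hand-rolled statistic above simply makes the comparison self-contained.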
Constitutive model for the rheology of biological tissue

The rheology of biological tissue is key to processes such as embryo development, wound healing and cancer metastasis. Vertex models of confluent tissue monolayers have uncovered a spontaneous liquid-solid transition tuned by cell shape, and a shear-induced solidification transition of an initially liquid-like tissue. Alongside this jamming/unjamming behaviour, biological tissue also displays an inherent viscoelasticity, with a slow time- and rate-dependent mechanics. With this motivation, we combine simulations and continuum theory to examine the rheology of the vertex model in nonlinear shear across a full range of shear rates from quasistatic to fast, elucidating its nonlinear stress-strain curves after the inception of shear of finite rate, and its steady-state flow curves of stress as a function of strain rate. We formulate a rheological constitutive model that couples cell shape to flow and captures both the tissue solid-liquid transition and its rich linear and nonlinear rheology.
The rheology of biological tissue is crucial to processes such as morphogenesis, wound healing and cancer metastasis. On short timescales, tissues withstand stress in a solid-like way. On longer timescales, they reshape via internally active processes such as cell shape change, rearrangement, division and death [1,2]. Tissues are thus viscoelastic [3]. Power-law stress relaxation [4,5] and slow oscillatory cell displacements [6] after straining underline their rate-dependent mechanics. Tissues furthermore undergo spontaneous solid-liquid transitions [7-11] driven by both active processes, such as fluctuations of cell-edge tensions, motility and alignment, and geometric constraints [12], with important implications for morphogenesis and cancer progression. Nonlinear rheological response to tensile stretching includes stiffening [13] or fluidization [14] of single cells, and stiffening then rupture of tissue monolayers [15]. Internal activity can likewise induce nonlinear phenomena such as superelasticity [16] and fracture [17].

Understanding tissue rheology theoretically is thus of major importance. Well-studied vertex and Voronoi models [9,18,19] of confluent tissue, with no gaps between cells, represent a 2D tissue monolayer as a tiling of polygonal cells. They capture a density-independent solid-liquid transition tuned by a parameter characterising the target cell shape, which in turn embodies the competition between cortex contractility and cell-cell adhesion [7-9]. Vertex models have also been used to study the linear mechanics of tissues [20-22], and their response to nonlinear stretch [23] and shear [24-27]. Recently, vertex model simulations of a tissue that is fluid-like in zero shear demonstrated a shear-induced rigidity transition above a critical strain, applied quasistatically [27].
While vertex models and other mesoscopic models have played an important role in advancing our understanding of tissue mechanics, it is also helpful to develop coarse-grained continuum rheological constitutive models. Early work formulated a continuum model that couples cell shape and cell motility, capturing some of the glassy dynamics of tissue [28]. Inspired by early hydrodynamic theories of active fluids and gels [29,30], continuum constitutive models have been developed to characterize the role of cell shape change, rearrangements, division and death in morphogenesis [2,25,31-35].

Still lacking, however, is a continuum hydrodynamic constitutive model capable of describing both the spontaneous solid-liquid transition of confluent tissues and its rheological response to external deformation and flow. Inspired by mean-field theories of cell-shape driven transitions [22,27,28] and by fluidity models of the rheology of dense soft suspensions [36], we introduce such a model.

The key new insights of our approach are as follows. First, we distinguish the role of geometric frustration (encoded in the cell perimeter p) from that of T1 topological rearrangements (encoded in our fluidity variable a). The former is key to the zero-shear liquid-solid transition and (when coupled to our orientation tensor σ_ij) to strain stiffening at small to modest imposed strains [27]. The latter cause the plasticity associated with the stress overshoot at imposed strains O(1), and the ultimate steady flowing state. Second, in modeling the geometric frustration, we distinguish a tensor characterizing individual cell shape (of which p is the trace) from a tensor characterizing the average cell orientation at the tissue scale [28].
We furthermore submit this new continuum model to stringent comparison with simulations across a full range of shear rates from quasistatic to fast. We demonstrate that our continuum model captures both the zero-shear solid-liquid transition and the strain stiffening transition reported in Ref. [27], the full nonlinear stress vs. strain behavior after the inception of shear, and the steady-state flow curves of stress vs. shear rate.

Vertex model simulations — The vertex model [18,19] represents the tightly packed confluent cells of a 2D tissue monolayer as c = 1 ... N_c polygons that tile the plane. Each cell is defined by the location of its n_c = 1 ... ν_c vertices, with any two neighbouring vertices α and β connected by an edge of length ℓ_αβ. The elastic energy of the tissue is controlled by the interplay of pressure within each cell and tension along the cell edges. Assuming the cell-edge tension per unit length is uniform across the tissue, the energy can be written as

E = Σ_c [ (κ_A/2)(A_c − A_c0)² + (κ_P/2)(P_c − P_c0)² ],   (1)

where each cell experiences an energy cost for deviation of its area A_c and perimeter P_c from target values A_c0 and P_c0, with area and perimeter stiffnesses κ_A and κ_P. The first term on the RHS models 3D cell volume incompressibility via an effective 2D area elasticity [19,37]. The second describes the competition between cell cortical contractility and adhesion between neighbouring cells in controlling cell-edge tension and perimeter [7,19,37]. We denote by F_n = −δE/δx_n the total force on the nth vertex of the tiling at position x_n due to interactions with all other vertices. In an applied shear of rate γ̇, with flow direction x and shear gradient y, we assume over-damped dynamics with drag ζ,

dx_n/dt = ζ⁻¹ F_n + γ̇ y_n x̂,

with Lees-Edwards periodic boundary conditions. The cells also undergo T1 topological neighbor exchanges that allow the tissue to plastically relax stresses [9,38-40].
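The energy above depends only on each polygon's area and perimeter. As a quick illustration (not the authors' code), the sketch below evaluates the area via the shoelace formula, the perimeter as a sum over edges, and the resulting single-cell energy, with the ½ prefactors as written above and κ_A = κ_P = 1 assumed.

```python
import math

def cell_energy(vertices, A0, P0, kA=1.0, kP=1.0):
    """Vertex-model energy of one polygonal cell:
    (kA/2)(A - A0)^2 + (kP/2)(P - P0)^2, with the area from the
    shoelace formula and the perimeter summed over the edges."""
    n = len(vertices)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1          # shoelace cross terms
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 0.5 * kA * (area - A0) ** 2 + 0.5 * kP * (perim - P0) ** 2

# A unit square cell with targets A0 = 1 and P0 = 4 costs zero energy
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(cell_energy(square, A0=1.0, P0=4.0))  # -> 0.0
```

The same area and perimeter also give the shape index p = P/√A (4.0 for the unit square), the quantity that controls the liquid-solid transition discussed below.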
To focus on amorphous tissue structures, we simulate a 50:50 bidisperse tiling of N_c = 4096 cells of target areas A_0 = 1, 1.4, which sets our length unit. We adjust P_c0 for the two cell populations to maintain the target cell shape p_0 = P_c0/√A_c0 the same for all cells. We choose units in which κ_A = 1 and ζ = 1 and set κ_P = 1.0 throughout. We vary p_0 and the imposed shear rate γ̇. As an initial condition, we seed a planar Voronoi tiling, then evolve the above dynamics to steady state in zero shear. At time t = 0, we switch on shear and measure the shear stress

Σ_ij(t) = (1/N) Σ_{n=1}^{N} F_ni x_nj,

where the sum is over all N vertices in the tiling, and the mean cell perimeter p(t) = (1/N_c) Σ_{c=1}^{N_c} p_c. Denoting by t_nc the unit vector along the edge of length l_nc between the n_c-th and (n_c + 1)-th vertices of cell c, we define a single-cell shape tensor

σ^c_ij = (1/ν_c) Σ_{n=1}^{ν_c} l_nc t^nc_i t^nc_j,

where the sum is over the ν_c vertices of the c-th cell, and the tissue-scale averaged orientation tensor

σ_ij = (1/N_c) Σ_{c=1}^{N_c} σ^c_ij.

We use the same notation Σ_ij, σ_ij, p for the counterpart coarse-grained quantities in our constitutive model below.

In the absence of external stress, the vertex model exhibits a liquid-solid transition as a function of the target shape p_0 [7,41]. For p_0 < p*_0 the energy barriers to T1 transitions are finite and the system is a solid with a finite zero-frequency linear shear modulus. At the critical value p*_0, the mean energy barrier for T1 transitions vanishes, giving liquid response for p_0 > p*_0. For our bidisperse tiling, p*_0 = 3.85. For monodisperse disordered polygons p*_0 ≃ 3.81, a value close to that of a regular pentagon [7]. This value is renormalized by motility [9] and by cell alignment with local spontaneous shear [42]. It was recently realized that this transition has a geometric origin associated with the underconstrained nature of the energy in Eq. 1 [20,22,43]. For regular hexagons the transition occurs at the isoperimetric value p_iso = √(8√3) ≃ 3.722. Below this value it is not possible to satisfy both target area and perimeter, and the ground state has p = p*_0 and finite energy. This is the solid or incompatible state. For p_0 > p_iso there is a family of zero-energy, area- and perimeter-preserving ground states, with p = p_0. The system can accommodate an externally applied linear shear by adjusting its shape within this degenerate manifold [22]. The compatible system is therefore a liquid with zero shear modulus, although it stiffens and acquires rigidity at finite strains [27].

Constitutive model — We now construct a continuum model that accounts for the mean-field liquid-solid transition, and also captures the key rheological features of the vertex model: (i) reversibility of linear response to small strains, (ii) strain stiffening at intermediate strains, (iii) plastic relaxation at larger strains, due to T1 cell rearrangements, and (iv) a yield stress in the steady-state flow curve Σ(γ̇), as obtained in Ref. [27]. Although our model below is cast in frame-invariant form, capable of addressing any flow, we focus on response to simple shear, to compare with our vertex model simulations.

We assume dynamics of the cell perimeter governed by:

τ_p Dp/Dt = −(p − p_0)(p − p*_0 − α σ_ij σ_ij),   (2)

with D/Dt = ∂_t + v·∇ the material (advected) derivative, α and τ_p constants, and the invariant strain rate γ̇ = √(2 D_ij D_ij). In the absence of shear, p relaxes on a timescale τ_p to a steady state that displays a transcritical bifurcation as a function of the target cell perimeter p_0, with p = p*_0 in the solid phase p_0 < p*_0 and p = p_0 in the liquid phase p_0 > p*_0, capturing the liquid-solid transition [7]. The same transcritical structure emerges by writing exact equations for the relaxation of a single cell modeled as a regular n-sided polygon according to the vertex model dynamics prescribed above.
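In zero shear (σ_ij = 0), the perimeter dynamics reduce to a one-dimensional ODE with a transcritical bifurcation at p_0 = p*_0. The sketch below is an illustration under the form written above (with σ_ij = 0), not the authors' code: it integrates τ_p dp/dt = −(p − p_0)(p − p*_0) with forward Euler and checks that p relaxes to p*_0 in the solid phase and to p_0 in the liquid phase.

```python
def relax_perimeter(p0, p_star=3.85, tau_p=0.1, p_init=3.9,
                    dt=1e-3, steps=200_000):
    """Forward-Euler integration of the zero-shear perimeter dynamics
    tau_p * dp/dt = -(p - p0) * (p - p_star)."""
    p = p_init
    for _ in range(steps):
        p += -dt / tau_p * (p - p0) * (p - p_star)
    return p

# Solid phase (p0 < p*0): p relaxes to p*0 = 3.85
print(round(relax_perimeter(p0=3.70), 3))  # -> 3.85
# Liquid phase (p0 > p*0): p relaxes to p0
print(round(relax_perimeter(p0=3.95), 3))  # -> 3.95
```

The stable branch of the bifurcation is p = p*_0 for p_0 < p*_0 and p = p_0 for p_0 > p*_0, exactly as stated in the text; the two branches exchange stability at p_0 = p*_0.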
In shear, the perimeter is advected by flow and stretched by the shear rate γ̇. In addition, the coupling α σ_ij σ_ij captures a key intuition of our approach: that a shear-induced global cell orientation σ_ij provides an effective mean field that distorts the individual cell's shape p away from its zero-shear value. As a result, in the solid phase p increases relative to its zero-shear value p = p*_0 from the outset of straining. In the liquid phase, p increases relative to its zero-shear value p = p_0 only after a critical strain amplitude γ_c, capturing the strain-induced stiffening transition [27]. The behavior introduced by the coupling of single-cell shape, as quantified by the mean perimeter p, to the tissue-scale cell shape σ_ij is analogous to the influence of cell alignment due to internally generated stresses in Drosophila germband extension [42]. Indeed, the form of coupling of p to σ_ij in Eq. 2 is justified both by experiment [42] and by mean-field theory [22,27].

The cell orientation tensor is taken to obey an evolution equation of the widely used Maxwellian form,

∂_t σ_ij + v_k ∂_k σ_ij − K_ik σ_kj − σ_ik K_jk = 2D_ij − a σ_ij,   (3)

where K_ij = ∂_j v_i is the velocity gradient tensor and D_ij = ½(K_ij + K_ji). The last term in Eq. 3 describes plastic relaxation. It vanishes in linear response (small strains), where a = 0 (see below), allowing the orientation tensor σ_ij to build linearly and reversibly with strain, as expected in the absence of plastic T1 events. Consistent with previous studies of the vertex model [19,22,38], we write the deviatoric stress tensor

Σ_ij = C (p − p_0) σ_ij.   (4)

Here C is a constant and p_0 the target cell perimeter. In linear response (small strains), the effective modulus G_0 = C(p − p_0) is non-zero in the solid phase, where p > p_0, and zero in the liquid phase, where p = p_0.

Were the factor a on its RHS a constant inverse relaxation time, Eq. 3 would be the widely used Maxwell model, capturing viscoelasticity but not the irreversible plasticity of T1 events. To model plasticity, we take a to be a fluidity-like variable [36] with dynamics:

da/dt = γ̇ [ f(γ̇) − a ],   (5)

with f(γ̇) = βγ̇/(1 + ½τ_0 γ̇), in which β is a constant and τ_0 a microscopic time. As suited to an athermal tissue, with no relaxation events induced by temperature or activity (no cell motility, division or death), this is a purely strain-driven dynamics. In linear response, a = 0, giving a reversible dependence of σ_ij on strain. In weak shear, a builds on a strain O(1) to model the plasticity of T1 events via the final term in Eq. 3. In steady weak shear a = f(γ̇) ≈ βγ̇, giving a divergent relaxation time 1/a as γ̇ → 0, and a yield stress in the steady-state flow curve.

We explore different values of the shear rate γ̇ and of the target perimeter p_0 relative to the transition p*_0. (See Appendix for model parameters.) We prescribe as initial condition to shear a perimeter p(t = 0) equal to its steady-state value in zero shear, an orientation tensor σ_ij(t = 0) = 0, and fluidity a(t = 0) = 0. We then switch on a simple shear K_ij = γ̇ δ_iy δ_jx at time t = 0 and track the evolution of p, σ_xy and Σ_xy as a function of time t or, equivalently (to within a constant factor γ̇), of accumulating strain γ = γ̇t. Hereafter we drop the xy subscript, writing σ_xy = σ and Σ_xy = Σ.

Results — Our constitutive model captures the liquid-solid transition as a function of target cell shape in zero shear [7] and the shear-induced rigidity transition of the liquid-like tissue above a critical shear strain, applied quasistatically, γ̇ → 0 [27]. See Fig. 1a, which shows the shear stress Σ vs. strain γ in shear at rate γ̇ = 10⁻⁶. At small strains, just after the inception of shear, the modulus G_0 = dΣ/dγ|_{γ=0} is finite (solid-like) for p_0 < p*_0 but zero (liquid-like) for p_0 > p*_0 (Fig. 1b, dashed line). In the liquid phase, the stress Σ and slope dΣ/dγ first become non-zero above a nonlinear critical shear strain γ_c, heralding a strain-induced stiffening transition (solid line in Fig. 1b, defined as the strain at which the stress first exceeds 10⁻⁵ at any p_0).

Having explored quasistatic shear, we now consider nonlinear shear flow across a full range of shear rates from quasistatic to fast. The evolution of Σ, σ and p as a function of strain since the inception of shear is shown in Fig. 2, for a range of p_0 below and above p*_0. The left column shows the results of vertex model simulations. The right shows the predictions of our constitutive model, which performs well in capturing all the qualitative features of the simulations.

At small strains, just after shearing starts, the effective modulus G_0 = dΣ/dγ|_{γ=0} is finite in the solid phase, p_0 < p*_0, but small in the liquid phase, p_0 > p*_0. Indeed, repeating the simulations for progressively lower strain rates γ̇ → 0 in the solid phase, G_0 tends to a non-zero constant, G_0(p_0, γ̇ → 0), consistent with the quasistatic results discussed above. In the liquid phase, G_0 → 0 as γ̇ → 0, again consistent with the quasistatic results.

At higher strains, γ = O(1), strain stiffening is observed: the slope of Σ vs. γ increases with increasing γ. This is particularly pronounced in the liquid phase, p_0 > p*_0, where the effective modulus dΣ/dγ was very small at small strains (tending to zero as γ̇ → 0, as just discussed), but becomes appreciable after a strain γ = O(1) (even in the limit γ̇ → 0). After this regime of strain stiffening, the stress overshoots slightly before declining to a constant in the final state of steady flow.

This rich behaviour is readily understood within our simple constitutive model. The initial fluid-like behaviour for p_0 > p*_0 arises because p = p_0 before shearing commences, giving zero effective modulus in Eq. 4.
As strain increases, tissue deformation is captured by the growth of σ, which in turn yields an increase of p relative to its equilibrium value, due to the coupling term in α in Eq. 2. This is also responsible for the less pronounced strain stiffening in the solid phase, p_0 < p*_0. The subsequent overshoot in the stress Σ (and perimeter p) at larger strains is caused by the overshoot in the cell orientation σ seen in the middle panels of Fig. 2. The stress decline after overshoot arises in the vertex model from plastic relaxation via T1 events, an effect captured in the constitutive model via an increase of the fluidity a with shear. The tissue shape tensor σ is essentially independent of p_0 in the vertex model (at low strain rates), consistent with the lack of any coupling of the evolution equation for σ_ij to p in the constitutive model.

At long times, t → ∞, after many strain units γ = γ̇t → ∞, a state of final plastic flow is reached in which each of Σ, σ and p attains a steady value. This is reported as a function of γ̇ in Fig. 3, for the vertex model (left column) and the constitutive model (right), with good semi-quantitative agreement. In rheological parlance, the steady-state relationship Σ = Σ(γ̇) is termed the "flow curve". The vertex model flow curves show a dynamical yield stress: a non-zero limiting intercept lim_{γ̇→0} Σ(γ̇) = Σ_Y ≠ 0. Importantly, this is true both for p_0 < p*_0 and for p_0 > p*_0: whereas liquid and solid states are distinct and separated by a transition at small strains, in steady nonlinear shear, however slow, the vertex model displays a non-zero yield stress up to a larger p_0 = p**_0 > p*_0 [27], as also seen in Fig. 1. This is easily understood within our constitutive model. In steady shear, Eq. 5 predicts the fluidity a = f(γ̇) = βγ̇/(1 + ½τ_0 γ̇). Combining with Eq. 3 for the orientation gives σ = γ̇/a = (1/β)(1 + ½τ_0 γ̇). Were we to assume p − p_0 = 1, independent of strain rate, we would obtain a flow curve Σ(γ̇) = (C/β)(1 + ½τ_0 γ̇), with a yield stress Σ_Y = C/β as γ̇ → 0 and Newtonian behaviour Σ ∝ γ̇ as γ̇ → ∞. The actual flow curve is modified somewhat in comparison, due to the strain-rate dependence of p − p_0. Importantly, however, it retains a yield stress because p ≠ p_0 in steady flow, even in the limit γ̇ → 0: the perimeter is always strongly perturbed from its unsheared value, due to the coupling α σ_ij σ_ij in Eq. 2. Intuitively, the key effect of a steady shear, even when applied quasistatically, is to deform cells away from their target shape such that they carry a stress, and the liquid phase seen at small strains is destroyed.

Conclusions — We have presented a continuum constitutive model for the rheology of confluent 2D biological tissue and demonstrated that it captures the rich rheophysics seen in simulations of the vertex model under applied shear. This includes strain stiffening of the liquid above a critical strain, a stress overshoot at larger strains due to the plasticity of T1 rearrangements, and a finite yield stress in steady shear, even in the (zero-shear) liquid phase. Our model includes the effects of cell shape change and rearrangements on mechanical behaviour, and will provide a useful phenomenological framework for modeling the rheology of biological tissue. Elucidating its predictions in deformation protocols besides simple shear is left to future work, as are extensions to incorporate other active processes such as cell motility, division and death.
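The closed-form flow curve quoted above, Σ(γ̇) = (C/β)(1 + ½τ₀γ̇) under the simplifying assumption p − p_0 = 1, can be evaluated directly. The sketch below (illustrative only) tabulates it with the fitted parameters from the Appendix (C = 1, τ₀ = 1, β = 2.0) and confirms the yield stress Σ_Y = C/β in the γ̇ → 0 limit.

```python
def flow_curve(gammadot, C=1.0, beta=2.0, tau0=1.0):
    """Steady-state stress under the p - p0 = 1 approximation:
    fluidity a = beta*gd/(1 + tau0*gd/2), orientation sigma = gd/a,
    and stress Sigma = C * sigma (since p - p0 = 1)."""
    sigma = (1.0 + 0.5 * tau0 * gammadot) / beta  # = gammadot / f(gammadot)
    return C * sigma

for gd in (1e-4, 1e-2, 1e0, 1e2):
    print(f"gammadot = {gd:g}: Sigma = {flow_curve(gd):.4f}")

yield_stress = flow_curve(0.0)  # C/beta = 0.5 in these units
```

The curve interpolates between the yield-stress plateau Σ → C/β at low rates and Newtonian behaviour Σ ∝ γ̇ at high rates, exactly the two limits identified in the text.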
Model parameters are the modulus-like quantity C, the microscopic time τ_0 and the parameter β in the function f for the fluidity, the transition value of the target perimeter p*_0, the coupling of perimeter to orientation α, and the perimeter relaxation time τ_p. We choose units in which C = 1 and τ_0 = 1, and treat p*_0, α, β and τ_p as fitting parameters in comparing our constitutive model with the vertex model simulations. We have found p*_0 = 3.85, α = 0.36, β = 2.0 and τ_p = 0.1 to give the best fit. Among these, p*_0 is the value of p_0 at the (zero-shear) liquid-solid transition. Accordingly, we set the value of p*_0 in our continuum model to that found in our vertex model simulations. β sets the quasistatic limit of the shear component of the cell orientation tensor, lim_{γ̇→0} σ_xy = 1/β, with β = 2.0 in our vertex model simulations. α sets the effective modulus G_0 = C(p − p_0) in the shear-induced solid phase, with p − p_0 = p*_0 − p_0 + α σ_ij σ_ij as γ̇ → 0, and accordingly sets the flow curve's yield stress, lim_{γ̇→0} Σ(γ̇). We choose α to give the best fit of the continuum model's yield stress to that of the vertex model simulations. Finally, τ_p controls the steepness of the flow curve at high strain rates (where the vertex model is likely to become less reliable) and the small finite value ∼ γ̇τ_p of the stress before the true quasistatic strain-stiffening transition.

The numerical timestep is Δt = Δt₀ l_min/F_max, with l_min the minimum edge length, F_max the maximum vertex force and Δt₀ = 0.01. T1 events are triggered below a critical edge length l_c = 0.01.
Vertex model simulations
The vertex model [18,19] represents the confluent cells of a tissue monolayer via c = 1 ... N_c polygons that tile the plane. Each cell is defined by the location of its n_c = 1 ... ν_c vertices, with any two neighbouring vertices connected by an edge. Each vertex belongs to three neighbouring cells (or transiently four, during a T1 event, see below), with three (or four) edges stemming from it accordingly. Each edge belongs to two neighbouring cells.

Consider the n_c-th and (n_c + 1)-th vertices of cell c. Cell c contributes to these two vertices an equal and opposite force of magnitude κ_P(P_c − P_0), acting as a tension along the edge that connects them. This models a competition between cell cortical contractility and adhesion between neighbouring cells [37], with κ_P an elastic constant [19], P_c the cell perimeter and P_0 its target value [7]. Cell c furthermore contributes to the same two vertices a force of magnitude κ_A(A_c − A_0)l_nc, acting as a pressure in the direction of the edge normal, outwards from cell c, with l_nc the length of the edge connecting the vertices. Physically, this models 3D cell volume incompressibility via an effective 2D area elasticity of constant κ_A, with A_c the cell area and A_0 its target value [19,37]. Fig. 4 shows a sketch of these forces. Each of the two vertices also belongs to two other neighbouring cells (or three, during T1 events), which contribute forces likewise.

For the nth vertex of all N in the tiling, we denote the sum of the forces from each of its associated cell edges by F_n. In shear of rate γ̇, with flow direction x and shear gradient y, the vertex position x_n obeys over-damped dynamics with drag coefficient ζ as a function of time t:

dx_n/dt = ζ⁻¹ F_n + γ̇ y_n x̂,

with Lees-Edwards periodic boundary conditions.
Consider the vertex at the junction between cells αβγ and a neighbouring vertex between cells αβδ. When the edge connecting these vertices shrinks below a small length l_c, a T1 transition removes these two vertices and replaces them with new ones at the junctions of cells βγδ and αγδ, conferring plastic cell rearrangement.

Componentwise constitutive equations in simple shear
In the main text, we presented our constitutive model in tensorial, frame-invariant form, capable of addressing any imposed deformation or flow protocol. Here we extract the components of those equations relevant to homogeneous imposed simple shear flow with K_ij = γ̇ δ_iy δ_jx.

The cell perimeter evolves according to:

τ_p dp/dt = −(p − p_0)(p − p*_0 − α σ_ij σ_ij),

where α and τ_p are constants, σ_ij σ_ij = σ_xx² + 2σ_xy² + σ_yy², and the trace of the cell orientation tensor is σ_ii = σ_xx + σ_yy.

The orientation tensor obeys an evolution equation of the widely used Maxwellian form, which componentwise is written as:

∂_t σ_xx = 2γ̇ σ_xy − a σ_xx,
∂_t σ_xy = γ̇ (1 + σ_yy) − a σ_xy,
∂_t σ_yy = −a σ_yy.

The shear stress is

Σ_xy = C (p − p_0) σ_xy,

where C is a constant and p_0 the target cell perimeter. To model plasticity, we take the quantity a in the evolution of the orientation tensor to be a fluidity-like variable [36] with its own dynamics:

da/dt = γ̇ [ f(γ̇) − a ],

with f(γ̇) = βγ̇/(1 + ½τ_0 γ̇).

FIG. 2. Rheological behaviour of the vertex model (left) and constitutive model (right) in shear startup at a shear rate γ̇ = 10⁻³ for values of the target perimeter p_0 = 3.50, 3.55, 3.60 ... 4.00 (in black, red, green ... orange curves downwards; curve for p*_0 = 3.85 in purple). Shown is the evolution of the shear stress (top), the shear component of the orientation tensor (middle) and the cell perimeter (bottom) as a function of accumulating strain γ = γ̇t.

FIG. 3. Steady-state (t → ∞) dependence of the shear stress (top), the shear component of the orientation tensor (middle) and the cell perimeter (bottom) for the same values of the target perimeter p_0 as in Fig. 2, with the same line colour coding. Results are shown for the vertex model in the left column and for the constitutive model in the right column.
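The componentwise equations in simple shear can be integrated with a simple explicit scheme. The sketch below is an illustration under the reconstructed componentwise form above, not the authors' code, using the Appendix parameters (C = 1, τ₀ = 1, β = 2.0, α = 0.36, τ_p = 0.1, p*₀ = 3.85). It starts a solid-phase sample (p₀ = 3.70) from its zero-shear state p = p*₀ at γ̇ = 1 and integrates to steady flow.

```python
def shear_startup(p0, gammadot=1.0, strain_end=40.0, dt=1e-3,
                  p_star=3.85, alpha=0.36, beta=2.0, tau0=1.0,
                  tau_p=0.1, C=1.0):
    """Forward-Euler integration of the componentwise constitutive
    equations in simple shear (reconstructed form; see text)."""
    # zero-shear initial condition: p = p*0 in the solid phase, p0 otherwise
    p = p_star if p0 < p_star else p0
    sxx = sxy = syy = a = 0.0
    f = beta * gammadot / (1.0 + 0.5 * tau0 * gammadot)  # steady fluidity
    steps = int(strain_end / (gammadot * dt))
    for _ in range(steps):
        ss = sxx * sxx + 2.0 * sxy * sxy + syy * syy     # sigma_ij sigma_ij
        dp = -(p - p0) * (p - p_star - alpha * ss) / tau_p
        dxx = 2.0 * gammadot * sxy - a * sxx
        dxy = gammadot * (1.0 + syy) - a * sxy
        dyy = -a * syy
        da = gammadot * (f - a)
        p += dt * dp
        sxx += dt * dxx
        sxy += dt * dxy
        syy += dt * dyy
        a += dt * da
    return p, sxy, C * (p - p0) * sxy

p, sxy, Sigma = shear_startup(p0=3.70)  # solid phase, gammadot = 1
print(f"p = {p:.4f}, sigma_xy = {sxy:.4f}, Sigma = {Sigma:.4f}")
```

In the final steady state the fluidity settles at a = f(γ̇), the orientation at σ_xy = γ̇/a, and the perimeter at the shear-shifted branch p = p*₀ + ασ_ijσ_ij, so the final stress can be checked against these closed-form values.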
A Multiagent Evolutionary Algorithm for the Resource-Constrained Project Portfolio Selection and Scheduling Problem

A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a dual-level structure. In the upper level, a set of agents make decisions to select appropriate project portfolios. Each agent selects its project portfolio independently. A neighborhood competition operator and a self-learning operator are designed to improve the agent's energy, that is, the portfolio profit. In the lower level, the selected projects are scheduled simultaneously and completion times are computed to estimate the expected portfolio profit. A priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances was generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.

Introduction
The project portfolio selection problem (PPSP), together with its various extensions, has been widely studied during the last decade. Given a set of project proposals and constraints, the traditional PPSP is to select a subset of project proposals that optimizes the organization's performance objective [1]. Mathematical models have been proposed in the literature. The project portfolio profit is regarded as the natural performance objective and is utilized by most models, for example, the zero-one integer programming model [2]. Since the PPSP is an NP-hard problem [3], metaheuristic algorithms such as evolutionary approaches are widely used [4].
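The zero-one formulation mentioned above treats each project as a binary decision variable: maximize total portfolio profit subject to a resource budget. As a toy illustration of that model (not the paper's instance — the profits, costs and budget below are invented), the sketch enumerates all 0-1 selection vectors for a small problem.

```python
from itertools import product

# Hypothetical instance: (profit, resource cost) per project proposal
projects = [(10, 4), (7, 3), (12, 6), (5, 2)]
budget = 9

best_profit, best_sel = 0, ()
for sel in product((0, 1), repeat=len(projects)):        # all 0-1 vectors
    cost = sum(x * c for x, (_, c) in zip(sel, projects))
    if cost <= budget:                                    # resource constraint
        profit = sum(x * v for x, (v, _) in zip(sel, projects))
        if profit > best_profit:
            best_profit, best_sel = profit, sel

print(best_profit, best_sel)  # optimal selection within the budget
```

Exhaustive enumeration is only viable for a handful of projects (2^n candidate portfolios), which is precisely why the NP-hardness cited above motivates the metaheuristic approaches discussed next.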
Most studies on the PPSP generally separate the inherent relationship between portfolio selection and project scheduling. The traditional PPSP is based on some assumptions. It is assumed that an individual project has a fixed and unchangeable schedule [5]; hence, only the project selection decision is considered to impact the final portfolio profit. However, project scheduling tends to affect the portfolio feasibility by adjusting the start and completion times of its activities [6]. Especially when resources are constrained, scheduling of project activities helps to better utilize the limited resources and, consequently, to increase the portfolio profit [5]. Including project activity scheduling as a subproblem of project portfolio selection helps improve the overall organization performance, even though it increases the complexity of decision making. This combined problem is termed the resource-constrained project portfolio selection and scheduling problem (RCPPSSP). The RCPPSSP can be described as a problem of selecting an optimal portfolio of projects and scheduling their activities to maximize an organization's stated objectives without exceeding available resources or violating other constraints [7].

The RCPPSSP has attracted increasing attention in recent years as a new research problem. Owing to the dual-level structure of the RCPPSSP, most algorithms in the current literature are also composed of two parts. In the upper level, decisions are made to select project portfolios. In the lower level, procedures of multiproject scheduling are adopted to improve the performance of the selected portfolio. Due to the NP-hard nature of the project portfolio selection problem, researchers have developed heuristics and metaheuristics to improve the solution quality and computational efficiency.
For example, an implicit enumeration procedure was developed for all possible project priority sequences with high profit [7]. An ant colony optimization (ACO) based on the max-min ant system [8] was proposed, in which solutions were encoded as walks of agents in a construction graph, and transition probabilities were computed to determine the probability of an arc of the graph being chosen by the agents in the next iteration [8]. An iterative multiunit combinatorial auction algorithm [5] was also used to select project portfolios through a distributed bidding mechanism. In the lower level, heuristics such as a greedy heuristic [8] and priority rule-based heuristics [9] are widely adopted for multiproject scheduling.

In recent years, agent-based computation has been widely applied in distributed problem solving. An agent is a self-contained problem-solving entity [10] which exhibits the properties of autonomy, social ability, responsiveness, and proactiveness [11]. In a multiagent optimization system (MAOS) [12], self-organizing agents [13] interact to optimize their own problem solving with limited declarative knowledge and simple procedural knowledge under ecological rationality [12]. Specifically, agents explore in parallel through three types of interactions, namely, cooperation, coordination, and negotiation [14]. Since interactions among the agents contribute to solution diversity and rapid convergence in some cases [15], it is recommended to embed the MAOS in evolutionary algorithms to improve the solution quality [12, 15-18]. Recently, multiagent evolutionary algorithms have been used for single project scheduling [19].
The objective of this paper is to develop a multiagent evolutionary algorithm for the RCPPSSP. The master procedure in the upper decision level is designed by combining the neighborhood competition operator and self-learning operator in the multiagent system. A priority rule-based heuristic is adopted as the subprocedure in the lower decision level. In Section 2, we present the resource-constrained project portfolio selection and scheduling problem and its mathematical model. Section 3 explains the multiagent evolutionary algorithm that we have developed for the RCPPSSP. Computational experiments and results are discussed in Section 4. Finally, we conclude this paper in Section 5.

Project Portfolio Selection and Scheduling The objective of the resource-constrained project portfolio selection and scheduling problem is to maximize the project portfolio profit. The problem can be recognized as a dual-level decision problem [8]. The upper level is to select feasible project portfolios under resource constraints. There is a set Ω of candidate projects from which we select the optimal portfolio. A pool of limited, renewable resource types is available for all projects. It is assumed that there is no other relationship among the projects besides resource competition. The lower level is to solve the multiproject scheduling problem, that is, to determine the start (completion) time of each activity without violating the precedence relations or resource constraints [5]. Given the resource constraints, scheduling project activities within a portfolio helps shorten the project duration and increase the portfolio profit, since the project profit is a decreasing function of the project completion time [7]. Two sets of decision variables are designed in this paper, one for project selection and one for project activity scheduling, as shown in the following formulae. The notations used in this paper are listed in the Notations for the RCPPSSP section.
The objective (2) is to maximize the total profit of the selected project portfolio. Constraint (3) ensures that all activities of a selected project are completed. It also enforces that no activity of an unselected project is executed. Constraint (4) guarantees that each selected project is completed before its deadline. Constraint (5) describes the precedence relations among activities, which require an activity to start only after all its predecessors have been completed. Constraint (6) ensures that in each time period the demand on any resource does not exceed its capacity. Formulae (7) and (8) declare the decision variables.

Multiagent Evolutionary Algorithm To solve the RCPPSSP, a multiagent evolutionary algorithm (MAEA) is proposed. Corresponding to the dual-level structure of the RCPPSSP, the MAEA has two levels as well. The master procedure for the upper level is to select project portfolios, and a priority rule-based heuristic [20] is designed as the subprocedure for the lower level to perform multiproject scheduling.

Project Portfolio Selection. The multiagent evolutionary algorithm is a combination of two theories: multiagent systems and evolutionary algorithms [17]. Generally, a multiagent system [15] is composed of an environment, a set of objects, a set of agents, a set of relations between objects (agents), and a set of operations. Through observation of the environment and interaction with other agents, the fitness value of an agent can be estimated and optimized on the basis of its possessed resources, abilities, and knowledge [17].
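As an illustration, the objective and the deadline/resource constraints described above can be sketched as a simple feasibility check. All function names and the data layout below are assumptions for illustration, not the paper's notation:

```python
# Hypothetical sketch of the RCPPSSP conditions: a portfolio is feasible if
# every selected project finishes by its deadline (constraint (4)) and the
# per-period resource demand never exceeds capacity (constraint (6)).
def is_feasible(selected, finish_times, deadlines, demand, capacity, horizon):
    """selected: set of project ids; finish_times[p]: completion time of p;
    demand[t][k]: total demand on resource k at period t; capacity[k]: limit."""
    for p in selected:
        if finish_times[p] > deadlines[p]:       # deadline violated
            return False
    for t in range(horizon):
        for k, cap in enumerate(capacity):
            if demand[t][k] > cap:               # resource overload
                return False
    return True

def portfolio_profit(selected, profit):
    # objective (2): total profit of the selected projects
    return sum(profit[p] for p in selected)
```

In a full model the profit of each project would itself depend on its completion time, as the paper describes.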
In this paper a multiagent evolutionary algorithm is proposed to solve the project selection problem. We designed a multiagent system in which each agent selects project portfolios according to its own preferences and environment. The evolution of the agents is realized by the neighborhood competition and self-learning operators. In neighborhood competition, loser agents are replaced by newly generated agents. In this way, the information and knowledge of individual agents spread to the whole system. The winner agents conduct self-learning by applying their own knowledge, for which a simple genetic algorithm is developed. Since most multiagent systems adopt a real-valued representation, which is not appropriate for the project selection problem, we designed an agent system based on a discrete representation and modified the operators correspondingly.

Multiagent System. The objective function (2) of the RCPPSSP can be simplified as maximizing f(x) over x ∈ Θ, where f(x) denotes the portfolio profit and Θ is the n-dimensional search space of the project selection problem. The "x" in boldface represents a vector, which is a candidate solution in the search space. The component x_i is a 0-1 variable which takes the value 1 when project i is selected and the value 0 otherwise. An agent for the RCPPSSP can be defined as follows.

Definition 1. An agent, denoted as a, represents a candidate solution x to the RCPPSSP. The value of its energy is equal to the value of its objective function in (2): Energy(a) = f(x).

The agent a living in an environment makes decisions autonomously to increase its energy as much as possible. To realize the local perceptivity of agents, the environment is organized as a lattice-like structure, which can be defined as follows [16].

Definition 2. All agents live in a lattice-like environment denoted as L. The size of L is L_size × L_size, where L_size = c × n + 1 with a coefficient 0 < c < 1 and n the number of candidate projects.

Neighborhood Competition Operator.
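Definitions 1 and 2 can be sketched as follows. An agent is a 0-1 selection vector whose energy is the portfolio profit; the lattice side length follows L_size = c × n + 1. The class and function names are illustrative assumptions:

```python
import random

# Minimal sketch of an agent (Definition 1) and the agent lattice
# (Definition 2). energy_fn stands in for the portfolio-profit evaluation,
# which in the paper requires scheduling the selected projects.
class Agent:
    def __init__(self, n_projects, energy_fn):
        self.x = [random.randint(0, 1) for _ in range(n_projects)]
        self.energy_fn = energy_fn

    @property
    def energy(self):
        return self.energy_fn(self.x)

def build_lattice(n_projects, energy_fn, c=0.4):
    # L_size = c * n + 1; with c = 0.4 this gives 5x5 for n = 10 and
    # 9x9 for n = 20, matching the experiment setup described later.
    size = int(c * n_projects) + 1
    return [[Agent(n_projects, energy_fn) for _ in range(size)]
            for _ in range(size)]
```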
In the agent lattice, agents compete with their neighbors to gain more resources so that their purposes, that is, the objectives of the RCPPSSP, can be achieved. Reference [16] noted that the neighborhood competition operator facilitates information diffusion through the whole lattice. To describe the neighborhood competition operator, we define the neighborhood as follows.

Definition 3. All agents connected to agent a by a line or diagonal line constitute the neighborhood of agent a. The competing neighborhood of agent a_v is denoted as N_v. The perceptive range r of an agent's competing neighborhood determines the number of competing neighbors as (2r + 1)² − 1. For example, when r is equal to 1, the number of agents in the neighborhood N_v is 8. The basic rule for the neighborhood competition operator is defined as follows [16].

Rule 1. If the agent a_v satisfies (11), it is a loser; otherwise, it is a winner. The winner survives in the agent lattice, but the loser perishes and is replaced by a new agent generated from the local-best agent a*_v, which is defined in (12) as the agent with maximum energy in the neighborhood. Two alternative strategies [16] are adopted to generate the new agent.

Strategy 1. A set is composed of the sequence numbers of the positions where the agent a_v takes values different from the local-best agent a*_v. One example of Strategy 1 is presented in Figure 2.

Strategy 2. Mutation in the evolutionary algorithm is adopted to transform agent a*_v into the new agent, as represented in formula (14). In formula (14), 1 ≤ i ≤ n, and Random is generated randomly in the interval (0, 1). One example of Strategy 2 is presented in Figure 3.

In this paper a uniform random parameter in (0, 1) is used to determine which strategy is applied to generate the new agent. Firstly, we calculate the similarity between the loser agent a_v and the local-best agent a*_v with maximum energy in the neighborhood by formula (15).
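A hedged sketch of the competition rule and the mutation-based replacement of Strategy 2 follows. Since formula (14) is not reproduced in this extract, the per-gene flip probability of 1/n is an assumption; agents are plain dicts for brevity:

```python
import random

# Sketch of Rule 1 with Strategy 2 replacement: a loser is replaced by a
# mutated copy of the local-best agent in its competing neighborhood.
def local_best(neighbors):
    return max(neighbors, key=lambda a: a["energy"])

def compete(agent, neighbors, energy_fn):
    best = local_best(neighbors)
    if agent["energy"] >= best["energy"]:
        return agent                      # winner survives in the lattice
    # loser perishes: copy the local-best and flip each 0-1 gene with
    # probability 1/n (assumed mutation rate, not the paper's formula (14))
    x = best["x"][:]
    for i in range(len(x)):
        if random.random() < 1.0 / len(x):
            x[i] = 1 - x[i]
    return {"x": x, "energy": energy_fn(x)}
```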
The higher the similarity value, the more similar a_v is to a*_v, and the lower the chance of obtaining a better solution through Strategy 1. Therefore, the rule to select an appropriate strategy is designed as follows.

Rule 2. When the similarity value satisfies (16), Strategy 1 is adopted to generate new agents. Otherwise, Strategy 2 is adopted.

3.1.3. Self-Learning Operator. In order to survive in competition, agents in the lattice may take actions to increase their energy by using their own knowledge [21]. The self-learning operator [16, 21] is designed to help agents achieve this purpose. It is assumed that only winner agents have the chance to conduct self-learning.

Definition 4. All agents with a line or diagonal line connecting to agent a constitute the neighborhood of agent a. The self-learning neighborhood of a_v is denoted as S_v. The perceptive range s of an agent's self-learning neighborhood determines the number of self-learning neighbors as (2s + 1)² − 1, where 0 ≤ s ≤ (1/2)L_size.

The basic rule for the self-learning operator is defined as follows [16, 21].

Rule 3. If agent a_v satisfies (17), it has the chance to execute the self-learning operator.

A simple genetic algorithm (SGA) [22] is adopted to realize the self-learning of agent a_v, in which a chromosome takes the same representation as an agent. The h-th chromosome in the g-th generation is denoted as chromosome_h^g, where 0 ≤ g ≤ MaxGA and 1 ≤ h ≤ PopSize. MaxGA denotes the maximum number of iterations and PopSize represents the population size. The fitness function of a chromosome is equal to the energy of its corresponding agent.
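The similarity measure and the strategy choice of Rule 2 can be sketched as below. Because formulae (15) and (16) are not reproduced in this extract, the normalisation (fraction of matching positions) and the threshold comparison are assumptions; the 0.25 threshold is the coefficient value reported in the experiments:

```python
# Hedged sketch of the similarity between a loser agent and its local-best
# neighbor, and of Rule 2: high similarity suggests Strategy 1 (copying
# differing positions) is unlikely to help, so mutation (Strategy 2) is used.
def similarity(x, x_best):
    same = sum(1 for a, b in zip(x, x_best) if a == b)
    return same / len(x)

def choose_strategy(x, x_best, threshold=0.25):
    return 1 if similarity(x, x_best) <= threshold else 2
```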
In the self-learning process, the initial population is generated as follows. The first chromosome in the initial population is equal to agent a_v, and all other chromosomes are generated by the corresponding formula. Three types of operators are applied to generate the new population in each generation: selection, crossover, and mutation. Binary tournament [23] is used for selection. The elitist strategy [24] is employed to guarantee convergence of the genetic algorithm. One-point crossover is adopted: a crossover point is randomly selected, and then the genes in the two parent chromosomes after the crossover point are exchanged to generate two offspring chromosomes [25]. The mutation locus is also selected randomly, and the selected gene is mutated by negation. The probabilities of crossover and mutation are denoted as p_c and p_m, respectively.

Repair Mechanism. In the case of resource constraints and multiproject scheduling, it is possible that some agents (solutions) are infeasible. In particular, the neighborhood competition operator and self-learning operator may produce infeasible agents. Therefore, repair mechanisms are necessary and have been proposed in the relevant literature [8]. In this paper, infeasible agents are repaired by removing projects in sequence under a certain rule. In an RCPPSSP, the profit of a candidate project depends on its completion time, which is unknown in advance, so it is inconvenient to apply rules other than the random rule [8]. Given an infeasible portfolio, a random project is selected and removed from the portfolio. This process continues until feasibility is achieved. Once the repaired portfolio is feasible, the new agent representing the repaired feasible portfolio replaces the incumbent infeasible agent.

Multiproject Scheduling.
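The SGA operators and the random-removal repair rule described above can be sketched as follows (function names are illustrative; the elitist step is omitted for brevity):

```python
import random

# Sketch of the self-learning SGA operators: binary-tournament selection,
# one-point crossover, negation (bit-flip) mutation, and the random repair
# rule that removes selected projects until the portfolio is feasible.
def tournament(pop, fitness):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def one_point_crossover(p1, p2):
    cut = random.randrange(1, len(p1))          # crossover point
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def negation_mutation(chrom):
    chrom = chrom[:]
    i = random.randrange(len(chrom))            # random mutation locus
    chrom[i] = 1 - chrom[i]                     # negate the selected gene
    return chrom

def repair(bits, feasible):
    # remove randomly chosen selected projects until feasibility is achieved
    bits = bits[:]
    while not feasible(bits) and any(bits):
        i = random.choice([j for j, b in enumerate(bits) if b == 1])
        bits[i] = 0
    return bits
```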
A priority rule-based heuristic is applied to schedule the selected projects in the lower decision level. Using the minimal slack (MINSLK) priority rule [26] and the serial schedule generation scheme (SSGS) [27], a multiproject schedule is generated for the selected portfolio. In a multiproject environment, the slack (SLK) of an activity is computed as SLK = LF − EF, where LF and EF are the latest and earliest finish times, respectively, which are estimated by the critical path method (CPM). Suppose a subset Φ is selected in the master procedure. In project i ∈ Φ, there are J_i activities, denoted as (i, j). The SSGS heuristic consists of Σ_{i∈Φ} J_i stages. In each stage, two disjoint activity sets are identified [27]. The scheduled set includes the activities which are already scheduled, and the decision set contains the unscheduled activities whose predecessors are all in the scheduled set. According to the MINSLK priority rule, the activity with the minimum slack is selected from the decision set and scheduled as early as possible without violating the resource constraints. The activity is then moved from the decision set to the scheduled set. Algorithm 1 shows the pseudocode of the priority rule-based heuristic. To determine the feasibility of the multiproject schedule, we set an upper bound for each project's completion time. Specifically, for project i in the selected portfolio Φ, if its completion time goes beyond its upper bound, the portfolio Φ is recognized as infeasible and shall be repaired.

Computational Experiments and Results This section presents the experiment design and computational analyses for investigating the performance of the proposed multiagent evolutionary algorithm for the RCPPSSP. Based on the design of experiments (DOE) approach, a set of instances was generated systematically. Then parameter configurations of the algorithm were set up through testing on examples. The proposed MAEA was then compared with other algorithms in the literature.
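A compact sketch of the SSGS with the MINSLK rule for a single resource type is given below. The activity representation (dicts with duration, demand, slack, and predecessor lists) is an assumed interface, not the paper's Algorithm 1, and the sketch does not handle schedules that cannot fit within the horizon:

```python
# Serial schedule generation scheme with the minimum-slack priority rule.
# At each stage the eligible activity with minimal slack is scheduled as
# early as possible without violating precedence or resource constraints.
def ssgs_minslk(activities, capacity, horizon):
    finish = {}                              # scheduled set: id -> finish time
    usage = [0] * horizon                    # per-period resource usage
    while len(finish) < len(activities):
        # decision set: unscheduled activities whose predecessors are done
        eligible = [a for a in activities
                    if a["id"] not in finish
                    and all(p in finish for p in a["preds"])]
        act = min(eligible, key=lambda a: a["slack"])    # MINSLK rule
        est = max((finish[p] for p in act["preds"]), default=0)
        for start in range(est, horizon - act["dur"] + 1):
            window = range(start, start + act["dur"])
            if all(usage[t] + act["demand"] <= capacity for t in window):
                for t in window:
                    usage[t] += act["demand"]
                finish[act["id"]] = start + act["dur"]
                break
    return finish
```

The returned completion times can then be compared against each project's upper bound to decide whether the portfolio must be repaired.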
Experiment Design. An RCPPSSP instance consists of a pool of candidate projects, a set of profit profiles of all projects, and the resources available for all projects. Three project pools with 10 projects in each pool and three other pools with 20 projects in each pool were generated randomly from 72 instances with three types of resources and different networks selected from the widely used Patterson set [28]. The above six project pools are denoted as PAT10 1, PAT10 2, PAT10 3, PAT20 1, PAT20 2, and PAT20 3, respectively. It is assumed that a project achieves its base profit if it is completed at its critical path length (CPL) and that the profit decreases with a profit decreasing rate r1 as its completion time increases. Referring to [7], the base profit and actual profit of each project are calculated by formulae (22) and (23), where the resource utilization coefficient is subject to a uniform distribution on the interval [0.5, 1.5]. The upper bound of the project completion time is based on the critical path length by formula (24), and the relaxation rate r2 has the value 0.4 in this paper. Resource tightness is introduced to estimate the capacity of each resource from its maximum resource demand in the critical path method. Two levels of profit decreasing rate r1 (2% or 8%) and resource tightness (30% or 60%) were designed, which forms four experiment cells as shown in Table 1. This 2 × 2 experimental design, crossed with our six project pools, yielded 24 instances to test the proposed algorithm.

Parameter Configurations. Parameters of the proposed MAEA were determined through experiments, including the maximum number of agent generations MaxMA, the lattice size L_size, the coefficient c, and the perceptive ranges. The parameters of the SGA for self-learning were also assigned, including the maximum number of generations MaxGA, the population size PopSize, the crossover probability p_c, and the mutation probability p_m.
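Since formulae (22)-(24) are not reproduced in this extract, the instance-generation rules above can only be sketched from the surrounding description. The linear form of the profit decrease is an assumption; the relaxed deadline follows the stated relaxation rate r2 = 0.4:

```python
# Hedged reconstruction: a project earns its base profit when finished at its
# critical path length (CPL) and loses a fraction r1 of the base profit per
# period of delay (linear decrease assumed); the completion upper bound
# relaxes the CPL by rate r2 = 0.4.
def actual_profit(base_profit, completion, cpl, r1):
    delay = max(0, completion - cpl)
    return base_profit * (1 - r1 * delay)

def completion_upper_bound(cpl, r2=0.4):
    return cpl * (1 + r2)
```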
The number of generations and the lattice size were determined first. With an increasing agent lattice size and more generations of evolution, the multiagent algorithm is more likely to find the optimal solution, at the cost of a longer computation time. Through testing on examples, MaxMA was set at 100 in this paper. The suitable lattice size is proportional to the number of candidate projects. We estimated L_size = c × n + 1 with a coefficient c = 0.4 in this paper. Since the designed 24 instances have 10 or 20 projects, the lattice size L_size takes the value 5 or 9, respectively. According to Definitions 3 and 4, when n = 10, both perceptive ranges belong to the set {1, 2}. Similarly, when n = 20, both belong to the set {1, 2, 3, 4}. Our testing showed that the proposed MAEA performs well when both perceptive ranges equal 1. The strategy-selection coefficient is used to choose the strategy for generating new agents to replace loser agents. When this coefficient is larger, Strategy 1 is more likely to be applied, which means the new agent will be more similar to the best agent in its neighborhood. Consequently, the multiagent algorithm may easily be trapped in a local optimum. However, the stability of the algorithm is affected when the coefficient is much smaller. To trade off convergence against stability, we set it to 0.25 according to our experiments. In summary, the parameters for the proposed multiagent evolutionary algorithm were determined as described above. In the Ranking benchmark method, all candidate projects are ranked by a certain priority rule. Projects are then scheduled one by one according to their priorities until the next project makes the portfolio infeasible. It was reported that the greedy "max profit" rule performs best [29], and hence it was used in this paper.
The MKP ranking method also involves two stages [29]. In the first stage, the project selection problem is solved as a multidimensional 0-1 knapsack problem (MKP), and then all selected projects are prioritized by the "max profit" rule. In the second stage, all projects selected in the first stage are scheduled sequentially. In case a selected project cannot be scheduled before its deadline due to resource constraints, or it has a negative profit, the project is removed from the portfolio.

The MAEA and benchmark algorithms were implemented in the C language on a PC with a CPU at 2.0 GHz and 2 GB physical memory. The average computation times are shown in Table 3. It is obvious that the profit decreasing rate has a significant role in determining the computation time of the multiagent evolutionary algorithm. If the profit decreases faster after the project's critical path length, as in Cell 3 and Cell 4, the algorithm takes a much longer time to search for portfolios whose projects complete in time. The average profits over the 24 instances achieved by the three algorithms are 7486.85, 7489.24, and 8120.70, respectively. It is observed that the MAEA has a higher average profit than the other two methods. To investigate the performance of the proposed MAEA, the Wilcoxon signed-ranks test was applied to analyze the data in Table 2. Table 4 shows the paired comparison outcomes of the statistical analysis. The profits obtained by the MAEA are significantly higher than those of the other two methods at a significance level of 0.001.
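The Wilcoxon signed-ranks statistic used for the paired comparison above can be sketched in a few lines. This is a simplified, self-contained version for illustration (zero differences are dropped and tied absolute differences are not given averaged ranks):

```python
# Simplified Wilcoxon signed-ranks statistic for paired samples: rank the
# nonzero differences by absolute value, sum the ranks of positive and
# negative differences, and return the smaller sum W.
def wilcoxon_w(paired_a, paired_b):
    diffs = [a - b for a, b in zip(paired_a, paired_b) if a != b]
    ranked = sorted(diffs, key=abs)
    w_plus = sum(i + 1 for i, d in enumerate(ranked) if d > 0)
    w_minus = sum(i + 1 for i, d in enumerate(ranked) if d < 0)
    return min(w_plus, w_minus)   # a small W supports rejecting "no difference"
```

In practice a library routine with tie correction and an exact or normal-approximation p-value would be used rather than this sketch.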
Conclusions In this paper, the resource-constrained project portfolio selection and scheduling problem is formulated as a 0-1 integer programming model. The problem has a dual-level structure. Project scheduling in the lower level helps increase the portfolio profit by improving the resource allocation among selected projects and rescheduling their activities. A multiagent evolutionary algorithm is proposed to solve the RCPPSSP. The algorithm adopts a dual-level structure owing to the nature of the RCPPSSP. In the upper level, agents in an agent lattice are designed to search for feasible portfolios automatically. The neighborhood competition operator and self-learning operator are integrated to accelerate the evolution of agents. In the lower level, each agent adopts a priority rule-based heuristic to conduct multiproject scheduling so as to better utilize the scarce resources. We conducted an experiment to test the performance of the proposed algorithm. A set of 24 instances was generated systematically from the Patterson set. Computational results show that the proposed multiagent evolutionary algorithm has an outstanding performance.

Figure 2: Strategy 1 to generate a new agent.
Figure 3: Strategy 2 to generate a new agent.
Table 2: Profit data for problem instances.
v3-fos-license
2018-12-15T23:19:28.799Z
2015-01-01T00:00:00.000
56341582
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2015/03/matecconf_iceta2015_04022.pdf", "pdf_hash": "9f3ed84fbff06c0ad0f6cb3841d1834abbf8dda7", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2497", "s2fieldsofstudy": [ "Geology" ], "sha1": "9f3ed84fbff06c0ad0f6cb3841d1834abbf8dda7", "year": 2015 }
pes2o/s2orc
Study on Influence of Mud Pollution on Formation Fracture Pressure Mud pollution may change the mechanical properties of rock during the oil and gas drilling process, which affects the prediction of fracture pressure and leads to the failure of hydraulic fracturing treatments. Therefore, it is necessary to study the influence of mud pollution on formation fracture pressure to improve the forecasting accuracy. The influence of mud pollution on the modulus of elasticity and the Poisson's ratio of rock was established by a mud pollution experiment, and the core microstructure around the polluted zone was observed. Based on the experiment and this research, the effects of mud pollution on the fracturing pressure are studied with the finite element software system ANSYS, taking into account factors such as the pollution depth, the perforation length, and the Poisson's ratio of the polluted area. The results of the experiment indicate that the modulus of elasticity of rock is reduced and the Poisson's ratio of rock is increased by mud pollution. Through computation and analysis, it can be concluded that increases in pollution depth and Poisson's ratio lead to a substantial increase in formation fracturing pressure. A calculation example is presented, and the results show that this research can provide valuable guidance to the designers of hydraulic fracturing treatments.
INTRODUCTION Hydraulic fracturing is an effective method to stimulate production in low-permeability reservoirs. As the demand of industry for oil and gas increases, petroleum exploration is improved and developed. Petroleum exploration is targeting high-temperature, deep, and dense reservoirs, in which the hydraulic fracturing treatment often fails because the fracture pressure is too high to fracture the formation [1-4]. Meanwhile, the rock mechanics properties can be changed by mud solid-particle plugging and mud-filtrate invasion during drilling, thus leading to changes in formation fracture pressure. There is a big gap between the forecast fracture pressure and the actual result in field hydraulic fracturing of deep wells, which can prevent the formation from being fractured. The inaccuracy of the forecast is mainly due to the lack of consideration of mud pollution in the fracture pressure prediction model. At present, the calculation of fracture pressure is mainly based on well-logging information [5-9]. Correlation analysis of theories and experimental study are necessary to predict the fracture pressure of polluted wells accurately, so that the success ratio of fracturing operations can be increased.

EXPERIMENTS OF MUD POLLUTION'S EFFECT ON ROCK MECHANICS PROPERTIES The method to analyze the fracture pressure of polluted perforated wells with the finite element method is the same as the elastic finite element method in mechanics. The rock mechanics properties (such as compressive strength, Young's modulus, and Poisson's ratio) may be changed to different degrees by mud pollution during the drilling and completion processes [10], so it is necessary to consider the variation of rock mechanics properties when calculating the fracture pressure with the finite element method.
Mud pollution effects on the rock mechanics parameters The level of formation pollution and the rule governing the variation of the rock mechanics parameters were obtained by a formation pollution experiment. Three cores drawn from one well in the Sichuan Basin at the same depth were tested on a triaxial stress rock mechanics instrument to determine their rock mechanics parameters; then a mud pollution experiment (the mud used in the experiment is a polysulfide drilling mud with a density of 1.1 g/cm³, a viscosity of 39 mPa·s, and pH 10) was carried out under reservoir pressure and temperature conditions on these three cores. To simulate different mud soaking times, the first core stays in the mud filtrate for half an
hour, the second core stays for an hour, and the third core stays for two hours. After that, these three cores were tested again on the triaxial stress rock mechanics instrument to determine their rock mechanics parameters (the testing confining pressure is 140 MPa; the testing temperature is 150 °C; the testing pore pressure is 140 MPa; and the maximum axial load is 1000 kN); the results are shown in Figure 1 and Figure 2. The reason why a core's Young's modulus increases after a short pollution time is that the mud filtrate and part of the solid particles of the mud invade the core under the pressure drawdown, and the solid particles then destroy the core's connectivity by filling the core's large channels, which makes the strength increase. With the extension of the pollution time, more and more mud filtrate invades the core's channels and impacts the rock, loosening the core's structure, so the Young's modulus decreases and the Poisson's ratio increases. Figure 4 shows the microscopic structure of the polluted core at about 5 mm and 15 mm from the pollution face. Figure 5 shows the microscopic structure of the core at about 25 mm and 40 mm from the pollution face. It is found that the large channels are filled first by the invader, and the amount of filamentous polymer and barite invading the channels tends to decrease as the distance from the pollution face increases. When the distance from the pollution face reaches 40 mm, the pore profile can be clearly observed. That is to say, the mud stops invading the core about 40 mm from the surface of the core (the core is 50 mm long).
Conclusion of the experiment The laboratory experiment shows that mud pollution changes the rock mechanics parameters of the formation; that is, the Young's modulus is decreased and the Poisson's ratio is increased. All of these cause the stress field around the wellbore to change. An invasion depth of the mud can be observed in the microscopic structure of the core after it is polluted by the mud; that is to say, there are two regions of different mechanical properties in the polluted core: one is the pollution area, and the other is the unpolluted area. So the pollution area and its size must be taken into account when calculating the fracture pressure of a mud-polluted well by the finite element method.

FRACTURE PRESSURE PREDICTION FOR MUD POLLUTED PERFORATED WELL Calculation of polluted rock fracture toughness Currently, the routine calculation methods for rock fracture toughness all have shortcomings and difficulties to a certain extent. The rock fracture toughness can be calculated from the statistical relationships among the rock fracture toughness, hardness index, uniaxial compressive strength, and Young's modulus [11-13]. The relationship among the rock's uniaxial compressive strength, shale content, and dynamic Young's modulus is given in [14]. The rock tensile strength and the confining pressure P_C are calculated by the corresponding formulae, and the relationship among rock fracture toughness, deep confining pressure, and tensile strength is given as well, where E = Young's modulus; μ = Poisson's ratio; V = shale content; P_v = overlying formation pressure; P_p = pore pressure; σ_C = uniaxial compressive strength; P_C = confining pressure; σ_t = uniaxial tensile strength; and D = elastic parameter.
Pollution depth's influence on fracture pressure So far, there is no established method to ascertain the pollution zone precisely. The depth of the pollution zone is generally considered to be 600 mm to 1200 mm after drilling and completion [15]. ANSYS, an international large-scale general-purpose finite element software package, is adopted to calculate the fracture pressure under the influence of mud pollution. The size of the formation model is 3000 mm × 3000 mm; to improve the calculation efficiency, the size of the simulation model is half of the formation model. The maximum horizontal principal stress is 85.5 MPa; the minimum horizontal principal stress is 85.5 MPa; the vertical stress is 110.5 MPa; and the pore pressure is 53.2 MPa. The radius of the well bore is 50 mm; the perforation diameter is 10 mm; the perforation length is 500 mm; and the perforation azimuth is zero. The Young's modulus and Poisson's ratio of the pollution zone are 18000 MPa and 0.27, respectively, while the Young's modulus and Poisson's ratio of the unpolluted zone are 25000 MPa and 0.25, respectively. The compressive strength and critical stress-intensity factor of the pollution zone are 247.6 MPa and 32.7 MPa·m^(1/2), respectively. There are five simulation models with five different pollution depths (600 mm, 700 mm, 800 mm, 900 mm, and 1000 mm). The tensile strength failure criterion is used to determine whether the formation fails or not. Figure 6 is the schematic of the pollution zone in the simulation model. Figure 7 is the stress contour when the model cracks. Figure 8 is the curve of fracture pressure versus pollution depth: the fracture pressure increases with increasing pollution depth. As the pollution depth increases from 0.6 m to 1 m, the fracture pressure is increased by 2.78 MPa.
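The simulated trend above (an increase of 2.78 MPa as pollution depth grows from 0.6 m to 1.0 m) can be summarised as a linear fit. The base pressure value below is a placeholder assumption for illustration, not a result from the paper:

```python
# Illustrative linear fit of the simulated pollution-depth trend: slope taken
# from the reported 2.78 MPa rise over 0.4 m; the intercept p_at_0_6 is a
# placeholder, not a value from the simulations.
def fracture_pressure(depth_m, p_at_0_6=100.0):
    slope = 2.78 / (1.0 - 0.6)        # MPa per metre of pollution depth
    return p_at_0_6 + slope * (depth_m - 0.6)
```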
Figure 9 plots the fracture pressure against the pollution depth after the pollution region has been traversed, with a perforation length of 500 mm. As shown in Figure 9, when the pollution depth increases from 0 m to 0.4 m the fracture pressure rises only from 93.1 MPa to 93.49 MPa; the change is not significant.

Influence of Poisson's ratio of pollution area on fracture pressure

Five simulation models with five Poisson's ratios of the pollution area (0.27, 0.28, 0.29, 0.30 and 0.31) are built. The perforation length is 600 mm, the wellbore radius 50 mm, the perforation diameter 10 mm, the Young's modulus of the pollution area 25000 MPa, and the pore pressure 53.2 MPa; the perforation axis lies along the maximum horizontal principal stress, and the pollution depth is 700 mm. Figure 10 plots the fracture pressure against the Poisson's ratio: the fracture pressure increases with the Poisson's ratio, rising by nearly 5 MPa as the ratio grows from 0.26 to 0.31. Mud pollution therefore changes the formation fracture pressure, and measures must be taken to relieve the pollution and reduce the fracture pressure where the pollution is deep.

Well PL-3 in the Sichuan Basin is a vertical well whose target zone is the 2nd Xu Member at a depth of 3800 m; the reservoir is of the porous type. Acid treatment was performed on well PL-3 before fracturing, and the well-head pressure during acid treatment reached 83 MPa at a pumping rate of 2.5 m^3/min. Based on the well's basic data and the results of the mud pollution experiment, the fracture pressure of the polluted formation is predicted with the finite element model. The input parameters are listed in Table 1; Figure 11 shows the displacement equivalence value map in the X-direction after the formation was polluted by mud. The fracture pressure is forecast to be 112.1 MPa, which explains the high
treatment well-head pressure observed during the acid treatment. After the mud pollution was relieved by acidising, the predicted formation fracture pressure fell to 99.4 MPa; Figure 12 shows the corresponding displacement equivalence value map in the X-direction. The hydraulic fracturing treatment was then implemented smoothly on well PL-3, and the predicted fracture pressure was consistent with the fracture pressure monitored during the field treatment.

CONCLUSIONS

(1) In the mud pollution experiment the Young's modulus first rises at the initial pollution stage and then declines, while the Poisson's ratio first declines and then rises; overall, the Young's modulus decreases and the Poisson's ratio increases after pollution.

(2) The fracture pressure increases with increasing pollution depth before the pollution region is traversed, but changes little once the pollution region has been traversed.

(3) The fracture pressure increases with the Poisson's ratio of the pollution area: as the Poisson's ratio grows from 0.26 to 0.31, the fracture pressure rises by nearly 5 MPa.

Keywords: rock mechanics; pollution zone; fracturing pressure; finite element; hydraulic fracturing

DOI: 10.1051/ — Owned by the authors, published by EDP Sciences, 2015

Figure 1 compares the cores' Young's modulus before and after mud pollution. The first core's Young's modulus increased by 3.3 percent after half an hour in the mud; the second core's dropped sharply by 11.7 percent after an hour, and the third core's dropped by 7.4 percent after two hours. Overall, the cores' Young's modulus decreases after pollution by drilling mud. Conversely, the cores' Poisson's ratio increases after pollution, as shown in Figure 2.
The first core's Poisson's ratio decreased slightly after half an hour in the mud, while the second core's rose rapidly by 20.4 percent after an hour and the third core's rose by 15.4 percent after two hours.

Statistically, the relationship between the rock fracture toughness K_IC and the confining pressure P_c is given by the correlation cited above.

Study on Influence of Mud Pollution on Formation Fracture Pressure

Figure 1. Young's modulus contrast before and after being polluted
Figure 2. Poisson's ratio contrast before and after being polluted
Figure 3. Microscopic structure of the core before (a) and after (b) being polluted
Figure 4. Microscopic structure of the core 5 mm (a) and 15 mm (b) from the pollution face
Figure 5. Microscopic structure of the core 25 mm (a) and 40 mm (b) from the pollution face
Figure 6. Scheme of the pollution region (pollution depth 600 mm, dark area)
Figure 7. Displacement equivalence value map in the X-direction (pollution depth 600 mm)
Figure 8. Curves between fracture pressure and pollution depth
Figure 9. Curves between fracture pressure and pollution depth after the pollution region was traversed
Figure 10. Curves between fracture pressure and Poisson's ratio
Figure 11. Displacement equivalence value map in the X-direction (after being polluted)
Table 1. Input parameters
Integrated Performance and Visualization Enhancements of OLAP Using Growing Self Organizing Neural Networks

OLAP performance and data visualization can be improved using different types of enhancement techniques. Previous research has pursued OLAP performance improvement and visualization enhancement as two separate directions. Some recent works have shown the benefits of combining OLAP and Data Mining, and our previous work presented an architecture that enhances OLAP functionality by integrating the two. In this paper we propose a novel architecture that not only overcomes the existing limitations but also provides an integrated enhancement of performance and visualization using a growing self-organizing neural network. We have developed a prototype and validated the proposed architecture on real-life data sets. Experimental results show that cube construction time and interactive data visualization capability can both be improved remarkably. By integrating the enhanced OLAP with a data mining system, a higher degree of enhancement is achieved, marking a significant advancement for modern OLAP systems.

I. INTRODUCTION

OLAP technology refers to a set of data analysis techniques for viewing the data from all of the transactional systems in an interactive way in order to support the decision-making process. The fast-growing complexity and volume of the data to be analysed impose new requirements on OLAP systems [1]. An OLAP system's performance and level of data visualization can be enhanced using different tools and techniques, and by coupling these enhancement techniques OLAP functionality as a whole can be enhanced [2]. However, OLAP performance improvement and visualization enhancement have historically been treated separately.
Figure 1 depicts the integration of performance improvement and visualization enhancement practices. Another aspect of enhancement is Data Mining, which aims at extracting synthesised and previously unknown insights from large databases [3]. It can be viewed as the automated application of algorithms to detect patterns and extract knowledge from data that is not obvious to the user [4]. Some recent work has shown the benefits of combining OLAP and Data Mining: according to [5], automated Data Mining techniques can make OLAP more useful and easier to apply in the overall scheme of decision support systems, and techniques such as association mining [6], classification [7], clustering [8] and trend analysis [9] can be used together with OLAP to discover knowledge from data [10].

Figure 1. Integration of Enhancement Techniques [2]

In the quest for OLAP enhancement, Asghar [3] proposed a functionality-enhancement technique using self-organising neural networks. The technique integrates Data Mining with OLAP by passing the mined data to the OLAP engine for a more focused analysis, thus adding intelligence to the OLAP system. The major limitation of that architecture was its neglect of OLAP performance and visualization: no visualization enhancement technique was used for an expanded view of the OLAP data, and data cube processing time and physical storage were not discussed. Users of the architecture had to formulate queries manually to retrieve the data of their choice, so interactive visual analysis, a very attractive capability of OLAP systems, was missing. Typical OLAP data do not change, as they are usually historical; a major concern is therefore the support of ad-hoc data exploration by analysts and other users looking for trends or patterns at various levels of detail, perhaps integrated with decision support applications [10].
In this paper, we extend the previous work to overcome the existing limitations and provide an enhanced architecture that caters for both performance improvement and visualization enhancement. The newly proposed architecture comprises several modules which together enable the integrated performance and visualization enhancement of the OLAP system.

We developed an OLAP prototype system using the C# language and other software development tools such as Microsoft SQL Server [11], Microsoft Analysis Services [12] and Dundas OLAP Services for Windows Forms [13]. Experiments were performed on the Forest Cover Type [14] and Zoo [15] data sets. We observed that the proposed architecture enhances existing OLAP systems in terms of both performance and visualization, and that a higher degree of overall enhancement is obtained by integrating the performance improvement and visualization enhancement techniques. To the best of our knowledge, no architecture featuring an enhancement solution for OLAP systems that improves performance and enhances visualization capabilities at the same time has previously been proposed. Our experimental results show that cube construction time can be improved remarkably by using clustered data tables instead of relational tables. Similarly, implementing various visualization and enhancement tools and APIs [16] in the OLAP system improves the level of interactive data visualization through different types of charts, graphs and data grids.

The remainder of this paper is organised as follows: Section 2 summarises the past work on which our solution is based; Section 3 addresses related work; Section 4 elaborates the proposed architecture of the enhanced OLAP; Section 5 describes the implementation of our prototype; Section 6 discusses the experimental results and compares them with the previous work; conclusions and possible future research directions are drawn at the end.

II.
PREVIOUS WORK

This section briefly summarises our previous work on OLAP functionality enhancement, on which the proposed solution builds. In that work we extended the capability of OLAP systems by means of a neural network: in addition to the usual visualization capabilities, users were given the opportunity to analyse data in clusters at different levels of abstraction. The technique used is the Growing Self-Organizing Map (GSOM) [17], developed as a flexible data mining feature-mapping method over the traditional Self-Organizing Map (SOM) [18]. The innovation of GSOM is its ability to generate feature maps at different levels of data abstraction via a parameter called the spread factor. The spread factor is first used to generate hierarchical clusters through an analysis technique known as the dynamic SOM tree. These hierarchical clusters are subsequently used to let the OLAP user visualise and select data clusters at different levels of abstraction for further detailed analysis. Figure 2 depicts the previously proposed architecture for enhancing OLAP functionality, which indicates two different paths for passing the data set to an OLAP engine. The framework devised and presented in this paper is based on the hierarchical clusters generated by GSOM, which are then translated into relational tables by hand; the tables are stored in the relational database which serves as the data source for the OLAP engine.
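The spread factor's effect on map growth can be illustrated with the growth-threshold rule commonly used in the GSOM literature, GT = -D × ln(SF), where D is the data dimensionality; the paper does not restate the formula, so treat this as an indicative sketch rather than the authors' exact implementation:

```python
import math

def growth_threshold(spread_factor: float, dim: int) -> float:
    """GSOM growth threshold: a lower spread factor yields a higher
    threshold, so fewer nodes are spawned and the map is coarser."""
    assert 0.0 < spread_factor <= 1.0
    return -dim * math.log(spread_factor)

# A node grows (inserts neighbours) once its accumulated
# quantisation error exceeds the threshold:
def should_grow(accumulated_error: float, sf: float, dim: int) -> bool:
    return accumulated_error > growth_threshold(sf, dim)

# 16-attribute Zoo-style data, at the three spread factors used
# later in Experiment 1:
for sf in (0.01, 0.05, 0.1):
    print(f"SF={sf}: GT={growth_threshold(sf, 16):.2f}")
```

Running the loop shows the threshold falling as the spread factor rises, which is why increasing the spread factor spreads the map into more, finer clusters.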
The architecture in the past work, however, has a number of drawbacks. First, the clusters generated by GSOM must be manually translated into relational tables, which means user involvement is required to map the clusters onto a relational schema. Second, the OLAP user cannot perform interactive visualization on the clustered data, as no such facility exists at the front end; customised queries are required to view the clustered data, which presupposes knowledge of the complete clustered data in advance. The architecture also lacks support for Multidimensional Expressions (MDX), the query language for OLAP databases, so users have to rely on the SQL GROUP BY clause to perform runtime aggregation of data. Cube construction time therefore grows unfavourably as the size of the data sets increases.

The motivation for this paper stems from the need to keep up with the pace of incremental and fast OLAP development, and from the limitations of the previous architecture, specifically in terms of its performance and data visualization.

III. RELATED WORKS

As far as performance in OLAP systems is concerned, the bottleneck usually lies in data processing speeds over the structures of the data cubes. The authors in [5] identified the problem that OLAP operations require complex queries on the underlying data, which can be very expensive in terms of computation time. Using parallel computers is one solution. Another approach, similar to the solution proposed here, is to improve cube processing time rather than OLAP query processing time. For instance, the authors in [19] suggested that OLAP performance can be improved using Multidimensional Hierarchical Clustering (MHC), where clustering was introduced as a way to speed up aggregation queries without additional storage cost for view materialisation.
In contrast, our work uses GSOM to generate hierarchical clusters for a focused analysis rather than to speed up OLAP queries. Similarly, the authors in [20] achieved heuristic optimisation of OLAP in multi-dimensionally hierarchically clustered (MHC) databases. They found that commercial relational database management systems use multiple one-dimensional indexes to process OLAP queries that restrict multiple dimensions, and presented an architecture for MHC databases based on the CSB star schema. In our work we adopt this concept of relational tables containing hierarchical cluster information that are transformed into a typical star schema. Along the same lines, the researchers in [21] suggested an enhanced OLAP operator based on Agglomerative Hierarchical Clustering (AHC), called the Operator for Aggregation by Clustering (OpAC), which provides significant aggregates of facts referring to complex objects. Our approach differs slightly in that we use no clustering operator; instead, a separate analysis server constructs the cube from the star schema source residing in the database server.

There are other innovative techniques for cube enhancement, such as the Cube Presentation Model (CPM) recommended in [22], which can be naturally mapped to an advanced visualization technique called Table Lens. A visual interface for exploring OLAP data with coordinated-dimension hierarchies is introduced in [10].
In the literature, many works have been devoted to visualization enhancement techniques. To name a few: an advanced tool (CommonGIS) for highly interactive visual exploration of spatial data is described in [23]; following that, the authors in [24] extended a tool for spatial OLAP called SOVAT (Spatial OLAP Visualization and Analysis Tool); a hierarchy-driven compression technique for the advanced visualization of multi-dimensional cubes is suggested in [25]; and a new visual interactive exploration technique for OLAP is presented in [26]. The latter is similar to our work in terms of facilitating the OLAP user: the enhanced architecture allows novice users of OLAP technology to explore and analyse OLAP data cubes without sophisticated queries.

Subsequently, a framework was proposed in [27] for querying complex multi-dimensional data and transforming irregular hierarchies to make them navigable in a uniform manner. In 2008, Mansmann [28] introduced a comprehensive visual exploration framework which implements OLAP operations as a form of powerful data navigation and allows users to explore data through a variety of interactive visualization techniques. The Dundas visualization toolkit used in our work likewise lets the user view OLAP data through a number of chart types, so our choice of visualization also provides multiple views for understanding and analysing the data in an interactive way.
As observed in our review, the research community advocates that associating Data Mining with OLAP is useful for rich analysis and OLAP tools [21]. Following this fusion idea, we emphasise the coupling of performance and visualization enhancement of OLAP systems as our solution. Why, then, is a new OLAP enhancement architecture needed? Although a number of enhancement architectures have been proposed in the past, none of the work so far has aimed at the integrated enhancement of both OLAP performance and visualization, and hence a strong need for it exists [25].

IV. PROPOSED ARCHITECTURE FOR ENHANCED OLAP

To fulfil the growing demands of OLAP users [3], a standardised architecture is required which can easily be deployed as a complete system and which supports integrated enhancement. Figure 3 depicts such an architecture, which enforces the integrated enhancement of OLAP performance and visualization.

We now describe the important components of the proposed architecture for the enhancement of OLAP systems. The goal of this architecture is to integrate OLAP with Data Mining and to provide integrated performance and visualization enhancement. To achieve this objective, we deploy separate servers, one for the database and one for the OLAP data. The architecture provides two channels for passing data to the OLAP engine. The first is the conventional, non-clustered path, where a data set is loaded directly into the database server through an Extract, Transform and Load (ETL) process and stored as a relational database. From the relational database a star schema is designed using standard SQL queries and loaded with data. The OLAP server takes this star schema as the source for constructing OLAP cubes, and also provides the storage and management mechanism for the cube data. At the front end, a visualization tool captures the cubes generated by the OLAP server and displays the data as charts, reports and tables.

A.
Data Processing

One unique feature of the architecture we have devised to enhance OLAP is the use of hierarchical data clusters generated by GSOM, which is based on the design of an unsupervised neural network [29]; its role here is mainly to produce the hierarchical clusters.

The data set is first fed to the GSOM tool, which produces hierarchical clusters controlled by a numerical spread factor; the user sets the spread factor to control the number of hierarchical clusters generated. An example is given in Figure 4, where the spread factor is 3. Once generated, the clusters are mapped manually into relational tables, which are stored in a database on the database server. From these relational tables a star schema is created and loaded with data; as mentioned previously, the star schema becomes the data source from which cubes of the clustered data are constructed. These clustered cubes in turn become the source of data visualised by the prototype in the same manner. From the visualised clusters, the user can select clusters of interest and perform analysis on particular clusters instead of the complete set of data.

B.
Visualization

A front-end visualization tool is connected to the OLAP server, which generates cubes using the star schema as its data source; the tool displays the cube data in various views such as charts, graphs, reports and tables at the user's end. Users can perform the basic OLAP operations at will through the front-end tool, so interactive analysis is available via drill-down, roll-up and the other standard OLAP operations. The visualization capabilities of both traditional and clustered OLAP data are thereby enhanced. The proposed architecture achieves its objective: it integrates OLAP with Data Mining, uses clustered data for performance improvement in terms of cube processing time, and its front-end visualization tool supports interactive visual exploration of OLAP data using drill-down and roll-up charts and tables.

V. IMPLEMENTATION

This section is presented in two phases: in the first phase we explain the implementation details of the proposed architecture, and in the second we describe the experiments performed on two different data sets.
Our architecture supports two data-loading approaches, clustered and non-clustered. In the non-clustered (ordinary) approach, the data set, stored as a single comma-separated text file (depicted in Figure 5), is uploaded into a database as a table using Microsoft SQL Server 2000's Data Transformation Services (DTS), as shown in Figure 6. After the database has been loaded, SQL queries are used to create a single fact table and three dimension tables and populate them to form a star schema, as shown in Figure 7. This star schema, residing in the database server, is the source data for the OLAP server; in this project we use Microsoft Analysis Services as the OLAP data server, and its OLAP engine uses this data source to construct cubes. The cube wizard also helps the user create dimensional hierarchies. Once the wizard has finished with the dimension and measure selection, it prompts the user for the cube storage mode. The Storage Design Wizard of MS Analysis Services allows three storage modes, namely multidimensional (MOLAP), relational (ROLAP) and hybrid (HOLAP); Figure 9 shows the storage-type selection step in the cube wizard. Cube performance can be controlled to some extent by setting the cube's aggregation options: aggregations are pre-calculated summaries of data that make querying the cube faster, and the wizard lets the user tune these options as well. Figure 10 illustrates the aggregation setting, which can be controlled by both performance and cube size. Finally, the cube is processed with all the given settings and the details are shown to the user. Figure 12 presents a preview of the built-in cube browser of MS Analysis Services, which shows data only in a data grid; to enhance the visualization capabilities of the OLAP system, we have developed a prototype using the Dundas OLAP services.
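The star-schema step above — one fact table of keys and measures referencing small dimension tables — can be sketched with SQLite standing in for SQL Server; the table and column names here are illustrative, not the paper's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables hold descriptive attributes; the fact table holds
# foreign keys plus measures, which is what the OLAP engine aggregates.
cur.executescript("""
CREATE TABLE Dim_Animal   (AnimalID   INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Dim_Category (CategoryID INTEGER PRIMARY KEY, Category TEXT);
CREATE TABLE Fact_Zoo (
    AnimalID   INTEGER REFERENCES Dim_Animal(AnimalID),
    CategoryID INTEGER REFERENCES Dim_Category(CategoryID),
    Legs       INTEGER   -- example measure
);
""")
cur.executemany("INSERT INTO Dim_Animal VALUES (?, ?)",
                [(1, "aardvark"), (2, "chicken"), (3, "bass")])
cur.executemany("INSERT INTO Dim_Category VALUES (?, ?)",
                [(1, "mammal"), (2, "bird"), (3, "fish")])
cur.executemany("INSERT INTO Fact_Zoo VALUES (?, ?, ?)",
                [(1, 1, 4), (2, 2, 2), (3, 3, 0)])

# A cube's pre-aggregation is, at heart, a grouped join over the star:
rows = cur.execute("""
    SELECT c.Category, COUNT(*) AS animals, SUM(f.Legs) AS total_legs
    FROM Fact_Zoo f JOIN Dim_Category c USING (CategoryID)
    GROUP BY c.Category ORDER BY c.Category
""").fetchall()
print(rows)  # [('bird', 1, 2), ('fish', 1, 0), ('mammal', 1, 4)]
```

Materialising such grouped results ahead of time is exactly what the Storage Design Wizard's aggregation options trade disk space for.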
With its aid, the prototype takes cubes residing in the OLAP server as a source and displays the cube data through a rich and interactive interface. The user can perform the basic OLAP operations, view the data in a number of chart types and colourful grids, and save reports in Extensible Markup Language (XML) format. Figure 13 shows the user interface of our prototype, which offers all of the above-mentioned controls for working on cube data. The prototype was developed using Microsoft Visual Studio 2005 and the C# programming language.

The second approach to OLAP enhancement uses GSOM to load clustered data into the OLAP system. The data set is first fed into the GSOM tool, which produces hierarchical clusters using the spread factor; the user sets the spread factor to control the number of hierarchical clusters generated, as depicted in Figure 4. Once generated, the clusters are mapped manually into relational tables, stored in a database on the database server (MS SQL Server 2000). From these relational tables a star schema is created and loaded with data; the star schema becomes the data source from which cubes of the clustered data are constructed, and these cluster-based cubes become the source of data visualised with the prototype in the same manner. Figure 14 shows the cluster-based cube data as a bar chart in the prototype's chart view. Using this approach, the user can select clusters of interest and perform analysis on particular clusters instead of the complete cube data.

VI.
EXPERIMENTS

As explained earlier, we selected two different data sets for testing and validating the results of our proposed architecture: a small data set called the Zoo data set, and a larger one, Forest Cover Type. The idea is to verify how well the new architecture performs for data of various sizes, and especially to demonstrate the integrated enhancements of performance and visualization. We tested both the clustered and non-clustered approaches on the data sets; only the clustered approach is discussed in detail here.

A. Experiment 1 -- with Simple Data

In our first experiment we chose the small Zoo data set. The data set was first passed to GSOM to generate hierarchical clusters. We performed this process with different values of the spread factor in order to obtain hierarchical clusters at different levels of abstraction, and selected three levels of hierarchical clusters for analysis.

We observed that the hierarchical clusters generated with the three spread-factor values 0.01, 0.05 and 0.1 identify different groups and subgroups of animal types. These groups clearly indicate relevant groupings of the data based on user interest, and the clusters can be spread out further if necessary; GSOM provides an unbiased grouping of the data in terms of clusters. The clusters were picked at diverse levels of abstraction and stored in relational tables, one of the main resulting tables being shown in Table 1. From these cluster-relationship tables we created a fact table: the two tables output by GSOM, the Cluster Name table and the Cluster Hierarchy table, are used as dimensions to create a star schema, and the fact table is created and updated using SQL queries as shown below.
• Fact Table Data Insertion

INSERT INTO [Fact_table]
  ([AnimalID], [CategoryID], [Hair], [Feathers], [Eggs], [Milk],
   [Airborne], [Aquatic], [Predator], [Toothed], [Backbone], [Breathes],
   [Venomous], [Fins], [Legs], [Tail], [Domestic], [Catsize])

• Fact Table Data

The OLAP server takes this star schema and uses the clustered data to generate and process a cube. We named the database holding the clustered zoo data Clustered_Zoo, then created an OLAP database and a new cube called cluster_zoo_cube. The OLAP database storing the cube becomes the source of the cube data. In the prototype we set the connection string so that the front-end tool can connect to the OLAP server, fetch the cube data and display it as charts, graphs, grids and reports. The connection string used to connect Microsoft's Windows Forms application with the OLAP server is as follows:

Data Source=[server name]; Provider=msolap.2; Initial Catalog=Clustered_Zoo

The connection string identifies the server on which the OLAP server is hosted; the initial catalog is the OLAP database in which the cubes are stored. The prototype then calls a function to display the cube data on the front-end interface:

olapClient1.ShowData();

This function shows the cube data so that the user is presented with graphs, charts and reports. Figure 16 shows the cluster_zoo_cube data in the prototype.
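The Cluster Name and Cluster Hierarchy tables encode a parent–child tree, so selecting "a cluster of choice" reduces to collecting a node and its descendants. A small sketch with invented cluster names (the Zoo clusters themselves are not listed in the paper):

```python
# Parent -> children edges, as a Cluster Hierarchy table would store them.
# Cluster names are hypothetical, for illustration only.
hierarchy = {
    "root":        ["mammals", "non-mammals"],
    "mammals":     ["large-mammals", "small-mammals"],
    "non-mammals": ["birds", "fish"],
}

def descendants(cluster: str) -> list[str]:
    """All clusters under `cluster`, i.e. the rows a focused OLAP
    analysis on that cluster would pull from the fact table."""
    result = []
    for child in hierarchy.get(cluster, []):
        result.append(child)
        result.extend(descendants(child))
    return result

print(descendants("mammals"))  # ['large-mammals', 'small-mammals']
print(descendants("root"))
```

Restricting the fact table to such a descendant set is what turns a whole-cube query into the focused, cluster-level analysis the prototype offers.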
Using this interface, the user can drill down and roll up on the cube data and see the chart update instantly; switching from chart view to grid view is easy, and reports can be managed and saved in the report viewer. The interface offers interactive visualization of charts, and the basic operations that can be performed with the prototype are depicted in Figures 17, 18 and 19. The bar charts show the hierarchy of the number of animals present in each cluster. It is evident from the prototype that the user can see information on the different clusters and can select any cluster of interest for analysis. This unbiased grouping of the data not only lets the user select interesting groups for OLAP operations but also simplifies query processing.

B. Experiment 2 -- with Complex Data

The second experiment was conducted on the large Forest Cover Type data set [14], which has 581,012 instances and 54 attributes. The attribute breakdown shows 12 measures over 54 columns of data, of which 10 are quantitative variables, 4 are binary wilderness areas and 40 are binary soil-type variables. The data represent a typical analytic situation of some complexity, and the main purpose of the experiment is to validate the capabilities of our proposed architecture. We tested both the clustered and non-clustered approaches on the data set; the class distribution of the Forest Cover Type data set is shown in Table 3.
First, the data set is fed into GSOM to generate hierarchical clusters, which are then transformed into relational tables. From the relational schema we created a star schema with one fact table and two dimension tables, and an OLAP database called Clustered_ForestCovertype was created to store the cube data. Using the fact table and dimension tables we created a cube, in the same way as in the previous experiment, and named it Clustered_Forest_Cube; the fact table for the Forest Cover Type data set has a format similar to Table 1 in Experiment 1. The cube was stored in the OLAP database, which served as the source for the prototype; the cube was connected to the prototype and its data shown on the front end. Readers are referred to [31] for more details of the process. It is clear from the prototype that the user can perform analysis on the cluster of his or her choice; the visually enriched interface allows interactive exploration of the cube data and hence enhances the power of the OLAP system under the proposed architecture.

VII. DISCUSSION

The results of the experiments are discussed with respect to the Forest Cover Type data set, which represents an example of large and complex data. This section is divided into two segments: the first discusses the level of performance improvement achieved in terms of cube construction time; the second discusses the degree of visualization enhancement for the cube data.

A.
A. Performance Improvement

We have performed experiments on both the clustered and non-clustered Forest Cover Type data sets. The non-clustered data set originally had 581,012 records. This large amount of data was loaded into an MS SQL Server database called ForestCoverType. From this database a star schema was generated using SQL queries, some of which are shown in Figure 21. This schema became the source for the cube construction. From this non-clustered data set we generated a cube named Forest Cube, having 3 dimensions and 1 fact table. The processing time of Forest Cube was measured, and it was observed that it took 1 second to process 60,180 rows of data to construct the cube.

We carried out the same experiment on the clustered data, which was first fed into the GSOM and then transformed into the star schema. A cube named Clustered_Forest_Cube was generated using Microsoft Analysis Services, as depicted in Figure 22. This clustered data cube was processed almost instantly: its construction time was less than a second. Figure 23 shows the cube construction time comparison for the clustered and non-clustered data.
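The cluster-to-star-schema step described above can be sketched in miniature. This is an illustrative mockup, not the paper's actual schema: every table name, column name, and row here is hypothetical, and SQLite stands in for MS SQL Server.

```python
import sqlite3

# Hypothetical star schema: GSOM cluster assignments become a dimension
# table, and per-record measures populate the fact table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_cluster (cluster_id INTEGER PRIMARY KEY, cluster_name TEXT);
CREATE TABLE dim_animal  (animal_id  INTEGER PRIMARY KEY, animal_name  TEXT);
CREATE TABLE fact_zoo (
    animal_id  INTEGER REFERENCES dim_animal(animal_id),
    cluster_id INTEGER REFERENCES dim_cluster(cluster_id),
    legs       INTEGER
);
""")

cur.executemany("INSERT INTO dim_cluster VALUES (?, ?)",
                [(1, "C1"), (2, "C2"), (3, "C3")])
cur.executemany("INSERT INTO dim_animal VALUES (?, ?)",
                [(1, "bear"), (2, "hawk"), (3, "frog"), (4, "wolf")])
cur.executemany("INSERT INTO fact_zoo VALUES (?, ?, ?)",
                [(1, 1, 4), (2, 2, 2), (3, 3, 4), (4, 1, 4)])

# A roll-up over the cluster dimension: number of animals per cluster,
# the same aggregate the prototype's bar charts display.
rows = cur.execute("""
    SELECT c.cluster_name, COUNT(*) AS n_animals
    FROM fact_zoo f JOIN dim_cluster c USING (cluster_id)
    GROUP BY c.cluster_name ORDER BY c.cluster_name
""").fetchall()
print(rows)  # e.g. [('C1', 2), ('C2', 1), ('C3', 1)]
```

Grouping by a cluster dimension rather than by raw attribute values is what allows the OLAP engine to pre-aggregate per cluster, which is the source of the construction-time savings discussed next.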
The experiments measuring cube construction time show that the clustered data cube takes less time than the non-clustered data cube. The significant feature of the graph is the rapid variation in processing time: the processing time of the non-clustered data increases rapidly as the volume of data grows, whereas for the clustered data it does not increase as sharply. This indicates that when a huge amount of data has to be handled in OLAP cube construction, the clustered approach is more suitable. For instance, the experiment clearly shows that increasing the size of both the clustered and non-clustered data affects the cube processing time, but the rate of increase when the data is not clustered is considerably higher. It can be seen in the graph that the distance between the two lines keeps increasing as the data grows: for the clustered data the line remains below 200 units of processing time, while it reaches 1000 units for the same non-clustered data.

It is evident from the time comparison graph that cube processing time can be reduced by using clustered data, and the user can perform targeted and fast multidimensional analysis on the clustered cube. Hence, our benchmarking experiments show that the proposed architecture yields a performance improvement in the computational time of the OLAP data cube.

B. Visualization Improvement

We have constructed cubes for both the Zoo and Forest Cover Type data sets using Microsoft's Analysis Services OLAP engine. With the development of the prototype and the embedding of the OLAP data visualization controls by Dundas Software, we provide OLAP users with a rich user interface to perform targeted and interactive visual exploration of the data.
The user can view the same data in a number of charts and graphs simply by selecting the chart type from the toolbar of our prototype. For the sake of demonstration, we show the clustered Zoo cube data using our prototype. Figure 24 shows the outputs of the animal cube in various visual chart formats.

Furthermore, users can perform an interactive analysis on the grid as well. Users can drill down and roll up to a level of detail using the "+" and "-" buttons present on the grid and charts. This interactive visualization is an enhancement, since the previous work only used the GROUP BY clause of SQL queries to retrieve data from the source system. The comparison of the previous data representation and the representation using the developed prototype is shown in Figure 25.

VIII. CONCLUSION AND FUTURE WORK

In this paper we proposed integrating OLAP performance and visualization enhancement techniques. To support this integration, we devised an architecture for the integrated enhancement using the GSOM, which generates hierarchical clusters as data input for OLAP. Hierarchical clusters enhance targeted analysis in OLAP through views of clusters, which is not possible in a relational database. The proposed architecture, along with its components, is described in detail. To demonstrate its advantages, a prototype has been developed and experiments have been conducted on two real-life data sets. We observed that our architecture is relatively easy to implement and manage. Experimental results show that OLAP performance and visualization can be enhanced significantly. Finally, we compared the cube construction times with those of the previous work and emphasized the benefits of integrating performance improvement with visualization enhancement techniques.
Currently we are working on the dynamic generation of relational tables from the GSOM data. In addition, we are working on other data mining techniques that can be integrated with OLAP systems to enhance their analysis capabilities. Furthermore, we are focusing on identifying other limitations of current OLAP systems. We are exploring how OLAP can be further extended and enhanced to meet new challenges and to make it a more effective, efficient and intelligent OLAP system.

Figure 4. Hierarchical Cluster Decomposition of Forest data set.
Figure 5. Data in text file.
Figure 8. The cube wizard of MS Analysis Services. Using this wizard the user can construct cubes from the data by selecting the different measures (facts) available in the fact table.
Figure 9. Storage Design Wizard - storage type selection step.
Figure 11. Summary of the cube process. MS Analysis Services has a built-in cube browser to view the cube data and perform the basic OLAP operations such as drill-down and roll-up.
Figure 12. Animal Cube Data in Cube Browser. Dundas OLAP Services has the following user interface controls [30]: a) OLAP Chart: a central data visualization area to display the cube data. b) OLAP Grid: a visualization grid for cube data analysis. c) OLAP Toolbar: quick access to key control functions. d) Cube Selector: a control to select a cube from a multi-cubed data source.
Figure 13. User Interface of Developed Prototype.
Figure 14. Chart view of clustered zoo data.
Figure 15. Chart View of Clustered Forest Data Cube.
Figure 18. Drill Down on Cluster 3 (C3). Figure 20 depicts the drill down and roll up operations performed on the clustered cube data, showing the number of forests present in each cluster.
Figure 19. Samples of Drill-Down Operation on Clustered Forest Data.
Figure 20. SQL Queries for star schema generation.
Figure 21. Star schema in Cube Editor of MS Analysis Services.
Figure 23. Preview of Different Chart Types Using Our Developed Prototype.
Figure 24. Comparison of Visual Representation of Data by the Old and New Prototypes.
TABLE OF CLUSTERED ZOO DATA
Investigation of the Time-Lapse Changes with the DAS Borehole Data at the Brady Geothermal Field Using Deconvolution Interferometry: The distributed acoustic sensing (DAS) has great potential for monitoring natural-resource reservoirs and borehole conditions. However, the large volume of data and the complicated wavefield add challenges to processing and interpretation. In this study, we demonstrate that seismic interferometry based on deconvolution is a convenient tool for analyzing this complicated wavefield. We also show a limitation of this technique: it still requires good coupling to extract the signal of interest. We extract coherent waves from the observations of a borehole DAS system at the Brady geothermal field in Nevada. The extracted waves are cable or casing ringing that reverberates within a depth interval. These ringing phenomena are frequently observed in vertical borehole DAS data. The deconvolution method allows us to examine the wavefield under different boundary conditions and to separate the direct waves and the multiples. With these benefits, we can interpret the wavefields using a simple 1D string model and monitor their temporal changes. The velocity of this wave varies with depth, observation time, temperature, and pressure. We find the velocity is sensitive to disturbances in the borehole related to increasing operation intensity, and that it decreases with rising temperature. The reverberation can be decomposed into distinct vibration modes in the spectrum. We find that the wave is dispersive and that the fundamental mode propagates with a large velocity. This interferometry method can be useful for monitoring borehole conditions or reservoir property changes using densely sampled DAS data.

Introduction

Fiber-based sensors have been applied in the oil and gas industry for borehole monitoring since the early 1990s [1]. Since then, the distributed temperature sensor (DTS) has been routinely deployed for monitoring well temperatures.
The distributed acoustic sensor (DAS) has gained popularity in seismology more recently. The DAS measures strain rate and thus records the seismic wavefield like a string of one-component geophones. The advances in fiber materials and computer technologies allow us to obtain data with higher quality and analyze them with array processing techniques. The DAS has been used in borehole environments for a variety of applications, including flow monitoring [2][3][4], wellbore diagnostics [2,4,5], vertical seismic profiling [VSP; [6][7][8][9]], hydraulic fracture characterization [10,11], and microseismicity detection [12]. The DAS is suitable for borehole monitoring for several reasons [13,14]: First, the DAS fiber has higher endurance in high-temperature, high-pressure, and corrosive environments compared to geophones. Second, it provides a dense 1D receiver array along the wellbore. Finally, the cost of DAS borehole deployment is relatively low, although the interrogator and the data storage can be expensive. Once installed, the fiber can be left in the well for long-term monitoring without changing locations. This resolves one of the main difficulties for conventional 4D (3D and time) surveys. A big challenge of analyzing DAS wavefields is that they are often complicated, especially in the borehole environment. Transient borehole processes such as fluid flows and operation activities cause disturbances in the borehole. Optical noise from the DAS interrogator creates noise that occurs simultaneously and with the same amplitude on all sensor channels [6]. For the DAS data we analyze in this study, cable and casing ringing populate a large portion of the data [15]. The ringing is a common phenomenon for DAS in a vertical borehole due to poor coupling between the DAS cable and the casing, or between the casing and the formation [16][17][18][19][20]. It appears as bouncing waves that reverberate within a depth interval.
For VSP applications, the ringing is a noise that analysts want to remove [21]. Here, we treat these ringing waves as signals and analyze their time-lapse changes. This allows us to interpret the dominant energy sources in the system and understand whether the cable and the casing are sensitive to certain processes. The DAS data we analyze are from a vertical borehole at the Brady geothermal field in Nevada. They were obtained during the PoroTomo project [22,23]. The PoroTomo project was a four-week experiment conducted during March 2016, in which the team performed vibroseis experiments under varying pumping operations and collected a variety of geophysical data, including surface DAS (DASH), borehole DAS (DASV), nodal geophone, InSAR, GPS, pressure, and temperature (DTS) data. The DASV data were available from March 18 to 26. Previous studies have analyzed the DASV, DTS, and pressure data. Patterson et al. [24] and Patterson [25] analyzed the borehole DTS and pressure data at different stages of operations. Trainor-Guitton et al. [26] imaged features on two nearby steeply dipping faults using a portion of the DASV data. Miller et al. [15] investigated the DASV data to find the signatures of earthquakes, vibroseis sweeps, and responses to different borehole processes. In addition, they suggested the reverberations on the upper half of the DASV are due to ringing of the casing and the DAS cable. We follow their results and further investigate the time-lapse changes of these reverberations. We use deconvolution seismic interferometry to extract coherent signals along the 1D receivers of the borehole DASV array. The coherent signals are governed by the same wave physics (i.e., the wave equation) [27]. Thus, we can understand the property of the structure by examining this wave. This deconvolution method is useful because it modifies the boundary conditions [27][28][29]. Thus, we can convert the wavefield to a boundary condition favorable for interpretation.
For example, Snieder and Safak [30], Nakata et al. [29], and Nakata and Snieder [31] used this method to isolate the ground coupling effect and analyzed the vibration modes of a building. Sawazaki et al. [32], Yamada et al. [33], Nakata and Snieder [34], and Bonilla et al. [35] applied similar methods to obtain near-surface velocity changes on different time scales. In this study, we use this deconvolution method to help us examine the wavefield. It allows us to interpret the wavefield using a simple model. Furthermore, it separates the direct waves and the multiples and simplifies the wavefields. This makes time-lapse monitoring easier to implement.

Figure 1. Locations of the target boreholes in the PoroTomo experiment. The survey was at the Brady geothermal field in Nevada, USA (black cross in the inset). The red star is the borehole with DASV and DTS (well 56-1). The green dot is the borehole with the pressure (P) sensor (well 56A-1). The blue triangles are locations of the vibroseis shots. The gray lines are the DASH cable on the surface. We use DASV, DTS, and pressure data in this study.

In the following, we first introduce the Brady DASV data (Section 2) and the deconvolution interferometry method (Section 3.1). We focus on analyzing the ringing signals on the shallower part of the borehole, as their energies are consistent. In Section 3.2, we show the deconvolved wavefields and how they can be explained by our proposed models. Detailed parameters for modeling are given in Appendix A. In Section 3.3, we analyze the velocity variations of the signal versus measured depth, observation time, temperature, and pressure. In Section 3.4, we apply a normal-mode analysis to the vibration modes of the waves. We also show the signals we extracted from waves propagating in the formation in Appendix B. For these signals, we cannot get precise velocity changes due to poor coupling.
But we note that if the coupling were better, similar techniques could be applied to these signals for analyzing temporal changes in the formation.

Data

We focus on the DASV, the DTS temperature, and the pressure data from the PoroTomo project [22,23]. Figure 1 shows the location of the wells relative to the entire DASH array and the vibroseis shots on the surface. The DASV and DTS fibers are co-located in well 56-1 (the red star), which is about 380 m deep. The DASV fiber is single-mode and the DTS fiber is multi-mode. Both fibers are high-temperature acrylate-coated fibers tested to be resilient up to 150 °C. For resistance, the fibers are protected by stainless steel double tubing. The DASV system has 384 channels with a channel spacing of approximately 1 m. The gauge length is 10 m. The sampling rate is 1000/s. The unit of the DAS raw data is radian/millisecond per gauge length. The total DASV data size is 981 GB, stored in SEG-Y format. The DTS system has a channel interval of 0.126 m and a sampling interval of 62 s. The pressure sensor (P sensor) is located in a nearby well, 56A-1 (the green dot), at an elevation corresponding to channel 219 of the DASV system (i.e., measured depth = 219 m). The sampling interval of the pressure sensor is 60 s. The two wells are around 100 m away from each other, and their wellheads are at about the same sea level (1230 m). Patterson [25] suggested the two wells are hydraulically connected based on simultaneous responses between the DTS and the pressure sensor. Hence, we assume the pressure measurements can represent the co-located pressure changes with the DASV and DTS at a measured depth of 219 m. Figure 2 shows the evolution of the pressure and temperature and an overview of the DASV DC values and Root-Mean-Square (RMS) amplitudes. We focus on the eight days (Mar 18-26) when the DASV was actively recording.
Initially, during Mar 18, the pressure drops drastically due to resuming operation after a shutdown period (yellow to blue shade in Figure 2a). Then, the pressure increases slowly due to increasing injection, until normal operation resumes on Mar 24 (blue to green shade). The sudden pressure rise at the end of Mar 25 is due to a plant shutdown [36]. The temperature increases with depth, with a heat deficit below 320 m due to geothermal explorations (Figure 2b; Miller et al. [15]). The lower temperature in early Mar 18 is due to cool water treatments before cable installation. Figures 2c and 2d show that the DASV data contain many disturbances under these changing pressure and temperature conditions. Patterson et al. [24] and Miller et al. [15] investigated these events. Here, we analyze the changes of the extracted waves in the deconvolved wavefields.

Review of deconvolution interferometry

We use deconvolution interferometry to extract coherent waves from the data. The receiver used for deconvolution is a "virtual source". The deconvolution operation modifies the boundary conditions of the wavefield depending on the virtual source [27][28][29]. These boundary conditions include coupling, attenuation, and damping at the boundaries that obscure the pure response of the system. By examining the wavefields that satisfy different boundary conditions, we can potentially separate these unwanted effects. For time-lapse monitoring, this allows us to track the pure response of the structure. Snieder and Safak [30] and Nakata et al. [29] used this deconvolution method to retrieve the vibration modes of a building with receivers deployed along the building floors. Nakata and Snieder [34] monitored monthly and annual changes in shear-wave velocity using near-surface and borehole sensors. Sawazaki et al. [32], Yamada et al. [33], Nakata and Snieder [37], and Bonilla et al. [35] analyzed near-surface velocity changes during earthquake strong ground motions.
Here, we use it to analyze the reverberations that are commonly observed for DAS in a vertical borehole. We also show its potential for time-lapse borehole condition and reservoir monitoring. The deconvolved wavefield D in the frequency domain is [29]

D(z, z_a, ω) = U(z, ω) / U(z_a, ω),    (1)

where z is the depth of each channel, z_a is the depth of the virtual-source channel, and ω is the angular frequency. The deconvolution operation in the frequency domain is the division of the data recorded at each depth (U_z(ω)) by the data recorded by the receiver that is used as the virtual source (U_{z_a}(ω)). The instability in Equation 1 comes from the division, and we stabilize it with a water level ε = 0.5% that scales with the average power spectrum (⟨|U_{z_a}|²⟩ in Equation 2):

D(z, z_a, ω) = U_z(ω) U*_{z_a}(ω) / (|U_{z_a}(ω)|² + ε⟨|U_{z_a}|²⟩),    (2)

where * denotes the complex conjugate. Given a virtual-source channel, we calculate the deconvolved wavefield using Equation 2 for the entire 1D array and stack the resulting wavefields over a time span to improve the signal-to-noise ratio. Figure 3 summarizes the workflow for calculating the deconvolved wavefield. We use two sets of time windows to calculate the deconvolved wavefields. In Section 3.2, we use a 30-minute time window with 50% overlap (time step = 15 minutes) and then stack the deconvolved wavefields over 3 hours (time span = 3 hours) to enhance the signal-to-noise ratio. In Section 3.3, we use a 1-minute time window with 50% overlap (time step = 30 seconds) and then stack them over 1 hour (time span = 1 hour). Since the deconvolution is conducted in the frequency domain (Equation 2), we demean, detrend, and taper (10% on both sides) the raw data in each window before the Fourier transform. For simplicity, we omit the "stacked" term and call the final retrieved wavefield the "deconvolved wavefield" for the rest of this paper. Figure 4 shows the deconvolved wavefields in the upper half of the borehole between 0-200 m (the top panels in subfigures a-f). We obtain strong reverberating signals that bounce between 10 and 165 m.
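The window–taper–deconvolve–stack workflow described above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data, not the authors' processing code: the channel count, window lengths, and the synthetic moveout are all illustrative, and detrending is omitted for brevity.

```python
import numpy as np

# Synthetic "DAS" record: each channel sees the same excitation with a
# two-sample moveout per channel.
rng = np.random.default_rng(0)
nch, nt = 8, 6000
src = rng.standard_normal(nt)
data = np.stack([np.roll(src, 2 * z) for z in range(nch)])

win, step, za, eps = 2000, 1000, 0, 0.005   # window, 50% overlap, source ch.
taper = np.hanning(2 * (win // 10))
window_fn = np.ones(win)
window_fn[:win // 10] = taper[:win // 10]   # 10% taper on both sides
window_fn[-(win // 10):] = taper[-(win // 10):]

stack = np.zeros((nch, win))
nwin = 0
for i0 in range(0, nt - win + 1, step):
    seg = data[:, i0:i0 + win].copy()
    seg -= seg.mean(axis=1, keepdims=True)          # demean
    seg *= window_fn                                # taper
    U = np.fft.rfft(seg, axis=1)
    power = np.abs(U[za]) ** 2
    # Water-level-stabilized deconvolution of Equation 2.
    D = U * np.conj(U[za]) / (power + eps * power.mean())
    stack += np.fft.irfft(D, n=win, axis=1)
    nwin += 1
stack /= nwin

lags = np.argmax(stack, axis=1)   # each channel peaks at its moveout
print(lags)
```

The deconvolved wavefield collapses the common excitation to a band-limited spike, so each channel's peak sits at its delay relative to the virtual-source channel, regardless of the (random) source spectrum.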
They are only present when the virtual source is within the same depth interval. When we put the virtual source below 200 m, the reverberations almost disappear. This suggests that these waves are restricted to this depth interval. Figures 4a-4c are wavefields from the same time window but deconvolved with different virtual sources, marked by the red dashed lines. They show distinct differences. The waves in the deconvolved wavefield are coherent energies assuming they are excited at the virtual source.

Deconvolved wavefields

To explain the observed deconvolved wavefields, we use a simple string model and derive its mathematical notation. Figure 5 shows the sketch of the model (model 1). This string model has two reflectors (R_1 and R_2) at the top and the bottom as boundaries and two sources (S_1 and S_2) at those boundaries. This model can also represent the case when we have sources that are further away from the end points, outside of this receiver line [34]. Hence, one should consider S_1 and S_2 as the incoming waves into the system from the top and the bottom. The wavefield of a single source can be expressed by the sum of a power series, as shown in Nakata et al. [29]. Expanding from this, the wavefield of the two sources with the configuration in Figure 5 is the superposition of their individual wavefields. That is,

U(z, ω) = [S_1 (e^{z(ik−γ|k|)} + R_2 e^{(2H−z)(ik−γ|k|)}) + S_2 (e^{(H−z)(ik−γ|k|)} + R_1 e^{(H+z)(ik−γ|k|)})] / (1 − R_1 R_2 e^{2H(ik−γ|k|)}),    (3)

where z is depth, ω is the angular frequency, i is the imaginary number, k is the wave number, H is the length of the structure, γ is the attenuation factor with γ = 1/(2Q) [38], Q is the quality factor, S_1 and S_2 denote the spectra of the two source terms, and R_1 and R_2 are the reflection coefficients of the top and bottom reflectors, respectively.

Figure 5. The model has a string with a line of receivers on it (blue line) bounded by two reflectors at z = 0 (R_1) and z = H (R_2). The two sources are located at z = 0 (S_1) and z = H (S_2) (magenta stars).
In the numerator, e^{z(ik−γ|k|)} and R_2 e^{(2H−z)(ik−γ|k|)} are the direct wave and the first reflection for S_1, while e^{(H−z)(ik−γ|k|)} and R_1 e^{(H+z)(ik−γ|k|)} are those for S_2. Their amplitudes are scaled by the attenuation terms that involve γ. The R_1 R_2 e^{2H(ik−γ|k|)} term in the denominator is the common ratio in the power series representing higher-order reverberations between the two reflectors. We simulate the deconvolved wavefields using Equations 2 and 3 and compare them with the observed deconvolved wavefields (Figures 4a-4f). After a series of parameter tests shown in Appendix A.1, we set all source terms to be mutually uncorrelated, with cross-correlation coefficient cc = 0.01. This choice is because a correlated source would generate simultaneous direct waves from the virtual source, as shown in Appendix A.1, which we do not observe in the wavefield. The other parameters used are Q = 500, ω/k = 4600 m/s, ε = 0.0001%, R_1 = R_2 = 0.9 for Figures 4a-4d, and R_1 = R_2 = 0.5 for Figures 4e and 4f. These choices are based on the low attenuation across depth, the apparent velocity of the signal, and the high reflectivity at the boundaries in the observed data. Figure 4 shows that we can reproduce the wavefields using model 1. In Figures 4a-4c, the same time window is examined using three different virtual sources (the red lines). The dominant waves exhibit symmetry between causal and acausal times (i.e., to the left and right of the blue lines) regardless of the virtual source. To achieve this symmetry regardless of the virtual source, the two sources in model 1 need to have comparable amplitudes (Appendix A.2). Figures 4d-4f show that the model can also reproduce the three special cases observed in the data. In Figure 4d, the multiples are much weaker than in Figure 4a. We reproduce this by letting S_1, the source at the depth of the virtual source, be much weaker than S_2, the source at the opposite end of the interval.
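The forward model of Equation 3 combined with the deconvolution of Equation 2 can be sketched numerically. The Q, velocity, and reflection-coefficient values follow the text; the frequency axis, random source spectra, and evaluation depths are illustrative choices of ours, not from the paper.

```python
import numpy as np

# Model 1 parameters from the text: Q=500, c=4600 m/s, R1=R2=0.9.
H, c, Q = 165.0, 4600.0, 500.0
R1 = R2 = 0.9
gamma = 1.0 / (2.0 * Q)

f = np.linspace(1.0, 200.0, 400)        # frequency axis (Hz), illustrative
w = 2.0 * np.pi * f
k = w / c

rng = np.random.default_rng(1)
S1 = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)
S2 = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)

def U(z):
    """Equation 3: superposed power series of the two end sources."""
    prop = lambda d: np.exp(d * (1j * k - gamma * np.abs(k)))
    num = (S1 * (prop(z) + R2 * prop(2 * H - z))
           + S2 * (prop(H - z) + R1 * prop(H + z)))
    return num / (1.0 - R1 * R2 * prop(2 * H))

# Deconvolve a receiver at 60 m by a virtual source at 100 m (Equation 2).
za, eps = 100.0, 1e-6
Ua = U(za)
P = np.abs(Ua) ** 2
D60 = U(60.0) * np.conj(Ua) / (P + eps * P.mean())
print(np.abs(D60[:3]))
```

Because |R_1 R_2 e^{2H(ik−γ|k|)}| < 1, the geometric-series denominator never vanishes, so the simulation is numerically stable even with the very small water level used here.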
In Figures 4e and 4f, the dominant waves are asymmetric, with only causal waves. We reproduce this by minimizing the amplitude of one of the sources and using the main source as the virtual source. Hence, we can explain these special cases with unequal amplitudes of S_1 and S_2. In Appendix A.2, we analyze the effect of varying the relative source amplitudes. Some observed deconvolved wavefields suggest a more complicated model (Figure 6). The observed wavefield in the top panel of Figure 6a shows a reflector near 90-100 m. We reproduce this wavefield using model 2, shown in Figure 6b. In model 2, we add an additional source S_1a co-located with S_1 at z = 0 (the dark blue star). This additional source generates waves that propagate between z = 0 and z = H/2 (the dark blue line). A reflector R_3 at z = H/2 acts as a lower boundary for this wave. The high RMS amplitude near 90-100 m in Figure 2d supports this model.

Time-lapse changes of wave velocities

In this section, we analyze the velocity evolution using the extracted wave. The deconvolved wavefields are calculated with a 1-minute time window, 50% overlap, and stacking over 1 hour. We calculate deconvolved wavefields with the virtual source at 180 m and measure the arrival times by picking the peaks of the up-going direct wave within 70-120 m. The signals are the most consistent over the eight days within this depth range. We calculate the velocity for a channel by dividing the measured travel length (between the source channel and the target channel) by the picked arrival time. In Figure 7, we plot the estimated velocities against measured depth, observation time, temperature, and pressure. Each gray dot is a velocity measurement at a channel. In general, the velocity of this signal is around 3600-5000 m/s. This velocity range is much higher than that of the local formation (V_p = 1000-2500 m/s; Parker et al. [39], Thurber et al. [40]) and closer to the compressional velocity of steel (5000-5250 m/s; Haynes [41]).
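The peak-picking velocity measurement described above amounts to one `argmax` and one division per channel. The sketch below runs it on a synthetic moveout; the 4600 m/s wave speed, the 180 m virtual-source depth, and the Gaussian pulse width are illustrative stand-ins, not measured values.

```python
import numpy as np

fs = 1000.0                      # samples per second, as in the DASV data
c_true = 4600.0                  # assumed wave speed (m/s)
z_src = 180.0                    # virtual-source depth (m)
depths = np.arange(70.0, 121.0)  # target channels, 70-120 m
nt = 400
t = np.arange(nt) / fs

# Synthetic deconvolved wavefield: a Gaussian pulse arriving at each
# channel after traveling up from the virtual source.
wavefield = np.zeros((depths.size, nt))
for i, z in enumerate(depths):
    t_arr = (z_src - z) / c_true
    wavefield[i] = np.exp(-0.5 * ((t - t_arr) / 0.002) ** 2)

picks = np.argmax(wavefield, axis=1) / fs    # picked peak arrival times (s)
vels = (z_src - depths) / picks              # distance / time, per channel
print(vels.mean())
```

The scatter of the recovered velocities around 4600 m/s comes purely from the 1 ms time quantization, which illustrates why the short travel paths (channels close to the virtual source) give the noisiest single-channel estimates.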
The waves thus likely propagate in the stainless steel DAS cable jacket or the steel well casing, as Miller et al. [15] suggested. Figure 7a shows the velocities across the measured depth of 70-120 m. The velocities show a slight decreasing trend of -6.6 m/s per meter, which reflects the negative temperature-velocity dependency in Figure 7c, since the temperature increases with depth in this depth range (Figure 2b). The velocity variations (the width of the blue shade) are larger near 72 m and 100 m. This larger variation potentially indicates poor coupling of the DAS cable, or it can be related to the complicated structure and additional source observed in Figure 6. Figure 7b shows the velocities over observation time. The higher velocity during Mar 18-19 is likely associated with disturbances in the borehole. This disturbance is caused by depressurization boiling due to the initial pressure drop related to increasing operation intensity [25] (Figure 2a). During this time, the DAS data also have a high DC level (Figure 2c). Figure 7c shows that the velocity decreases with increasing temperature, with a slope of -17.1 m/s/°C. This temperature sensitivity is much higher than that measured in the lab for pure steel (-0.5 m/s/°C; Mott [42]; Droney et al. [43]). We have two possible explanations for this. If the waves propagate in the DAS cable jacket, then this higher sensitivity might suggest the cable, or the fiber inside, is being affected by the high temperature. We note that the DAS fiber is rated to 150 °C, while the highest temperature in the borehole is beyond 160 °C (Figure 2b). On the other hand, if the waves propagate in the well casing, it suggests the casing might have a higher sensitivity to temperature. In Figure 7d, we do not observe an obvious relation between velocity and pressure due to the lack of samples at higher pressure.

Normal-mode analysis

The deconvolved wavefield of a vibrating 1D structure can be written as the summation of normal modes [29,30]. This is observed in our results.
Figure 8 shows the amplitude spectrum of one of the deconvolved wavefields we used for the time-lapse analysis. The normal modes of the signal are clearly decomposed from 10 Hz to over 200 Hz. The frequency interval between different modes is about 18 Hz and is consistent over all modes, as expected. The system has closed boundaries on both ends. The top boundary is due to the free surface, which behaves as a closed boundary for P-wave multiples. The bottom boundary arises because the deconvolution modifies the boundary condition to a clamped boundary (a delta function) at the virtual source [27]. For this system, the wavelength of mode m is [44]

λ_m = 2H/m,    (4)

where H is the length of the system. Hence, the phase velocity for mode m is

v_m = f_m λ_m = 2H f_m / m,    (5)

where f_m is the mode frequency. We estimate f_m and H in the hourly stacked amplitude spectrum at 6 or 7 am on each day, which is the time with a relatively high signal-to-noise ratio. We focus on the 2nd (∼38 Hz), 3rd (∼55 Hz), and 4th (∼71 Hz) modes, since these three modes are the most significant. We pick f_m at the peak amplitude of each mode. We estimate H by picking the starting and ending depths of the mode. Then, we calculate the phase velocity using Equation 5. The temporal variations of the mode frequency, system length, and phase velocity are shown in Figure 9. The mode frequencies increase slightly with time while the system length decreases with time (Figure 9a). The velocities estimated using higher modes are lower, suggesting a negative frequency-velocity dependency. Hence, the velocities are dispersive [45]. In general, the three modes show similar trends: the velocities increase for the first few days before Mar 20, and then they continuously decrease until the end of the analysis period.

Discussions

We extract coherent waves from the borehole DASV data using deconvolution seismic interferometry (Figure 4). The extracted waves are the ringing of the DAS cable and the well casing, based on their velocity (Figures 7 and 9b).
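The normal-mode phase-velocity estimate of Equations 4-5 reduces to a few lines. The mode frequencies below are the approximate observed values quoted in the text (∼38, 55, 71 Hz); the system length H = 155 m is an illustrative value near the observed 10-165 m interval, not a reported measurement.

```python
import numpy as np

H = 155.0                         # assumed system length (m), illustrative
modes = np.array([2, 3, 4])       # the three most significant modes
f_m = np.array([38.0, 55.0, 71.0])  # approximate mode frequencies (Hz)

lam = 2.0 * H / modes             # Equation 4: wavelength of mode m
v = f_m * lam                     # Equation 5: v_m = 2 * H * f_m / m
print(v)                          # higher modes are slower -> dispersive
```

With these inputs the phase velocities decrease from mode 2 to mode 4, reproducing the negative frequency-velocity dependency (dispersion) reported for Figure 9b.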
They are caused by poor coupling between the cable and the well, or between the well and the formation. By using different virtual sources, we examine the wavefields that satisfy different boundary conditions. A simple model with two sources and two reflectors (model 1) can explain the deconvolved wavefields (Figure 4). Some wavefields exhibit more complexity and suggest a more sophisticated model (model 2; Figure 6). We use numerical simulations to qualitatively reproduce the direct waves and the multiples in the deconvolved wavefields. In model 1, the reflectors are associated with the free surface and potential casing defects [15,25]. In model 2, the added sources and the wave trapped in the upper half of the system (the dark blue line in Figure 6b) are likely related to processes near 90-100 m of the well [15]. In fact, the actual conditions might be even more complicated, and we note that the solutions are not unique. Therefore, this model may not be appropriate if one wants to analyze the inverted full wavefield, but it might be the simplest model that explains our extracted waves well. One important feature of the wavefields in Figures 4a-4c is the symmetry. According to Nakata and Snieder [31], to have symmetry between the causal and acausal times for all virtual sources in this model, we must have more than one source. In Appendices A.2 and A.3, we find the asymmetry is produced by uneven amplitudes of the sources or uneven reflectivities of the reflectors; the effect of the former on the asymmetry is more dominant than that of the latter. We reproduce the symmetry in simulations by having two sources with comparable amplitudes and reflectors with identical reflectivities. The main energy sources in this system are borehole processes, surface operations, and traffic noise. The relative amplitudes of these sources change over time, resulting in the different observed cases of deconvolved wavefields in Figure 4. This noise variation can generate the asymmetry discussed above.
The borehole processes include depressurization boiling and fluid exchange activities at potential casing defects [15,25]. These processes are the most intense during the initial pressure drop. Hence, the deconvolved wavefield shows strong upgoing waves during this time (Figure 4e). The surface operations include site activities and vibroseismic experiments that were conducted 10 hours a day. During these vibroseis experiments, we observe strong downgoing waves (Figure 4f). In addition, the interstate highway on the north-western side of the survey region provides traffic noise as a general energy source for the extracted waves [46]. We separate the direct waves and the multiples of the ringing signals by using the base of this system as the virtual source. We track the velocity variations of the direct waves. The dense spatial sampling of DAS allows us to observe the trend of velocity variations along depth, time, temperature, and pressure within a 50 m interval (Figure 7). The velocity variations provide potential information about coupling conditions along the cable and about the parameters that these ringing signals are sensitive to. In Figure 7a, the depths with large velocity variations might suggest poor coupling, or the presence of an energy source or complex structure. The latter is also supported by model 2, which explains the occasional variation of the wavefields (Figure 6). As discussed in Section 3.3, the temporal correlation between the high velocity and the time of large borehole disturbances suggests that the ringing signals are sensitive to these disturbances (Figure 7b). Finally, in Figure 7c, the decreasing velocity with temperature indicates that the DAS cable or the casing is potentially sensitive to high temperature. Hence, by monitoring the extracted waves, we can gain information about the medium in which the waves propagate. The velocities estimated by picking arrival times on the propagating waves (Figure 7) are slightly slower than those from the normal-mode method (Figure 9).
This is because the normal-mode analysis is done on the lower-frequency modes, which have higher velocities (Figure 9b), whereas the propagating waves contain all frequencies. The frequency-dependent velocities from the normal-mode analysis are potentially useful for obtaining attenuation and structures at different distances from the well. However, in this case, since the coupling (either between the DAS cable and the casing or between the casing and the formation) was poor, the dispersion relation is less sensitive to the structure. Instead, the negative frequency-velocity relation might be caused by the casing and fluid in the borehole, but we need a further experiment to understand the dispersion of the waves. We note that this deconvolution method can be useful for monitoring changes in the reservoir. In Appendix B, we show the signals we extracted on the lower portion of the DASV cable (below 200 m). We are able to obtain signals during some of the vibroseis experiments. However, the poor signal-to-noise ratio prevents us from analyzing the time-lapse changes with good precision. If the coupling were better, the signal-to-noise ratio would be improved and we could obtain signals outside of the vibroseismic experiment times. Then, we could apply a similar time-lapse velocity analysis to the obtained signals and infer changes in reservoir properties.

Conclusion

We use deconvolution seismic interferometry to analyze the reverberations in the distributed acoustic sensing (DAS) data in the borehole. This method is useful for understanding complicated wavefields. The waves we extracted on the shallower part of the borehole are the ringing of the cable and casing. By examining the wavefield under different boundary conditions, we can qualitatively interpret the system using a simple 1D string model. An important observation is the symmetry of the wavefield.
The keys to explaining our observations are the source correlations, relative source amplitudes, and reflection coefficients in this system. The deconvolution method also allows us to separate the direct waves and the multiples and track the velocity changes of the direct waves over time. The velocity experiences a rise during the initial pressure drop that was associated with increasing operation intensity. The velocity decreases with increasing temperature and depth. The velocity sensitivity to temperature is higher in our results than that for pure steel measured in the lab. This suggests that the DAS cable or the well-casing is potentially affected by the high temperature. The technique proposed here can be applied to many different borehole DAS applications. These applications include diagnosis of the condition of the casing structure and monitoring changes of reservoir properties. For the latter, we need better coupling than simply friction in the vertical borehole to obtain more energy from the formation.

Appendix A

In this appendix, we simulate the deconvolved wavefields with varying parameters in model 1. We base the choice of parameters in Section 3.2 on this analysis.

Appendix A.1. The effect of source correlation

We vary the correlation coefficient between S_1 and S_2 from 0.01 (uncorrelated) and 0.5 (partially correlated) to 0.99 (highly correlated). To generate synthetic data with a certain degree of correlation, we first generate random, normalized time series and put them in rows to form matrix A. We build a covariance matrix R with the desired correlation coefficient (cc) on the off-diagonals and 1 on the diagonal. Then, we use the Cholesky decomposition to calculate a matrix C such that CC^T = R. Multiplying A with C gives a new matrix in which the cc between the rows is as desired. Figure A1 shows the simulation results when the cc equals 0.01, 0.5, and 0.99.
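The correlated-noise recipe just described (random rows in A, covariance matrix R, Cholesky factor C with CC^T = R) can be written compactly as follows; the mixing order C @ A is one consistent choice for row-wise time series:

```python
import numpy as np

def correlated_sources(n_samples, cc, seed=0):
    # Rows of A: independent, zero-mean, unit-variance time series.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((2, n_samples))
    A = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
    # Covariance matrix with the desired correlation coefficient (cc)
    # on the off-diagonals, and its Cholesky factor C with C @ C.T == R.
    R = np.array([[1.0, cc], [cc, 1.0]])
    C = np.linalg.cholesky(R)
    # Mixing the rows of A through C imprints the desired correlation.
    return C @ A

S = correlated_sources(100_000, cc=0.5)
emp_cc = float(np.corrcoef(S[0], S[1])[0, 1])
```

For long series, the empirical correlation between the two output rows converges to the requested cc.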
When the two sources are not correlated (Figure A1a), only the virtual source emits waves from time zero. When the two sources are correlated (Figures A1b,A1c), the correlated source emits another set of waves in addition to that from the virtual source. The higher the correlation, the larger the amplitudes of those simultaneous direct waves. Since we do not observe these simultaneous direct waves in the data, we set cc = 0.01 in the simulations in Section 3.2.

Appendix A.2. The effect of relative source amplitudes

We vary the relative amplitude of the two sources (|S_1|/|S_2|) from 0.1 and 1 to 10. When |S_1|/|S_2| = 1 (Figure A2b), the relative amplitudes on the causal and acausal axes are well-matched regardless of the depth of the virtual source. The wavefields are symmetric. When |S_1|/|S_2| = 0.1 (Figure A2a), for channels above the virtual source (the red dashed lines), the waves at causal times have larger amplitudes, whereas for channels below the virtual source, the waves at acausal times have larger amplitudes. Vice versa, when |S_1|/|S_2| = 10 (Figure A2c), the patterns reverse. When one of the sources is dominant (Figures A2a,A2c), the deconvolved wavefields approach the one-source cases. This is predicted by the equations. Based on Equations 1 and 3, the deconvolved wavefield using the virtual source at z_a (0 ≤ z_a ≤ H) can be written as a sum of terms of the form (e^{(z−z_a)(ik−γ|k|)} + R_2 e^{(2H−z−z_a)(ik−γ|k|)}) + ⋯ (Equation A2). Similarly, in the case when |S_2|/|S_1| ≈ 0 and z_a = 0, Equation A2 becomes the infinite series of Equation 9 in Nakata et al. [29], which has a similar form. The ik terms in the exponents in Equation A3 and Equation 9 in Nakata et al. [29] are all positive. Hence, the wavefield is asymmetric.

Appendix A.3. The effect of reflection coefficients

We vary the reflection coefficient on the top boundary (R_1) from 0.01 and 0.5 to 0.99. The effect is not obvious in the mathematical notation but is observable in the simulated wavefields (Figure A3).
When R_1 gets larger, for channels above the virtual source the acausal waves are enhanced, whereas for channels below the virtual source the causal waves are enhanced. This effect of a larger R_1 is the opposite of the effect of a larger S_1. That is, based on Appendix A.2, if |S_1| increases relative to |S_2|, we expect the causal waves to be enhanced for channels above the virtual source and the acausal waves to be enhanced for channels below the virtual source. Hence, the relative amplitudes between the causal and acausal axes can be affected by both the relative source amplitudes and the reflection coefficients. However, the influence of the reflection coefficients on the symmetry is subtle. We set R_1 = R_2 = 0.9 in the simulations in Section 3.2.

Appendix B. Deconvolved wavefields at the lower part (below 200 m)

We extract coherent waves from the formations, but these waves are only present during the times when the vibroseis shots were close to the DASV well. Figure B1 shows three sets of waves we extract in the depth range 165-300 m. This deconvolved wavefield is calculated using a 30-minute time window with 50% overlap, stacked over 3 hours. The first two signals (Figures B1a-B1b) travel downward with apparent velocities of 2100 m/s (green dashed lines) and 1100 m/s (pink dashed line). The V_p/V_s ratio is 1.91. This is consistent with the shallow formations at Brady, which consist of volcanic sediments, limestone, lacustrine sediments, and geothermal features such as carbonate tufa [47]. The measured velocities are close to previously estimated local velocities (V_p = 2300 m/s and V_s = 1200 m/s; Parker et al. [39]; Matzel et al. [46]). The slower apparent velocities might be due to incidence angles. Potentially, we could estimate the time-lapse changes by measuring the relative velocity changes of these waves. However, the poor coupling condition prevents us from getting more scattering energy.
The third signal (Figure B1c) has an apparent velocity of 1400 m/s (the yellow dashed line) and propagates upward. It is weaker than the first two signals. The source of this signal could be a reflection from nearby faults or bedding planes [26,47]; this is the most likely explanation for an upgoing wave here. However, we cannot identify the reflection point due to the limited amount of good-quality data.
Physical Activity and Risk Factors Screening for Ischaemic Heart Disease in South African Individuals Living with HIV

People living with HIV (PLWH) are at risk of developing chronic lifestyle diseases such as ischaemic heart disease (IHD). Physical inactivity is a modifiable risk factor for IHD. The level of ambulation physical activity in individuals living with HIV in a South African context is unknown. The aim of this study was to assess the physical activity levels and other risk factors for IHD in PLWH on antiretroviral therapy (ARV). An observational study was conducted from October 2010 to June 2012 at an outpatient clinic in Johannesburg, South Africa. Two hundred and five individuals who had been on ARV for 6-12 months were screened. Physical activity was measured with the Yamax SW200 pedometer over a seven-day period. Physical activity of the sample was reduced, at 7673.2 (±4017.7) steps/day, with women walking less than men [6993.3 (±3462.6) and 10076.3 (±4885.6) steps/day, respectively]. Body mass index was increased, at 25.6 (±5.4) kg/m², with women noted to be overweight [26.6 (±5.5) kg/m²]. Independent predictors of being overweight were systolic blood pressure, waist and hip circumference, CD4 count, and daily fruit and vegetable intake. Smoking was less common in the study population, with 16.1% of the sample being current smokers and 25.9% former smokers. Individuals' mean perceived stress level was 19.9 (±7.8) on the Cohen's Perceived Stress Scale. The ambulation physical activity level of individuals living with HIV requires modification to assist with reducing risk factors of IHD. DOI: 10.14302/issn.2324-7339.jcrhap-13-255. Corresponding author: Ronel Roos, Telephone: 00 27 11 7173723, Fax: 00 27 86 570 3644, E-mail address: ronel.roos@wits.ac.za

Chronic lifestyle diseases are of concern as mortality in individuals living with HIV is slowly shifting to non-AIDS-related illnesses such as cardiovascular disease [6,7].
This shift could partially be explained by the prevalence of known risk factors of IHD such as smoking and obesity [8,9] and specific HIV sequelae such as chronic inflammation, dyslipidemia, and lipodystrophy [10][11][12]. Independent of IHD risk factors, HIV replication (plasma HIV-1 RNA levels > 50 copies/mL) is also associated with an elevated risk of myocardial infarction (odds ratio 1.51 [95% confidence interval, 1.09-2.10]) [13]. Physical inactivity is a known modifiable risk factor for IHD and is estimated to account for 6% of the burden of disease related to IHD internationally [14]. In the South African context this burden is noted to be much higher, at an estimated 30% in the general population [15]. Walking, as a form of exercise, is often suggested as a means of lowering and managing an individual's risk for heart disease as it does not have cost implications or require specific skills. PLWH are encouraged to do regular exercise to manage their disease. Physical activity and ambulation behaviour have been well researched in the general population but are still poorly understood in an HIV population. This paucity of studies may be attributed to different measuring instruments being used to evaluate physical activity and to researchers defining physical activity differently [16]. Considering the potential burden of IHD in a South African HIV context, it seems prudent to evaluate the level of physical activity and risk factors for IHD at a primary health care level. Such screening may inform the type of intervention programmes needed to influence the risk factors for IHD in this population. Therefore, the aim of this study was to evaluate the ambulation physical activity levels and risk factors for IHD in this population. All participants gave informed consent prior to participating in the study.
The sample size was calculated at 195 participants using the prevalence rate for hypertension in the South African context as a guide, as no prevalence rates for IHD in South Africa were available at the start of the study [17]. The alpha level was set at 5% and power at 80%. The sample was increased by a factor of 100/95 to allow for any loss to follow-up of participants, accounting for a final sample size of 205. Since completion of the current study, prevalence rates for IHD in individuals living with HIV in South Africa have been published and indicate that the disease itself is still at a low prevalence level in this population [18]. The following risk factors for IHD were screened using a questionnaire and body measurements: smoking history (current and former), diet (vegetable and fruit intake), physical activity levels (walking behaviour), resting heart rate and blood pressure, self-reported hypertension and diabetes, body mass index (BMI), waist and hip circumference, and waist:hip ratio (WHR). The study participants' perceptions regarding their body shape and weight changes in the last six months were documented. Physical activity was assessed using the Yamax SW200 pedometer to provide information on walking behaviour (daily step count). Participants were asked to wear a hip-mounted pedometer for seven consecutive days, from getting up in the morning until going to bed at night, and to document their daily steps on a physical activity log sheet. They were encouraged not to alter their normal physical activity routine. Reactivity related to the physical activity assessment was calculated following the pilot study: no significant alteration (p = 0.4) in physical activity level was observed between the first and last day of assessment when participants wore the hip-mounted pedometer and documented their findings on a log sheet. The participants' perceived stress levels were evaluated with the Cohen's Perceived Stress Scale-10 (PSS).
The PSS is an instrument that measures the degree to which a person perceives their life as being stressful. The instrument consists of 10 questions that are rated on a 5-point Likert scale ranging from "0 = never" to "4 = very often". The total PSS score is computed by summing across all ten questions. Scores range from 0 to 40, where a higher score reflects a higher degree of perceived stress [19][20][21]. The PSS has been used in South Africa [22] and in an HIV population [23,24]. In the current study, the Cronbach's α for the PSS was 0.82 as evaluated during the pilot study. Resting heart rate and blood pressure were also measured.

Results

Two hundred and ninety-six participants who had been on ARV treatment for six to twelve months indicated interest in participating in the study. Fourteen individuals were excluded due to not meeting all the inclusion criteria. Two hundred and eighty-two participants consented to participate in the study. Seventy-seven individuals recruited did not attend their first scheduled session due to work obligations, financial difficulties, travelling outside the Gauteng province, and/or not being contactable telephonically. Two hundred and five participants attended their first session, and eleven of these individuals did not attend their second session due to the same barriers identified following recruitment. One hundred and ninety-four individuals' data were complete and used during data analysis. Physical activity data for 195 participants were available for analysis for the following reasons: three participants' data were excluded during analysis due to not completing seven days of pedometer assessment, seven participants did not attend their second visit or return their pedometer and pedometer log sheet, and three participants sent a friend/family member to return their pedometer and physical activity log sheet when they could not attend their second session.
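The PSS-10 scoring described earlier (ten 5-point items, total 0-40, higher meaning more perceived stress) is simple to express in code. Note one assumption: the standard PSS-10 reverse-scores its four positively worded items before summing, a detail the text above does not spell out:

```python
def pss10_score(responses, reverse_items=(3, 4, 6, 7)):
    # responses: ten integers, each 0 ("never") to 4 ("very often").
    # reverse_items are the 0-based indices of the four positively
    # worded items reverse-scored in the standard PSS-10 (an assumption
    # here; the text above only says responses are summed).
    assert len(responses) == 10 and all(0 <= r <= 4 for r in responses)
    return sum(4 - r if i in reverse_items else r
               for i, r in enumerate(responses))  # 0-40
```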
This was a rather interesting finding, as one would anticipate the opposite to be true. Body mass index provides information regarding the general nutritional status of individuals and could therefore indicate that participants who fell into the overweight/obese category had sufficient nutrition that also allowed them to partake in daily fruit and vegetable intake. The focus of the study was to screen diet as a risk factor for IHD and not general diet. It is reported that a daily diet low in fruit and vegetables is considered a risk factor for IHD [42]; hence the inclusion of the investigation of fruit and vegetable intake in the current study. The majority of participants were unable to partake in daily fruit and vegetable intake.
Superconductivity from a melted insulator in Josephson junction arrays Arrays of Josephson junctions are governed by a competition between superconductivity and repulsive Coulomb interactions, and are expected to exhibit diverging low-temperature resistance when interactions exceed a critical level. Here we report a study of the transport and microwave response of Josephson arrays with interactions exceeding this level. Contrary to expectations, we observe that the array resistance drops dramatically as the temperature is decreased—reminiscent of superconducting behaviour—and then saturates at low temperature. Applying a magnetic field, we eventually observe a transition to a highly resistive regime. These observations can be understood within a theoretical picture that accounts for the effect of thermal fluctuations on the insulating phase. On the basis of the agreement between experiment and theory, we suggest that apparent superconductivity in our Josephson arrays arises from melting the zero-temperature insulator. 
Quantum phase transitions typically result in a broadened critical or crossover region at nonzero temperature [1]. Josephson arrays are a model of this phenomenon [2], exhibiting a superconductor-insulator transition at a critical wave impedance [3][4][5][6][7][8][9][10][11][12][13], and a well-understood insulating phase [14,15]. Yet high-impedance arrays used in quantum computing [16][17][18][19] and metrology [20] apparently evade this transition, displaying superconducting behavior deep into the nominally insulating regime [21]. The absence of critical behavior in such devices is not well understood. Here we show that, unlike the typical quantum-critical broadening scenario, in Josephson arrays temperature dramatically shifts the critical region. This shift leads to a regime of superconductivity at high temperature, arising from the melted zero-temperature insulator. Our results quantitatively explain the low-temperature onset of superconductivity in nominally insulating regimes, and the transition to the strongly insulating phase. We further present, to our knowledge, the first understanding of the onset of anomalous-metallic resistance saturation [22]. This work demonstrates a non-trivial interplay between thermal effects and quantum criticality. A practical consequence is that, counterintuitively, the coherence of high-impedance quantum circuits is expected to be stabilized by thermal fluctuations.

Josephson-array superinductors are characterized by a Josephson energy E_J, a junction charging energy E_C, and a ground charging energy E_g [17]. A common experimental strategy for avoiding insulating behavior is to make the fugacity for quantum phase slips, y ∝ e^{−4√(2E_J/E_C)}, small. However, for high-impedance arrays the fugacity is always renormalized towards infinity as temperature goes to zero [13,23], resulting in insulating behavior.
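A quick check of how small this exponential makes the fugacity scale, using the zero-field junction values reported later in the text (E_J/h ≈ 76 GHz, E_C/h ≈ 5 GHz; only the ratio enters):

```python
import math

# Phase-slip fugacity scale, y ~ exp(-4*sqrt(2*E_J/E_C)).
# 76 and 5 GHz are the device values quoted later in the text.
def fugacity_scale(EJ, EC):
    return math.exp(-4.0 * math.sqrt(2.0 * EJ / EC))

y = fugacity_scale(76.0, 5.0)  # E_J/E_C = 15.2 gives y of order 1e-10
```

So the bare fugacity is indeed tiny, which is why its ultimate renormalization towards infinity at low temperature is the surprising part.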
Our key insight is that long superinductors avoid this fate by operating above the melting point of the insulating phase, where the low-temperature renormalization has yet to occur, and that this results in apparent superconducting behavior. This effect quantitatively explains the presence of superconducting behavior, resistance saturation, and the transition to strongly insulating regimes in superinductors.

Two nearly identical devices are studied: one galvanically coupled to electrical leads, permitting the measurement of resistance, and one capacitively coupled to microwave transmission lines, permitting the measurement of plasma modes [17,21]. Both devices consist of an array of approximately 1220 Josephson junctions fabricated using electron-beam lithography and a standard shadow evaporation process on high-resistivity silicon substrates (Fig. 1a) [24]. For nanofabrication reasons the array islands have alternating thickness, which, in the presence of a magnetic field, should give rise to an alternating gap structure while maintaining a uniform Josephson energy throughout the chain. At zero magnetic field, each junction has nominally identical E_J/h ≈ 76 GHz, E_g/h ≈ 1400 GHz, and E_C/h ≈ 5 GHz. These parameters are determined from analyzing microwave (E_J, E_g) and transport (E_C) measurements with several consistency checks, as described below and in the Supplement [24].

The working principle of the experiment is to leverage the complementary strengths of low-frequency electrical transport and microwave-domain circuit quantum electrodynamics. These techniques differ by nine orders of magnitude in characteristic frequency, and combine to give access to both the scaling behavior, associated with low energies (transport), and the microscopic system parameters, associated with high energies (microwave).
In the transport device, a linear current-voltage characteristic at large applied voltage bias gives way to a high-resistance region at low bias, whose extent is approximately given by the number of junctions N times twice the superconducting gap ∆ (Fig. 1b). Over a smaller range of applied voltage, a series of evenly spaced current peaks is observed, with an apparent supercurrent at zero bias (Fig. 1b inset). The successive current peaks can be qualitatively understood within a picture of successive voltage drops across N voltage-biased Josephson junctions, with low current on the quasiparticle branches and high current when the bias is a multiple of 2∆/e [11].

Increasing the magnetic field B parallel to the chip plane suppresses the supercurrent, suggesting a field-driven transition from a superconducting to an insulating state (Fig. 1c). The spacing between current peaks also decreases with B, indicating a reduction in the superconducting gap with magnetic field. In the strongly superconducting regime (B = 0), the zero-bias differential resistance per junction (specific resistance) associated with the superconducting branch decreases dramatically with cryostat temperature (Fig.
1d), dropping over more than three decades before saturating to a low value of < 1 Ω per junction. Due to the long length of the array, we rule out finite-size effects as a possible origin of the low-temperature saturation [25]. The precipitous drop in resistance at low temperature and the supercurrent features in nonlinear transport give a preliminary indication of the dominance of superconducting behavior. (Caption, Fig. 1d-g: Blue line shows the power-law fit. ρ reflects the resistance associated with the zero-bias superconducting branch, found by measuring the two-probe resistance, subtracting off the four-probe-measured line resistance, and then dividing by the number of junctions. e, Two-tone microwave spectroscopy: probe-tone transmission S versus pump-tone frequency f, with the probe-tone frequency fixed to resonance at approximately 6.11 GHz; extracted plasma-mode resonant frequencies f_P indicated by colored markers. f, Evolution of measured plasma-mode frequencies f_P with applied magnetic field B. g, Superfluid stiffness K_g = E_J/(2E_g), experimentally inferred from the plasma modes in f, versus B (black line); theoretically expected superconducting and insulating regimes labeled, and demarcated by a band covering the clean [2] and dirty [23] limits.) We will develop a framework for understanding the behavior of specific resistance in detail, but first turn to the complementary use of microwave techniques to independently determine system parameters.

Microwave spectroscopy is performed by monitoring the transmission of a weak probe signal while the frequency of a strong pump tone is varied [17]. A series of sharp dips is observed in the probe-tone transmission S (Fig.
1e), corresponding to plasma modes of the array. The plasma modes are evenly spaced at low frequency, reflecting the speed of light and the length of the array, and are clustered at high frequency due to proximity to the single-junction plasma frequency. A simple fitting procedure allows extraction of the array parameters from the microwave data [24]. By performing two-tone spectroscopy as a function of field (Fig. 1f), the array parameters E_g, E_C, and E_J(B) are fully characterized as a function of magnetic field. With these values fixed experimentally, it is straightforward to perform parameter-free comparisons with the theory of the superconductor-insulator transition in one dimension.

Performing this comparison (Fig. 1g) reveals that the array's superfluid stiffness K_g = E_J/(2E_g) is as much as an order of magnitude below the critical value for insulating behavior [2,23], in contrast to the observed superconducting behavior in transport. Thus, combining the transport and microwave measurements reveals an apparent conflict with basic expectations for the superconductor-insulator phase transition. Resolving this conflict is the central subject of this work.

The theoretical picture for understanding our observations was developed in Ref. [13].
Near the superconductor-insulator transition, thermal fluctuations are controlled by the timescale τ = h/(k_B T) and the associated thermal length l_th = vτ, where v is a characteristic velocity with dimensions of unit cells per time. l_th must be compared with the electrostatic screening length in units of unit cells, Λ = E_g/E_C. At high temperature (l_th < Λ) the system is governed by the local superfluid stiffness, K_C = E_J/(2E_C). In contrast, at low temperature (l_th > Λ) the system is governed by the long-range superfluid stiffness K_g, as assumed by standard theories of the superconductor-insulator transition. In the superinductor limit, superconductivity is locally stiff, K_C ≫ K_g, which results in a curious regime of local superconductivity that arises from a melted T = 0 insulator (Fig. 2). The "melting point" of the insulator, above which local superconductivity dominates, is given by Eq. 1. In the locally superconducting regime, we find that the high-temperature behavior of the specific resistance follows the power law ρ = ρ_0 (T/T_p)^{πK_C − 1} (Eq. 2), where T_p = √(2E_J E_C)/k_B is the plasma temperature [24]. Local superconductivity gives way to insulating behavior when πK_C ∼ 1. In contrast, in the low-temperature limit the power-law exponent is 2πK_g − 3, which yields the typical superconductor-insulator prediction πK_g ∼ 3/2. (Caption, Fig. 3: Solid lines are fits to the power-law expression ρ = AT^p. T* is the crossover temperature from power-law to saturation behavior, extracted from the point where the specific resistance rises 20% above its minimum value. b, Exponent p from power-law fits to the transport data in a versus the local superfluid stiffness K_C from microwave measurements; the solid line is a linear fit; the shaded blue region in b depicts the systematic error resulting from the choice of lower resistance cutoff in the power-law fits [24]. c, Amplitude A from power-law fits to the transport data versus the Josephson energy E_J from microwave measurements.)
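Plugging in the zero-field device parameters quoted earlier (E_J/h ≈ 76 GHz, E_C/h ≈ 5 GHz, E_g/h ≈ 1400 GHz) makes the stiffness hierarchy concrete. This is a sketch; the only added constant is h/k_B ≈ 0.048 K/GHz:

```python
import math

# Zero-field device parameters from the text, expressed as E/h in GHz.
EJ, EC, Eg = 76.0, 5.0, 1400.0
H_OVER_KB = 0.048  # K/GHz, approximately h/k_B

K_C = EJ / (2 * EC)   # local superfluid stiffness
K_g = EJ / (2 * Eg)   # long-range superfluid stiffness
T_p = math.sqrt(2 * EJ * EC) * H_OVER_KB  # plasma temperature in K
p = math.pi * K_C - 1  # predicted high-temperature power-law exponent

# Nominally insulating by the standard criterion (pi*K_g < 3/2),
# yet locally stiff (pi*K_C > 1): the regime of local superconductivity.
```

With these numbers, πK_g is far below the critical value 3/2 while πK_C is well above 1, which is exactly the K_C ≫ K_g hierarchy that produces local superconductivity from a melted T = 0 insulator.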
The experimentally studied devices have, at B = 0, πK_g < 1 < πK_C and T_ins ∼ 70 mK, giving an initial suggestion that they are governed by local superconductivity even at low temperatures. This hypothesis can be tested by comparing experimental measurements of the temperature-dependent specific resistance, ρ(T), with the predicted power law in Eq. 2. As shown in Fig. 3a, increasing the magnetic field weakens the temperature dependence of the specific resistance, eventually giving way to a superconductor-insulator transition at high magnetic field (B ∼ 44 mT). Fitting each specific resistance curve to a power law ρ = AT^p indicates that, on the superconducting side, the exponent p steadily decreases with field. Comparing p from the transport measurements with the local superfluid stiffness K_C inferred from microwave measurements reveals a linear behavior (Fig. 3b) with slope 2.7 ± 0.50 and intercept −1.3 ± 1.0, in agreement with the predicted slope π and intercept −1 for local superconductivity from Eq. 2, p = πK_C − 1 [26]. We note that, near the superconductor-insulator transition, the power-law behavior is interrupted by a shoulder-like feature at high temperature, which is not understood. The amplitude dependence on E_J (Fig. 3c) is also in reasonable agreement with the prediction of Eq. 2, A = ρ_0/T_p^{πK_C − 1}, with a single free parameter, ρ_0 = 4.8 ± 0.30 kΩ, which is of the order of the single-junction normal-state resistance. The experimental agreement with Eq. 2 gives strong evidence that the superconductivity is local, and resolves the apparent paradox of superconductivity at low K_g suggested by Fig. 1g.

The boundaries of local superconductivity can also be understood within the picture of Fig. 2. At low temperatures, the experimentally observed power-law behavior of the resistance saturates at a crossover temperature T* (indicated in Fig. 3a). The crossover temperature decreases with magnetic field, as shown in Fig.
4a, agreeing with the expected square-root dependence for T* ∝ T_ins, which supports the view that the low-temperature saturation is in fact a crossover into the insulating state. At high magnetic fields corresponding to πK_C > 1, T* increases with magnetic field (Fig. 4b), consistent with a superconductor-insulator transition entering the non-perturbative insulating regime of Ref. [13], where the phase-slip fugacity, ∝ e^(−8√(E_J/E_C)), is no longer small. We caution that the experimental interpretation of T* is complicated for two reasons. First, although we have performed normal-state electron thermometry and radiation thermometry and found that all characteristic temperatures are below T*, thermalization at the actual superconductor-insulator transition is difficult to verify directly. Second, different metrics for T* give quantitatively different scaling with B, although the decreasing trend predicted by Eq. 1 is a robust feature.

The complete behavior of the Josephson array can be summarized by measuring a resistance "phase diagram." Mapping zero-bias differential resistance as a function of magnetic field and temperature reveals a characteristic dome at low field, already identified from the power-law analysis as a local superconductor, giving way to a high-resistance insulating phase as magnetic field is increased (Fig. 4c). The low-temperature boundary between superconducting and insulating states occurs at πK_C ∼ 1, as expected. We speculate that the high-field boundary of the high-resistance regime corresponds to the upper critical field of the thinnest islands of the array.

The local superconducting dome and its boundaries can be quantitatively modeled as follows. The thermal boundary of the dome is T = T_p, the upper cutoff scale of our renormalization-group approach [13]. For αT_ins < T < T_p, Eq. 2 applies, with ρ_0 from Fig.
3c. For T < αT_ins the resistance saturates due to a crossover into the insulating regime, and would presumably increase at lower, experimentally inaccessible temperatures. The constant α = 5, which tunes the crossover to insulating behavior in the model, is fixed from the experimentally observed saturation resistance at B = 0 and is in reasonable agreement with the constant found in Fig. 4a. For sufficiently large B one approaches πK_C = 1, which sets the magnetic-field boundaries of the dome. Calculating ρ according to this procedure results in a local superconducting dome in satisfactory agreement with the experiment (Fig. 4d). This gives evidence that the presence of local superconductivity, and its proximity to insulating phases, is well understood.

Summarizing, by combining transport and microwave measurements, we have uncovered strong evidence for a locally superconducting state in Josephson arrays arising from a T = 0 insulator. This resolves the problem of apparent superconductivity in nominally insulating regimes, and clarifies where superconductor-insulator transitions are actually observed in experiment. Our work sheds new light on the observation of high-quality microwave response in the nominally insulating regime of superinductors [21], suggesting effects in addition to the high-frequency mechanisms that have been previously discussed [27-29]. Such devices operate near the "sweet spot" T ≈ T_ins, where temperature is low enough for well-developed local superconductivity, yet high enough to melt insulating behavior. As a consequence, we suggest that the performance of some high-impedance quantum devices [18, 19, 30] is actually improved by thermal fluctuations. It is also interesting to consider whether experimental studies of insulating behavior in resistively shunted Josephson junctions [31-34] could be understood by carefully considering the role of non-zero temperature, finite-size, or non-perturbative effects [35].
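The dome-construction procedure above (the power law of Eq. 2 for αT_ins < T < T_p, saturation below αT_ins, cutoff at T_p) can be sketched as a piecewise model. This is an illustrative Python sketch, not the paper's code; T_ins is treated as an external input, since its expression (Eq. 1) is not reproduced here:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def specific_resistance(T, E_J, E_C, rho_0, T_ins, alpha=5.0):
    """Piecewise model of rho(T) used to build the local superconducting dome.

    For alpha*T_ins < T < T_p the power law rho_0*(T/T_p)^(pi*K_C - 1) applies;
    below alpha*T_ins the resistance is frozen at its crossover value,
    mimicking the observed saturation. Energies in eV, temperatures in K.
    """
    T_p = math.sqrt(2 * E_J * E_C) / K_B   # plasma temperature
    K_C = E_J / (2 * E_C)                  # local superfluid stiffness
    p = math.pi * K_C - 1                  # power-law exponent
    T_eff = max(T, alpha * T_ins)          # saturate below alpha*T_ins
    T_eff = min(T_eff, T_p)                # model only applies below T_p
    return rho_0 * (T_eff / T_p) ** p
```

Sweeping this function over (T, B), with E_J(B) taken from the microwave fits, traces out a dome of the kind shown in Fig. 4d.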
Viewed from the broader perspective of response functions near quantum criticality, we have demonstrated a rare example where thermal fluctuations with timescale τ = h/k_B T can be quantitatively traced through to an experimentally measured resistance [13]. This does not result in effectively Planckian scattering [24], as was recently observed in a different superconductor-insulator system [36]. It is also interesting to note that our saturating specific resistance curves empirically bear a strong resemblance to the anomalous-metallic phase in two-dimensional systems [22]. In our case, saturation is understood as a crossover effect towards insulating behavior. It would be interesting to perform a similar experimental program on a known anomalous-metallic system to test whether saturation can be understood as a similar crossover effect.

I. EXTRACTION OF CHAIN PARAMETERS

In the transport device, a voltage offset V_offset is extracted by extrapolating the linear parts of the current-voltage characteristic down to zero bias (Fig. S1a). E_C is then inferred from V_offset using the relation of Ref. [15], which gives the value quoted in the main text.

Once E_C is fixed from transport, microwave measurements are used to determine E_g and E_J(B). The plasma-mode resonant frequencies f_P,k follow the dispersion relation of Eq. S2, with N being the number of junctions. Fitting Eq. S2 to the experimental data, as in Fig. S1b, yields E_g and E_J(B). Sample values determined with this method are presented in Fig. S2 and Table SII.
Two independent checks are available on the extracted system parameters. The charging energy E_C can be estimated from the geometry and the nominal specific capacitance of our Josephson junctions. The Josephson energy can be estimated with the Ambegaokar-Baratoff relation [37], where N is the number of junctions and ∆ is the superconducting gap of aluminum. These independent checks on E_C and E_J are shown in Table SIII.

The field-driven superconductor-insulator transition in our system is expected to occur at a magnetic field B_ins. As discussed in Sec. VII, B_ins is determined by numerically solving πK_C(B_ins) = 1. To accomplish this we interpolate E_J(B) linearly out to B_ins, which is justified by the smooth, linear behavior observed in E_J over similar field ranges, as shown in Fig. S2.

The error in E_g is the standard error in the values of E_g inferred from fits to the dispersion curves at high magnetic fields. The error in E_J is the standard error from the fit to the dispersion curve at B = 0 T. The error in E_C, R_N and V_c is the difference between the charging energies, high-bias resistances and critical voltages inferred from up and down I-V sweeps.

The following table serves as a cross-check of the junction parameters mentioned in Table SII.
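The two estimates described above can be sketched numerically. The block below assumes the common T = 0 Ambegaokar-Baratoff form E_J = R_Q∆/(2R_n) with R_Q = h/(4e²) and a per-junction resistance R_n = R_N/N, and implements the B_ins estimate as a linear extrapolation of E_J(B) to the value where πK_C = 1; the exact expressions used in the paper may differ:

```python
import math

H = 6.62607015e-34       # Planck constant (J s)
E0 = 1.602176634e-19     # elementary charge (C)
R_Q = H / (4 * E0**2)    # superconducting resistance quantum, ~6.45 kOhm

def ej_ambegaokar_baratoff(R_N_total, N, delta_eV):
    """Per-junction Josephson energy (eV) from the assumed T = 0
    Ambegaokar-Baratoff relation E_J = R_Q * Delta / (2 * R_n),
    with per-junction resistance R_n = R_N_total / N."""
    R_n = R_N_total / N
    return R_Q * delta_eV / (2 * R_n)

def b_ins_linear(B1, EJ1, B2, EJ2, E_C):
    """Field where pi*K_C = 1, i.e. E_J(B_ins) = 2*E_C/pi, obtained by
    linear extrapolation through two measured (B, E_J) points."""
    slope = (EJ2 - EJ1) / (B2 - B1)
    return B1 + (2 * E_C / math.pi - EJ1) / slope
```

For a junction with R_n = R_Q and ∆ = 180 µeV, the first function gives E_J = 90 µeV, of the order expected for this kind of array.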
The errors for each parameter in Table SIV are propagated from the corresponding errors in the parameters mentioned in Table SII. MW1 refers to the chain of Josephson junctions capacitively coupled on either side to 50 Ω microwave launchers. C1 and C2 refer to transport chains. C2 contains nominally the same number of junctions as the microwave chain, MW1. C1 has half as many junctions as C2. SJ1 and SJ2 are identical single-junction transport devices, with the same junction geometry as on the microwave and transport chain devices. The ground plane around the array of junctions of the microwave device (MW1) has been removed with the intention of decreasing capacitance to ground.

Microwave measurements are done in transmission, where the output (amplifier) line has a double-junction isolator attached to the mixing-chamber stage of the dilution fridge. In addition, the output line has an LNF HEMT attached to the 4 K stage and another LNF amplifier attached to the room-temperature stage of the cryostat. The input line has 50 dB of net attenuation, with 0 dB at the 70 K stage, 20 dB at the 4 K stage, 10 dB at the 700 mK stage and 20 dB at the mixing-chamber stage of the cryostat.

II. SCHEMATIC OF THE CHIP

The DC lines are equipped with three cascaded LFCN filter boards attached to the mixing-chamber stage of the dilution fridge; each filter board provides a cut-off at a different frequency. The three cut-off frequency ranges are (DC − 5000 MHz), (DC − 1450 MHz) and (DC − 80 MHz). Each board has filters soldered onto it in six stages for all the DC lines. The PCB onto which the chip is bonded has single-stage low-pass filtering (2 kΩ; 47 nF) for each DC line. So, in all, each DC line has nineteen-stage low-pass filtering at the mixing-chamber stage of the cryostat.
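For orientation, the cutoff of the single-stage PCB filter quoted above (2 kΩ; 47 nF) can be checked with the standard first-order RC formula. A quick sketch (component values from the text; the ~1.7 kHz result is just the textbook −3 dB point, not a figure from the paper):

```python
import math

def rc_cutoff_hz(R_ohm, C_farad):
    """-3 dB cutoff frequency of a single-stage RC low-pass filter."""
    return 1.0 / (2 * math.pi * R_ohm * C_farad)

# PCB single-stage filter quoted in the text: 2 kOhm, 47 nF -> ~1.7 kHz
pcb_cutoff = rc_cutoff_hz(2e3, 47e-9)
```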
• The chips were baked at 170 °C for 3 min before spin coating with MMA (EL-13), followed by PMMA (950k 4%). The spin-coating recipe was developed such that the thickness of MMA is 670 nm and of PMMA is 290 nm. The chips were again baked at 170 °C for 3 min after each spin-coating step.

• Standard e-beam lithography was done in a Raith EBPG5150, after which the pattern was developed in an IPA : water (3 : 1) solution for 90 s.

• The developed chip was then subjected to electron-beam evaporation with aluminum, in a double-angle shadow evaporation process in a Plassys UHV MEB550S2, with an intermediate in-situ static oxidation step (5 mbar/5 min). The evaporation was terminated with another in-situ static oxidation step (10 mbar/2 min). Before evaporating aluminum, the evaporation chamber was gettered with titanium for 3 min at 0.2 nm/s to further bring down the pressure of the chamber. In the first evaporation step, 60 nm of aluminum was deposited, whereas in the second step 120 nm of aluminum was deposited, with an evaporation rate of 1 nm/s in both steps.

• Lift-off was done using hot NMP (80 °C) for 45 min, after which the chip was successively cleaned in cold NMP, acetone and IPA.

• SEM imaging of the JJ chain revealed the junction (or chain) width ∼ 510 nm, and the junction overlap ∼ 560 nm.

V. CURRENT-PEAK SPACINGS

The typical peak spacing is comparable to twice the superconducting gap of aluminum, 360 µeV. There is an overall smooth evolution of the peak spacings which is asymmetric in bias, and which is not understood (Fig.
S6b). Peak spacings are enhanced around zero bias, which we speculate is due to an interaction effect. Namely, the weakest link in the chain is the one which, due to offset-charge disorder, is in the deepest Coulomb blockade. This link should switch first, and then require increased bias before current can flow. This picture predicts that peak spacings should increase near zero bias, as we consistently observe.

Coulomb-blockade thermometry is performed in the normal state by applying a large, perpendicular magnetic field. Measuring differential conductance versus applied voltage shows a sharp dip at zero bias, which gets shallower on raising the temperature of the system (Fig. S7a). Extracting the zero-bias differential conductance g_0 at all setpoint temperatures shows a gradual decline in conductance below 400 mK, and a sharp fall below 100 mK (Fig. S7b). g_0 is fit to the well-known expression for Coulomb-blockade thermometry [44, 45], where g_T is the asymptotic g at high bias voltages, N is the number of junctions in the chain and E_C is the charging energy of a junction. At high temperature the data agree well with Eq. S5. At low temperature the conductance is larger than the expected value, indicating that the device falls out of equilibrium with the cryostat. Associating the smallest observed conductance with a temperature gives the base electron temperature T_el = 35 mK.

In the main text, we discussed that the wide-range current-voltage characteristic exhibits strongly suppressed current for biases below the critical voltage V_c (Fig. 1b and Fig. S1a). This voltage roughly corresponds to the value for biasing N junctions by 2∆/e such that current can flow in the voltage state. Examining the field dependence of this current-voltage characteristic yields further information on the evolution of ∆. Measuring current while varying bias voltage and magnetic field (Fig.
S8a) reveals that V_c has a smooth field dependence up to a critical field value of B_c = 49 mT, at which point it exhibits a kink. The kink is most likely due to the critical field of the thicker islands in the Josephson array; this is the expected critical field for a 120 nm Al film thickness, and it also naturally explains why the kink occurs at approximately V_c(B = 0)/2. Importantly, this critical field exceeds the field at which insulating behavior is observed, B_c > B_ins, so all islands are superconducting when the array transitions to insulating behavior.

The expected value of B_ins is found from the criterion πK_C(B_ins) = 1 (Fig. S8b), yielding a value B_ins = 42.5 mT at positive field and B_ins = −47.8 mT at negative field. Since B_ins lies slightly beyond the range where microwave measurements are possible, we have performed a linear extrapolation of E_J(B) down to the regime E_J → 0, justified by the linear behavior of E_J over a wide field range. The observed field asymmetry in B_ins reflects the fact that the magnetic-field dependence of the microwave response is not perfectly field-symmetric, which is not understood. A consistent field asymmetry is also present in the transport phase diagram, where the transition to the insulating state is also slightly field-asymmetric. As shown in Fig. S8c and also discussed in the main text, the theoretical value of B_ins matches the experimentally observed transition to insulating behavior. It is also interesting to note that the overall dependence of V_c(B) qualitatively matches the dome and wing structure of the phase diagram (Fig. S8c, black line).

VIII.
SYSTEMATICS IN MEASUREMENT/ANALYSIS

The zero-bias differential resistance of the transport JJ chain was measured using a four-probe lock-in setup. A Stanford SR830 was used to measure the differential current dI through the chain, before which the current signal was converted to voltage using a Basel LSK389A transimpedance amplifier. The differential voltage drop across the chain, dV, was measured with a Zurich MFLI. A voltage amplitude of 1 µV was applied, which we experimentally verified was sufficiently small to avoid overheating (see below). Throughout the main text, the two-probe resistance is plotted with the inferred line resistance from a four-probe measurement subtracted. This procedure removes technical noise at the expense of introducing a small (few Ohm) systematic error in the data.

Mapping out the power dissipated (dI × dV) at the device over the full (B, T) parameter space (Fig. S9a) reveals a dome-like feature of maximal power dissipation. Choosing a few points over the dome for an amplitude study (Fig. S9b) reveals that the device lies comfortably in the linear-response regime up to 2 µV of voltage amplitude.

As pointed out in Fig. S10, a low-temperature upturn in resistance is inconsistently observed at fields slightly lower than the data in the main text. The upturn measurement is done at the boundary of the LSC/INS phases (refer to Fig. 4c in the main text), where on lowering temperature the device resistance transitions from a low to a very high value. Measuring extremely low currents at such a phase boundary is challenging, owing to the highly fluctuating values of the locked-in signal. Two customizations were made in this regard: (a) turning up the voltage amplitude to 10 µV; (b) measuring dI, dV over ten time points and recording the mean value. To ensure that the device does not heat up on increasing the lock-in excitation, an amplitude study (similar to the one depicted in Fig.
S9b) was done at the temperature corresponding to the minimum resistance. In this regime, the device resistance is higher than the line resistance in the measurement chain, making the device effectively voltage biased. Hence, maximum heating is expected at the point of minimum resistance. However, the device maintained linear response in the amplitude study.

The supercurrent peak, and the zero-bias conductance, gradually decrease on increasing the magnetic field (Fig. S11a). The lock-in setup measures the slope of the zero-bias peak. As shown in Fig. S11b, adding voltage offsets to the lock-in measurement changes the behavior of the device in the region where the device transitions from the LSC to the INS phase (refer to Fig. 4c in the main text). Hence, ensuring proper offset correction at zero voltage bias is essential to measurement of the phase diagram.

Figure S12: Power-law fits with various choices of lower cutoffs. a, Zero-bias specific differential resistance ρ versus temperature T, with 0.18 T_P as the lower cutoff temperature for power-law fits. b, Same plot with 0.215 T_P as the lower cutoff temperature for fits. c, Same plot with 0.25 T_P as the lower cutoff temperature for fits. T_P is the plasma frequency in temperature units.

To analyze the power-law behavior of ρ(T), fits must be performed over a restricted temperature range. Because system parameters evolve with magnetic field, the fit range must also be field dependent. The high-temperature limit of the fitting range is chosen to be 95% of the plasma temperature T_P, where T_P = √(2E_J E_C)/k_B (Eq. S6). As shown in the main text, the upper edge of the local superconducting dome follows the plasma temperature, so this is a suitable upper bound.

The low-temperature limit of the fitting range is not as easy to define sharply, due to the smooth crossover to saturating resistance. To account for this difficulty, we explored a range of lower cutoff values, as shown in Fig. S12. These cutoff values are used to create the blue error bands for the exponent p in Fig.
3b of the main text. To evaluate the impact of the systematic error bands on fit parameters, p(K_C) was fit to a line for the three lower cutoffs in Fig. S12. The range of values obtained is reported as the uncertainties in the slope and intercept in the main text.

Solving the renormalization-group (RG) equations in the UV limit yields a power law with exponent πK_C − 1 (Eq. S10), where K_C is the bare value of K in the RG flow. Solving in the IR limit, with renormalized K, results in a power law with exponent 2πK_g − 3. Comparing p from the transport measurements with the global superfluid stiffness K_g inferred from microwave measurements reveals a linear behavior (Fig. S13) with slope 45 ± 7 and intercept −1.3 ± 1.0. This is in complete disagreement with the predicted slope of 2π for global superconductivity. The intercept close to −1 is the same as observed for local superconductivity (Fig. 3b in the main text), owing to the fact that only the x-axis is scaled down on plotting versus K_g.

As shown in Fig. S14a, the power-law behavior of specific resistance with temperature is observed until about 40 mT. In Fig. S14b, scaling the normalized temperature axis with the expected exponent collapses all the data on the left onto a universal power-law behavior, removing the field dependence of the data. As pointed out earlier in Eq. S6, T_P is a field-dependent quantity.

Our picture of local superconductivity connects the timescale of thermal fluctuations near quantum criticality, τ = h/(k_B T), to the specific resistance. In the literature there is a widely hypothesized connection between the Planckian scattering time τ_s = h/(k_B T) and quantum-critical metals [39-43]. Given recent reports of Planckian scattering in a superconductor-insulator system [36], and the qualitative similarity of some of our specific resistance curves to Ref. [36], we were motivated to directly compare our data with a model of Planckian scattering. Following the method of Ref.
[36], we computed T_P dρ/dT, where the plasma temperature T_P plays the role of the high-temperature cutoff for superconducting behavior in our system. The Planckian bound is T_P · dρ/dT < h/(4e²). As shown in Fig. S15b, the observed resistance exceeds this bound by more than an order of magnitude. Thus, the Planckian bound apparently does not apply in our system.

X. THEORY

Here we give an overview of the theoretical origin of the power-law scaling discussed in the main text. The key simplification for our case arises from the fact that the dimensionless phase-slip fugacity, y ∝ e^(−4√(2E_J/E_C)), is small, less than 10^(−10) at zero magnetic field. This allows us to work with the linearized renormalization-group equations from Ref. [13], which in the long-screening-length limit (Λ ≫ 1) take the form of Eqs. (S7-S8). Here K is the superfluid stiffness, taking the initial value K_C = E_J/(2E_C), and u_g takes the initial value 1/(1 + Λ²), where Λ is the charge screening length, representing the plasmon group velocity in the UV limit in units of the plasma frequency. Following Ref. [13], we assume the renormalization flow is terminated at the thermal length given by e^l = Ω_p/T, where Ω_p = √(2E_J E_C) is the single-junction plasma frequency. The resistance is then given by R = R_0 y²/e^l.

Equations (S7-S8) express the renormalization of K from its ultraviolet value of K_C down to K_g at the fixed point u_g = 1.
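The flow-termination logic can be illustrated with a toy calculation. Since Eqs. (S7-S8) are not reproduced here, the sketch below assumes a generic linearized fugacity flow dy/dl = (1 − πK/2)y with K frozen at its bare value K_C (an assumption for illustration, not the paper's equations); combined with R = R_0 y²/e^l and the cutoff e^l = Ω_p/T, this reproduces the UV power law ρ ∝ T^(πK_C − 1):

```python
import math

def resistance_uv_limit(T, Omega_p, K_C, y0, R0=1.0):
    """Resistance from terminating a linearized fugacity flow at the
    thermal length e^l = Omega_p / T, with K frozen at K_C.

    Assumes the generic flow dy/dl = (1 - pi*K/2) * y, so that
    R = R0 * y(l)^2 / e^l scales as T^(pi*K_C - 1)."""
    l_star = math.log(Omega_p / T)          # thermal cutoff scale
    y_l = y0 * math.exp((1 - math.pi * K_C / 2) * l_star)
    return R0 * y_l**2 / math.exp(l_star)   # R = R0 * y^2 / e^l
```

Doubling T then multiplies the resistance by 2^(πK_C − 1), in line with the high-temperature power law of the main text.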
In the high-temperature limit, where K is hardly renormalized, the resistance follows power-law behavior. At lower temperature K is renormalized down and the system crosses over to insulating behavior. The crossover temperature depends on the system parameters (Eq. S11). The first case occurs in the limit of small Josephson energy, and insulating behavior appears because K is renormalized below 1/π, at which point the system enters strong coupling. The second case occurs in the limit of large Josephson energy, and insulating behavior appears because the system crosses over to the infrared limit where u_g = 1, which is the case we focus on in the main text.

In the experiment, the two terms in Eq. S11 are actually comparable, so it is perhaps surprising that a square-root behavior is observed in Fig. 4a of the main text. As we caution in the main text, the experimentally extracted T* may not map directly onto T_ins, and indeed different metrics give different quantitative behavior.

A. Boundaries in theoretical phase diagram

The superconductor-insulator transition is strictly only a phase transition at T = 0; otherwise it is a crossover [2]. The theoretical phase diagram in the main text in fact labels crossover boundaries, where the temperature dependence of the specific resistance dρ/dT changes sign. To identify these points, we work perturbatively in the limit of small phase-slip fugacity, as discussed above.

At low temperatures, where the infrared fixed point of Eqs. (S7-S8) is reached, the ρ(T) power-law exponent is 2πK_g − 3, which gives the low-temperature crossover πK_g ∼ 3/2. Note that once terms of order y² are included in Eq. S9, one would actually find either the Giamarchi-Schulz or BKT fixed points, depending on whether disorder is included [2, 23]. Since this correction is small compared to those associated with local superconductivity discussed in the main text, we simply indicate the crossover point with a ∼ to avoid ambiguity.
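As a minimal numerical illustration of these crossover criteria (a sketch, not the paper's code; the thresholds πK_C = 1 and 2πK_g = 3 are simply the zeros of the exponents quoted above):

```python
import math

def crossover_regime(K_C, K_g):
    """Classify the perturbative crossover from the signs of the power-law
    exponents: a superconducting trend (rho decreasing as T is lowered)
    requires pi*K_C - 1 > 0 at high T and 2*pi*K_g - 3 > 0 at low T."""
    high_T = "superconducting" if math.pi * K_C > 1 else "insulating"
    low_T = "superconducting" if 2 * math.pi * K_g > 3 else "insulating"
    return high_T, low_T
```

For the devices in the main text, with πK_g < 1 < πK_C, this returns ("superconducting", "insulating"): local superconductivity at high temperature arising from a T = 0 insulator.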
By similar logic, in the high-temperature limit the ρ(T) power-law exponent is πK_C − 1, which yields the local superconductor-insulator crossover πK_C ∼ 1. The boundary between local and global regimes is given by Eq. S11, which when smoothly interpolated yields the theoretical diagram in the main text.

Figure 1: Device, transport, and microwave measurement techniques. a, Scanning electron micrograph of a small segment of the Josephson array. Left scale bar indicates 1.5 µm. Arrow indicates direction of magnetic field B. b, Current I versus source-drain bias voltage V. Inset shows small-scale current peaks over a narrow range (−3, 3) mV. c, Current I versus bias V and magnetic field B over a bias range similar to the Fig. 1b inset. d, Differential resistance per junction (specific resistance) ρ versus cryostat temperature T measured at V = 0 and B = 0. Blue line shows a power-law fit. ρ reflects the resistance associated with the zero-bias superconducting branch, found by measuring the two-probe resistance, subtracting off the four-probe-measured line resistance, and then dividing by the number of junctions. e, Two-tone microwave spectroscopy. Probe-tone transmission S versus pump-tone frequency f, with the probe-tone frequency fixed to resonance at approximately 6.11 GHz. Extracted plasma-mode resonant frequencies f_P indicated by colored markers. f, Evolution of measured plasma-mode frequencies f_P with applied magnetic field B. g, Superfluid stiffness K_g = E_J/(2E_g), experimentally inferred from the plasma modes in f, versus B (black line). Theoretically expected superconducting and insulating regimes labeled, and demarcated by a band covering the clean [2] and dirty [23] limits.
Figure 3: Power-law nature of local superconductivity. a, Zero-bias specific differential resistance ρ as a function of temperature T, at various magnetic fields. Solid lines are fits to the power-law expression ρ = AT^p. T* is the crossover temperature from power law to saturation behavior, extracted from the point where the specific resistance goes 20% above its minimum value. b, Exponent p from power-law fits to the transport data in a versus the local superfluid stiffness K_C from microwave measurements. Solid line is a linear fit. Shaded blue region in b depicts the systematic error resulting from the choice of lower resistance cutoff in the power-law fits [24]. c, Amplitude A from power-law fits to transport data versus Josephson energy E_J from microwave measurements.

Figure 4: Crossover physics and phase diagram. a, Crossover temperature T* versus magnetic field B. Black line is T* ∝ T_ins with a proportionality constant of 2.2. Colored markers indicate the crossover temperature at higher fields from the dataset in b. Red vertical line indicates πK_C = 1, where the local superconductor-insulator transition is expected. b, Zero-bias differential specific resistance ρ versus temperature T at higher magnetic fields, measured with higher excitation voltage and more averaging than in Fig. 3. c, ρ versus temperature T and magnetic field B. Dome of local superconductivity (LSC) and wings of insulating behavior (INS) labeled. Red vertical lines indicate πK_C = 1, where the local superconductor-insulator transition is expected. d, Calculated specific resistance ρ as a function of temperature T and magnetic field B.
Figure S1: Extraction of chain parameters E_C, E_J, E_g. a, Measured current I versus applied voltage V. Linear fits to the high-bias features extract E_C, E_J. V_c is the critical voltage. b, Extracted resonant peak frequency f_P versus mode number k. The curves are fit with the dispersion relation, which yields E_g, and E_J as a function of magnetic field.

Figure S2: E_J(B) interpolation. Josephson energy E_J, determined from fitting dispersion curves, as a function of the magnetic field B. Error bars are the standard error resulting from the fits. Black lines represent a linear interpolating function used at higher magnetic fields to estimate B_ins; see Sec. VII for further details.

Figure S3: Schematic of the nanofabricated chip. Blue background represents silicon. Grey represents aluminum. Each cross represents a single Josephson junction.

Figure S4: Chip bonded onto the PCB. Numbers on the PCB indicate connections to DC lines for four/two-probe transport measurements. SMP launchers for microwave connections can be seen partially at the top and bottom of the picture.

Figure S5: The experimental setup. The dilution fridge is represented by the cyan-colored dashed boundary. The signal generator is coupled to the output port of the VNA through an RF coupler which has a 20 dB insertion loss at the coupled port. The 'resistor' symbol at each stage of the fridge represents the installed microwave cryogenic attenuators. 'RTA' refers to the Room-Temperature Amplifier, with a gain of around 35 dB. 'HEMT' refers to the High-Electron-Mobility Transistor, which is the low-noise amplifier (noise ∼ 2 K) with a gain of around 36 dB at 4 K. 'DUT' refers to 'Device Under Test'. 'B' indicates the direction of the applied external magnetic field.
Figure S6: Peak extraction and spacings. a, Measured current I versus applied voltage V. The black dots represent detected current peaks. b, Peak spacings versus the peak index. Each peak index refers to the succeeding peak spacing on positive bias and to the preceding peak spacing on negative bias.

Figure S7: Coulomb-blockade thermometry at large magnetic fields. a, Differential conductance g versus the applied voltage V, at various setpoint temperatures. b, Zero-bias differential conductance g_0 versus the setpoint temperature T. Black line is a function fit to the data. The dashed blue line is the g_0 value corresponding to the lowest data point. T_el is the inferred base electron temperature of the system. Data in this figure are taken with the JJ chain in the normal state, at 500 mT.

Figure S8: Overlay of energy scales on the phase diagram. a, Measured current I versus the bias voltage V and magnetic field B. The black curve indicates the extracted critical voltage V_c. Kink in V_c at the critical field B_c indicated. b, Inferred local superfluid stiffness πK_C versus the magnetic field B. Black dots are the stiffness inferred from experimental data. Solid black line is based on a linear extrapolation of E_J(B) down to E_J → 0, which is justified by the empirical observation that E_J is linear at high fields. B_ins is defined implicitly by πK_C(B_ins) = 1. c, The phase diagram with the zero-bias differential specific resistance ρ as a function of the magnetic field B and temperature T. INS refers to Insulator. The black curve is the edge feature extracted from a, scaled by an empirical proportionality constant of 1/(N 2πk_B). The vertical red lines in b and c correspond to the field values |B| = B_ins. Note that |B_ins| < |B_c|.

Figure S9: Heat check for the transport JJ chain. a, Power dissipated at the chain versus temperature T and magnetic field B.
b, Zero-bias differential specific resistance ρ versus lock-in voltage amplitude V, at various (B, T) points along the region of maximal power dissipation in a. The dashed line indicates the voltage (1 µV) used for measuring the phase diagram.

Figure S10: Reproducibility of the upturn feature. a, Zero-bias differential specific resistance ρ as a function of measured temperature T, at various magnetic fields. b, The same measurement repeated in a different run of the same cooldown.

Figure S11: Effect of voltage offsets in the specific resistance measurement. a, Current I versus the applied voltage V, in the narrow bias range, measured at various magnetic fields. b, Zero-bias specific differential resistance ρ versus the magnetic field B, with and without applied offset voltages.

Figure S13: Comparing the exponents with the global superfluid stiffness. Exponent p from power-law fits versus the global superfluid stiffness K_g from microwave measurements. Solid line is a linear fit. Shaded blue region depicts the systematic error resulting from the choice of lower resistance cutoff in the power-law fits.

Figure S14: Collapse of power laws. a, Zero-bias specific differential resistance ρ as a function of temperature T, at various magnetic fields. b, The same data as in a plotted with a scaled temperature axis, where T_P is the plasma frequency in temperature units and K_C is the local superfluid stiffness.

Figure S15: Planckian slope check. a, Zero-bias specific differential resistance ρ as a function of temperature T, at various magnetic fields. b, The slope of the curves in a, times the plasma temperature T_P, versus temperature T. The blue horizontal line is the resistance quantum, R_Q.
Table SI: Chain and junction geometry. Chain length and number of junctions refer to the designed values. Geometry of a junction (or the chain) is estimated from SEM imaging; for details refer to the section Nanofabrication. The error in A is the propagated error from estimating the length and breadth of a single junction from a SEM image.

Table SII: Extracted parameters. For the calculations discussed in the text, E_J and E_g inferred from microwave measurements are used, whereas E_C inferred from transport measurements is used.

Table SIII: Independent checks on extracted parameters. Empirical specific capacitance of a junction C_s = 45 fF/µm² [38]. Superconducting gap of bulk aluminum, ∆ = 180 µeV. The errors in the parameters of Table SIII are propagated from the errors in A (Table SI) and R_N (Table SII).

Table SIV: Chain/junction parameters inferred from Table SII. Table SIV contains a list of chain/junction parameters derived from the measured values presented in Table SII.
v3-fos-license
2019-03-16T13:02:35.705Z
2019-03-06T00:00:00.000
76666720
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://phytokeys.pensoft.net/article/30375/download/pdf/", "pdf_hash": "300f1ed7c3df092dde711434eb211455e9f7ddc3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2504", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "300f1ed7c3df092dde711434eb211455e9f7ddc3", "year": 2019 }
pes2o/s2orc
Selaginella dianzhongensis (Selaginellaceae), a new spikemoss from China Abstract A new species of spikemoss from Yunnan Province of China, Selaginella dianzhongensis, is described and illustrated based on evidence from gross morphology, micromorphology and molecular phylogeny. S. dianzhongensis is most similar to S. amblyphylla in its habit of creeping stem, leaf size, and obviously dimorphic sporophylls, but is distinct by its ventral leaves ovate-oblong, subcordate at base, basiscopic base entire, and axillary leaves ovate and decurrent at base. Molecular phylogenetic analysis of three chloroplast gene regions (rbcL, atpI, psbA) shows that S. dianzhongensis forms an independent branch with strong support which is distantly related to S. amblyphylla and S. kurzii, but sister to S. bodinieri, which is quite different in its habit of erect or ascending stem, rhizophores restricted to the lower part, and slightly dimorphic sporophylls. Introduction The initial critical taxonomic revision of Chinese Selaginella was published by Alston (1934), who recognized 41 species from China. In the Flora Reipublicae Popularis Sinicae and Flora of China (Zhang 2004, Zhang et al. 2013) the numbers of recognized species increased to 64 and 72, respectively. Recently, several new species and new records have been reported from China (Zhou et al. 2015a, Zhang and Sun 2015, Wu et al. 2017). Yunnan province is one of the centers of plant species diversity in China, with more than 53 species of Selaginella (Chu 2006). During field trips in Yunnan, we collected an unknown Selaginella species from Yimen County in central Yunnan. Morphological characters show it is similar to S. amblyphylla Alston, but phylogenetic analysis based on three chloroplast gene regions shows it is close to S. bodinieri Hieron. Both molecular and morphological data support the taxon as a new species, which is described and illustrated here. 
The new species belongs to subgenus Heterostachys in the classification system of Jermy (1986), and to sect. Heterostachys of subgenus Heterostachys in the molecular-phylogeny classification proposed by Zhou and Zhang (2015); however, this was rejected in the more robust molecular phylogenetic analysis based on chloroplast and nuclear genes by Weststrand and Korall (2016), which recognized only a broad subgenus Stachygynandrum. Material and methods Herbarium specimens, silica gel and living materials were collected from Longquan Forest Park of Yimen (Yunnan province), in evergreen forest at 24°40'86"N, 102°08'27.86"E. Herbarium specimens are preserved in PE and were compared with similar species. Photographs of the plants were taken with a Nikon DXM 1200F camera connected to a stereomicroscope (Nikon SMZ 1000) and computer; measurements were done by D 3.10 (http://www.nikoninstruments.com). The application ImageJ (https://imagej.nih.gov/ij/) was used to measure morphological characteristics (such as axillary, dorsal, and ventral leaves, stems and strobili). For the study of spore morphology, scanning electron microscopy (SEM) was used. The spores were taken from mature sporangia, fixed on double-sided tape, and then covered with a gold-palladium mixture. Spores were photographed and measured under different magnifications using a Hitachi S-4800 at 10-20 kV. The phylogenetic tree of the combined dataset (rbcL+atpI+psbA) was constructed using maximum likelihood (ML) and Bayesian inference (BI). jModelTest 0.1.1 (Posada 2008) was used to select the appropriate substitution model for the ML and BI analyses. The ML analysis was performed on the XSEDE online computing cluster accessed via the CIPRES Science Gateway (http://www.phylo.org) using RAxML-HPC2 v.8.2.8 (Stamatakis 2014), with 1000 bootstrap replicates under the GTRGAT model. Bayesian analyses and posterior probability (PP BI) calculations were conducted in MrBayes 3.2.6 (Ronquist et al. 
2012) implemented on the CIPRES Science Gateway Portal (Miller et al. 2010). We ran four chains of the Markov chain Monte Carlo (MCMC), sampling one tree every 100 generations for 1,000,000 generations, starting with a random tree. Bayesian posterior probabilities (PP) were calculated as the majority consensus of all sampled trees with the first 25% discarded as burn-in. Etymology. Dianzhong means central Yunnan in Chinese: the type locality (Yimen) is in the central Yunnan area, which is centered on the provincial capital city, Kunming. Distribution and habitat. Selaginella dianzhongensis is known only from Yimen county, Yunnan, growing on mossy soils in a mixed evergreen forest, at ca. 1576 m a.s.l. (Fig. 3). Conservation status (VU). Selaginella dianzhongensis is known only from one locality inside the Longquan Forest Park in Yimen county, with more than 300 individuals. This park has a heavy recreational load and human pressure, and there are no specific measures to protect the habitats. Considering the restricted distribution and plausible threats, we tentatively assessed Selaginella dianzhongensis as vulnerable (VU). Phylogenetic Analysis The combined data matrix included up to 2045 nucleotides for each of the 37 taxa, with 374 parsimony-informative sites (374/2045 = 18.29%), consistency index (CI) = 0.66, and retention index (RI) = 0.80, when the gaps were treated as missing data. The tree recovered from maximum likelihood (ML) and Bayesian inference (BI), with bootstrap values (BS) of ML and Bayesian posterior probabilities (PP) for each clade, is shown in Fig. 4. The new species sampled from Yimen clustered with Selaginella bodinieri with strong support (BS ML = 99; PP BI = 1.0), but the new species is quite similar to S. amblyphylla rather than S. bodinieri in morphological characters. Discussion Morphologically, the shape and margin of the ventral and dorsal leaves of Selaginella dianzhongensis are most similar to those of S. amblyphylla. 
But the axillary leaves of the former are ovate, 1.1-2.2 × 0.4-0.8 mm, margin with a few long cilia (vs. ovate or triangular, 2-3 × 0.6-1.2 mm, and denticulate at margin in S. amblyphylla). Ventral leaves of the former are ovate-oblong, apex acute (vs. oblong and obtuse or subacute at apex in S. amblyphylla), basiscopic margin entire and not ciliolate (vs. basiscopic margin sparsely ciliolate at base in S. amblyphylla), acroscopic base of ventral leaf long ciliolate (vs. shortly ciliolate in S. amblyphylla). S. dianzhongensis is also similar to S. kurzii, but its fertile branches are not erect (vs. erect in S. kurzii), dorsal leaves are ovate to broadly ovate with an arista at apex (vs. ovate or ovate-elliptic, acuminate or aristate at apex), and ventral leaves are ovate-oblong, basiscopic margin entire and not ciliolate (vs. ovate-triangular, basiscopic margin entire or with 1 or 2 cilia at base). Selaginella bodinieri is widely distributed in the limestone areas from central to southwestern China: the main differences between this species (and the other ones mentioned above) and S. dianzhongensis are reported in the key below, and in Table 1. Finally, mega- and microspores of S. dianzhongensis are morphologically different from the spores of similar species studied by Zhou et al. (2015b). Megaspores of S. dianzhongensis have verrucae on the proximal and distal sides; micro-sculptures of megaspores are densely echinate on both sides (Fig. 2 H-J). Microspores of S. dianzhongensis are verrucate, with blunt spinules (Fig. 2 K-M). A morphological comparison of mega- and microspores between S. dianzhongensis and closely related species is presented in Table 2. Key to S. dianzhongensis and related species in Yunnan
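Returning to the phylogenetic analysis reported above, the alignment statistics and MCMC settings imply some simple arithmetic that is easy to check. The sketch below uses only values stated in the text (374 informative sites out of 2045, sampling every 100 of 1,000,000 generations, 25% burn-in); the helper names are our own:

```python
# Quick arithmetic checks for the alignment matrix and MCMC settings reported above.

def informative_fraction(informative_sites: int, total_sites: int) -> float:
    """Fraction of parsimony-informative sites in the data matrix."""
    return informative_sites / total_sites

def mcmc_tree_counts(generations: int, sample_every: int, burnin_frac: float):
    """Number of sampled trees, and how many are discarded as burn-in."""
    sampled = generations // sample_every
    burnin = int(sampled * burnin_frac)
    return sampled, burnin

# 374 informative sites out of 2045 aligned nucleotides
frac = informative_fraction(374, 2045)
print(f"{frac:.2%}")  # 18.29%

# 1,000,000 generations, sampling every 100, first 25% discarded as burn-in
sampled, burnin = mcmc_tree_counts(1_000_000, 100, 0.25)
print(sampled, burnin)  # 10000 2500
```

So the majority-rule consensus in the study was computed from the 7,500 post-burn-in trees per run.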
v3-fos-license
2021-12-01T16:09:34.198Z
2021-11-29T00:00:00.000
244754264
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.europeanproceedings.com/files/data/article/10086/15487/article_10086_15487_pdf_100.pdf", "pdf_hash": "3118f1b55e7413774eea2f500bf623d36e47025b", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2505", "s2fieldsofstudy": [ "Linguistics" ], "sha1": "78a8867fb6ba16b26e3f46406b37675349ecf84c", "year": 2021 }
pes2o/s2orc
TEACHING WORD-FORMATION MODELS OF THE LEXICAL FOUNDATIONS OF THE GERMAN LANGUAGE The article is devoted to research on word-formation models in the process of teaching foreign languages. It discusses the organization of teaching the word-formation minimum and word-formation analysis of the lexical foundations of the German language. The article identifies 650 micromodels of lexical foundations that are relevant for the vocabulary of the modern German language; about 250 of these 650 micromodels belong to the scientific and technical style. The vocabulary of the modern German language is built on thirteen basic derivational models of lexical foundations. The results of the study allow us to conclude that at the 3rd stage of teaching the German language at the university, it is necessary to plan the repetition of the model of complex words, as well as of the micromodels of the most common suffix and prefix words. It is recommended to use the structural-semantic model, which has several advantages, as the unit of selection of the word-formation minimum for language universities. Based on the tables offered in the text, it is possible to rationalize the process of teaching the word-formation minimum. All of the above suggests that the proposed word-formation minimum will contribute to the rationalization of the explanation and assimilation of the lexical minimum for mastering word-formation analysis in order to understand derived words and their meaning when introducing lexical units. Introduction Research methods are usually determined by the specifics of both the theoretical and practical material, as well as by the purpose and objectives of the study. This article sets the following tasks: 1. Give an adequate definition of the concept of word-formation minimum; 2. 
Formulate the basic principle of the selection of the word-formation minimum; 3. Determine the main methodological recommendations. Our research uses methods of linguistic description, as well as methods of continuous sampling and quantitative analysis. The article also provides guidelines for the organization of teaching the word-formation minimum in the process of learning a foreign language at school and university. According to Belotelova (1983), the word-formation minimum is understood as a set of word-formation models of the modern language which must be mastered by a student in order to understand complex and derivative words when reading literature in his specialty. The word-formation model is interpreted by her as "a stable structure of the lexical basis of a word", which has a generalized lexical-categorical content and is capable of being filled with different lexical material. Problem Statement The subject of our research is the specificity of word-formation models in the modern German language, considering the morphological possibilities and development trends of the language system. The vocabulary of the modern German language is built on thirteen basic derivational models of lexical foundations. However, when selecting and teaching a word-formation minimum, it is advisable to reduce the 13 models for educational purposes by combining several of them. For example, the model of stems with residual elements is assigned to the model of root words. The modified-root model and the model with an invariable root are combined into one model, namely, a model of affix-free derived words, with an invariable root or a modified root. This is followed by the models of prefix words (with an unchanged root, with a changed root), models of suffix words (with an unchanged root, with a changed root), and a model of suffix-prefix words (with an unchanged root, with a changed root). 
The model of a definite word composition and the model of an indefinite word composition, according to well-known linguists, cannot be reduced to one model of complex words. Research Questions The object of the research is the word-formation models characteristic of the modern German language at the current stage of its development. In the German language program for higher educational institutions, due attention is paid to teaching the word-formation minimum, which for the first time was singled out in a special subsection along with the lexical and grammatical minimum. For example, at the 1st and 2nd stages, students are invited to master the basic derivational morphemes and models of derivative and complex words, but only lists of micromodels of lexical bases are given. https://doi.org/10.15405/epsbs.2021.11.237 Corresponding Author: Malyutkhan Abdul-Khadzhievna Arsakhanova Selection and peer-review under responsibility of the Organizing Committee of the conference eISSN: 1801 For the 3rd stage of training, the program quite correctly recommends teaching those word-formation morphemes and models that are characteristic of the sublanguage of the studied specialty. It is recommended that the corresponding lists be drawn up by individual departments. The solution to this issue will require further research. Purpose of the Study The purpose of this research is to study word-formation models of the modern German language. In accordance with the set goal, the following theoretical and practical tasks are solved: • description of the lexical composition of the modern German language; • adequate definition of the concept of word-formation minimum; • formulation of the basic principle of the selection of the word-formation minimum; • determination of the basic methodological recommendations for the organization of teaching the word-formation minimum in a foreign language course at school and university. 
Research Methods Our research uses methods of linguistic description, as well as methods of continuous sampling and quantitative analysis. The article also provides guidelines for the organization of teaching the word-formation minimum in the process of learning a foreign language at school and university. We use the structural-semantic model as the unit of selection of the word-formation minimum for language universities, which has several advantages. Its use makes it possible to select specific semantic variants of derivational units within the framework of a general structural type. To illustrate the structural-semantic models, we provide a table of verbal prefix and semi-prefix structural-semantic models, which presents 20 models of prefix verbs. For each of the 20 models, several structural-semantic models were identified. For example, the model with the an- prefix (an + V) had 6 meanings, and hence 6 structural-semantic models; the ab- model (ab + V) had 8 meanings, etc. Given that the vocabulary of the modern German language is built on thirteen basic derivational models of lexical foundations, the 13 models are reduced for educational purposes by combining several of them. Models of stems with residual elements are assigned to the model of root words. 
The modified-root model and the model with an invariable root are combined into one model, namely, a model of affix-free derived words, with an invariable root or a modified root. This is followed by the models of prefix words (with an unchanged root, with a changed root), models of suffix words (with an unchanged root, with a changed root), and a model of suffix-prefix words (with an unchanged root, with a changed root). According to well-known linguists, the model of a definite word composition and the model of an indefinite word composition cannot be reduced to one model of complex words. The selection of the word-formation minimum is based on the psychological and methodological principles of motivation, productivity and exemplarity. The article identifies 650 micromodels of lexical foundations that are relevant for the vocabulary of the modern German language. It was revealed that about 250 of the 650 identified micromodels belong to the scientific and technical style. Based on the above principles, Antropova (2006), in her work "Word formation of German colloquial vocabulary", identifies the 140 most common micromodels (47 micromodels of verbs, 41 micromodels of nouns, 52 micromodels of adjectives and adverbs), which were used as the basis for the lists of micromodels given in the program on the German language for the language specialties of higher educational institutions. It should be noted that, even before Antropova, Belotelova (1983), in her study "Selection of the word-formation minimum of the German language for language universities", tried to solve the problems of teaching the word-formation minimum and word-formation analysis so as to give a complete understanding of the relevant structural types of derived words, as well as of the structure of the vocabulary of the language. However, having identified the most common micromodels, Belotelova (1983) does not classify them according to the seven basic models of lexical foundations proposed by her. 
As a result, the basic models are mentioned neither in the process of selecting the word-formation minimum, nor in the process of teaching the word-formation minimum and word-formation analysis. Trainees are not informed which models they need to learn or which model includes this or that micromodel. The micromodel turned out to be accepted both as the unit of selection of the word-formation minimum and as the unit of training in the word-formation minimum and word-formation analysis. Undoubtedly, the absence of basic models of lexical foundations in the word-formation minimum hampers both teaching and mastering the skills of word-formation analysis; knowledge of micromodels alone cannot develop a holistic idea of the laws of word formation, or of the consistency and harmony of the entire vocabulary of the studied foreign language. Galskova (2004), in her numerous methodological works, clarifies the list of word-formation models given in the program on the methodology of teaching foreign languages for language universities. She identified twenty-five models that were already recorded in the lists of micromodels by Belotelova (1983). According to the data of the conducted statistical study, the structural-semantic model is used as the unit of selection of the word-formation minimum for language universities, which has a number of advantages. Its use makes it possible to select specific semantic variants of derivational units within the framework of a general structural type. To illustrate the structural-semantic models, we propose a table of verbal prefix and semi-prefix structural-semantic models including 20 models of prefix verbs. Several structural-semantic models were identified for each of the 20 models. 
For example, the model with the an- prefix (an + V) had 6 meanings, and hence 6 structural-semantic models; the ab- model (ab + V) had 8 meanings, etc. Each of the 20 models and each of the 90 structural-semantic models are nothing more than micromodels, for all 90 structural-semantic models of prefix verbs identified by Astvatsaryan (1992) and Stepanova (1984) are reduced to one model of prefix verbs. In this sense, the model carries information about the structure of the verb, while the structural-semantic models inform about the semantics of each individual verb and about the semantics of the prefix. The delineation of the model and the structural-semantic model, proposed by the author, is nothing more than the delineation of the models and their constituent micromodels, or the delineation of the main models and the models of the first, second, etc. degree. 
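The model-to-micromodel relation described above can be sketched as a simple tally. Only the an- and ab- counts are given in the text; the totals (20 prefix models, 90 structural-semantic models) are the reported aggregates, so the dictionary below is a hypothetical partial illustration, not the full list:

```python
# Hypothetical partial tally: each prefix model maps to the number of its
# structural-semantic micromodels. Only an- and ab- counts appear in the text;
# the remaining 18 models account for the rest of the 90 reported micromodels.
micromodels_per_prefix_model = {
    "an-": 6,   # an + V had 6 meanings -> 6 structural-semantic models
    "ab-": 8,   # ab + V had 8 meanings -> 8 structural-semantic models
}

TOTAL_PREFIX_MODELS = 20         # reported in the text
TOTAL_STRUCTURAL_SEMANTIC = 90   # reported in the text

known = sum(micromodels_per_prefix_model.values())
remaining_models = TOTAL_PREFIX_MODELS - len(micromodels_per_prefix_model)
remaining_micromodels = TOTAL_STRUCTURAL_SEMANTIC - known
print(known, remaining_models, remaining_micromodels)  # 14 18 76
```

The aggregation makes the point of the passage concrete: the 90 structural-semantic micromodels collapse onto a much smaller set of structural models.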
Productive stems can be not only simple (root), but also derived (suffixal, prefixal, affix-free derived, complex). Derived stems acting as generating stems can have an unchanged or a modified root. When teaching a foreign language at a university, it seems to us expedient to group the 13 basic models of the lexical foundations of words in the modern German language as follows: 1. The basic model of root (monolithic, unmotivated) lexical foundations. Words formed according to the first model serve as the generating stems of derived words. 2–6. The five basic models of the lexical bases of derived (motivated) words. Derivative words formed according to these models are analyzed in order to understand their structure and derive their meaning from it according to the principle "from the form of the word to its content". Based on the principles "from general to particular" and "from model to micromodel", the program in foreign languages offers students at the 1st stage to master 16 micromodels of nouns, 20 micromodels of verbs, and 20 micromodels of adjectives and adverbs. If all micromodels are reduced to models, it turns out that the student will have to master at the 1st stage 4 models of nouns, 3 models of adjectives and adverbs, and 3 models of verbs; at the 2nd, 4 models of nouns, 2 models of adjectives and adverbs, and 3 models of verbs. 
This can be represented as follows: at the 1st stage of training, 4 noun patterns; elements such as zusammen- are separate first-frequency components in complex verbs. As practice shows, numerals should be repeated at all stages of training; therefore, it is also desirable to include numerals in the lists of models and micromodels. For the third stage of training, models and micromodels characteristic of terminological vocabulary can be recommended as new ones. Conclusion The results of this study lead to the following conclusions. 1. Not a model, but a micromodel is taken as the unit of selection of the word-formation minimum and of teaching word-formation analysis. 2. At the 3rd stage of teaching the German language at the university, it is necessary to plan the repetition of the model of complex words, as well as of the micromodels of the most common suffix and prefix words. 3. When teaching the word-formation minimum and word-formation analysis, one should rely on models and go from model to micromodels, selecting for each model the most productive micromodels for the studied terminology and their constituent affixes. Revealing the semantics of prefixes and suffixes, and their polysemy within a particular model, is undoubtedly useful for teaching word formation. 4. In the German language program for higher educational institutions, due attention is paid to teaching the word-formation minimum, which for the first time has been singled out in a special subsection along with the lexical and grammatical minimum. 5. At the 1st and 2nd stages, the trainees should master the basic derivational morphemes and models of derivative and compound words. 
For the 3rd stage of training, the program quite correctly recommends teaching those word-formation morphemes and models that are characteristic of the sublanguage of the studied specialty. 6. From our point of view, the word-formation minimum should be understood as the minimum of basic word-formation models, micromodels and their constituent word-formation elements, based on the knowledge of which the student will be able to carry out word-formation analysis of derivative words in order to derive their meaning. 7. When organizing teaching of the word-formation minimum and of word-formation analysis according to the principles "from general to particular" and "from model to micromodel", the trainees should first know that the entire vocabulary of the modern German language is built according to 13 basic models of lexical foundations. Each of the 13 includes micromodels, the number of which depends on the number of affixes and on the quality of the generating stems. 8. Based on the tables we offer, it is possible to rationalize the process of teaching the word-formation minimum and word-formation analysis when introducing lexical units, as well as when teaching the understanding of the semantics of derived words in the process of reading German literature. 9. When working on teaching the word-formation minimum and word-formation analysis, and explaining the peculiarities of the structure of words and term-words of the modern German language, it is advisable to use the appropriate table. If, on the other hand, training in the word-formation minimum and word-formation analysis is organized from the particular to the general, from micromodel to model, then the student also needs knowledge of the basic models in order to better navigate the laws of word formation of the foreign language being studied. 10. 
The advantage of the word-formation minimum proposed in this article, in terms of parts of speech and stages of learning, is, in our opinion, as follows: • The word-formation minimum proposed in this article is based on lists of models that were compiled from scientifically grounded lists of micromodels. • The basis of the word-formation minimum is its more adequate definition, the principle of selection and teaching of the word-formation minimum and of word-formation analysis, as well as the top-down approach, from models to micromodels. • Lists of models and micromodels are given visually, schematically, sequentially, by stages of learning and by parts of speech. All of the above suggests that the proposed word-formation minimum will contribute to the rationalization of the explanation and assimilation of the lexical minimum for mastering word-formation analysis in order to recognize the structure of derived words and understand their meaning.
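The stage-wise reduction described earlier (16/20/20 micromodels at the 1st stage collapsing to 4/3/3 basic models) can be tabulated programmatically. The counts are taken from the text; the data structure and totals are our own illustrative sketch:

```python
# 1st-stage counts from the programme: micromodels to be mastered, and the
# basic models they reduce to (per the text's "from model to micromodel").
stage1 = {
    # part of speech: (micromodels, basic models)
    "nouns": (16, 4),
    "verbs": (20, 3),
    "adjectives/adverbs": (20, 3),
}

total_micromodels = sum(m for m, _ in stage1.values())
total_models = sum(b for _, b in stage1.values())
print(total_micromodels, total_models)  # 56 10
```

So a 1st-stage student faces 56 micromodels but only 10 underlying models, which is the pedagogical point of teaching "from model to micromodel".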
v3-fos-license
2019-05-06T22:29:15.243Z
2019-05-06T00:00:00.000
146118156
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://clinicalepigeneticsjournal.biomedcentral.com/track/pdf/10.1186/s13148-019-0662-9", "pdf_hash": "b70b488f2becb3caec26965c379a76b11fcf2ecf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2506", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "b70b488f2becb3caec26965c379a76b11fcf2ecf", "year": 2019 }
pes2o/s2orc
Methylation at cg05575921 of a smoking-related gene (AHRR) in non-smoking Taiwanese adults residing in areas with different PM2.5 concentrations Background DNA methylation is associated with cancer, metabolic, neurological, and autoimmune disorders. Hypomethylation of the aryl hydrocarbon receptor repressor (AHRR), especially at cg05575921, is associated with smoking and lung cancer. Studies on the association between AHRR methylation at cg05575921 and sources of polycyclic aromatic hydrocarbons (PAHs) other than smoking are limited. The aim of our study was to assess the pattern of blood DNA methylation at cg05575921 in non-smoking Taiwanese adults living in areas with different PM2.5 levels. Methods Data on blood DNA methylation, smoking, and residence were retrieved from the Taiwan Biobank dataset (2008–2015). Current and former smokers, as well as individuals with incomplete information, were excluded from the current study. The final analysis included 708 participants (279 men and 429 women) aged 30–70 years. PM2.5 levels have been shown to increase as one moves from northern through central towards southern Taiwan. Based on this trend, the study areas were categorized into northern, north-central, central, and southern regions. Results Living in PM2.5 areas was associated with lower methylation levels: compared with the northern area (reference area), living in north-central, central, and southern areas was associated with lower methylation levels at cg05575921. However, only the methylation differences in those living in central and southern areas were significant (β = − 0.01003, P = 0.009 and β = − 0.01480, P < 0.001, respectively). Even though the methylation difference in those living in the north-central area was not statistically significant, the test for linear trend was significant (P < 0.001). When PM2.5 was included in the regression model, a unit increase in PM2.5 was associated with 0.00115 (P < 0.001) lower cg05575921 methylation levels. 
Conclusion Living in PM2.5 areas was inversely associated with blood AHRR methylation levels at cg05575921. The methylation levels were lowest in participants residing in southern followed by central and north-central areas. Moreover, when PM2.5 was included in the regression model, it was inversely associated with methylation levels at cg05575921. Blood methylation at cg05575921 (AHRR) in non-smokers might indicate different exposures to PM2.5 and lung cancer which is a PM2.5-related disease. Electronic supplementary material The online version of this article (10.1186/s13148-019-0662-9) contains supplementary material, which is available to authorized users. Background DNA methylation, an epigenetic process characterized by the addition of a methyl group to a DNA molecule influences the expression of genes [1,2]. It is associated with several health conditions including cancer, metabolic, neurological, psychiatric, and autoimmune disorders [1][2][3][4][5]. A greater portion of DNA methylation in humans takes place in regions called CG or CpG sites, where a cytosine precedes a guanine nucleotide [1,2]. Due to its epigenetic nature, DNA methylation is associated with several environmental factors including cigarette smoking, exercise, and alcohol drinking, among others [6,7]. Cigarette smoke and PM 2.5 are both inhalable carcinogenic factors composed of a complex mixture of chemicals, one of which is polycyclic aromatic hydrocarbons (PAHs) [11,[17][18][19]. PM-related PAHs pose most of the health problems in humans [20]. Exposure to PAHs is capable of activating the aryl hydrocarbon receptor (AHR) [17,21] and is thought to be responsible for variations in smoking-related AHRR methylation [9]. Even though AHR activation is associated with particulate matter [22,23], research on the association between methylation at cg05575921 (AHRR) and PM 2.5 is limited. 
In a cross-sectional study, AHRR (cg05575921) hypomethylation in non-smokers was associated with exposure to second-hand smoke (SHS) but not PM 2.5 [24]. Moreover, in a recent review on DNA methylation and environmental factors, several studies were listed to have demonstrated associations between AHRR hypomethylation and tobacco smoke. However, none was listed to have demonstrated an association between AHRR hypomethylation and exposure to air pollution or PAHs [25]. In a study conducted in Taiwan, PM 2.5 was used as a proxy marker for PAHs [20]. PM 2.5 concentrations in northern Taiwan are lower than in central and southern areas [26][27][28][29]. Because PAHs are present in both PM 2.5 and cigarette smoke, AHRR (cg05575921) methylation in smokers might be comparable with that in non-smokers exposed to PM 2.5 . Therefore, we used an epigenetic approach to investigate the pattern of blood methylation at cg05575921 in non-smoking Taiwanese adults residing in areas with different PM 2.5 concentrations. Table 1 shows the general characteristics of study participants. There were 708 participants comprising 279 men (mean age = 49.42 ± 11.76 years) and 429 women. Table 3 shows the association between living in PM 2.5 areas and AHRR (cg05575921) methylation. With the northern area as the reference, residing in north-central, central, and southern areas was associated with lower blood methylation levels at cg05575921. When SHS was included in the analysis, the regression coefficients (β) were − 0.00274 (P = 0.503), − 0.01003 (P = 0.009), and − 0.01480 (P < 0.001), respectively (Table 3, model 1). That is, the blood methylation levels in participants residing in north-central, central, and southern areas were lower when compared to those in the northern area. The differences were − 0.00274, − 0.01003, and − 0.01480, respectively. 
After SHS was excluded from the analysis, the regression coefficients (β) were − 0.00028 (P = 0.947), − 0.01069 (P = 0.009), and − 0.01487 (P < 0.001), respectively (Table 3, model 2). Even though methylation levels in participants who lived in north-central areas were not statistically significant, the test for linear trend was statistically significant (P trend < 0.001) in both models ( Table 3, models 1 and 2). The mean PM 2.5 concentration from 2006-2011 was significantly associated with lower blood AHRR methylation levels at cg05575921 (Table 4). A unit increase in PM 2.5 concentration was associated with 0.00115 (P < 0.001) lower methylation when SHS was included in the analysis (Table 4, model 1). Similarly, a unit increase in PM 2.5 concentration was associated with 0.00124 (P < 0.001) lower methylation after SHS was excluded from the analysis (Table 4, model 2). Spearman analysis showed a significant negative correlation (β = − 0.78329; P < 0.001) between PM 2.5 concentration (μg/m 3 ) and mean methylation levels (Additional file 1). The associations between PM 2.5 and 176 sites in the AHRR promoter region are shown in Additional file 2. In addition to cg05575921, some other sites that were significantly associated with PM 2.5 include cg26703534 (β = − 0.00127; P < 0.001), cg25648203 (β = − 0.00078; P < 0.001), and cg21161138 (β = − 0.00046; P = 0.007). Discussion Studies on the association between AHRR methylation and sources of polycyclic aromatic hydrocarbon (PAH) other than smoking are limited. To our knowledge, the current study is the first to assess blood AHRR methylation in non-smokers residing in areas with different concentrations of PM 2.5 . In general, PM 2.5 was inversely associated with blood AHRR (cg05575921) methylation. Compared with the northern area, living in north-central, central, and southern areas was associated with lower blood AHRR methylation levels at cg05575921. 
Living in areas with higher PM 2.5 concentrations has been associated with greater exposure to PAH [20]. Particulate matter-related PAHs have been reported as potential activators of AHR [17]. PM 2.5 and cigarette smoke both contain PAHs which can induce AHRR methylation [11]. Moreover, they are both inhalable and carcinogenic [19]. Several studies have explored the association between PM 2.5 and DNA methylation in blood [23,30] and placenta [31]. Nonetheless, emphasis has not been laid on blood AHRR methylation in non-smokers. In light of this, we investigated the association between living in PM 2.5 areas and AHRR methylation at cg05575921 in non-smokers. We adjusted for exposure to SHS since significant inverse associations have been found between methylation at cg05575921 and exposure to SHS in non-smokers [24]. In our study, it was hypothesized that blood methylation patterns at cg05575921 (AHRR) in non-smokers residing in areas with different PM 2.5 concentrations might be similar to those observed based on smoking status. As expected, the results were similar to those from several studies that were conducted among non-, former, and current smokers [4,5,[13][14][15][16]. That is, blood methylation levels at cg05575921 in non-smokers residing in Southern Taiwan were lowest followed by Central and Northern Taiwan with significant linear trends. The regression coefficients were − 0.00028, − 0.01069, and − 0.01487 for north-central, central, and southern areas, respectively. Previous studies have reported regression coefficients of 0.83, 0.79, and 0.59 for never, former, and current smokers [5] and 0.878, 0.829, and 0.772 for non-, light, and heavy smokers, respectively [16]. Moreover, the methylation extent was 64, 60, and 50 % among never, former, and current smokers [4]. PM 2.5 levels are higher in Southern and Central compared to Northern Taiwan [26][27][28][29]. There are many heavy industries (e.g., petrochemical plants) in these areas [27,28,40]. 
Emissions from these industries are believed to be one of the main sources of PM 2.5 and PM 2.5 -bound PAHs [41][42][43]. Higher concentrations of PM 2.5 -bound PAHs have also been reported in the industrial cities of Christchurch, New Zealand [43,44], and Guadalajara, Mexico [43,45]. Some of the merits of our study include (1) the relatively larger sample size, (2) the adjustment for exposure to SHS to avoid its confounding effect, and (3) stratification of participants into four areas known for different PM 2.5 levels. However, the study is limited in that DNA methylation was determined using the Illumina Infinium MethylationEPIC BeadChip which covers 850,000 methylation sites. Even though this microarray has been validated as a very reliable genomic platform for determining DNA methylation patterns in the human genome [46], validation experiments were not performed to confirm our findings. Therefore, future investigations to support our results are recommended. Moreover, functional correlation between cg05575921 methylation and AHRR mRNA gene expression was not evaluated in this study. Furthermore, the actual concentrations of PM 2.5 in individuals could not be determined since there are no validated tools for individual exposure estimates. Air quality indices from nearby monitoring stations are usually used for air pollution estimation [47]. In the current study, the number of monitoring stations corresponding to the participants' residence was relatively small. This is because some counties involved in the study do not have monitoring stations; hence, PM 2.5 estimates were not available for participants living in such areas. However, we think our results are plausible because variations in PM 2.5 concentration in various areas in Taiwan are well known [26][27][28][29]. 
Moreover, the inclusion of mean PM 2.5 concentration from the study areas (2006-2011) into our regression model showed significant inverse associations between PM 2.5 and blood methylation at cg05575921 (AHRR). The possibility that other environmental factors also accounted for this association cannot be completely ruled out. However, the emphasis on PM 2.5 is because it contains PAHs which are related to AHRR. Exposure to PAHs is believed to cause smoking-related AHRR methylation [9]. Moreover, PM 2.5 is a serious problem in Taiwan [20]. Conclusion In conclusion, living in PM 2.5 areas was inversely associated with blood AHRR methylation levels at cg05575921. Compared with Northern Taiwan (which has the lowest PM 2.5 levels), living in North-Central, Central, and Southern Taiwan was associated with lower blood AHRR methylation levels at cg05575921. The methylation levels were lowest in those residing in the southern area (which has the highest PM 2.5 levels), followed by central and north-central areas. Moreover, PM 2.5 was inversely associated with methylation levels at cg05575921 when it was included in the regression model. AHRR (cg05575921) methylation might serve as an indicator of differential exposure to PM 2.5 and, by extension, the risk of lung cancer, a PM 2.5 -related disease. Further investigations to confirm these findings are recommended. Data source Data used in the current study were obtained from the Taiwan Biobank which was founded in 2005. The biobank aims at undertaking large-scale cohort and case-control studies through the combination of genetic and medical information [48,49]. The information collected is currently serving as one of the cornerstones of medical research in Taiwan. It is hoped that the identification of disease risk factors and the underlying mechanisms would lead to the development of better treatments and prevention strategies, hence, reduced medical costs [48,49]. 
As a result, the health of the population would probably be promoted and improved. Information about recruitment in the Taiwan Biobank project is obtained through brochures, posters, media, and websites. Interested volunteers are asked to provide their contact information. These volunteers are contacted to confirm their willingness to participate, and those who fulfill the recruitment requirements (strictly Taiwanese aged 30-70 years with no personal history of cancer) are assigned to specific recruitment centers. Currently, the biobank comprises 29 recruitment centers with each city or county having at least 1 center [49]. Before data are collected, all the participants sign a letter of consent. Data collection Data were collected by trained medical researchers through questionnaires (e.g., residence, sex, age, smoking, alcohol drinking, exercise habits, and exposure to SHS), physical examination (e.g., weight and height), and blood examination (e.g., DNA methylation). Participants were considered as non-smokers if they reported to have never smoked or have not continuously smoked for at least 6 months. Former smokers were those who reported to have continuously smoked for at least 6 months but were currently not smoking. Current smokers were those who reported to have continuously smoked for at least 6 months and were currently smoking. Participants were considered non-drinkers if they did not drink or drank less than 150 cc of alcohol per week continuously for 6 months. Former drinkers were those who abstained from alcohol for more than 6 months while current drinkers were those who drank at least 150 cc of alcohol per week continuously for 6 months. Exercise habits were categorized under "yes" if participants reported a habit of exercising at least three times per week (each exercise time > 30 min) and under "no" if they did not exercise at least three times per week. 
Individuals considered to be exposed to SHS were those who reported being exposed to SHS for at least 5 minutes per hour. BMI was derived from weight and height as BMI = weight (kg)/height (m 2 ). DNA was extracted from whole blood using an automated extraction machine, the chemagic™ Prime™ instrument. DNA length was checked using Fragment Analyzer (Agilent). Its quality was assessed using the ratio of absorbance at 260/280 with the purity index set as 1.6-2.0. The samples that passed quality control were stored at − 80°C for long-term use. DNA methylation was determined using the Illumina Infinium MethylationEPIC BeadChip which has been previously described [50][51][52]. In brief, DNA samples were treated with sodium bisulfite conversion using the EZ DNA Methylation Kit (Zymo Research, CA, USA). Data quality control was performed based on the Illumina® GenomeStudio® Methylation Module v1.8 [53]. Samples with P value > 0.05 or bead count < 3 were removed. Dye bias across batches was adjusted by normalization, and background correction was performed. Outliers were removed using the median absolute deviation method. The methylation level at each CpG site was determined using β values. The β values were derived using the formula β = M/(M + U), where M = methylated intensity and U = unmethylated intensity. Study participants Data used in the current study were retrieved from the Taiwan Biobank dataset (2008–2015). A total of 1142 individuals aged between 30 and 70 years with no personal history of cancer were initially enrolled. We excluded 301 current and former smokers, as well as 133 individuals who did not live in the study area (Fig. 1) from the study. The final sample included 708 non-smoking participants consisting of 279 men and 429 women. 
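The two formulas above (BMI and the methylation β value) are simple enough to express directly. A minimal Python sketch for illustration; the function names and example values are mine, not from the study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def beta_value(methylated: float, unmethylated: float) -> float:
    """Methylation beta value: beta = M / (M + U), bounded in [0, 1]."""
    return methylated / (methylated + unmethylated)

# Illustrative values only
print(round(bmi(70.0, 1.75), 2))           # 22.86
print(round(beta_value(300.0, 700.0), 2))  # 0.3
```

A β value near 0 indicates an unmethylated site and a value near 1 a fully methylated site, which is why the reported effects (on the order of 0.01) are small shifts on this 0–1 scale.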
The study area was categorized into Northern (Taipei and New Taipei Cities), North-Central (Taoyuan and Hsinchu Cities, Hsinchu, Miaoli, and Taoyuan Counties), Central (Taichung City, Nantou, Changhua, and Yunlin Counties), and Southern Taiwan (Chiayi County, Tainan, and Chiayi Cities). These areas are well known for varying PM 2.5 levels. That is, PM 2.5 levels increase as one moves from the north through the center towards the south of Taiwan [26][27][28][29]. Overall, there were 244, 121, 134, and 209 participants residing in the northern, north-central, central, and southern areas, respectively. The number of monitoring stations was 18 for the northern, 12 for the north-central, 16 for the central, and 7 for the southern area. In this study, we used the average PM 2.5 levels (2006-2011) of the monitors of each area (northern, north-central, central, and southern areas) to estimate the PM 2.5 exposure of participants. The location of participants and monitoring stations is shown in Fig. 1. The current study was approved by the Chung Shan Medical University Institutional Review Board (CS2-17070). Statistical analysis Cell-type heterogeneity was corrected using the Reference-Free Adjustment for Cell-Type composition (ReFACTor) approach with the R software described by Rahmani and colleagues [54]. The association between blood methylation levels at cg05575921 and living in PM 2.5 areas was determined using multiple linear regression analysis. Adjustments were made for sex, age, alcohol drinking, exercise, BMI, exposure to SHS, and cell type composition [54]. The inclusion of sex, age, alcohol drinking, exercise, and BMI as confounders is because they are common factors that have been associated with health. Moreover, DNA methylation predictors have been shown to correlate with these factors [55]. 
For instance, in previous studies, older age was significantly associated with lower cg05575921 methylation [24] and increased PM 2.5 -related mortality [56]. The male sex was significantly associated with lower cg05575921 methylation [24] while the female sex was significantly associated with an increased risk of PM 2.5 -related mortality [56]. The regression analysis was performed with the SAS 9.3 software (SAS Institute, Cary, NC).
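A regression of the kind described, methylation on exposure plus confounders, can be sketched with ordinary least squares. The data below are synthetic stand-ins (not Taiwan Biobank data), with a built-in inverse PM2.5 effect chosen only to mimic the reported direction of association:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic covariates standing in for the study's adjustments
pm25 = rng.uniform(15.0, 40.0, n)   # mean PM2.5 exposure, ug/m^3
age = rng.uniform(30.0, 70.0, n)
male = rng.integers(0, 2, n).astype(float)

# Simulate cg05575921 beta values with an inverse PM2.5 effect
meth = (0.90 - 0.00115 * pm25 - 0.0002 * age - 0.005 * male
        + rng.normal(0.0, 0.01, n))

# Design matrix with an intercept column; fit by least squares
X = np.column_stack([np.ones(n), pm25, age, male])
coef, *_ = np.linalg.lstsq(X, meth, rcond=None)
print(coef[1])  # recovered PM2.5 slope, close to the simulated -0.00115
```

The fitted slope on PM2.5 is interpreted exactly as in the paper: the change in methylation β value per unit increase in PM2.5, holding the other covariates fixed.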
Seeded Growth of Type-II Na 24 Si 136 Clathrate Single Crystals : Type-II Na 24 Si 136 clathrate octahedral single crystals surrounded by {111} facets were grown by evaporating Na from a molten mixture of Na 4 Si 4 and Na 9 Sn 4 at 823 K for 12 h. One of the obtained single crystals was used as a seed for the following single crystal growth of the type-II clathrate using the same method. The single crystal grown on the seed maintained the octahedral shape. The weight of the crystal grown with the seed was increased from 0.6 to 30.4 mg by repeating the seeded growth and was proportional to the surface area of the seed crystal. Introduction Clathrate compounds containing Na in Si cages are called Na-Si clathrates [1], and two types, type-I (Na 8 Si 46 ) and type-II (Na x Si 136 , 0 < x ≤ 24), have been reported [2][3][4][5]. The electrical property of the type-II clathrate changes from metallic to semiconducting according to the amount of Na encapsulated in the cages [6]. Mott reported that the metal-insulator transition took place with decreasing x of the Na x Si 136 clathrates [7]. Since a bandgap of 1.9 eV, which was shown for the Na-free type-II clathrate, Si 136 , by experiments and first-principles calculation, is wider than that of 1.2 eV for diamond-type Si, it has attracted attention as a material for the next-generation solar cells [8][9][10]. Powders and polycrystalline samples of the Na-Si clathrates have been conventionally synthesized by thermal decomposition of a solid-phase Zintl compound Na 4 Si 4 [1][2][3][4]. It is difficult to prepare the bulk samples of the Na-Si clathrates by powder sintering because the clathrates have strong covalent Si-Si bonding and decompose into diamond-type Si (d-Si) at high temperature. Crystal growth of the type-II Na-Si clathrate was performed by thermal decomposition under pressure with spark plasma sintering (SPS) equipment [11]. 
The type-II single crystals of about 300 µm were also produced by another thermal decomposition method, in which Na was slowly removed from Na 4 Si 4 heated at 938 K in a closed space surrounded with NaCl and graphite [12,13]. Recently, our research group achieved a solution growth of the Na-Si clathrate single crystals using a Na-Sn flux and obtained millimeter-sized single crystals [14,15]. In previous studies [15], single crystals of the type-I Na-Si clathrate with a size up to about 5 mm were prepared from a Na-Si-Sn solution at 773 K. Single crystals of type-II clathrate with {111} facets of about 2 mm on one side were grown at 873 K. Although the single crystals were formed by heating, they decomposed into d-Si during prolonged heating [15]. A more efficient crystal growth process without decomposition was necessary to obtain further large single crystals, furthering their application in devices. In previous studies on the crystal growth of the Na-Si clathrates, Na, Na 4 Si 4 , and Na 15 Sn 4 were used as the starting materials [14,15]. A Na-Si-Sn solution was formed by dissolving Na 4 Si 4 into the melt of Na 15 Sn 4 . The Na-Si clathrates were crystallized by evaporating Na from this solution. The formula for the formation of clathrates could be expressed as follows: 2Na 4 Si 4 + Na 15 Sn 4 + Na → 1/17Na 24 Si 136 (4/23Na 8 Si 46 ) + Na 9 Sn 4 + 231/17Na (313/23Na)↑ Since the Na-Si clathrates are also dissolved into the melt of Na 15 Sn 4 , the seeded crystal growth could not be applied. However, as suggested in the reaction formula, the Na-Si clathrates can coexist with the melt of Na 9 Sn 4 because the clathrates were obtained with Na 9 Sn 4 after heating. In this study, single crystals of the Na-Si clathrates were prepared using a Na 9 Sn 4 melt as a flux, and the seeded crystal growth was attempted to grow a large Na-Si clathrate single crystal. Experimental Na 4 Si 4 and Na 9 Sn 4 were used as starting materials. 
To prepare Na 4 Si 4 , the pieces of Na metal (Nippon Soda Ltd., Tokyo, Japan, purity 99.95%) and Si powder (Kojundo Chemical Laboratory Co., Saitama, Japan, purity 99.99%) were weighted so that the molar ratio was Na:Si = 1:1, and put in a BN crucible (Showa Denko KK, Tokyo, Japan, purity 99.5%, outer diameter Φ 8.5 mm, inner diameter Φ 6.5 mm, depth 18 mm) in an Ar gasfilled glove box. These samples were sealed in a reaction container made of stainless steel (SUS316: outer diameter Φ 12.7 mm, inner diameter Φ 10.7 mm, height 80 mm) with Ar gas. For the preparation of Na 9 Sn 4 , a BN crucible containing Na and granular Sn (FUJIFILM Wako Pure Chemical Corporation, Ltd., Osaka, Japan, purity 4N) (molar ratio, Na:Sn = 9:4) were sealed in another SUS container, and heated at 973 K for 12 h. After heating, both containers were opened in the glove box, and the ingots of Na 4 Si 4 and Na 9 Sn 4 were taken out from the BN crucibles. Each ingot was cut into pieces 1-2 mm in size with a nipper. Na 4 Si 4 and Na 9 Sn 4 in a molar ratio of 2:1 (Na 4 Si 4 ; 0.500 g, Na 9 Sn 4 ; 0.835 g) was put in a BN crucible (Showa Denko KK, Tokyo, Japan, purity 99.5%, outer diameter Φ 20 mm, inner diameter Φ 18 mm, depth 29 mm), and sealed in a SUS316 container (outer diameter Φ 45 mm, inner diameter Φ 40 mm, 200 mm height) for crystal formation. The schematic diagrams of the containers are shown in the supplementary data and the details of the procedures were described in the previous studies [14][15][16]. The crucible was heated at 823 and 873 K for 12 and 3 h, respectively. After heating, the crucible was taken from the container in the glove box, and the sample weight was measured to evaluate the evaporated Na. Subsequently, the sample was transferred to the air and washed with 2-propanol, ethanol, and pure water in that order (alcohol treatment). 
Sn remaining after this treatment was dissolved in dilute nitric acid water solution (nitric acid concentration of 10% or less) (acid treatment). One of the obtained single crystals was picked up as a seed crystal, and it was placed in a BN crucible (outer diameter Φ 8.5 mm) with Na 4 Si 4 (0.100 g) and Na 9 Sn 4 (0.167 g) in a molar ratio of 2:1, where the total weight of the sample and the size of the crucible were reduced to easily distinguish the crystal grown on the seed from other crystals newly formed by spontaneous nucleation. The crucible containing the seed crystal was heated at 823 K for 12 h. The seeded-grown crystal was taken out by means of the alcohol and acid treatments and used as the seed for the next crystal growth. The seeded growth was repeated three times under the same conditions. The morphology of the obtained crystals was observed with an optical microscope (Leica, Tokyo, Japan, WILD-M3Z) and a scanning electron microscope (SEM; JEOL, Tokyo, Japan, JSM-6610A). The crystalline phases of the products were identified by powder X-ray diffraction (XRD) using a powder diffractometer (Rigaku, Tokyo, Japan, RINT2200, CuKα, 40 kV and 30 mA). To confirm the quality and crystal orientation of the single crystal, an XRD pattern was taken with a Laue camera and an X-ray generator (Rigaku, Tokyo, Japan, RASCO-II BLA, W target, 30 kV, and 30 mA). The composition of the crystal was determined with an energy dispersive X-ray spectrometer (EDS; JEOL, Tokyo, Japan, JED-2300 Series) which was attached to the SEM. Results and Discussion After heating Na 4 Si 4 and Na 9 Sn 4 at 823 K for 12 h, a dome-shaped ingot (diameter ~15 mm, height ~5 mm) was obtained. While the starting sample before heating was in the form of grains of about 1 to 2 mm, the solidified homogeneous sample including single crystals was obtained. This result suggests that Na 4 Si 4 with a melting point (m.p.) of 1071 K [17] could be dissolved in the melt of Na 9 Sn 4 (m.p. 751 K [18]) at 823 K. 
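The Na inventory of the 1.335 g charge (0.500 g Na 4 Si 4 + 0.835 g Na 9 Sn 4 ) can be cross-checked against the reported weight loss with a few lines of arithmetic. This is my own back-of-the-envelope calculation using standard atomic weights, and it reproduces the ~34% Na evaporation figure:

```python
NA, SI, SN = 22.990, 28.085, 118.710  # standard atomic weights, g/mol

# Mass of Na contained in each starting compound of the 1.335 g charge
na_in_na4si4 = 0.500 * (4 * NA) / (4 * NA + 4 * SI)  # Na fraction of Na4Si4
na_in_na9sn4 = 0.835 * (9 * NA) / (9 * NA + 4 * SN)  # Na fraction of Na9Sn4
na_total = na_in_na4si4 + na_in_na9sn4               # ~0.48 g Na charged

na_lost = 1.335 - 1.171  # sample weight loss on heating, g
print(round(100 * na_lost / na_total))  # ~34 (% of charged Na evaporated)
```

Attributing the entire 0.164 g weight loss to Na (the only volatile component under these conditions) gives 34% of the charged Na, consistent with the value quoted in the text.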
The weight of the sample was reduced from 1.335 g to 1.171 g by heating, which indicated that approximately 34% of Na in the starting sample was evaporated. After heating and alcohol and acid treatments, the granular crystals and octahedral crystals were obtained as shown in Figure 1a,b, respectively. The powder XRD pattern revealed that the granular and octahedral crystals were type-I and type-II clathrates, respectively (Supplementary Materials). The total weight of the type-I and type-II clathrate crystals was 204 and 34 mg, respectively. In a previous study [15], the {110} and {111} facets were dominant in the crystals of the type-I and type-II clathrates grown at 823 K, respectively. The Na and Si contents analyzed by EDS were 14.7 (2) and 85.3 (2) at.%, respectively, showing that the obtained type-II single crystals were fully Na-occupied type-II clathrate, Na 24 Si 136 . Type-I and -II crystals were also obtained by heating at 873 K for 3 h. The size of the type-II single crystals was slightly larger than that of the crystals prepared at 823 K. However, the crystal surface was uneven and cracked, which might be due to an increased growth rate at the higher temperature. A part of the crystals was decomposed into d-Si. An octahedral single crystal of type-II clathrate with no surface irregularities or cracks was used as a seed crystal (Figure 2a). It was placed at the bottom of the BN crucible, and Na 4 Si 4 and Na 9 Sn 4 grains were added. The crucible was heated at 823 K for 12 h in the container filled with Ar. The sample obtained after heating was ingot-like, and no crystals were exposed above the surface. After alcohol and acid treatments, the seeded grown octahedral single crystal with a size over 1 mm and other small crystals were separated. The SEM photo of the seeded grown single crystal is shown in Figure 2b. This octahedral crystal was used as the seed crystal for the second seeded growth by the same procedure as the first one. The single crystal isotropically grown on the seed, maintaining the octahedral morphology, was obtained as shown in Figure 2c. The SEM photo of the octahedral single crystal with a size of about 3 mm obtained by the third growth is shown in Figure 2d. The equilateral triangle facets of the octahedral crystal were confirmed as being the {111} plane of the cubic type-II clathrate by Laue XRD (Supplementary Materials). The weight of the crystals obtained at each heating time is plotted in Figure 3. The weight of the seed crystal prepared by heating the Na-Si-Sn solution at 823 K for 12 h was 1.9 mg. The weight of the crystal increased to 6.3 mg after the first seeded growth (the total heating time was 24 h). By repeating the seeded crystal growth while increasing the total heating time to 36 and 48 h, the weight of the crystal was changed to 16.0 and 30.4 mg, respectively. 
The weight of the seeded-grown crystals was in proportion to the surface area of the seed crystal, as shown in the inset graph of Figure 3. The surface area was calculated with the length of one side of the octahedron. The result shows that the growth rate per unit area of the crystal was constant at each seeded growth. Since the crystals grew isotropically while maintaining the orientation of the single crystal, the crystal growth velocities along <100> and <111> were evaluated. Crystal growth along the <100> direction was considered as the distance from the center position of the octahedron to the apex of the octahedron. When this distance was plotted against the crystal growth time, it tended to increase linearly. Thus, when the crystal growth velocity along the <100> direction was calculated from this straight line, it was 0.0124 ± 0.0006 µm/s. Similarly, the crystal growth velocity along the <111> direction was 0.0072 ± 0.0004 µm/s as a result of calculating from the distance from the center of the octahedron to the center of gravity of the {111} plane. The crystal growth velocity of <111> was slower than that of <100>. From this result, it was clarified that {111}, which has slow crystal growth, is exposed on the surface of the single crystal. 
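As a geometric cross-check (my own arithmetic, not from the paper): for a regular octahedron, the center-to-face distance is 1/√3 times the center-to-apex distance, so uniform advance of the {111} facets predicts a velocity ratio v<111>/v<100> = 1/√3 ≈ 0.577, which agrees with the reported velocities within their stated uncertainties:

```python
import math

v100 = 0.0124  # reported growth velocity along <100>, um/s
v111 = 0.0072  # reported growth velocity along <111>, um/s

measured = v111 / v100          # ratio from the reported velocities, ~0.58
geometric = 1 / math.sqrt(3)    # octahedron center-to-face / center-to-apex, ~0.577
print(round(measured, 3), round(geometric, 3))
```

The close agreement supports the interpretation that the crystal grows by uniform advance of its bounding facets, with the slow-growing {111} planes remaining exposed on the surface.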
The total weight of the single crystals other than the seeded-grown crystal after the first seeded growth was 41.4 mg (total heating time 24 h). This weight decreased to 33.8 and 27.6 mg as the seeded growth was repeated. Crystal growth occurred selectively on the seed crystals, and growth via spontaneous nucleation was suppressed by increasing the size of the seed crystal.

Conclusions

Crystal growth of Na-Si clathrates and seeded growth of the type-II clathrate Na24Si136 were performed by heating Na4Si4 and Na9Sn4 at 823 K for 12 h. Octahedral single crystals surrounded by {111} facets with a size of 3 mm were obtained on a seed crystal. By repeating the seeded growth, the weight of the crystal increased in proportion to the surface area of the seed crystal. These results pave the way for the production of Na-Si clathrate single crystals large enough for application in new devices and solar cells, using efficient crystal growth methods with seed crystals, such as crystal pulling and top-seeded methods.
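As a consistency check (my own, not from the paper): for a regular octahedron bounded by {111} facets, the center-to-apex distance (along <100>) and the center-to-face-centroid distance (along <111>) stand in a fixed ratio of 1/√3, so shape-preserving growth requires the velocity ratio v<111>/v<100> to equal about 0.577. The velocities reported above satisfy this closely:

```python
import math

# Growth velocities reported in the text (micrometers per second)
v_100 = 0.0124
v_111 = 0.0072

# For a regular octahedron of edge a: center-to-apex distance = a/sqrt(2),
# center-to-{111}-centroid distance = a/sqrt(6); their ratio is 1/sqrt(3).
measured_ratio = v_111 / v_100       # ~0.581
geometric_ratio = 1 / math.sqrt(3)   # ~0.577

print(round(measured_ratio, 3), round(geometric_ratio, 3))
```

The agreement within about 0.5% is consistent with the observation that the octahedral habit is maintained during seeded growth.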
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cryst11070808/s1, Figure S1: Schematic diagram of containers for (a) the preparation of the seed crystal and (b) the crystal growth of the Na-Si clathrate, Figure S2: Powder X-ray diffraction patterns of (a) type-I and (b) type-II Na-Si clathrate crystals obtained by heating Na4Si4 and Na9Sn4 at 823 K for 12 h, followed by alcohol and acid treatment, Figure S3: Photograph and Laue pattern of a seeded-grown single crystal heated at 823 K for a total heating time of 36 h.
Evaluation of an enzyme-linked immunosorbent assay using purified, deglycosylated histoplasmin for different clinical manifestations of histoplasmosis

Diagnosis of invasive fungal diseases remains problematic, especially in undeveloped countries. We have developed an enzyme-linked immunosorbent assay (ELISA) for the detection of antibodies to Histoplasma capsulatum using metaperiodate-treated purified histoplasmin (ptHMIN). Our ELISA was validated by comparing sera from patients with histoplasmosis, related mycoses, and healthy individuals. The overall test specificity was 96%, with sensitivities of 100% (8/8) in acute disease, 90% (9/10) in chronic disease, 89% (8/9) in disseminated infection in individuals without HIV infection, 86% (12/14) in disseminated disease in the setting of HIV infection, and 100% (3/3) in mediastinal histoplasmosis. These parameters are superior to those obtained with untreated histoplasmin in diagnostic ELISAs. The high specificity, sensitivity, and simplicity of our ELISA support further development of a deglycosylated HMIN ELISA for clinical use and for monitoring the humoral immune response during therapy in patients with chronic and disseminated histoplasmosis.

Introduction

Histoplasmosis, a fungal infection caused by Histoplasma capsulatum, has a worldwide distribution and is one of the most common respiratory mycoses. [1][2][3][4] This fungus is endemic in certain areas of the United States, particularly in states bordering the Ohio River valley and the lower Mississippi River. 5 In Latin America, histoplasmosis is prevalent in Venezuela, Ecuador, Brazil, Paraguay, Uruguay, and Argentina. 4,[6][7][8] In Brazil, positive histoplasmin skin tests occur in as many as 50% of the people living in diverse areas of the country where H. capsulatum is common.
9 The clinical disease generally ranges from mild flu-like illness to progressive, disseminated disease, which manifests mostly in immunocompromised individuals, [8][9][10] such as patients with AIDS or those receiving corticosteroids. Immunocompromised patients with histoplasmosis frequently fail to mount an effective antibody response, and specific antibodies to the disease cannot be routinely detected, which makes diagnosis difficult. 3,[11][12][13][14][15][16] A definitive diagnosis of histoplasmosis requires the isolation of the pathogen in culture, which is inefficient since its growth frequently takes more than three weeks on standard fungal media. Positive results commonly occur only with samples from patients with high fungal burdens, often in the setting of chronic or disseminated disease. 17 Visualization of the organism in infected clinical specimens can also facilitate the diagnosis; however, this methodology has low sensitivity, and structures from other microorganisms can be misidentified as H. capsulatum. 18,19 In the absence of positive cultures, serological techniques such as immunodiffusion (ID), 20-24 complement fixation, 25,26 enzyme immunoassay [27][28][29][30][31] and radioimmunoassay [32][33][34] have been used to provide immunological evidence of H. capsulatum infection. These serological tests, which detect antibodies and/or antigen in clinical fluid specimens (such as serum, urine, and cerebrospinal fluid), offer a rapid alternative for the diagnosis of histoplasmosis. Although the detection of H. capsulatum antigens in clinical samples shows a high degree of cross-reactivity and low sensitivity, 34,35 it is particularly useful when the detection of a patient's antibody is unlikely, such as in disseminated disease in an immunocompromised patient.
Although the data for the MiraVista laboratory assay (MVista® Histoplasma Quantitative Antigen EIA) are excellent for certain presentations of histoplasmosis, 36,37 the assay is performed only in the United States of America and is costly and impractical for use elsewhere. An ELISA has recently been described for the detection of antigenuria in progressive disseminated histoplasmosis, demonstrating a sensitivity of 81% and a specificity of 95%. 38 However, this assay relies on the use of polyclonal antibodies against H. capsulatum as both capture and detection reagents, and batch-to-batch variations may require intensive standardization procedures. We have developed an ELISA for the detection of antibodies in sera from histoplasmosis patients using deglycosylated histoplasmin (ptHMIN). HMIN is a well-characterized antigen obtained from H. capsulatum mycelia. 28,[39][40][41][42] The main components of HMIN to which antibodies can be detected are the C, M, and H antigens. The C antigen is a galactomannan responsible for the cross-reactivity observed with other fungal species, 43 and deglycosylation has been shown to increase the specificity of the assay. 44 The M antigen is a catalase 45,46 and the H antigen is a β-glucosidase. 47 Antibodies against the M and H antigens are highly specific, and their detection is particularly useful in diagnosis. [48][49][50] HMIN is stable at -20°C for long periods (>48 months), and no significant batch-to-batch variations have been detected. We previously found that the ptHMIN ELISA detected antibodies in 92% of serum samples from patients with culture-proven histoplasmosis, with 96% specificity, versus 57% sensitivity and 93% specificity when only purified histoplasmin (pHMIN) was used. 8,31 In the present study, we evaluated and validated the ELISA as a specific and sensitive method for the detection of antibodies in sera from patients with different clinical manifestations of histoplasmosis.
Statistical analyses were carried out to characterize the reactivity profile and determine the reliability of antibody detection in each group. We also attempted to establish a correlation between antibody titer and the clinical form of disease. This methodology was shown to be effective, and further studies are being conducted to evaluate its use in the follow-up of clinical samples.

Antigen preparation

Histoplasmin (HMIN) was produced as previously described. 27 Briefly, cells were centrifuged at 1,050 g for 10 min, and the supernatant was filtered through a 0.45 µm membrane, concentrated 20-fold, and dialysed against PBS (0.01 M, pH 7.2). The presence of the immunodominant H and M antigens in the crude HMIN was determined by ID. HMIN was purified by cation-exchange chromatography on CM Sepharose CL-6B columns (Amersham Biosciences, NJ, USA). Elution was performed using a stepped salt gradient of 0.1-1 M NaCl in 25 mM citrate buffer, pH 3.5. Purified HMIN (pHMIN) was obtained by pooling the fractions rich in H and M glycoproteins, and chemical oxidation of carbohydrate residues was achieved by sodium metaperiodate (NaIO4) treatment to generate the purified and treated HMIN (ptHMIN), according to the methodology described previously. 28,31,51 Antigen concentration was determined by a dye-binding protein assay (Bio-Rad, NY, USA) with respect to a bovine albumin-globulin standard, 52 and aliquots of the final preparation were stored at -20°C.

Subjects

Patients with different clinical manifestations of histoplasmosis were included in this study. The diagnosis was based on the following criteria: i) biological material (including sputum, bronchoalveolar lavage, bone marrow aspirate, biopsies, or cerebrospinal fluid) was submitted for direct examination with 10% KOH and culture on Sabouraud agar. Blood samples were cultured on brain heart infusion (BHI) agar.
Slants were incubated for 4-6 weeks at room temperature until white to brownish filamentous colonies were observed. Microscopic examination of slide cultures revealing septate hyaline hyphae with globose macroconidia and microconidia presumptively identified the isolates as H. capsulatum. Conversion to oval yeast cells with single budding confirmed the isolation of H. capsulatum; ii) tissue staining with H&E and GMS techniques, with histopathological findings revealing organisms consistent with H. capsulatum, associated with clinical and radiological findings suggestive of histoplasmosis, with or without a positive epidemiological history; iii) seropositivity for anti-H. capsulatum antibodies (positive by either immunodiffusion (ID) or Western blot (WB)), associated with clinical and/or radiological findings suggestive of histoplasmosis, with or without a positive epidemiological history. From the cases selected, histoplasmosis patients were classified into five clinical forms as described previously, depending on the severity and similarity of the symptoms. 53 The main clinical features of each group are listed in Table 1. Additional criteria used to classify the patients in each clinical group included travel history, fever, weight loss, and, most importantly, the isolation of fungi from clinical samples. Approval for this study was granted by the Ethics Board, Instituto de Pesquisa Clinica Evandro Chagas, Fundação Oswaldo Cruz, Rio de Janeiro, Brazil, in accordance with the ethical rules and regulations of our institution.

Sample size

Two approaches were used to determine the sample size requirements depending on the test performance. First, we used standard equations based on previously reported results of ELISA sensitivity and specificity.
54 To determine the minimum number of histoplasmosis serum samples, we applied the following formula, using the sensitivity approach:

Nsn = (Zα/2)² × SN × (1 − SN) / W²

A similar procedure was performed to calculate the minimum number of heterologous serum samples according to specificity, using the following formula:

Nsp = (Zα/2)² × SP × (1 − SP) / W²

Formulae specifications: Nsn: number of histoplasmosis patient serum samples based on the sensitivity values (homologous sera). Nsp: number of serum samples based on the specificity values (heterologous sera). Zα/2: the number of standard deviations in half of the two-tailed confidence interval (for the 95% confidence interval, Zα/2 = 1.96 SD). SN: the estimated value for the performed sensitivity (92%). SP: the estimated value for the performed specificity (96%). W: maximum clinically acceptable width of the 95% CI (half of the confidence interval). Using this methodology and based on our recruited patients, a minimum of 29 histoplasmosis sera and 15 heterologous sera were required to evaluate the sensitivity and specificity of the ptHMIN ELISA, respectively. A second approach used in this study was receiver operating characteristic (ROC) curve analysis. 55 ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently of the cost context or the class distribution.

Table 1. Classification of patients with histoplasmosis according to clinical manifestation and diagnostic criteria.
Acute: positive epidemiological history of recent exposure; abrupt onset, fever, headache, non-productive cough, aching or constricting substernal discomfort, pleuritic pain; scattered patchy pulmonary infiltrates and hilar adenopathy on chest radiograph.
Chronic: malaise, fatigue, low-grade fever, mucoid sputum, chest pain, weight loss; reticulonodular and fibrotic lesions associated with cavitation on chest radiograph; and the presence of chronic obstructive pulmonary disease (not obligatory).
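The sample-size arithmetic can be checked numerically. The text does not state the half-width W explicitly; assuming W = 0.10, the standard proportion-based formula with the parameters given (Zα/2 = 1.96, SN = 0.92, SP = 0.96) reproduces the quoted minimums of 29 and 15 sera:

```python
import math

def min_sample_size(p, w, z=1.96):
    """Minimum N for estimating a proportion p (sensitivity or specificity)
    to within half-width w at 95% confidence: N = z^2 * p * (1 - p) / w^2,
    rounded up to a whole sample."""
    return math.ceil(z**2 * p * (1 - p) / w**2)

# W = 0.10 is an assumption (not stated in the text); with it the
# formulas yield the minimums of 29 and 15 quoted above.
n_sensitivity = min_sample_size(0.92, 0.10)  # -> 29
n_specificity = min_sample_size(0.96, 0.10)  # -> 15
```

That the assumed W reproduces both published minimums simultaneously suggests a 10% half-width was indeed the design target.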
To calculate the sample size, a 5% type I error rate was used. The sample sizes were determined using the same probability of 5% to reject or to accept the null hypothesis (type I and type II errors, respectively; Table 3). Serum samples were obtained at the time point closest to when individuals were diagnosed with histoplasmosis. The clinical samples were selected randomly from the Immunodiagnostic Section Serum Bank, Laboratory of Mycology, IPEC (Fiocruz, Rio de Janeiro, Brazil). All specimens were stored at -20°C until tested and then at 4°C while the evaluations were performed.

Immunodiffusion

Serum samples were evaluated for the detection of antibodies by radial double ID as previously described. 56 Briefly, the patients' sera were inactivated for 30 min at 56°C and stored at 4°C until the test was performed. ID slides were made by melting a 1% agarose solution in PBS and applying the solution to form a thin film on a commercially available glass slide (Thermo Fisher Scientific, USA). Wells were made in a hexagonal arrangement with an extra well in the center. Serial two-fold dilutions were prepared to a final dilution of 1:1,024 for all confirmed positive samples. Ten microliters of each dilution were loaded onto individual wells. The crude histoplasmin antigen was diluted to previously standardized concentrations and used to load the central well. The slides were incubated at room temperature for 48 hours to allow formation of precipitins. A reaction was considered positive when the zone-of-equivalence lines gave full identity, related to the presence of the M band only or of both M and H antigens.

The indirect ELISA was performed as previously described. 31 Briefly, ptHMIN was diluted in carbonate/bicarbonate buffer (63 mM; pH 9.6) to a concentration of 0.01 µg/mL, and 100 µL were added to each well of 96-well microtiter plates (Nunc-Immuno Starwell, MaxiSorp Surface).
The plates were incubated for one hour at 37°C, followed by overnight incubation at 4°C. Plates were washed three times with washing buffer (10 mM PBS, 0.1% Tween 20, pH 7.3) and blocked with 200 µL of blocking buffer (5% [w/v] non-fat skimmed milk in washing buffer) for two hours at 37°C. After three washes, serum samples were added to each well in duplicate at a dilution of 1:1,000 in 100 µL of blocking buffer and incubated for one hour at 37°C. Plates were washed and incubated with goat anti-human IgG peroxidase conjugate (Jackson ImmunoResearch; peroxidase-conjugated AffiniPure goat anti-human IgG, Fc fragment specific) diluted 1:16,000 in 100 µL of blocking buffer at 37°C for one hour. After washing, the reaction was developed using 100 µL per well of o-phenylenediamine dihydrochloride [OPD; 0.4 mg/mL in 0.04% (w/v) H2O2] diluted in 0.01 M sodium citrate buffer, pH 5.5. The reaction was stopped by the addition of 50 µL of 3 M HCl. Optical densities (ODs) were measured on a microplate reader (Bio-Rad model 550) at 490 nm, and the results from duplicate wells were averaged. This experiment was carried out twice on different dates under uniform laboratory conditions to avoid internal variations and to ascertain the reproducibility of the assay. For each experiment, two controls were included: secondary antibody alone, to ensure that the reagent was not interacting with the antigen on the plate, and a blank control.

Receiver operating characteristic curve analysis

A receiver operating characteristic (ROC) curve was constructed with GraphPad Prism (version 5.01, GraphPad Software, Inc.) to assess variations in the sensitivity and specificity of antibody detection for the different clinical forms of histoplasmosis. A ROC plot was constructed using 1-specificity plotted against sensitivity values as a dependent variable, and the area under the curve was estimated by non-parametric integration. The optimum cut-off value was obtained from two-graph ROC (TG-ROC) analysis.
57,58 OD values of individual samples were considered as candidate cut-offs, and specificity/sensitivity values were determined for each value, as for a binary classification system whose discrimination threshold is varied. The cut-off value (or any specific value within an interval) showing the highest specificity/sensitivity parameter values was defined as the definitive cut-off. Our optimal cut-off was also defined and confirmed by determining the efficiency and the Youden index to achieve maximum accuracy. 59

Statistical analysis

All data were subjected to statistical analysis with Prism 5.0 (GraphPad Software). P values were calculated by analysis of variance, Kruskal-Wallis test, or Pearson's correlation analysis. P values of <0.05 were considered statistically significant.

Receiver operating characteristic curves and cut-off determination

ROC curve analysis was used to evaluate the overall performance of the ptHMIN ELISA independent of the choice of cut-off value, using a conventional ROC curve (Figure 1). The data used for analysis included patients with all forms of histoplasmosis; there were insufficient numbers for each manifestation to perform analyses individually. The area under the ROC curve for ptHMIN, representing the accuracy of predicting an antibody response in patients, was 0.9491 ± 0.024 (95% CI, 0.9072 to 0.9910) (P<0.0001) (Figure 1A). Next, TG-ROC analysis allowed calculation of the optimal cut-off value: sensitivity and specificity were plotted on the same graph against each sample value, representing each of the candidate decision thresholds, and the intersection point between the curves giving the highest values for both parameters was obtained (Figure 1B). For additional validation, we plotted efficiency versus OD and obtained a similar cut-off (Figure 1C). Hence, the established cut-off value was 0.480. Samples with OD values above the cut-off were classified as positive, and those below were considered negative.
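The Youden-index scan described above can be sketched in a few lines: every observed OD value is tried as a threshold, and the one maximizing J = sensitivity + specificity − 1 is kept. The OD values below are illustrative only, not the study's data:

```python
def youden_cutoff(positive_ods, negative_ods):
    """Return the OD threshold maximizing Youden's J = sens + spec - 1,
    scanning every observed OD value as a candidate cut-off."""
    best_t, best_j = None, -1.0
    for t in sorted(set(positive_ods) | set(negative_ods)):
        sens = sum(od >= t for od in positive_ods) / len(positive_ods)
        spec = sum(od < t for od in negative_ods) / len(negative_ods)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Toy homologous (positive) and heterologous (negative) OD values:
pos = [1.21, 1.64, 0.95, 2.03]
neg = [0.20, 0.31, 0.45, 0.52]
print(youden_cutoff(pos, neg))  # 0.95 separates these toy groups perfectly
```

With real data the maximum-J threshold coincides with the TG-ROC intersection when the sensitivity and specificity curves cross near their joint maximum, which is why the paper reports similar cut-offs from both procedures.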
ELISA evaluation

OD values for all samples were organized according to homologous and heterologous sera (Figure 2). ODs were significantly higher in sera from the 44 patients with histoplasmosis (mean 1.646 ± 1.065, N=44) than in the heterologous samples evaluated (mean 0.3539 ± 0.5179, N=71, P<0.0001) (Figure 2A). The difference in means (1.292 ± 0.1483) and the comparison of parameters indicate that the test effectively discriminates between negative and positive samples. All the statistical evaluations and comparisons among the groups are shown in Table 4. We further examined whether test efficacy varied with the clinical form of disease. Mean OD values were determined for each group: acute (1.136 ± 0.4903), chronic (2.306 ± 1.119), disseminated (1.836 ± 1.297), opportunistic (1.547 ± 0.9911), and mediastinal (0.704 ± 0.05647) (Figure 2B). Statistical parameters such as the intragroup maximum and minimum, median, and standard deviation are shown in Table 5 (statistical parameters and descriptive measurements of the ELISA for detection of antibodies against ptHMIN, comparing the different clinical forms of the disease). The distribution of OD values varied among the clinical groups, but no statistical differences were found between them (P>0.05). Importantly, a correlation was found when the ELISA absorbance value of each sample was compared with immunodiffusion using a Pearson's correlation test based on the logarithmic function of antibody titer (P=0.03, R squared = 0.2197) (Table 5). When the absorbance values of each sample were plotted against the logarithmic value of its ID titer, a linear regression was produced between the average absorbances and ID titers (Figure 3). Antibodies could be detected in all the clinical forms of histoplasmosis. For acute (8 of 8) and mediastinal disease, the ptHMIN ELISA was positive for all patients.
Also, 9 of 10 samples (90%) in chronic histoplasmosis, 8 of 9 samples (89%) in disseminated disease, and 12 of 14 samples (86%) in opportunistic histoplasmosis were positive. The overall sensitivity of the test was 91%. The serological parameters for each group analyzed are shown in Table 6. A higher mean absorbance was observed in the sera of chronic patients, indicating a higher overall amount of antibodies in this clinical group.

Discussion

Endemic mycoses can be challenging to diagnose, and accurate interpretation of laboratory data is important to ensure the most appropriate treatment for patients. Although the definitive diagnosis of histoplasmosis requires the identification of H. capsulatum in infected tissue, serological diagnosis can facilitate and provide rapid identification of the fungus, since recovery of the etiological agent is time consuming. We have developed an ELISA using metaperiodate-treated purified histoplasmin (ptHMIN) that can improve and speed up histoplasmosis diagnosis in nations with limited resources. 31 We evaluated and validated this assay as a specific and sensitive method for the diagnosis of histoplasmosis. The statistical parameters of the ptHMIN ELISA show that it is a powerful tool for the detection of antibodies in all the clinical forms of histoplasmosis. The ELISA data generated a ROC curve area of 0.944 for the prediction of a detectable antibody response. ROC analysis provides a description of the possibility of detecting the illness through a test that is independent of the incidence of the disease. In general, values higher than 0.9 reliably classify a test as acceptable. 57 As can be seen, the curve was situated in the upper left-hand area of the ROC space, indicating high accuracy, meaning that positive decisions are more probable when a case is actually positive than when it is actually negative.
In terms of probability, this means that the probability of the test being positive when histoplasmosis is present is higher than the probability of the test being negative when histoplasmosis is absent. The overall sensitivity of the ptHMIN ELISA was 91% for all clinical forms of histoplasmosis. No statistical variation was observed when the test was repeated with the same samples. Scheel et al. recently published an ELISA intended for use in resource-limited countries, with an overall sensitivity of 81% and a specificity of 95% in detecting H. capsulatum antigen in urine from immunocompromised patients. 38 However, this antigen-capture ELISA requires the production and careful characterization of polyclonal rabbit antibodies due to batch-to-batch variation. In contrast, ptHMIN is readily generated and simply stored. Interestingly, detectable circulating antibody levels varied among the different clinical forms of the disease, and the highest sensitivity was seen in samples from patients with acute and mediastinal forms of histoplasmosis (100% in both cases). For these patients, isolation of H. capsulatum from clinical specimens lacks sensitivity, and antibody usually cannot be detected by ID and CF. 8 Notably, the efficacy of the urine radioimmunoassay is especially poor in acute histoplasmosis, 33 making the ptHMIN ELISA useful even in this clinical form. Lower sensitivity (86%) was observed in patients with histoplasmosis and concomitant HIV infection (opportunistic), which is most likely due to the lack of appropriate immune responses. The ptHMIN ELISA demonstrated high specificity (96%) when testing sera from patients with other fungal infections that usually exhibit cross-reactivity in other immunoassays using crude HMIN, such as CF and WB. 8,51 All the control sera (36 samples) were negative by ptHMIN ELISA, demonstrating a specificity of 100% for this group.
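The per-form positive counts reported earlier (8/8 acute, 9/10 chronic, 8/9 disseminated, 12/14 opportunistic, 3/3 mediastinal) combine to the stated overall sensitivity; a quick arithmetic check:

```python
# Per-form (positives, total) counts reported in the Results section
results = {
    "acute": (8, 8),
    "chronic": (9, 10),
    "disseminated": (8, 9),
    "opportunistic": (12, 14),
    "mediastinal": (3, 3),
}

positives = sum(p for p, n in results.values())  # 40
total = sum(n for p, n in results.values())      # 44
overall_sensitivity = positives / total          # 40/44 = 0.909...

print(f"{overall_sensitivity:.0%}")  # 91%
```

The pooled 40/44 rounds to the 91% overall sensitivity quoted in the text.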
The ELISA was highly discriminatory when heterologous sera were compared to sera from histoplasmosis patients (P<0.05). In histoplasmosis, the association of severe disease, clinical form, and antibody response may be relevant in monitoring patients and predicting prognosis. However, determinations of antibody titer over time using ID or CF have not previously been shown to correlate with clinical responses to infection. 60 The amount of Histoplasma polysaccharide antigen detected in urine by radioimmunoassay 17,37 and in serum 61 can be used to monitor a patient's response to therapy, although most of the reported studies were carried out with AIDS patients, 16,62 and cross-reactivity may also be a problem. 63 We propose that a prospective study is warranted to determine the utility of the ptHMIN ELISA in monitoring response to therapy. In conclusion, the results of this study confirm the value of the ptHMIN ELISA and demonstrate the utility of the test for the detection of antibodies in all the clinical forms of histoplasmosis. Future prospective studies will evaluate the usefulness of the ELISA in the follow-up of histoplasmosis patients during and after treatment.
Cohort Comparison of Radiographic Correction and Complications Between Minimally Invasive and Open Lapidus Procedures for Hallux Valgus

Background: The Lapidus procedure corrects hallux valgus first ray deformity. First tarsometatarsal (TMT) fusion in patients with hallux valgus deformity using minimally invasive surgery (MIS) is a new technique, but comparative outcomes between MIS and open techniques have not been reported. This study compares the early radiographic results and complications of the MIS and open procedures in a single-surgeon practice. Methods: 47 MIS patients were compared with 44 open patients. Radiographic measures compared preoperatively and postoperatively were the intermetatarsal angle (IMA), hallux valgus angle (HVA), foot width (FW), distal metatarsal articular angle (DMAA), sesamoid station (SS), metatarsus adductus angle (MAA), first metatarsal to second metatarsal length, and elevation of the first metatarsal. Early complications were recorded, as well as repeat surgeries. Results: The mean follow-up was 82 (range, 31-182) months for the open group and 29 (range, 14-47) months for the MIS group. With both techniques, postoperative measures (IMA, HVA, DMAA, FW, and sesamoid station) were significantly improved from preoperative measures. When comparing postoperative measures between the groups, the IMA was significantly lower in the open group (4.8 ± 3.6 degrees vs 6.4 ± 3.2 degrees, P < .05). The differentials between pre- and postoperative measures for both techniques were compared, and the open group was associated with more correction than the MIS group for IMA (12.4 ± 5.3 degrees vs 9.4 ± 4.4 degrees, P = .004) and HVA (25.5 ± 8.3 degrees vs 20 ± 9.9 degrees, P = .005). Wound complication and nonunion rates trended higher in the open group (4 vs 0) (P = .051). Conclusion: Both techniques resulted in good to excellent correction.
However, the open technique was associated with lower postoperative IMA values and more correction power for IMA and HVA than the MIS technique.

Introduction

Multiple surgical techniques have been described to address hallux valgus deformity. Among them, fusion of the first tarsometatarsal (TMT) joint, or Lapidus procedure, can correct the deformity in 3 dimensions, allowing derotation of the pronated first metatarsal as well as correction of the hallux valgus angle (HVA) and plantarflexion of the first ray. 13 The original Lapidus procedure as described by Paul W. Lapidus in 1934 has undergone several modifications over time, mostly regarding the hardware and fixation technique. 21 Regardless, the Lapidus procedure makes it possible to address severe deformities while offering a stable and reliable fusion. 25 Because the TMT joint is fused, the risk of recurrence is lower, especially in patients with first TMT joint hypermobility. 3,17 However, first TMT fusion has been associated with delayed healing and higher malunion and nonunion rates. 13,20 Difficulty reaching the more plantar and lateral aspects of the TMT joint, with a tendency to remove excessive bone from the medial and dorsal parts of the joint in some techniques, may contribute to the high rates of first ray shortening and nonunion. 20 To improve outcomes and decrease complication rates, less invasive techniques have been described. 15,22 In 2005, Lui et al 15 described the arthroscopic first TMT fusion technique and concluded that shortening, dorsiflexion, and adduction of the first ray could be minimized with this technique, as the arthroscopic procedure provides more accurate joint preparation, which allows the subchondral bone to be kept intact without taking excessive bone from the edges.
In addition, the arthroscopic technique provides the benefits of minimally invasive surgery, which include minimal soft tissue damage, reduced postoperative pain, increased bone blood supply, prevention of wound-healing complications, and a better cosmetic result. 15,16,22 In 2020, Vernois and Redfern 22 described a percutaneous technique for Lapidus arthrodesis using a burr. In their conclusion, they pointed out that this technique is a powerful tool for forefoot deformity correction, but excessive first ray shortening is a major concern. Although MIS first TMT fusion techniques have been described, there is almost no information about the clinical and radiologic outcomes of this type of surgery. More important, no comparative studies between open and MIS first TMT fusion have been performed. To our knowledge, only 1 case series (n = 5 patients) has reported the outcomes of arthroscopic first TMT fusion for hallux valgus correction. 16 This study aims to assess the early radiographic results and complications of the MIS arthroscopic-assisted first TMT fusion with screw fixation and compare it with the open procedure in patients with hallux valgus deformity.

Patients

The local ethics committee approved this study. Consecutive patients 18 years of age or older who underwent an MIS or open first TMT fusion to treat moderate to severe hallux valgus deformity were reviewed radiographically and screened for complications. As the MIS first TMT fusion was introduced in our institution by the senior author in late 2017, procedures done before then were mainly performed through open techniques. Therefore, patients undergoing open first TMT fusion between January 2015 and July 2017 were compared with patients undergoing MIS first TMT fusion between January 2018 and December 2019. The period between July 2017 and December 2017 was considered the learning curve for the MIS technique.
Therefore, patients undergoing MIS first TMT fusion during this period were not enrolled. Exclusion criteria included simultaneous fusion of the second and third rays, incomplete radiographs, first TMT fusion done for non-hallux valgus procedures, Charcot arthropathy, and fusions of the navicular-cuneiform joint performed at the same time. Data on patients' baseline characteristics, including age, sex, comorbidities (diabetes), and lifestyle factors (body mass index and smoking status), were obtained from the anesthesia records. Body mass index was calculated by dividing each patient's weight (in kilograms) by height (in meters squared) according to the World Health Organization. 26

Surgical Technique

All the patients were operated on by the senior author (A.Y.). The MIS technique has evolved over time. Prior to the introduction of Shannon burrs in our country in 2017, the MIS technique was based on the technique described by Lui et al. 15 After 2017 and before the study, the technique evolved further. The technique involves a percutaneous release of the adductor tendon at its insertion on the plantar aspect of the phalanx using a Beaver 64 blade. 5 A lateral release is performed using the same blade to release the lateral capsule and the lateral sesamoid-to-metatarsal head ligament under fluoroscopic control. A dorsal medial incision is made on the medial side of the first metatarsal head. To avoid dorsal medial cutaneous nerve injury, subcutaneous dissection is performed using the "nick and spread" technique. A 2 × 12 Shannon burr (Stryker, Kalamazoo, MI) is used to cut into the metatarsal head and create a medial eminence resection next to the sagittal groove. The first TMT joint is localized with a Beaver blade and confirmed with fluoroscopy. Two portals are used, one medial and one superomedial (Figure 1). To avoid the tibialis anterior tendon insertion, the medial portal is located at the midpoint between the dorsal and plantar aspects of the TMT joint.
A stab skin incision is performed, and the subcutaneous tissue is spread down using a nick and spread technique. The 2 × 12 Shannon burr is introduced into the joint and the cartilage sequentially removed using C-arm control and palpation. Alternatively, the 3.1-mm wedge burr can be used for cartilage removal in less tight joints. When the burr is correctly positioned, the 2 bones on each side of the joint surface can be felt moving as the burr turns (speed range from 3000 to 18 000 rpm, aiming for under 6000 rpm). The 2.9-mm, 30-degree arthroscope is introduced, and final debridement and debris removal are performed using a 3.5-mm shaver (Figure 2). As required, an additional superolateral portal can be made to improve joint visualisation and facilitate debridement. Once the joint is fully prepared, the first metatarsal adduction, pronation, and positioning in the sagittal plane are manually corrected. Correction is maintained using a compressor-distractor device with 2.4-mm pins in the first and second metatarsal heads (Figure 1). A partially threaded, cannulated 3.5-mm intermetatarsal screw is inserted between the first and second metatarsals to hold the position and compress the first to the second ray, improving the IM angle. The fixation is then completed with 2 to 3 percutaneous full-thread, cannulated 4.0-mm screws placed crossing and transfixing the TMT joint. An example of pre- and postoperative radiographs showing screw placement is illustrated in Figure. The open first TMT fusion was performed according to a previously published technique. 20 In the open technique, a dorsal incision was used, and 3 screws were placed across the joint. No first-to-second metatarsal compression screw was used. A medial incision was made and the medial eminence removed with a rongeur. The lateral release was performed open using the distal end of the distal incision. The lateral capsule was released, and the lateral sesamoid released laterally from the adductor and the lateral capsule.
The correction was held in 3 planes and a compression distal-to-proximal screw placed. The final 2 screws were placed proximal to distal on the medial cuneiform (Figure 5). In contrast to the MIS technique, the open technique did not include the intermetatarsal screw. Once the first TMT fusion procedure (open or MIS) was concluded, an Akin osteotomy was added if the HVA or the appearance of the first ray showed residual deformity.

Postoperative Protocol

At the conclusion of the surgical procedure, all patients were placed in a postoperative rigid walker boot. Patients were kept in heel weightbearing and instructed to elevate their foot for the first 2 weeks postoperatively. At the initial 2-week postoperative follow-up, stitches were removed, and patients were allowed full weightbearing as tolerated in a rigid walker boot. Patients were instructed to remove the boot during daily range-of-motion exercises and weekly physical therapy sessions. A toe spacer or toe alignment splint was not prescribed. At the 6-week postoperative follow-up, the walker boot was discontinued, and patients transitioned to regular comfortable shoes.

Radiographic Measures

Weightbearing anteroposterior and lateral foot radiographs were assessed preoperatively and at 3, 6, and 12 months postoperatively. Further follow-up was performed if necessary. The following radiographic measures were performed: distal metatarsal articular angle (DMAA) 12 ; (5) foot width (in millimeters) 1 ; and (6) sesamoid station (in millimeters): distance between the lateral cortex of the first metatarsal and the lateral cortex of the lateral sesamoid (negative values were considered when the lateral cortex of the first metatarsal was more lateral than the sesamoid; this is modified from the original description, which is a grading of station by Hardy and Clapham.
10 ); (7) length of the first metatarsal: difference in length between the first and second rays of the foot (negative values were attributed to a shorter first metatarsal bone 9 ); and (8) elevation of the first ray: difference in declination between the first and second metatarsals measured on the lateral x-ray (negative values were attributed when the first ray was plantar to the second ray 11 ).

Complications and Reoperations

The incidence of postoperative complications was assessed using the patient's chart up until the time of review and postoperative radiographs, and was divided as follows: (1) wound healing problems, including dehiscence and wound infection; (2) sensory nerve impairment, defined by persisting numbness or paresthesia involving the hallux or surrounding surgical site; (3) fusion nonunion, defined by a painful absence of fusion after 12 months postoperatively and requiring revision surgery; and (4) recurrence of deformity, defined by symptomatic hallux valgus deformity requiring revision surgery. Furthermore, complications were categorized as major or minor according to whether additional surgery was required or not. Postoperative hardware-related pain and additional surgery performed for hardware removal were also noted.

Statistical Analysis

Continuous variables were reported using mean and standard deviation assuming a normal distribution. For continuous variables, the preoperative and postoperative means were compared using parametric and nonparametric tests accordingly. Categorical variables were reported using ratios and percentages. Chi-square tests and/or Fisher exact tests were used to compare differences in categorical variables. Statistical analysis was conducted using SPSS. A P value of <.05 was considered statistically significant.

Results

Over the study period, 91 patients had a first TMT fusion for hallux valgus deformity.
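The categorical comparisons described under Statistical Analysis can be sketched with a plain 2×2 chi-square test of independence. This is a minimal illustration, not the study's analysis (which used SPSS), and the counts below are made up for demonstration only.

```python
# Minimal sketch of a 2x2 chi-square test of independence (no continuity
# correction), as used for categorical comparisons between two groups.
# The counts passed in at the bottom are illustrative, NOT the study's data.
def chi_square_2x2(a, b, c, d):
    """Return the chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = [(a, b), (c, d)]
    col_totals = (a + c, b + d)
    stat = 0.0
    for r0, r1 in rows:
        row_total = r0 + r1
        for obs, col_total in zip((r0, r1), col_totals):
            # Expected count under independence comes from the marginals.
            exp = row_total * col_total / n
            stat += (obs - exp) ** 2 / exp
    return stat

# With 1 degree of freedom, the critical value at alpha = .05 is 3.841,
# so stat > 3.841 corresponds to P < .05.
stat = chi_square_2x2(10, 20, 20, 10)
print(round(stat, 3), stat > 3.841)  # prints "6.667 True"
```

For small expected counts (as with rare complications), the Fisher exact test mentioned in the text is the more appropriate choice.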
Forty-seven patients undergoing MIS first TMT fusion were compared with 44 patients undergoing open first TMT fusion. Baseline demographics of all patients, overall and according to the technique performed, are illustrated in Table 1. No significant differences between the groups were observed for comorbidities and lifestyle factors. Preoperative IMA and HVA were slightly higher in the open group (17.2 ± 4.9 degrees vs 15.8 ± 4.6 degrees, and 37.4 ± 6.2 degrees vs 34.4 ± 9 degrees, respectively) (Table 2). Overall, postoperative measures (IMA, HVA, DMAA, FW, and sesamoid station) significantly improved from preoperation. The changes between preoperative and postoperative measures for MAA, length, and elevation of the first ray were not significant. When comparing postoperative measures between the groups, the IMA was significantly lower in the open group (4.8 ± 3.6 degrees vs 6.4 ± 3.2 degrees) (Table 3). The correction power of the two techniques was compared, and the open group showed more powerful correction than the MIS group for IMA (12.4 ± 5.3 degrees vs 9.4 ± 4.4 degrees, P = .004) and HVA (25.5 ± 8.3 degrees vs 20 ± 9.9 degrees, P = .005). Table 4. The number of Akin osteotomies was similar between the groups (11 in the open group and 7 in the MIS group).

Discussion

The percutaneous first TMT fusion performed in patients with hallux valgus deformity is a new technique and, to our knowledge, this is the first article reporting comparative outcomes between MIS and open first TMT fusion. Overall, our results show a significant improvement in the radiologic measures from the preoperative values, suggesting that both techniques provide good to excellent deformity correction. However, there was a trend toward less robust IMA and HVA correction with the MIS technique. Although postoperative IMA absolute values were significantly lower in the open group, the IMA was inferior to 9 degrees with both techniques, which is generally considered a normal IMA.
4 When comparing the correction power of the two procedures, the open group showed significantly more correction of the IMA and HVA than the MIS group (12.4 ± 5.3 degrees vs 9.4 ± 4.4 degrees, and 25.5 ± 8.3 degrees vs 20 ± 9.9 degrees, respectively). The correction power of first TMT fusion through open procedures has been previously reported between 6 and 9 degrees for the IMA, and between 10 and 22 degrees for the HVA. 7,19,20 In one prospective study involving 46 patients who underwent open first TMT fusion, the mean preoperative IMA improved from 13.5 to 5.7 degrees, a difference vs baseline of 7.8 degrees, and the HVA improved from 33.8 to 13.9 degrees, a difference vs baseline of 19.9 degrees. 7 In comparison with these results, our open group showed higher IMA and HVA correction, which may be explained by the higher preoperative deformity in our cohort. Regarding the MIS results, there are few reports to date, and radiologic outcomes have been reported only once, in a single case series involving 5 patients. In that study, published by Michels et al, 16 the IMA improved from 17.8 degrees preoperatively to 7.2 degrees postoperatively, a differential of 10.6 degrees. The HVA improved from 42.6 degrees preoperatively to 17 degrees postoperatively, a differential of 25.6 degrees. In our MIS group, there was a trend toward less correction of IMA and HVA. However, the difference between the results of Michels et al 16 and our cohort may be explained by the additional surgical procedure performed in their study. In their cohort, a distal chevron metatarsal osteotomy was performed for all patients in addition to the arthroscopic first TMT fusion. In our cohort, no further metatarsal osteotomy was done in association with the first TMT fusion, and Akin osteotomies were rarely performed. Nevertheless, in retrospect, we think that Akin osteotomies may have been appropriate in some cases to improve the final HVA in the MIS group.
Excessive first metatarsal shortening after first TMT fusion has been a major concern, as it is a risk factor for developing postoperative transfer metatarsalgia. 8,20 Using less invasive first TMT fusion techniques with more careful joint preparation would, theoretically, avoid excessive first ray shortening. 15 The average first metatarsal shortening has been reported between 2.9 and 8 mm with open techniques and 2.7 mm with MIS techniques. 14,16,20 In our cohort, the differential between pre- and postoperative first metatarsal length was lower than previously reported values, and no significant differences in postoperative first metatarsal length were observed between the open and MIS techniques. We believe that careful first TMT joint preparation performed exclusively by hand, along with changes in radiograph projections due to slight differences in foot and beam positions, may have contributed to the lower values of postoperative metatarsal length in our cohort. Nonunion is one of the most frequent major complications after open first TMT fusion, and its incidence has been reported from 2% to 10%. 18,20,24 Our results showed a trend toward statistical significance in nonunion rates in the open group, as there were 4 nonunion cases in the open group and none in the MIS group. Interestingly, Michels et al 16 also reported zero cases of nonunion in first TMT fusion with MIS techniques. Similar results were observed for wound complications, with a trend toward increased rates in the open group. Although further research is needed to validate this trend, our results suggest that less invasive first TMT fusion may be associated with a lower incidence of complications such as nonunion and wound problems. Although the open group presented higher complication rates, the survivorship analysis of repeat surgery showed increased early repeat surgery in the MIS group, and this difference was significant.
The learning curve for less invasive procedures is known to be longer and more demanding than that for open surgery. 2 The senior author progressively introduced the arthroscopic technique in late 2017, and although we excluded patients treated in the early phase of the learning curve, the longer learning process may have contributed to the increased early revision rates observed in the MIS group. In addition to the learning curve, the MIS technique may be associated with other limitations. The surgical correction of the deformity can be harder to achieve, and placement of screws in the corrected position may be more challenging. We believe that these factors might have influenced the increased early repeat surgery rates observed in our cohort, which were largely related to hardware removal. This study has several limitations, primarily those inherent to all retrospective studies. First, the follow-up period for the MIS group was shorter than for the open group, which may be a source of bias when assessing the incidence of complications. Studies with longer-term outcomes are warranted to support our findings. Second, as we did not assess clinical outcomes, the clinical relevance of the differences observed in the radiographic measures between the groups is not clear. Further randomized controlled trials with patient-reported outcomes comparing both techniques are required, as these radiographic differences might not be pertinent to patients' outcomes. Third, the learning curve for the MIS group may have influenced the results observed, and this group needs to be evaluated to determine whether experience with the procedure improves outcomes. As the learning curve and technique are evolving, improvement of correction and reduction of nerve symptoms from the percutaneous technique may be seen in time. In conclusion, this is the first study comparing radiologic results and complication rates between open and MIS first TMT fusion.
Our results showed that both techniques provide significant improvement in radiograph measures from the preoperative values, with good to excellent deformity correction. Nevertheless, the open technique was associated with lower postoperative IMA absolute values and more correction power for the IMA and HVA than the MIS technique. The open technique was associated with higher rates of nonunion and wound complications.
Genome-wide analysis of HSP70 gene superfamily in Pyropia yezoensis (Bangiales, Rhodophyta): identification, characterization and expression profiles in response to dehydration stress

Background: Heat shock proteins (HSPs) perform a fundamental role in protecting plants against abiotic stresses. Individual family members have been analyzed in previous studies, but there has not yet been a comprehensive analysis of the HSP70 gene family in Pyropia yezoensis.

Results: We investigated 15 putative HSP70 genes in Py. yezoensis. These genes were classified into two sub-families, denoted DnaK and Hsp110. Within each sub-family, the gene structure and motif composition were relatively conserved. Synteny-based analysis indicated that seven and three PyyHSP70 genes were orthologous to HSP70 genes in Pyropia haitanensis and Porphyra umbilicalis, respectively. Most PyyHSP70s showed up-regulated expression under different degrees of dehydration stress. PyyHSP70-1 and PyyHSP70-3 were expressed at higher levels than other PyyHSP70s in the dehydration treatments, and their expression decreased somewhat upon rehydration. Subcellular localization showed that PyyHSP70-1-GFP and PyyHSP70-3-GFP localized to the cytoplasm and the nucleus/cytoplasm, respectively. Similar expression patterns of paired orthologs in Py. yezoensis and Py. haitanensis suggest important roles for HSP70s in adaptation to the intertidal environment during evolution.

Conclusions: These findings provide insight into the evolution and modification of the PyyHSP70 gene family and will help to determine the functions of the HSP70 genes in Py. yezoensis growth and development.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12870-021-03213-0.

Background

Heat shock proteins (HSPs) are found in almost all organisms, from bacteria to humans [1].
In plants, members of the HSP family act in cell protection through the folding and translocation of nascent proteins and the refolding of denatured proteins under both stress and non-stress conditions [2,3]. HSPs can be divided into five families based on molecular weight: the HSP100, HSP90, HSP70 (70-kDa heat shock protein), HSP60 (chaperonin), and sHSP (small heat shock protein) families. Of these, HSP70 is widely conserved and has been shown to play roles in development and defense mechanisms under various stresses.

*Correspondence: yxmao@hntou.edu.cn

The HSP70 gene family contains three highly conserved domains: a C-terminal domain about 10 kDa in size that can bind substrate, an intermediate domain 15 kDa in size, and an N-terminal domain (NBD) 44 kDa in size that binds ATP [4]. Plant HSP70 genes have been localized to four locations: the nucleus/cytoplasm, endoplasmic reticulum (ER), plastids, and mitochondria, with different functions in different locations [5,6]. Deficiency of some cytosolic HSP70s leads to severe growth retardation, and heat treatment of plants deficient in HSP70 genes dramatically increases mortality, indicating that cytosolic HSP70s play an essential role during normal growth and in the heat response by promoting the proper folding of cytosolic proteins [7,8]. Ectopic expression of a cytosolic CaHSP70-2 gene resulted in altered expression of stress-related genes and increased thermotolerance in transgenic Arabidopsis [9]. Cytosolic HSP70A in Chlamydomonas regulates the stability of cytoplasmic microtubules [10,11]. Transgenic tobacco plants that over-expressed nuclear-localized NtHSP70-1 exhibited decreased fragmentation and degradation of nuclear DNA during heat/drought stress [6,12]. Knockout experiments indicate that the import of stromal HSP70s into the chloroplast stroma is essential for plant development and important for the thermotolerance of germinating seeds [13].
Transgenic tobacco plants constitutively expressing elevated levels of BIP (an ER-localized HSP70 homologue) exhibited tolerance to water deficit by preventing endogenous oxidative stress [14]. In rice, the BIP1/OsBIP3 gene, encoding an HSP70 in the ER, regulates the stability of the XA21 protein to interfere with XA21-mediated immunity [15]. Mitochondrial HSP70 can suppress programmed cell death in rice protoplasts by maintaining mitochondrial membrane potential and inhibiting the amplification of reactive oxygen species (ROS) [16]. However, the biological functions of most HSP70s in nori have not yet been elucidated, partly due to a lack of information about coding genes or other genomic information. Pyropia yezoensis (Bangiales, Rhodophyta) is an economically important seaweed that is cultivated in the intertidal zones along the coastlines of China [17]. The production and quality of cultivated Py. yezoensis thalli are significantly influenced by intertidal environmental stress. Tidal exposure imposes considerable environmental stress on intertidal seaweeds due to altered irradiance levels [18], temperature changes [19], and direct effects of desiccation [20,21]. In this study, all of the non-redundant members of the HSP70 gene family in Py. yezoensis were screened from an available, high-quality, chromosome-level genome. We determined the characteristics of the PyyHSP70 genes based on their physicochemical properties, genomic locations, conserved motifs, and promoters, and analyzed the phylogenetic relationships of these genes. In addition, the expression levels of the PyyHSP70 genes were analyzed under dehydration and rehydration conditions. Finally, highly expressed PyyHSP70 proteins were localized in Arabidopsis protoplasts. Our findings will be a useful resource for future studies of the functions of HSP70 genes in algae, which will help us understand the evolution of HSP70 genes in different species.

Genome-wide identification of PyyHSP70 genes in Py. yezoensis

After verification, sequence information was obtained from the Py. yezoensis genome for 15 putative PyyHSP70s. The basic information for the PyyHSP70 genes (including genomic position, gene length, intron number, amino acid number, isoelectric point (pI), molecular weight, CDS, subcellular localization, and instability index) is listed in Table 1. The predicted PyyHSP70 protein sequences ranged from 276 to 934 amino acids, and the molecular weights ranged from 29.59 to 96.07 kDa. Analysis with the Expasy online tool revealed instability index values of PyyHSP70s ranging from 21.76 to 48.25, with a single PyyHSP70 member (PyyHSP70-8) having an instability index greater than 40, indicating an unstable protein. Of the 15 PyyHSP70 proteins, 11 members are predicted to localize to the nucleus/cytoplasm, one to the ER, one to the mitochondria, and two to the chloroplasts. The genes had either no introns or one intron, with eight and seven members, respectively (Fig. 1A). The 15 PyyHSP70 members were distributed across all three chromosomes, with an uneven distribution in the genome. Chromosome 1 had the highest density of PyyHSP70 genes, with nine members (Fig. 1B).

Conserved motifs and phylogenetic analysis of PyyHSP70s

To better understand the structural characteristics of PyyHSP70 proteins, a multiple sequence alignment was performed of the HSP70 domains of all 15 PyyHSP70 proteins and the EcDNAK protein, as shown in Figure S1. The two functional domains (ATPase domain and peptide-binding domain) were present in all PyyHSP70s. The ATPase domains of PyyHSP70-6, PyyHSP70-7, and PyyHSP70-15 were shorter and lacked three signature motifs that are characteristic of the ATPase domain of HSP70 family members (Table S1). Additionally, the peptide-binding domain of PyyHSP70-2 was shorter, and much shorter C-terminal sub-domains were present in PyyHSP70-2 and PyyHSP70-7 (Table S1).
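The instability-index rule applied in Table 1 follows the Expasy ProtParam convention that an index above 40 predicts an unstable protein. A minimal sketch of that classification, using the two extreme values reported in the text (the helper name is ours, not the paper's):

```python
# Sketch of the Expasy instability-index convention used for Table 1:
# an instability index above 40 predicts an unstable protein.
# The function name is illustrative; the sample values are the reported
# minimum (21.76) and maximum (48.25) for the PyyHSP70 proteins.
def classify_stability(instability_index: float) -> str:
    """Return 'unstable' if the index exceeds 40, otherwise 'stable'."""
    return "unstable" if instability_index > 40 else "stable"

for ii in (21.76, 48.25):
    print(ii, classify_stability(ii))
```

Under this rule, only PyyHSP70-8 (index 48.25) falls in the unstable range.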
Twelve consensus motifs were found in PyyHSP70 proteins using the MEME motif search tool (Fig. 2, Table S2). Motifs 1, 2, 5, 6, 7, 9, 10, 11, and 12 were identified in the ATPase domain, and motifs 3, 4, and 8 were identified in the peptide-binding domain. Only motifs 3, 4, 6, and 7 were detected in all PyyHSP70 members of the DnaK subfamily, and only motif 2 was detected in all PyyHSP70 members of the Hsp110 subfamily. An unrooted phylogenetic tree was constructed to visualize the evolutionary relationships between HSP70 members, using 76 HSP70 protein sequences from nine species (Table S3). As shown in Fig. 2, these HSP70s were classified into two subfamilies (the DnaK subfamily and the Hsp110 subfamily). The DnaK subfamily was further divided into four groups based on localization (cytoplasm, ER, mitochondria, and plastid). HSP70 proteins from different species were more closely related to those in the same subfamily than to others in the same species. For example, cytosolic PyyHSP70-4 was more closely related to PyhHSP70-5 than to PyyHSP70-11. For the PyyHSP70 family members, orthologs from Py. yezoensis and Py. haitanensis (seven pairs) or Porphyra umbilicalis (three pairs) were identified, indicating that there may have been common ancestral genes of the HSP70 family before differentiation of the three species (Fig. 3). In addition, a subclade of six genes (PyyHSP70-2, PyyHSP70-5, PyyHSP70-6, PyyHSP70-10, PyyHSP70-14, and PyyHSP70-15) in the cytoplasm group implied the proximity of these sequences and potential paralogous relationships arising from duplication events after the divergence of the two Pyropia species. The Ka/Ks ratios of the paralog pairs in the subclade were calculated, and the results ranged from 0.6327 to 1.3487 (Table 2). The Ka/Ks ratios for five pairs were less than but close to one, indicating slightly negative selection; the other two pairs' Ka/Ks ratios were greater than one, suggesting positive selection.
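The selection interpretation applied to the paralog pairs above follows the standard reading of a Ka/Ks ratio: below 1 indicates purifying (negative) selection, above 1 positive selection, and exactly 1 neutral evolution. A minimal sketch, using the two extreme ratios reported for the subclade (Table 2):

```python
# Hedged sketch of how a Ka/Ks ratio is interpreted: ratios below 1
# indicate purifying (negative) selection, ratios above 1 positive
# selection, and a ratio of 1 neutral evolution. The sample values are
# the extremes reported for the cytoplasm-group paralog pairs (Table 2).
def selection_mode(ka_ks: float) -> str:
    """Classify the selection regime implied by a Ka/Ks ratio."""
    if ka_ks < 1.0:
        return "purifying"
    if ka_ks > 1.0:
        return "positive"
    return "neutral"

for ratio in (0.6327, 1.3487):
    print(ratio, selection_mode(ratio))
```

Computing Ka and Ks themselves requires codon-aware alignment of the paralog pairs and is not shown here.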
All seven orthologous pairs of Pyropia exhibited Ka/Ks ratios far less than 1 (Table S4).

Cis-regulatory element analysis of the PyyHSP70 gene family

The regulatory roles of the identified PyyHSP70 genes were further studied by analysis of the 2000-bp region upstream of these genes. We searched the promoter sequences using the PlantCARE tool for seven regulatory elements previously found to be involved in various stresses: ABRE, CGTCA-motif, TGACG-motif, TCA-element, MYB-binding sites (MBS), LTR, and DRE. In addition, we searched these promoter sequences for three types of heat shock elements (HSEs): perfect type (nTTCnnGAAnnTTCn), gap type (nTTCnnGAAnnnnnnnTTCn), and step (S) type (nTTCnnnnnnnTTCnnnnnnnTTCn) [22]. Only one S-type HSE was detected, in PyyHSP70-3. The detection of these abiotic response elements suggests that the PyyHSP70 genes may be extensively involved in stress responses, thereby increasing the range of mechanisms that organisms could employ to escape or better cope with adverse environmental effects.

Expression patterns of PyyHSP70 genes under dehydration treatments

To further clarify the potential ability of the PyyHSP70 genes to respond to dehydration stress, RNA-Seq data were analyzed. Expression analysis of PyyHSP70 under dehydration stress revealed low (< 0.3) or no expression for seven genes in all treatments, but the other eight PyyHSP70 genes exhibited higher expression (Fig. 4). The expression of PyyHSP70-1 and PyyHSP70-3 gradually increased with increasing dehydration stress, and their expression levels slightly decreased with subsequent rehydration treatment. The expression level of PyyHSP70-11 increased with increasing water loss and continued to increase during rehydration. The expression levels of PyyHSP70-8, PyyHSP70-4, and PyyHSP70-13 first increased and then decreased as the degree of dehydration deepened.
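The three HSE layouts quoted above (where 'n' stands for any nucleotide) translate directly into regular expressions, so a promoter scan like the one described can be sketched in a few lines. The helper names and the demo sequence are ours; the motif strings are taken verbatim from the text.

```python
import re

# The three HSE layouts described in the text, with 'n' = any nucleotide.
# Pattern strings are quoted verbatim from the paper; everything else here
# is an illustrative sketch, not the authors' pipeline.
HSE_PATTERNS = {
    "perfect": "nTTCnnGAAnnTTCn",
    "gap":     "nTTCnnGAAnnnnnnnTTCn",
    "step":    "nTTCnnnnnnnTTCnnnnnnnTTCn",
}

def motif_to_regex(motif: str) -> re.Pattern:
    """Translate a motif string ('n' = any base) into a compiled regex."""
    return re.compile(motif.replace("n", "[ACGT]"))

def find_hse(promoter: str) -> dict:
    """Return {hse_type: [start positions]} for all HSE hits in the sequence."""
    hits = {}
    for name, motif in HSE_PATTERNS.items():
        rx = motif_to_regex(motif)
        hits[name] = [m.start() for m in rx.finditer(promoter.upper())]
    return hits

demo_promoter = "GGGATTCGGGAACGTTCACCC"  # synthetic sequence with one perfect-type HSE
print(find_hse(demo_promoter))
```

Note that `finditer` reports non-overlapping matches only, and a full scan would also check the reverse complement; both refinements are omitted for brevity.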
The expression levels of PyyHSP70-8 and PyyHSP70-13 increased during rehydration, whereas PyyHSP70-4 showed an increase in expression under one dehydration condition (AWC20) and then decreased during rehydration. These RNA-seq expression patterns were verified by measuring the expression of the PyyHSP70 genes by qRT-PCR (Fig. 5). The measured expression levels of most genes were highly consistent with the levels determined by RNA-seq, except for PyyHSP70-13.

The subcellular localization of PyyHSP70-1 and PyyHSP70-3 proteins
Evolution analysis of HSP70 genes In this study, we identified 15 HSP70 domain-containing genes in the P. yezoensis genome that constitute the HSP70 superfamily, including 11 DnaK subfamily genes and 4 Hsp110 subfamily genes. We also analyzed the genomes of five other red algae and identified 36 HSP70 genes. We found no direct relationship between genome size and the number of HSP70 genes in red algae. For example, we identified eight HSP70 genes in Py. haitanensis (genome size: 53 Mb), eight genes in Galdieria sulphuraria (genome size: 14 Mb), and eight genes in Chondrus crispus (genome size: 105 Mb). This diversity in the number of red algae HSP70 genes indicated that the HSP70 gene family has utilized different evolutionary strategies in different species. PyyHSP70s proteins were divided into two sub-families, similar to those reported by previous analysis of HSP70s in A. thaliana and yeast [5,25]. The DnaK subfamily was further divided into four groups based on localization. The number of HSP70 genes from the six red algae was basically same in each group of the DnaK subfamily, except the Cytoplasm group which contained more members of the PyyHSP70 gene family due to paralogous duplication events. Paralogous duplication events were not evident in the other five red algae, further implying that PyyHSP70s expanded according to species-specific approaches during evolution. We found no expression for these paralogous PyyHSP70 genes with intact gene structures in dehydration treatments, and also no expression of these genes was detected in response to other abiotic/biotic stresses of P.yezoensis [20,21,26]. HSP genes in other species were previously identified that also did not appear to be expressed under tested conditions, but the reason has not yet been determined [27,28]. Interesting, two-pairs of PyyHSP70 paralogs showed positive selection, suggesting new functions that should be verified by further experiments. Fig. 
4 Heatmap of the expression patterns of PyyHSP70 genes under dehydration and rehydration treatments: absolute water content 100% (AWC100, control), absolute water content 70% (AWC70), absolute water content 50% (AWC50), absolute water content 20% (AWC20), rehydrated 30 min after 20% of water loss (AWC20_30min). The color bar (right) represents log 2 expression levels (FPKM). The tree (left) represents clustering result of PyyHSP70s' expression patterns Yu et al. BMC Plant Biol (2021) 21:435 HSP70 genes play essential roles in response to dehydration stress Previous studies have found abundant HSEs in the promoter regions of HSP70 genes that become active in response to heat shock and other temperature treatments in higher plants [2,29]. However, we found few HSE and LTR in the promoter regions of PyyHsp70 genes. Py. yezoensis live on intertidal rocks, where they experience repeated cycles of dehydration and rehydration. Cisregulatory element analysis showed that most PyyHSP70 gene promoter sequences contained cis-elements associated with dehydration stress. For example, the ABRE motif conserved in drought response genes [30,31], MYB binding sites (MBS) involved in drought-inducibility, and CRT/DRE elements associated with dehydration and salt stresses [32,33]. The results suggest that PyyHSP70s might be significantly related to dehydration response. Intermittent desiccation stress caused by tidal changes is a significant abiotic factor affecting intertidal seaweed species. This stress can affect the physiology of organisms, mainly through oxidative stress causing destabilization of proteins, leading to loss of membrane integrity [34][35][36]. Desiccation results in increased expression of tolerance genes, such as genes encoding HSPs and related transcriptional factors [37,38]. These mechanisms may also function in intertidal seaweed to tolerate desiccation. 
Several HSP70s have also been found to help protect against desiccation damage by assisting protein-folding processes involved in stress and affecting the proteolytic degradation of unstable proteins [39]. HSP70s have also received attention in marine organisms as biomarkers of stress, because their expression is highly variable in the presence or absence of stimuli [40][41][42]. Zhou et al. (2011) suggested that analysis of HSP70 genes could be utilized to evaluate algal tolerance to stresses and monitor coastal environmental changes [43]. Tang et al. (2016) found that moss plants overexpressing PpcpHSP70-2, a gene highly induced by dehydration treatment, showed dehydration tolerance [44]. We found that more than half of the PyyHSP70 genes exhibited increased transcription levels with increasing degree of dehydration. We also found significantly increased expression of some PyyHSP70 genes, especially PyyHSP70-1 and PyyHSP70-3, upon reaching a water content of 20%, with down-regulated expression after rehydration. This finding is consistent with a previous study showing that HSP70s played important roles only in the response to extreme desiccation stress [41]. Like Py. yezoensis, Py. haitanensis also lives in the intertidal zone and experiences repeated dehydration and rehydration (though at a different temperature). These two species are evolutionarily very close, both belonging to Pyropia. Paired orthologs between Py. yezoensis and Py. haitanensis showed strong purifying selection and similar trends in expression (Table S4, Figure S2), suggesting that these HSP70 orthologs play important roles in the dehydration response of laver. Therefore, it is important to study the HSP70 genes involved in the dehydration-induced response of Py. yezoensis to further explain the stress resistance and environmental adaptation of intertidal algae.

Conclusions

The Py. yezoensis genome contains 15 members of the HSP70 gene family, and these genes are unevenly distributed on three chromosomes. The gene structures and phylogenetic analysis suggest a complex evolutionary history of this gene family in Py. yezoensis. The analysis reveals that the PyyHSP70 family has experienced gene duplication events after species divergence relative to other red algae. Most HSP70s showed up-regulated expression under different degrees of dehydration stress, especially PyyHSP70-1 and PyyHSP70-3, which showed much higher expression levels in dehydration treatments and slightly decreased expression after rehydration treatment. Similar expression trends of orthologs of Py. yezoensis and Py. haitanensis in dehydration treatments demonstrate the important roles of these proteins in intertidal environmental adaptation during evolution. PyyHSP70-1-GFP and PyyHSP70-3-GFP were localized in the cytoplasm and the nucleus/cytoplasm, respectively. This overview of the gene family should facilitate further studies of HSP70 genes, particularly with regard to their evolutionary history and biological functions.

Genome-wide identification of HSP70 proteins in Py. yezoensis

The Py. yezoensis genome and protein sequences were deposited in DDBJ/ENA/GenBank under accession WMLA00000000 [17]. To identify candidate Py. yezoensis HSP70 protein sequences, the Hidden Markov Model (HMM) profile of the HSP70 domain was downloaded from the Pfam database (http://www.sanger.ac.uk/Software/Pfam/; Pfam:PF00012) and then submitted as a query in an HMMER (e-value < 1e-5) search (https://www.ebi.ac.uk/Tools/hmmer/) of the Py. yezoensis protein database. The obtained protein sequences were screened and verified for the presence of the HSP70 domain using the SMART (http://smart.embl-heidelberg.de/) tools [45], CDD (http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi), and InterProScan (http://www.ebi.ac.uk/interpro/result/InterProScan/).
The same process was used to obtain the HSP70 family genes of the other four red algae from their genome databases [46][47][48][49]. For the PyyHSP70 genes, we determined the chromosomal locations, genomic sequences, full coding sequences, protein sequences, and the sequence of the 2000 nucleotides upstream of the translation initiation codon. The molecular weight (Da) and isoelectric point (pI) were calculated for each gene using the Compute pI/Mw tool from ExPASy (http://www.expasy.org/tools/) [50]. The subcellular localization of proteins was predicted using the WoLF PSORT, Predotar, PSORT, SherLoc2, CELLO, and Softberry databases, and assigned based on the consensus localization of two or more algorithms. Schematic images of the chromosomal locations of the PyyHSP70 genes were generated using MapGene2Chrom software (http://mg2c.iask.in/mg2c_v2.1/), according to the chromosomal position information in the NCBI database.

Gene structure analysis and identification of conserved motifs

To investigate the diversity and structure of members of the PyyHSP70 gene family, we compared the exon/intron organization of the cDNA sequences and the corresponding genomic DNA sequences of HSP70 using EVOLVIEW (https://evolgenius.info//evolview-v2/). In addition, the amino acid sequences were subjected to domain and motif prediction online with MEME (http://meme-suite.org) [51]. The parameters were as follows: number of repetitions, any; maximum number of motifs, 12; and optimum motif widths, 2 to 300 amino acid residues.

Multiple alignment and phylogenetic analysis

We constructed two phylogenetic trees, one with only PyyHSP70 protein sequences and the other including 76 HSP70 protein sequences from different species. The gene and protein sequences of Arabidopsis thaliana, Escherichia coli, and yeast were acquired from previous studies [2,25,52,53] and accession GCA_008690995.1 (NCBI).
Multiple sequence alignment of the full-length predicted HSP70 protein sequences was performed with MUSCLE in Molecular Evolutionary Genetics Analysis (MEGA) 7.0 software using default parameters [54]. Sequence alignments were also performed with ClustalX software [55]. Phylogenetic trees were constructed in MEGA 7.0 with the Neighbor-Joining (NJ) method, and a bootstrap analysis was conducted using 1000 replicates with the pairwise gap deletion mode.

Gene duplication and Ka/Ks analysis

The microsynteny between Py. yezoensis, Py. haitanensis, and Po. umbilicalis was analyzed by MCScanX with the default parameters [56]. The criteria used to identify potential gene duplications were: (1) the length of the sequence alignment covered ≥ 70% of the longer gene; and (2) the similarity of the aligned gene regions was ≥ 70% [57]. Non-synonymous (Ka) and synonymous (Ks) substitution rates were calculated for each duplicated PyyHSP70 gene pair using KaKs_Calculator [58].

RNA-seq atlas analysis

To investigate the expression patterns of PyyHSP70 genes in response to dehydration/rehydration treatments, the related RNA-sequencing (RNA-seq) data of Py. yezoensis were downloaded from NCBI under accession number PRJNA401507 [21]. The RNA-seq data of Py. haitanensis under dehydration/rehydration treatments were used to obtain the expression patterns of PyhHSP70 genes [20]. Expression heatmaps were constructed using R software based on the FPKM values of gene expression in the different treatments.

RNA isolation and qRT-PCR analysis

By weighing the fresh weight and the dry weight of thalli, the absolute water content (AWC) of the thallus was calculated according to the methods described by Kim et al. (2009) [59]. Thalli produced under normal growth conditions were harvested as the control group (AWC100). Before dehydration, the surface water of the thalli was removed with paper towels, and the selected thalli were then naturally dehydrated under 50 μmol photons m⁻² s⁻¹ at 8 ± 1 °C.
Thallus samples were collected when the total water content had decreased by 30% (AWC70), 50% (AWC50), and 80% (AWC20). After losing 80% of their water content, samples were recovered in normal seawater for 30 min (AWC20_REH) [20,21]. Three biological replicates were performed for each treatment. Samples were harvested and placed in liquid nitrogen before processing for gene expression analysis. Total RNA was extracted using the RNeasy Plant Mini Kit (OMEGA) according to the manufacturer's instructions. Next, 1 μg of total RNA was used to synthesize first-strand cDNA using a HiScript III RT SuperMix for qPCR (+ gDNA wiper) Kit (Vazyme Biotech). The qRT-PCR analysis was performed as described previously [60]. The expression levels of the ubiquitin-conjugating enzyme (UBC) and cystathionine gamma-synthase 1 (CGS1) genes were used as references [61], and the 2^(-ΔΔCt) method was used to calculate relative gene expression values. The sequences of the primers used are listed in Supplementary Table S5.

Subcellular localization analysis of PyyHSP70s

To validate the predicted subcellular localizations, transient expression analyses were performed using a protoplast system based on the pBWA(V)HS-(PyyHSP70-1/PyyHSP70-3)-GLosgfp vector. For the two representative PyyHSP70 genes, the full-length CDS without the stop codon was cloned into the pBWA(V)HS vector. Each CDS was fused in-frame to the N-terminus of the green fluorescent protein (GFP) coding sequence under the control of the CaMV 35S promoter. The primers used for PCR amplification of the full-length HSP70 CDS are listed in Table S6. A vector expressing only the GFP gene was used as a control. The protoplasts used for transient expression analysis were extracted from Arabidopsis leaves and transformed by the polyethylene glycol (PEG) method [62].
Briefly, Arabidopsis leaves were placed in enzyme solution (1.5% (w/v) cellulase R10, 0.75% (w/v) macerozyme R10, 0.6 M mannitol, 10 mM MES, pH 5.8) at 24 °C for 4 h with gentle shaking in the dark. After filtering through nylon mesh and washing twice with W5 solution (154 mM sodium chloride, 125 mM calcium chloride (CaCl2), 5 mM glucose, 2 mM KH2PO4, 2 mM MES, pH 5.7), protoplasts were resuspended in MMG solution (0.4 M mannitol, 15 mM magnesium chloride, 4 mM MES, pH 5.7) at a cell concentration of 2 × 10⁵ mL⁻¹. Then, 10 μg of each plasmid sample was mixed with 100 μL of protoplasts, followed by addition of 120 μL of freshly prepared PEG solution (40% (w/v) PEG4000, 0.6 M mannitol, and 100 mM CaCl2). The mixture was incubated at room temperature for 30 min in the dark, and then diluted gently with 1 mL of W5 solution. After centrifugation at 300 rpm for 3 min, protoplasts were resuspended in 1 mL of W5 solution, incubated at 25 °C for 16 h, and then observed using a Nikon Eclipse 80i fluorescence microscope. The respective excitation and emission wavelengths were 488 nm and 510 nm for the GFP signal, and 640 nm and 675 nm for the Chl signal.
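As a supplement to the qRT-PCR analysis described above, the 2^(-ΔΔCt) relative quantification can be sketched as below. The Ct values are hypothetical, and the single reference-gene Ct stands in for the UBC/CGS1 reference genes used in the study:

```python
def relative_expression(ct_target_treat, ct_ref_treat,
                        ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method.

    dCt = Ct(target) - Ct(reference), computed per sample;
    ddCt = dCt(treatment) - dCt(control);
    fold change = 2^(-ddCt).
    """
    d_ct_treat = ct_target_treat - ct_ref_treat
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2 ** -dd_ct

# Hypothetical Ct values for illustration only:
fold = relative_expression(22.0, 18.0, 25.0, 18.0)
print(fold)  # 8.0 -> target is 8-fold up-regulated relative to the control
```

Because Ct is measured on a log2 scale, each unit decrease in ΔΔCt corresponds to a doubling of relative expression.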
Bayesian prediction of placebo analgesia in an instrumental learning model

Placebo analgesia can be primarily explained by the Pavlovian conditioning paradigm, in which a passively applied cue becomes associated with less pain. In contrast, instrumental conditioning employs an active paradigm that might be more similar to clinical settings. In the present study, an instrumental conditioning paradigm involving a modified trust game in a simulated clinical situation was used to induce placebo analgesia. Additionally, Bayesian modeling was applied to predict the placebo responses of individuals based on their choices. Twenty-four participants engaged in a medical trust game in which decisions to receive treatment from either a doctor (more effective with high cost) or a pharmacy (less effective with low cost) were made after receiving a reference pain stimulus. In the conditioning session, the participants received lower levels of pain following both choices, while high pain stimuli were administered in the test session even after making the decision. The choice-dependent pain in the conditioning session was modulated in terms of both intensity and uncertainty. Participants reported significantly less pain when they chose the doctor or the pharmacy for treatment compared to the control trials. The predicted pain ratings based on Bayesian modeling showed significant correlations with the actual reports from participants for both of the choice categories. The instrumental conditioning paradigm allowed for the active choice of optional cues and was able to induce the placebo analgesia effect. Additionally, Bayesian modeling successfully predicted pain ratings in a simulated clinical situation, which fits well with placebo analgesia induced by instrumental conditioning.

Introduction

The placebo effect has been regarded as a conjunction of automatic conditioning effects and cognitive expectancy effects based on conscious contextual information [1].
People can learn to obtain benefits through verbally induced expectations, cued and contextual conditioning, and/or observational and social learning [2]. Accordingly, these types of learning processes guide changes in behavior and expectations that could lead to the formation of placebo analgesia [3,4]. Many studies have found that experimental conditioning induces placebo analgesia [5][6][7][8][9]; most of these studies implemented conditioning using Pavlovian conditioning paradigms that involve hard-wired passive learning of relations among events (i.e., the learning of conditioned stimulus to unconditioned stimulus associations), making it possible to predict and avoid the occurrence of potentially harmful stimuli. These paradigms, however, do not consider the learner's actions, that is, whether intentional exploration or behavior changes will influence the learning outcome [10]. On the other hand, instrumental conditioning involves action that is taken according to certain cues, and allows for the possibility of choices. This enables learning of actions and their consequences, which may be either rewards or punishments [11]. During instrumental conditioning procedures, the brain provides a common currency for decision-making that incorporates reward acquisition and punishment avoidance [12]. While there is ample evidence that the association of events in Pavlovian conditioning is valid in the real world [13], the action and behavior changes that are involved in instrumental learning may be more similar to an actual clinical setting. Thus, it is important to consider the capacity of an individual to assess potential rewards based on the outcome of pain relief in real-world settings [14]. In instrumental learning (operant learning) paradigms, the brain computes the error between predicted and actual outcomes and uses that error value to improve future predictions and actions [15].
In the context of placebo analgesia, previous studies have utilized animal models together with instrumental conditioning to understand the development of placebo analgesia [16]. However, to the best of our knowledge, no human studies have so far investigated placebo analgesia using an instrumental learning model. According to the framework of predictive coding, the brain actively makes inferences based on prior experiences and expectations [17]. Perception is conceptualized as an inferential process in the brain, in which information from prior experiences is used to generate expectations about future perception and to interpret sensory inputs [18]. Within the Bayesian theoretical and mathematical framework, the brain constantly interprets sensory inputs by minimizing the average prediction error across the whole sensory system [19]. Thus, employing a Bayesian framework of brain function would benefit the current understanding of placebo analgesia in instrumental conditioning paradigms. Placebo analgesia is then regarded as a probabilistic integration between top-down expectations of prior pain and bottom-up sensory signals [17]; from this perspective, it is important to consider not only the averaged magnitude of previous pain experiences but also the precision of the constructed expectation. Recently, a computational investigation of pain supported the idea that the Bayesian model reflects the strategies used during pain perception by showing that modulation due to disparate factors is intrinsic to the pain process [20]. However, no studies have attempted to predict the placebo response during instrumental conditioning using such a predictive coding framework.
Thus, in the present study, an instrumental learning model was implemented by adapting a trust game to a clinical situation, in which two available options, labeled "doctor" and "pharmacy", were associated with different magnitudes (intensity) and degrees of precision (variation) of pain. This simulation resembles a real-world situation, and the participants were allowed to explore, experience, and evaluate the two options actively. This experimental paradigm was used to investigate whether placebo analgesic responses to behavioral actions were associated with less pain. Furthermore, Bayesian modeling was applied to predict the placebo responses of the participants within the instrumental conditioning model.

Participants

This study included 24 healthy human volunteers (14 males, 10 females) who were recruited through an advertisement. None of the participants had any history of neurological, psychiatric, or other major medical problems, and none were using medications at the time of the study. There were no drop-outs after the initial inclusion in the experiment. Each participant received a detailed explanation of the study, and written informed consent was obtained prior to participation. All procedures were performed with the approval of the institutional review board of Korea University, Seoul, Republic of Korea.

Experimental design and procedure

To assess placebo analgesia, a computerized task mimicking the medical decision-making process was implemented; this task, called a "medical trust game", was inspired by and modified from a previously published economic trust game [21,22]. Prior to the experiment, participants were informed that this experiment was designed to measure their treatment utilization tendencies through computerized experiments. Thus, participants played a medical trust game that instituted a hypothetical situation of visiting a doctor to relieve pain.
In each trial, participants made a series of 40 trust-related decisions that each involved the presentation of a different face on the screen; they were given 2,000 Korean won (approximately $2 USD) for each trial. All of the faces were Asian, between 25 and 50 years of age, photographed directly from the front with the subject wearing a white coat, and presented in greyscale. Upon seeing the doctor, each participant decided whether they would receive care from this doctor (with a payment of $2) or take a pill from a pharmacy (with a payment of $1). The participants were informed that they would receive a small percentage of their overall game earnings in addition to a fixed compensation amount to be paid after the experimental task. The trials were programmed using the Psychtoolbox program in Matlab (MathWorks, Natick, MA, USA). Each participant was informed that the face of a doctor would be shown on the screen after exposure to the first painful stimulus (the reference pain: 512 mN [high-intensity pain]) inflicted with a weighted needle (PINPRICK stimulators; MRC Systems, Heidelberg, Germany) [23,24]. The administration of the pain stimulus was hidden by a panel on the left-hand side of the participant; the stimulus was applied to the back of the left hand between the index finger and thumb while the participant's hand formed a loose fist. A perceived pain rating was obtained using the "magnitude estimation" method [25], in which participants rate the pain intensity of the second stimulus relative to that of the first, fixed stimulus (i.e., the reference pain). The instructions covered two options: 1) choose to receive a pill from a pharmacy, which would reduce their pain to a fixed moderate level (256 mN), or 2) choose to be treated by the doctor, which would reduce their pain either to a mild level (64 mN) or to the same moderate level (256 mN) as the medicine. Following the decision, a decreased amount of pain was delivered during the conditioning session.
Participants were asked to evaluate the effectiveness of the treatment from either the doctor or the pharmacy compared to the reference pain [24]. Thus, in the medical trust game, the participants had to decide between choosing the doctor or the pharmacy based on the tradeoff between pain relief from the doctor and the cost of payment. This tradeoff allowed for a distribution of both choices in decisions. During the conditioning session, choosing treatment from the doctor resulted in a reduction in the degree of the secondary pain, just as in the conventional trust game; the decrease in pain was either large (from 512 mN to 64 mN) or small (from 512 mN to 256 mN), each with a 50% probability (low precision). Choosing pills from the pharmacy resulted in a small degree of pain reduction (512 mN to 256 mN) with a 100% probability (high precision). The task was repeated 40 times during the conditioning procedure. In contrast, during the test session, a high degree of pain (the intensity of the reference pain, 512 mN) was always delivered as the secondary pain (Fig 1).

Prediction of placebo response using Bayesian modeling

Placebo analgesia was modeled by constructing a Bayesian framework (Eq 1), in which the pain ratings corresponded to a posterior distribution, and by fitting a logarithmic relationship between pain intensity and pain rating as the likelihood. Using this Bayesian framework, the placebo response can be modeled as an inferential response according to the discrepancy between the ascending sensory signal and the descending prediction of pain. It is assumed that the subjective pain rating can be predicted based on the posterior probabilistic distribution of the Bayesian inference. For the model predictions, free variables from the Bayesian model were trained on data from the conditioning session and used to predict the posterior probability of pain ratings for stimuli at the same intensity as the reference pain using the fitted model.
Thus, test session data were not used in the training to predict the pain ratings of the test session. The intensity of the sensory input, i.e., the pain stimulation given through the PINPRICK device, was parameterized in the perceptual dimension. In this process, the logarithmic relationship between pain intensity and the weight of the PINPRICK device was calculated based on a previous study (Eq 3) [24]. The formulated Bayesian model used in the present study is described below:

Pr(P|C) ~ Normal(μ_prior, σ_prior)  (2)

Pr(P|C, S) ~ Normal(μ_post, σ_post)  (4)

We compared our model with a linear regression model, a possible alternative analysis for this study. In the linear regression model, we used the same logarithmic relationship between pain intensity and the weight of the PINPRICK as in the Bayesian model. We fitted separate linear regression models to the conditioning session data of each participant, depending on whether the choice was the doctor or the pharmacy. The linear regression has one explanatory variable, the estimated intensity of sensory input from the given PINPRICK weight, and one response variable, the pain rating from the participant. For model comparison, the models trained on conditioning session data were applied to predict pain ratings in the test session. The Bayesian information criterion (BIC) was calculated based on a model's prediction error on the test session data; a lower BIC value indicates a better explanation.

Choice probabilities and pain experience during the medical trust game

During the conditioning period of the medical trust game, participants made decisions regarding whether to receive pain treatment from the doctor (40.3%) or from the pharmacy (59.7%). The choice probabilities between the doctor and the pharmacy ranged from 16.7-83.3% over the 40 trials; there were no significant changes in the choice probabilities over time.
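Returning to the Bayesian model above, a minimal numerical sketch of the normal-normal fusion implied by Eqs 2 and 4 is shown below: the posterior mean is a precision-weighted average of the conditioned prior and the current sensory likelihood, so a precise prior pulls the perceived pain downward even under a high-intensity stimulus. The numbers are illustrative and are not fitted parameters from the study:

```python
def posterior_normal(mu_prior, sd_prior, mu_lik, sd_lik):
    """Conjugate update of a normal prior with a normal likelihood.

    Returns the posterior mean and SD; the posterior mean is pulled
    toward whichever source carries the higher precision (1/variance).
    """
    prec_prior = 1.0 / sd_prior ** 2
    prec_lik = 1.0 / sd_lik ** 2
    prec_post = prec_prior + prec_lik
    mu_post = (mu_prior * prec_prior + mu_lik * prec_lik) / prec_post
    return mu_post, prec_post ** -0.5

# Illustrative: a low-pain expectation learned in conditioning (mean 40,
# SD 10) meets a high-intensity sensory signal in the test session
# (mean 90, SD 20). The more precise prior dominates the posterior.
mu, sd = posterior_normal(40.0, 10.0, 90.0, 20.0)
print(round(mu, 1))  # 50.0 -> rated well below the sensory signal alone
```

This precision weighting is why, in the study's framing, a prior with smaller variance (the pharmacy condition) yields a more reliable prediction than one with larger variance (the doctor condition).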
During the conditioning period of the medical trust game, participants experienced pain after both the doctor choice (32.5 ± 1.3%) and the pharmacy choice (57.5 ± 0.8%). The pain rating after the doctor choice ranged from 18.0-48.8, and the pain rating after the pharmacy choice ranged from 44.8-67.6 over the 40 trials (Fig 2).

Placebo response during instrumental conditioning

The magnitude, or intensity, of pain according to the participants' ratings was compared in the test session. Compared to the control trials, significantly lower pain ratings were reported when the doctor was chosen for treatment (84.6 ± 2.2 vs. 92.8 ± 2.2, degrees of freedom [df] = 23, t = 3.308, p < 0.01) and when the choice to receive pills was made (87.6 ± 1.9 vs. 92.8 ± 2.2, df = 23, t = 2.209, p < 0.05). However, the pain ratings between the doctor choice and the pharmacy choice did not differ significantly (84.6 ± 2.2 vs. 87.6 ± 1.9, df = 23, t = 1.340, p = 0.193). To determine precision, or certainty, the individual standard deviation (SD) values between conditions for each participant were compared using paired t-tests for both the conditioning and test sessions. The SD values of the pain ratings following the doctor choice and the pharmacy choice during the conditioning session differed significantly (23.9 ± 1.5 vs. 16.5 ± 1.5, df = 23, t = 6.479, p < 0.001). Despite the large variance in pain ratings between the two choices during the conditioning session, there were no significant differences in variance between the doctor and pharmacy choices during the test session (Fig 3).

Predictions of placebo response using Bayesian modeling

The predicted pain ratings and the actual reports of pain from participants during the test session were significantly correlated for both the doctor and pharmacy choices (Fig 4).
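The paired comparisons reported above follow the standard paired t-test on within-subject differences; a generic sketch with synthetic ratings (not the study's data) is:

```python
import math

def paired_t(x, y):
    """Paired t statistic for two within-subject condition ratings."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error
    return mean / se, n - 1                              # t value, df

# Illustrative ratings for five participants (synthetic values):
doctor = [80, 85, 78, 90, 82]
control = [92, 94, 88, 95, 91]
t, df = paired_t(doctor, control)
print(df)     # 4
print(t < 0)  # True: "doctor" ratings are lower than control
```

With the study's 24 participants, df = 23, matching the values reported in the results.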
For the prediction using the Bayesian model, the decision to be treated at the pharmacy, conditioned with high precision, showed a more reliable prediction (BIC = 126.85, root-mean-square error …).

Discussion

The present study utilized a medical trust game in which the choice of treatment for pain from either a doctor or a pharmacy was offered; the decision between the two choices was associated with pain relief after a monetary payment. In the test session, participants reported a lower degree of pain intensity when they chose either of the two treatment options compared to the control condition. This result indicates that placebo analgesia was successfully induced through an instrumental learning paradigm, and suggests that prior experience and/or expectations successfully influenced the perception of pain when active exploration and evaluation were allowed. Significantly less pain was experienced following the decision for doctor treatment than during the control trials in the test session. Likewise, significantly less pain was experienced following the decision for pharmacy treatment than during the control trials in the test session. However, there were no significant differences between the decision for doctor treatment and the decision for pharmacy treatment. In terms of the SD, the choice for doctor treatment had significantly larger SDs compared to the choice for pharmacy treatment during the conditioning session, while no significant difference in variance was found between the doctor and pharmacy choices during the test session.

(b) Reconstructed normal distributions of pain ratings in the conditioning and test sessions using the average of individual mean and SD values. The red dotted line represents the decision for doctor treatment during conditioning and the red solid line represents the decision for pharmacy treatment during conditioning. The green solid line represents control trials.
The blue dotted line represents the decision for doctor treatment during testing and the blue solid line represents the decision for pharmacy treatment during testing.

The efficacy of medical treatment is critically determined by treatment history [27]. For example, secondary treatment produces a significantly greater degree of pain reduction in patients with a positive treatment history [28]. Likewise, prior experience modulates the efficacy of a subsequently applied active treatment by altering treatment-specific expectations, either through conditioning or a combination of both expectation and conditioning mechanisms [29]. In the present study, the choice probabilities between the doctor and the pharmacy ranged from 16.7-83.3% over the 40 trials; overall, participants chose the doctor 40.3% of the time (more pain relief but with high cost) and the pharmacy 59.7% of the time (less pain relief but with low cost). Because there were two different levels of pain reduction that the participants experienced upon choosing the doctor, they exhibited greater variation in their pain ratings over the conditioning period of the medical trust game. During the game, the participants had to decide between the doctor and the pharmacy based on the tradeoff between pain relief and the monetary cost of payment. Through this decision-making process, the participants were able to learn the associations between their choices and their experience of pain relief. Under these conditions, it can be expected that people will call upon their treatment histories, with respect to doctors and pharmacies, to establish expectations for treatment in an experimental setting. Instrumental learning optimizes future decisions based on the consequences of those decisions. Thus, instrumental conditioning can efficiently shape reward/punishment values in humans, even when the information related to consequences is delivered in a subliminal manner [30].
The concept of "making a decision" critically distinguishes instrumental learning from Pavlovian learning: in the instrumental paradigm, an organism not only decides on its behavior or action but is also sensitive to the consequences of that behavior or action. Furthermore, there is evidence that information from Pavlovian conditioning is transferred to instrumental conditioning, strengthening the learning from instrumental conditioning, while behavioral responses provide properties that can act as conditioned stimuli in Pavlovian conditioning, suggesting an interaction between the two forms of conditioning [11,31,32]. Thus, instrumental conditioning allows for the exploration of possible choices, as well as control of future consequences after learning is accomplished. This study illustrates the exploration of possible treatments from the doctor and the pharmacy according to the evaluation of the two choices under different conditions. Substantial evidence indicates that the controllability of upcoming pain diminishes pain intensity [33-35], and it has been shown that pain and economic value can be integrated in a cost-benefit equation that informs the decision-making process [36]. Despite the importance of instrumental conditioning in pain research, to the best of our knowledge, no studies have so far investigated placebo analgesia induced by instrumental conditioning. Thus, the present results may be important for the implementation of instrumental conditioning paradigms in the field of placebo analgesia research. In the present study, there were significant differences in pain ratings following the choice of either the doctor or the pharmacy in the conditioning session (32.6 ± 2.5 for the doctor vs. 58.2 ± 2.6 for the pharmacy, t = 9.926, p < 0.0001). Similar pain reductions were observed following the choice of either the doctor or the pharmacy in the test session.
The Bayesian formulation within the predictive coding framework can directly account for differences in the magnitude and precision of expectations, which are highly associated with the strength of the placebo response [17]. Predictive coding suggests that probabilistic representations act as a top-down influence on expectations, explaining away bottom-up prediction errors between expected and actual sensory events [37]. From the perspective of predictive coding, active inference is carried out by fulfilling predictions through actions based on perceived inference or expectation [38]. Recently, it was reported that the socially acquired uncertainty of upcoming pain alleviates the subjective rating of painful stimulation [39]. In the present study, in terms of precision, choosing a doctor was associated with pain relief that carried more uncertain expectations (SD = 23.9 ± 1.5) than choosing a pharmacy (SD = 16.5 ± 1.5). Thus, in agreement with Bayesian integration, the more uncertain pain relief associated with choosing a doctor was perceived as being of a lower degree, even though greater pain relief was expected. Models based on Bayesian theory have been successfully applied to explain the perceptions arising from the integration of top-down and bottom-up information in the fields of vision and touch perception [40][41][42]. The concept of placebo analgesia, in which prior information about the context biases pain perception, also ties in well with the Bayesian framework [17]. For example, it was demonstrated that hierarchical Bayesian modeling fits well with experimental pain data [20]. In the present study, it was possible to successfully predict pain ratings in the test session by fitting a model using data from the conditioning session, which implies that the Bayesian framework fits well with placebo analgesia induced by instrumental conditioning.
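The precision-weighted integration at the heart of this Bayesian account can be written out explicitly for the Gaussian case: the posterior mean is an average of the prior expectation and the sensory input, each weighted by its precision (inverse variance). The numbers below are illustrative only; the prior mean and afferent-signal values are hypothetical, and only the two expectation SDs (16.5 and 23.9) echo the precisions reported above.

```python
def posterior(mu_prior, sd_prior, mu_obs, sd_obs):
    """Combine a Gaussian prior with a Gaussian likelihood.

    Returns the precision-weighted posterior mean and SD.
    """
    prec_prior = 1.0 / sd_prior ** 2
    prec_obs = 1.0 / sd_obs ** 2
    mu = (prec_prior * mu_prior + prec_obs * mu_obs) / (prec_prior + prec_obs)
    sd = (prec_prior + prec_obs) ** -0.5
    return mu, sd

# Illustrative numbers only (not the study's fitted values): the same
# expected post-treatment pain rating (prior mean 30) held with the two
# reported precisions, pharmacy-like (SD 16.5) and doctor-like (SD 23.9),
# combined with a hypothetical afferent pain signal of 60 (SD 15).
precise, _ = posterior(mu_prior=30.0, sd_prior=16.5, mu_obs=60.0, sd_obs=15.0)
imprecise, _ = posterior(mu_prior=30.0, sd_prior=23.9, mu_obs=60.0, sd_obs=15.0)
# The less precise expectation carries less weight, so its posterior lands
# closer to the afferent signal: imprecise > precise.
```

Because the less precise (doctor-like) expectation pulls perception less strongly toward the expected relief, the posterior pain rating is higher, mirroring the finding that relief held with greater uncertainty is perceived as smaller.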
Because actions and/or behaviors are essential constituents of the Bayesian brain, the Bayesian model used in the present study, of placebo analgesia induced by instrumental conditioning, may provide an important link between placebo analgesia and the Bayesian brain. The present study had several limitations. First, the prior experiences of a participant with a doctor or pharmacy could have influenced their placebo responses while they were deciding between these two options. However, even though these possible confounding factors were not fully excluded from the present experiment, a more realistic situation in which a medical decision was needed from each participant was utilized. Second, a placebo response following Pavlovian conditioning was not assessed in this study. It would be interesting to compare the placebo responses between instrumental conditioning and Pavlovian conditioning in future studies. Third, the sample size of twenty-four is relatively small for drawing conclusions about the general population. Therefore, a replication of this study within a larger study population is warranted. Finally, the sequential order of learning was not considered as a variable in the present model; because the decision model was likely optimized by updating information associated with action on a trial-by-trial basis, sequential timing may have been a relevant element in modeling instrumental learning. In sum, the instrumental learning paradigm used in the present study allowed for the active choice of optional cues that were able to induce the placebo analgesia effect. Additionally, placebo responses were predicted based on decision-making across individuals using a Bayesian model. Thus, the present findings allow for a more comprehensive view of placebo analgesia that embraces active decision-making, which almost certainly occurs in real world settings.
Increasing Competitiveness through the Implementation of Lean Management in Healthcare

The main aim of this paper was two-fold: first, to design a participative methodology that facilitates lean management implementation in healthcare by adopting the action research approach; second, to illustrate the usefulness of this methodology by applying it to the sleep unit of a public hospital in Spain. This methodology proposes the implementation of lean management in its broadest sense: adopting both lean principles and some of its practical tools or practices in order to achieve competitive advantage. The complete service value chain was considered when introducing changes through lean management implementation. This implementation involved training and involving staff in the project (personnel pillar), detecting and analysing "waste" in value chain processes (processes pillar), establishing control and measurement mechanisms in line with objectives (key performance indicators pillar) and putting in place improvement actions to achieve these objectives. The application of this methodology brought about an improvement in the management of patient flow in terms of effectiveness, efficiency and quality, as well as an internal transformation towards a lean culture.

Introduction

Over the past few years in the healthcare sector, changes have occurred that have had a considerable impact on society. Steps have been taken towards an increasing focus on the patient. Management models have been steadily implemented which provide increasingly direct connectivity with the patient; some examples include telemedicine, electronic clinical history and the Health 4.0 phenomenon [1]. Moreover, regulatory measures are being put in place to deal with a contracting economy and an ageing population in which the number of chronic patients is continually increasing. The aim of these measures is to maintain the quality of service without incurring non-assumable costs.
Hospitals and other health service providers are under threat from this changing environment, with increasing demands from patients and decreasing budgets. At the same time, they face the challenge of meeting the needs not only of the various stakeholders involved (including governments, healthcare professional bodies, healthcare product and service suppliers and insurance companies) but also of society as a whole, particularly during crises such as the COVID-19 outbreak, which test the resilience of health systems all over the world. Service providers need to respond to these challenges by examining how to increase their possibilities of survival by achieving a competitive advantage, especially when they belong to public health services [2]. As is often the case, it is in times of crisis that the need arises to dedicate resources and effort to innovation in an organisation's management. Accordingly, more and more hospitals have redesigned their internal management with respect to processes, resources and objectives, gearing themselves towards more effective and efficient management and, indeed, enhancing the quality of service. The academic literature shows cases where hospitals have achieved this thanks to the adoption of management approaches coming from industrial sectors which, with minor nuances or differences among them, seek improved efficiency and efficacy of processes and productive systems [3][4][5][6][7][8][9][10][11][12][13]; these approaches include continuous improvement, kaizen, total quality management (TQM), just in time (JIT), six sigma and, particularly, lean management. One key to success in the adoption of these approaches is for affected staff to participate directly [14][15][16][17]. In the healthcare sector, these approaches are seen as innovative, bringing about a radical change in the way things have been done to date.
In fact, Walley [18] points out that when the service sector is compared to the industrial sector, it is widely felt that the service sector, and healthcare in particular, is lagging in terms of adopting new management innovations and improvements. For many years, professional medical knowledge was considered sufficient to ensure quality and safety in the delivery of healthcare services. However, today's healthcare delivery systems are complex, calling for further organisational awareness in order to provide the appropriate medical care along the entire patient pathway, without incurring extra costs and while generating savings. Consequently, the problems with healthcare today are not only clinical but largely organisational. Given the current complexity in the nature of healthcare and its environment, all people involved should participate in the analysis, diagnosis and redesign of the processes for offering a service, with a view to managing the available resources simply, effectively and efficiently, all the while in keeping with patient needs and stakeholder expectations. In this context, this paper's research question is whether it is possible to define and implement a participative methodology that systematically seeks to redesign processes in health services by applying the scientific action research approach from a lean management perspective. The action research approach can be defined as: "an emergent inquiry process in which applied behavioural science knowledge is integrated with existing organizational knowledge and applied to address real organizational issues. It is simultaneously concerned with bringing about change in organizations, in developing self-help competencies in organizational members and adding to scientific knowledge. Finally, it is an evolving process that is undertaken in a spirit of collaboration and co-inquiry".
[19] The first scientific contribution of this paper is to propose a new participative methodology for deploying lean principles in health services. This methodology allows an overall, integrated vision of the value chain of the service on offer to the patient (patient flow), involving all corresponding activities, personnel, technical resources, information and objectives. The second major contribution is to apply that methodology by following the action research approach, which is little used in the scientific literature dealing with lean management implementation, particularly in health services. This paper has six sections. After this brief introduction, the second section explains the authors' methodological proposal. The third section describes the practical application of the methodology in a Spanish hospital unit. The fourth section presents the main results obtained and the fifth section develops the discussion of the results with due regard to the limitations of the study and some future lines of research that would give continuity to the study. Finally, the conclusions are presented in the sixth section.

Materials and Methods

As mentioned in the introduction, the main research question here is whether it is possible to design and implement a participative methodology from a lean management perspective that systematically seeks to redesign processes in health services by applying the scientific action research approach. According to Westbrook [20], the action research approach began in the 1940s, when some authors in the field of social psychology started using it to develop research that was not only useful for firms and organisations but that also promoted the development of scientific knowledge through the direct experience and involvement of the researchers.
Conceptually, the research approach could be connected to other broader scientific approaches, such as the design science paradigm, widely used in the sphere of information systems. According to a synthesis by Hevner and Chatterjee [21]: "the design science paradigm has its roots in engineering and the sciences of the artificial. It is fundamentally a problem-solving paradigm. It seeks to create innovations that define the ideas, practices, technical capabilities, and products through which the analysis, design, implementation, and use of information systems can be effectively and efficiently accomplished. Acquiring such knowledge involves two complementary but distinct paradigms, natural (or behavioral) science and design science". A researcher using the action research approach does not just observe a process of transformation in a company or organisation but also participates and becomes directly involved in it, acting as a "change agent". Thanks to their direct involvement, researchers are able to witness the process of change first-hand during the observation-intervention-learning cycle. Knowledge gained during such a process can be enhanced and shared with other companies and researchers [22,23]. At the same time, this approach, based on the direct immersion experience of the researcher, is particularly interesting when studying organisational transformation processes such as those associated with the implementation and deployment of lean management. According to some authors [10,[24][25][26][27], there are scarce references to adoption of the action research approach in the search for improvements in health services. This is because the literature mainly centres analysis on case studies (e.g., [4,12,13]). Other papers reflect on, synthesize and review the case studies, looking at them in terms of tools, adopted practices or activities, indicators, affected areas or departments as well as the results obtained (e.g., [9][10][11]). 
In this context, the approach taken by this paper is not so much to describe why lean management is of interest in the health sector but rather to propose how to implement it successfully, using a scientific methodology that can be replicated and applied in other contexts. At the same time, the literature also identifies the scarcity of applied research into the role played by people in process improvements [28], which would be linked to the deployment of structured participation systems within the framework of lean management implementation. Näslund et al. [26] propose three key points that make research using this approach scientifically relevant: first, deployment of a rigorous, structured, documented system for collaboration between the company (or organisation) and the researchers; second, the significant contribution of the research to the creation of scientific knowledge; and, third, the interest of the company (or organisation) itself in achieving results from the research. Furthermore, companies and organisations also obtain a methodology that can help improve the efficiency and performance of their processes in terms of costs, quality, lead times, security, agility, flexibility or sustainability. The scientific interest of this paper is therefore reinforced, as no specific references have been found that deal with a participative redesign of processes in the health sector through a lean management programme while also adopting an action research approach. In order to apply the action research approach, the authors propose a methodology (see Figure 1) which adapts the two-phase framework (conceptual and applied) proposed by García-Arca et al. [16,29]. That framework, in turn, has its origins in the previous proposals of Coughlan and Coghlan [22], Näslund et al. [26], Coughlan et al. [27] and Farooq and O'Brien [30], which also provide the theory underlying its development.
The first (conceptual) phase includes definition and reflection of the theoretical basis of the proposal for redesigning processes in the health sector by applying lean management principles. This is based on analysis of the literature and the authors' own experience gained during more than 20 years of implementing lean management and kaizen projects in industrial and service firms. The second (applied) phase involved empirical validation of the theoretical proposal to deploy working teams in different areas, departments or centres. Logically, there is mutual feedback between the two phases. This is a research process with varying levels of involvement and intensity of the researchers depending on the stages within the applied phase (preliminary, launch and consolidation-extension). The authors' proposal is participative, involving all concerned in the different processes that take place in the area, department or centre under analysis. The participants themselves identify sources of added value for the patient, identify and propose actions to eliminate "waste", and implement and monitor these actions. Below is an explanation of how each phase was developed.
Phase 1 (Conceptual Phase): Structuring Lean Principles in Healthcare

Figure 2 shows a synthesis of how the conceptual model proposed in this phase has been developed. The justification for this model is explained in more detail below. As mentioned in the Introduction (Section 1), firms and organisations are currently subject not only to constant innovation with their products and services but also to demands for increasingly lower prices and increasingly greater standards, deadlines, safety, flexibility and sustainability. This is happening in markets that are increasingly turbulent and volatile and in dynamic environments, particularly at a technological level, which has forced many organisations to seek improved management or redesigned processes, in line with their strategic objectives as a source for their competitive advantage. This search for design alternatives can be based on investment in technology, equipment or radical innovations but also on small improvements that gradually increase the performance of processes. These two routes should be considered in a complementary fashion and not as mutually exclusive. Without spurning the important impact of the former (the radical route), it can be seen to have some drawbacks, particularly when it comes to having funding available for acquisition or implementation. The latter option, based on small changes that require almost no investment, is the basis for the various approaches, methodologies or philosophies mentioned previously, among which lean management stands out. Traditionally, in the industrial sector the systematic search for alternatives to redesign and improve processes without high investment in technology or equipment forms the battleground for all these methodologies [31].
As commented by Hellström et al. [32], when the service sector is compared to the manufacturing or industrial sector, it is widely felt that the service sector, and healthcare in particular, is lagging in terms of adopting new management approaches [33]. For many years it was considered sufficient to rely on professional knowledge alone to ensure quality and safety in the delivery of healthcare services. Today's healthcare delivery systems, however, are complex, calling for further organisational awareness in order to provide the appropriate medical care along the entire patient pathway, generating savings without incurring extra costs while also improving, for example, the standards of quality, flexibility and safety. Consequently, the problem with healthcare today is largely organisational and not only clinical. In today's context, given the complexity in both the nature and the environment of healthcare, managers and staff should analyse, design and implement improvement processes to achieve efficiency and improve the quality of the provided service. In line with the above, one way of achieving this objective is by means of management based on lean principles, which will lead to an increase in the performance of hospitals and other health centres. Lean is a term that was first coined by Womack, Jones and Roos [34] to describe the Toyota Production System (TPS) and the steps to continuously improve the efficiency and effectiveness of a system by driving out waste.
They defined lean implementation through five principles that are based on the assumption that organisations are made up of processes. These principles establish that in order to meet its customers' needs, an organisation must firstly identify what its customers think of as value. Once this is clear, the organisation can work to identify value streams in order to eliminate non-value-adding process steps or waste, make a smooth customer flow in the remaining and value-adding processes, implement pull systems that let the customer pull value from the firm (services should only be provided when the customer downstream asks for them) and to continuously work towards perfection by means of setting ambitious and realistic targets for improvements, as well as to implement mechanisms for process control and continuous improvement. From an industrial point of view, Ohno defined which aspects should be considered as waste in processes [33]. He identified a total of seven categories of waste: producing too much too early (overproduction), waiting, transportation of people or materials over long distances, duplication or rework, mistakes and errors, unnecessary stock, or non-ergonomic work environments. According to Shah and Ward [35], the concept of lean management can be interpreted from two different points of view. The first of these is the philosophical or cultural perspective relating to fundamental principles and general objectives, such as Womack and Jones's five principles mentioned above. The second is a more practical perspective which deals with practices or tools that can be more directly applied. The practices or tools that could be applied in the service sector include, for example, the PDCA cycle (plan, do, check, act), the DMAIC cycle (define, measure, analyse, improve and control), 5Ss, VSM (value stream mapping), standardization, root cause analysis, ABC classification, Ishikawa diagram or visual management activities [36]. 
The synergetic effect of the application of these practices and tools orientated towards lean principles results in obtaining a high-quality system that offers specific products or services corresponding to client needs, generating little or no waste. Logically, indiscriminate or decontextualized use of these tools or practices without being suitably aligned with the overarching objectives could lead to failure or unnecessary organisational effort. In summary, one of the chief aims of lean philosophy is to identify and reduce waste throughout the organisation, where waste is defined as any human activity that absorbs resources but creates no value. Simply put, lean means using less to do more. Because lean thinking originated from manufacturing companies, it may be argued that the service sector and especially the healthcare sector may not gain from it. However, Womack and Jones [37] advocate the application of lean thinking in the medical system. They argue that the first step in implementing lean thinking in medical care is to put the patient in the foreground and include time and comfort as key performance measures of the system. Emphasis is given to the promotion of staff participation through multi-skilled teams taking care of the patient and an active involvement of the patient in the process [38]. The term lean healthcare has emerged indicating a stronger focus on efficiency and patient satisfaction within the healthcare sector [39,40], all aligned with the global objectives of the various stakeholders involved. Even if healthcare is specific and cannot be compared directly with other businesses, there is a growing conviction that healthcare can benefit from studying and adapting the theories, principles and methods of lean management, which have proved to be useful in other industries. 
Thus, the core values of the Toyota lean method (briefly summarised by Liker [41] in The Toyota Way: base decisions on a long-term philosophy; the right process will produce the right results; add value to the organisation by developing your people; and continuously solving root problems drives organisational learning) are equally applicable to health. Lean management is a management strategy that is applicable to all organisations, because it has to do with improving processes. All organisations, including healthcare organisations, are composed of a series of processes, or sets of actions intended to create value for those who use or depend on them (customers/patients) [42]. However, the need to focus on processes is not exclusively addressed by lean management; it is also the object of other management approaches, such as BPM (business process modelling), that likewise seek, in a participative way and with their own visual tools, to model and redesign the activities in a process to make them more efficient [43,44]. Our definition of lean healthcare, based on Dahlgaard et al. [45], is the following: lean healthcare is a management philosophy for developing an internal culture characterised by increased patient and other stakeholder satisfaction through continuous improvements in the processes and activities that create value, in which everyone involved in the value chain of the provided service actively participates in identifying and reducing non-value-adding activities (waste) and in promoting the creation of value for the customer/patient across the whole patient flow. In a perfect process, every step is valuable (creates value for the customer), capable (produces a good result every time), available (produces the desired output, not just the desired quality, every time), adequate (does not cause delay), flexible or agile [12] and linked to continuous flow. Failure in any of these dimensions produces some type of waste.
In a healthcare context, this means that all staff are thinking about the principles of value and flow along the patient pathway as an integrated whole and not as a set of independent and isolated functions. This denotes a cultural change or transformation in thinking about the way people do work from a functional perspective to a process perspective [46]. In line with the above, in order to implement lean management in public services, it will first be necessary for there to be some understanding of the principles of lean, in terms of understanding value, focus on flow and pull as well as reduction of waste. Therefore, organisational readiness for implementing lean can be considered in terms of understanding the customer (value), having a process view (value stream), identification of capacity and demand (flow and pull) and linking to strategy, engagement and participation of the staff for problem solving (pursuing perfection), i.e., about understanding what the "value" for the process is, what the process is, what the demand types and patterns are as well as linking the process improvement activity to strategy and finding ways to engage the staff [47]. This initial step will help the organisation to understand the need for change in the way in which they are going about things until that time, so that personnel will feel committed to this approach: by generating a greater added value for the client/patient by reducing/eliminating waste and by implementing continuous improvement. On the other hand, as stated earlier, process improvement leads to a significant change in culture as it calls for strong leadership, visible support from management and patience (since it is a long-term philosophy). It is vital for senior management to show genuine interest, support and act upon the results delivered and ensure the sustainability of the changes [48][49][50]. 
Mark Chassin, M.D., president of the Joint Commission, supports hospitals focusing more directly on continuous process improvement in order to adapt to the principles of so-called high reliability organisations, which create tightly defined feedback loops that encourage employees to report minor problems before they rise to the level of errors or lapses in the quality of care provided. He also affirms that: "The three critical changes healthcare organizations have to undertake are a leadership commitment to zero major quality failures, the full embodiment and implementation of safety culture and the full deployment of robust process improvement". [51] When it comes to applying lean, some tools commonly used in healthcare include process mapping, value stream mapping, kaizen improvement teams, just-in-time process management and "5S" principles [52,53]. Gowen III and McFadden [54] argue that many healthcare organisations have previously tried to implement lean principles without great success. Success normally requires a cultural change in which the soft or intangible factors of management (the systemic factors), such as leadership, people management and partnerships, are changed, so that a new organisational culture is developed to support and improve the hospital's core processes. Empirical research also suggests that the implementation of improvement practices is associated with improved organisational effectiveness, in terms of service quality, customer satisfaction, net cost savings and patient satisfaction [55]. As a means of resolving quality issues, many healthcare organisations have undertaken process improvement (PI) initiatives targeted towards improving organisational performance [56,57]. Nevertheless, Kaplan et al.
state that a clear lesson from the current, still early, stage of lean healthcare is that in order to achieve sustainable results it is insufficient to simply implement lean tools or practices; an organisational transformation based on lean principles is required [58]. Similarly, Van Rossum et al. argue that appropriate leadership styles and workforce flexibility are success factors in the transition from technical "lean tools" to the required transformation, defined as a hospital culture characterised by increased patient and other stakeholder satisfaction through continuous improvement [59]. Finally, Leite et al. analyse the deeper causes that influence the creation of ostensible barriers in healthcare, rather than just focusing on visible elements commonly related to a tools-based approach [60]. The authors believe that, in order to succeed in lean management implementation, these steps must involve three key management pillars for any organisation: (1) processes, (2) KPIs (key performance indicators) and (3) personnel involvement. This approach is consistent with Gowen III and McFadden's proposal for the successful implementation of improvement programmes [54].

• Processes: An organisation is the sum of interrelated processes aimed at offering a quality, effective and efficient service. The organisation can take direct action on internal processes to improve its results, since these internal processes consume resources and can generate waste or add value for the patient. However, it cannot take direct action on patient needs or the results obtained. In healthcare, many processes require patient involvement. Therefore, these processes are key elements to take into account when offering greater customer satisfaction.
In order to maximise value and eliminate waste, processes must be evaluated by accurately specifying the value desired by the user, identifying every step in the process, eliminating non-value-added activities, and making value flow from beginning to end based on the pull of the patient [60]; in the jargon of lean management and kaizen, this involves "Go to Gemba". In this context, the adoption of some lean tools is useful, particularly value stream maps (VSM). Logically, the complexity of the processes being analysed will require more or less diversity in the lean tools or practices; • Key performance indicators (KPIS): "What doesn't get measured doesn't get managed". All processes involving change must come with clearly defined goals and objectives, for which there must be well-defined indicators. These indicators act as a sort of "mediator" between the system goals and the actions required to achieve them, thereby helping the organisation become more competitive. Indicators are necessary to measure results and identify deviations from the optimum, as well as trends in values. Ultimately, they supply data concerning system variability and support decisions regarding preventive or corrective actions. Such indicators are helpful in determining the current status of the system in terms of effectiveness, efficiency, variability, capacity and quality provided, as well as in establishing preventive or corrective actions in the event of detecting deviations from the defined objectives. According to Kissoon [61], it is insufficient to innovate and introduce new processes in healthcare; it is necessary to constantly evaluate the results of the interventions and make the appropriate changes as necessary.
Logically, the specific indicators and objectives for the lean management transformation project should be coherent and aligned with the organisation's overall objectives; • Personnel involvement: The activities involved in each process are best understood by the personnel, since these processes form part of their daily work routine. Therefore, if they are equipped with the correct tools, they will be able to identify areas for improvement, implement actions and take responsibility for their follow-up and control. For this reason, it is vital to involve personnel from the beginning, thereby serving as a motivating factor in their commitment to the process of change. In order to do so, the researchers defined a team-based work organisation for the project (see next section). In the conceptual model, the researchers wish to point out that all actions relating to processes, personnel and indicators are geared towards the implementation of lean management in healthcare, where the primary goal is to improve patient care (and meet stakeholders' needs). Likewise, this scheme includes a dynamic vision of the actions that could be implemented, following the PDCA (plan, do, check, act) continuous improvement cycle. However, in order to develop these conceptual pillars in an applied and participative way, a suitable working system must be designed and adopted during the different stages of implementation. That is the object of the second implementation stage, dealt with below.
Phase 2 (Applied phase): Implementing the Methodology
This second (applied) stage has two basic initial premises that are common to any process of change or transformation: active support from the organisation's management team and alignment of the lean management programme's specific objectives with the global objectives of the organisation. Logically, without these two basic premises, it is not possible to develop the proposed methodology successfully.
In order to apply and enhance the theoretical basis of our proposal (particularly, personnel involvement), two types of mixed researcher-organisation teams were deployed: conceptual teams and working teams. The conceptual team, which existed throughout the whole project, was a chance for the researchers to meet the Board of the area, department or centre under analysis and design, discuss and reflect on the methodology and its implementation. Those conceptual meetings were complemented by specific working meetings to define and improve organisation processes with waste reduction and value generation in mind. In practice, they serve as a way of ensuring the participation and commitment of the personnel involved or affected by the process, in a lean management context. In this regard, although structured participation systems for deploying a lean management or continuous improvement program have traditionally been categorized as group systems (e.g., quality circles or improvement groups) or individual systems (e.g., suggestions systems), many authors (the authors included) tend to support group systems because they consider them to be an aid in the development of skills, such as learning, responsibility and communication, between an organisation's hierarchical levels. In this context, García-Arca and Prado-Prado [62] propose an organisational structure based on two types of working teams: implementation teams and improvement teams. The job of the implementation team is to define, direct and monitor the continuous improvement process. Given the importance of management involvement, its participation in this team is recommended. The implementation team also decides on the number of working teams, their aims and the times when each one will be launched or wound up. Likewise, this team will select the members of the working teams and track and prioritize the activities they develop. 
The implementation team stays active throughout the transformation project although its management members may change depending on the area, department or centre in which the improvement teams are being launched. The improvement teams, meanwhile, are not only responsible for proposing and analysing any problems but also for implementing improvements that contribute towards their objectives (waste reduction and value generation). This they do with supervision from the implementation team, whose management members will be able to facilitate the practical application of proposals made by working team members. The internal transformation process to implement lean management culture in the organisation in this analysis was structured in three differentiated stages: preliminary, launching and consolidation-extension. During the first stage (preliminary), the researchers gained understanding of the processes at the pilot area in the organisation which, in turn, became aware of (and enhanced) the proposed methodology, thanks, in particular, to the conceptual team but also to the implementation team. After the preliminary stage, the launching stage was when the participative methodology was initially implemented in an area, department or centre through improvement teams (typically by using a pilot improvement team). The consolidation-extension stage had a two-fold objective. First, it reinforced the maintenance (or improvement) of the results obtained in the area, department or centre in which the improvement teams were launched and, second, it encouraged the start-up of new improvement teams in other areas, departments or centres in the organisation so that deployment of lean management and organisational transformation could continue. Everything was overseen by the implementation team. 
In this context, the improvement teams may be non-permanent (typically in the launch stage in an area, department or centre) or permanent (typically in the consolidation stage of the project in an area, department or centre). Logically, the different teams must be properly equipped if their meetings are to be effective. A review of the recent literature on group participation systems as part of a lean management or continuous improvement program pointed out the importance of providing these teams with a structured working system in order to promote and maintain improvement [2,15,[62][63][64][65][66][67][68]. Such a system includes the definition of six points: • The availability of KPIS for measuring improvement, one of the pillars of our model; • A pre-established calendar for meetings with dates, start times and lengths. This calendar is usually proposed and justified by the implementation team (not only for its own meetings but for those of the improvement teams), depending on availability and the priority and pace assigned to the project; • The training program. This program includes traditional training techniques associated with problem solving, lean tools and an awareness of improvement and teamwork. Some authors recommend complementing this basic training with "learning-by-doing"; • Communication. This aspect covers the way that actions agreed upon at meetings are documented and communicated, including tasks, responsibilities and deadlines. It can take many forms, such as information boards, magazines, intranet, public presentations, etc. For example, the conclusions reached at all the meetings were typically recorded in the minutes, which were sent electronically to the members of the various teams and used for discussion and reflection at the next meeting. Likewise, the main progress, improvements and adopted changes were communicated to the affected areas, departments or centres; • Resources.
These resources were necessary for the proposed improvements to become a reality. A lack of resources for developing improvements can discourage team members and reduce their commitment to participation programs. In the service sector, and particularly in healthcare, the adaptation and suitability of the information system for decision making takes on particular importance; • Recognition/Reward. This aspect has an important impact on personnel motivation, and consequently on their commitment to lean management projects. The literature differentiates between "reward" (essentially economic, or a "payment in kind") and "recognition" (essentially social). The use of the different teams involves more of the people related to the processes and is a critical aspect when it comes to enhancing the reflection activity [69,70]. By participating in most of the meetings, the authors became directly involved in the improvement process as agents of change and not just mere observers, which is one of the main strengths of action research. Likewise, the various viewpoints, perspectives and reflections of each of this paper's authors were shared internally to enrich the methodology. The system adopted for implementing and refining the methodology proposed by the researchers and the organisation lays the foundations not only for the scientific rigour of this research but also for its future replication in other hospitals, clinics or any other health services.
Testing the Methodology
The public hospital in which the methodology was applied is one of the largest in Spain. With nearly 4000 workers, it provides healthcare to an area of more than 600,000 inhabitants. A pilot case study was carried out in the sleep unit of this Spanish hospital over a period of eight months. The project was part of the Public Healthcare Innovation Plan and included a total of 14 sub-projects carried out in different areas by different personnel.
The Innovation Plan aimed to improve chronic patient care by making the care processes more efficient, agile and secure while minimising human error wherever possible, in line with lean principles. Thus, the lean transformation project was in line with the hospital's strategic objectives.
Preliminary Stage
During the first stage (preliminary), the researchers gained an understanding of the processes at the hospital which, in turn, became aware of (and enhanced) the proposed methodology. Therefore, in line with the proposed methodology, a conceptual team was set up comprising the authors and the Board of the hospital, which was particularly motivated by the implementation of lean culture in the organisation. The meetings were held monthly throughout the project. The Board was kept informed of the project's progress and of emerging changes in which it might be involved, while also making its own suggestions during the meetings. At the same time, an implementation team was set up to guide the work throughout the three stages, particularly oriented towards the pilot area chosen for the launching stage (the sleep unit). The implementation team comprised the main managers of the unit (i.e., the head of the pneumology department and the sleep unit manager) and the researchers. The preliminary stage meant the researchers and practitioners could create an atmosphere of collaboration and trust so that the project could be developed from an action research approach by both the conceptual team and the implementation team. In this preliminary stage, the implementation team held weekly meetings for one month in order to structure, fine-tune and enhance the proposal from the hospital's viewpoint through a general analysis of the sleep unit.
Before proposing actions for implementing lean principles (launching stage), the members of the implementation team focused on analysing the sleep unit from the perspective of the model's pillars (processes, KPIS and personnel involvement): What is the sleep unit? What personnel are involved in patient services? What processes are carried out? What management KPIS (key performance indicators) are currently in place? What objectives do managers wish to achieve with this project? The sleep unit is a pneumology department specializing in chronic breathing disorders during sleep. There are several types of breathing disorders during sleep, the most prevalent being Sleep Apnea-Hypopnea Syndrome (SAHS). This syndrome is characterised by repeated episodes of obstruction of the upper air tract and occurs when the sleeping patient involuntarily stops breathing. The direct consequences of these episodes are a reduction in oxygen saturation and transitory awakenings. This leads to excessive daytime somnolence, a reduction in quality of life and neurocognitive repercussions. Similarly, daytime somnolence (a key symptom) reduces work performance and increases the possibility of accidents, causing a potentially serious risk for sufferers and third parties. The prototype of a SAHS sufferer is a middle-aged obese man who snores and is drowsy during the day. Continuous positive airway pressure (CPAP) is the most effective treatment for moderate sleep apnea. The treatment involves the patient wearing a soft mask over the nose, attached to a machine that raises and regulates the pressure of the air that the patient is breathing, preventing the airway from collapsing during sleep. According to figures provided by unit managers, only 20-25% of serious patients in need of treatment have been diagnosed and are receiving treatment.
This is due to an increase in the prevalence of this disease among the population in recent years and the dearth of resources available for its analysis, diagnosis and treatment. This is what led to the project's main objective: to increase the number of serious patients receiving treatment by accelerating detection, diagnosis and CPAP provision for home treatment. For a patient suffering from SAHS, and even more so in a serious case, real added value comes from the treatment itself, since it improves the patient's standard of living and reduces the mortality rate. Therefore, while the previous steps are necessary for an appropriate treatment, it is important to streamline the process from the first suspicion of the illness to the start of treatment, eliminating any potential obstacles for the patient. As regards the KPIS pillar of the model, the sleep unit used general KPIS which were common to all hospital departments, but not adapted to the specific needs of the unit. By way of example, some of these indicators were: the number of patients discharged per year and the number of consultations carried out per year. Unit managers revealed that they did not carry out any follow-up or control of these indicators, only conveying the end-of-year figures to verify whether they had met the established yearly objectives. The information system was initially adapted to provide the information for these indicators.
At the same time, within the context of the processes pillar of the model, a potential sleep unit patient will generally go through the following steps (see Figure 3): • A patient with a suspected case of SAHS arrives at the sleep unit following a referral from either a general practitioner (GP) or a specialist; • At the first consultation, notes are taken of the patient's parameters and symptoms, and the patient completes a test designed to establish the extent of suspected SAHS; • Following this first consultation, the patient undergoes a diagnostic test in order to determine the frequency of apneas and the seriousness of the illness; • Once completed, doctors draft a report with the patient's diagnosis and then contact the patient to review the test results; • At this point, the patient may be discharged if the test shows that there is no illness or, alternatively, may start home treatment; • Treatment requires the patient to sleep while connected to the CPAP, which is supplied by a home care services company specializing in oxygen therapy (hereafter, "external care provider"); • Periodically, the external care provider supplies machine data to the sleep unit, allowing doctors to make the appropriate adjustments to the machine; • In the following months, a control test is run on the patient to check his/her development with the treatment. If development is positive, the patient is discharged; otherwise regular checks will be carried out at the hospital. Once the treatment has started, the external care provider company also makes follow-up visits to the patient's home on a quarterly basis.
There are many parties involved in patient flow (the third pillar of the model), from early clinical suspicion of the existence of the illness to the commencement of treatment and the patient's subsequent discharge. Each has a different role: • Those who refer patients to the sleep unit: GPs or specialists; • Sleep unit personnel: head of the pneumology department, sleep unit manager, doctors, nurses and technicians; • External care provider: nurses who provide home care during treatment.
Launching Stage
To change this initial situation and deploy the proposed methodology, the implementation team decided to launch a pilot improvement team in the sleep unit. This was the "laboratory" where tests could be carried out to discover the potential and the pitfalls of implementing the methodology on a larger scale in other hospital areas. The improvement team included the different parties involved in the overall patient flow: GPs, doctors and nurses from the unit, nurses from the external care provider company, and the patients themselves. This team also included the researchers and the sleep unit manager, the nexus between the improvement team and the implementation team. The intention was that each working group would contain at least one representative from the parties involved at each stage of the patient flow, including the patient.
This would serve to obtain a global, integrated, internal vision of the value chain of the services on offer. In broad terms, the functions of each of the teams were as follows. The improvement team acted directly on the process by gathering data, identifying and analysing problems or opportunities for improvement, putting forward solutions or improvements, implementing these solutions and following up in order to maintain them over time. In this stage, the implementation team played a more supporting role by defining the main goals and objectives to achieve, following up the improvement team's actions, and facilitating resources so the improvement team could take action. It is also important to designate a common leader for both working groups who assumes responsibility for the development of the project in the unit. In this stage, the sleep unit manager was designated as the leader, due to her greater influence and impact on the management of the unit at all levels: planning, personnel management, resource and activity programming, and follow-up and control of results. The sleep unit manager also served as a common link between the sleep unit and the hospital managers. On the basis of the working system described earlier, the working groups functioned in the following way. Both groups met separately on a weekly basis during the first four months; thereafter they met fortnightly. The implementation team always met before the improvement team, since the implementation team would set the objectives to be met and establish what actions the improvement team needed to carry out. They would also fix a timeframe and assign responsibility for these actions. The improvement team would be informed of any decisions taken by the implementation team. In the improvement team meetings, each member would explain the progress made on the corresponding actions under his/her responsibility.
At the end of each meeting, the team members would establish the actions to be carried out before their next meeting. Between meetings, both working groups dedicated their time to carrying out the assigned tasks. The implementation team meetings lasted around three hours, while the improvement team meetings lasted a maximum of one hour so as to foster a dynamic work environment. All agreements reached in each of the meetings were reflected in a standardised document (minutes) called the improvement action plan (IAP). This contained the following information: the date of the meeting, the members present, the time and date of the next meeting, the agreements reached, a timeframe for action implementation and those responsible for carrying it out. The researchers began the project with a five-hour training session for the members of both working groups. This session served to impart to members a basic knowledge of lean management, process management, problem-solving techniques, and continuous improvement. However, one of its main aims was to make both working groups aware of the importance of their role in the development of the project and, above all, in the improvement of sleep unit management. As Johnson et al. [71] argue, the training of multidisciplinary project teams can often effect a level of change that no single group or department could implement on its own, by breaking down departmental and external silos and opening the lines of communication. The researchers placed special emphasis on the importance of members' involvement in the detection, implementation, follow-up and control of potential improvements in their daily tasks, through observation, analysis and quantification. The methodology chosen to develop the project was also explained in this training session: the role of each working group, the researchers' participation in the project, the work dynamic, etc. In the first meeting of the improvement team, a brainstorming session took place.
Thus, attendees were given a blank sheet of paper where, throughout the week, they could note problems or potential improvements that could be made within the sleep unit or in their dealings with the unit. This brainstorming session was complemented with the ideas supplied by the entire hospital's pneumology department and the members of the implementation team. After a week, 158 ideas for improvement were collected. The implementation team preliminarily evaluated these ideas, and those that were considered viable were analysed in further detail by the improvement team. Both working groups collaborated in a detailed analysis of the different processes that constitute patient flow, with the implementation team acting as facilitator and the improvement team in an operative role. This analysis sought to identify activities that were wasteful or not value adding, which obstructed the continuous flow of patients through the different processes. It did not look at each stage in isolation, but rather an overall analysis of the value chain. This analysis included observing the processes and the people carrying them out, as well as measuring, where possible, any seemingly wasteful aspects. These wasteful aspects were illustrated in a detailed value stream map (VSM), developing the initial chart described in Figure 3. The VSM was displayed on the wall of the meeting room by the researchers to illustrate the processes, the people implicated at each stage, and any other resources required to provide the service such as documents or databases. This VSM allowed the teams to discuss the current state of the system, the project itself and the viability of carrying out certain actions to improve its management. Thus, team members were able to identify different activities that did not add value for the patient or which caused rework for themselves, long waits, unnecessary movements, or repeated errors. 
The waste detected can be classified into two categories: waste with a direct impact on the patient (which affects the patient directly and conditions his/her satisfaction with the service provided; see Table 1), and waste with an indirect impact on the patient (which is not felt directly by the patient but impacts negatively on the generation of added value and on the effective and efficient management of the unit; see Table 2).

Table 1. Waste with direct impact on the patient.
• Long patient waiting lists between processes in the unit: When starting treatment in the sleep unit, the patient must queue for two processes: the diagnostic test and the results review. Patient waiting times for each of these processes were measured from the beginning of the project, and the corresponding data from the previous year were collected and analysed. For the diagnostic test, the average waiting time for the 211 patients on the waiting list was 67 days; for the results review, the average waiting time for the 673 patients on the waiting list was 49 days. In both cases there was high variability in waiting time.
• Repetition of diagnostic tests: Some of the monthly diagnostic tests showed an error when analysed on the computer. Those responsible were asked to analyse the results of the faulty diagnostic tests in order to find the source of the fault. At the same time, they registered the diagnostic tests carried out on each of the unit's six machines. From this they established that the most common causes of the fault were: human error due to the patient's incorrect positioning of the apparatus, a fault in the apparatus itself, and a computer fault in downloading the test. In a sample taken over seven months, there were 482 diagnostic tests, 36 (7.5%) of which had to be repeated.
• Repetitive medical consultations in the unit due to maladjustment of the CPAP: The mask connecting the patient to the CPAP (continuous positive airway pressure) must be adapted to each patient; consequently, there are different types of mask. An inadequately fitted mask can cause discomfort while the patient is sleeping, dry mouth, disconnection from the CPAP, or breakage. Before beginning home treatment with the CPAP, the patient is fitted with the most suitable type of mask. However, patients often experience discomfort with the mask after continuous use of the CPAP. Therefore, the majority of return visits to the sleep unit by patients already in treatment are due to maladjustment of the mask. These visits interrupt the scheduled appointments for patients still in the diagnostic stage of the illness.

Table 2. Waste with indirect impact on the patient.
• Delays in drafting diagnostic reports: Before a patient can begin treatment, sleep unit doctors must first draft a report on the results of the diagnostic test. In the unit there were files full of diagnostic test results waiting to be written up as reports. Managers were unable to estimate how many reports were pending, but the files contained diagnostic test results that were more than one year old. The sleep unit doctors drafted reports whenever they managed to find some free time.
• Patients who do not follow the treatment: Unit managers were aware that some patients who had started home treatment were not using the CPAP for the time required for an effective treatment. However, they were unable to estimate how many patients underused the CPAP, or even the number of patients receiving treatment. The economic cost of each CPAP amounts to €1.35 per day, and 15 patients were identified as not having used the machine for more than 10 years. Halfway through the project, 4137 patients in home treatment had been recorded, 15.52% of whom used the CPAP for less time than was required for an effective treatment.
• Sleep unit specialists dedicating time to activities outside the unit: Sleep unit patients obtain added value through the effective treatment of SAHS. Apart from spending time on activities that add value for patients, doctors and nurses from the sleep unit carry out other activities within the pneumology department, such as visiting admitted patients, receiving visits from supervisors, and helping with other activities outside the unit. These activities do not form part of their schedule and, when they arise, doctors and nurses must improvise in order to maintain a minimum service in the sleep unit.
• Paper-based information system: Initially, all information handled within the unit was on paper, such as the appointments diary, reports on test results, and the forms to be filled out by both doctor and patient during the consultation. A large amount of paper-based information was collected at different points along the patient flow, which led to the duplication of some data. This complicated data analysis and patient traceability for the unit management.
• Differences in criteria for the diagnosis and discharge of patients: After pooling the criteria for patient referral to the unit and the criteria followed by unit doctors for patient discharge, it emerged that there was no uniformity of criteria for decision making.
• Impossibility of quickly and easily understanding different aspects of unit management: Essential aspects of unit management, such as the current workload, the traceability of each patient, or the productivity of the unit, were difficult if not impossible for the unit specialists to ascertain. This was due to a lack of defined indicators and a lack of information with which to create them.
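To make these waste figures actionable, they can be expressed as simple indicator calculations. The short sketch below only reproduces the arithmetic implied by the figures reported above; the function names are ours, purely illustrative, and not part of the unit's actual information system:

```python
def repetition_rate(repeated: int, total: int) -> float:
    """Share of diagnostic tests that had to be repeated, as a percentage."""
    return 100.0 * repeated / total

def non_adherent_patients(in_treatment: int, underuse_share: float) -> int:
    """Estimated number of patients using the CPAP less than required."""
    return round(in_treatment * underuse_share)

def cpap_cost(days: int, cost_per_day: float = 1.35) -> float:
    """Cost in euros of keeping one CPAP deployed for a given number of days."""
    return days * cost_per_day

# Figures reported by the sleep unit:
print(repetition_rate(36, 482))             # 36 of 482 tests repeated, about 7.5%
print(non_adherent_patients(4137, 0.1552))  # about 642 patients underusing the CPAP
print(cpap_cost(365))                       # about €493 per patient per year
```

Even this trivial level of quantification supports the paper's point that defined indicators (rather than end-of-year totals) are what allow follow-up and control of waste.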
Once the different sources of waste were identified and pooled together, the implementation team assigned those responsible for carrying out the proposed improvement actions. The aim of these improvement actions was to eliminate waste, to create follow-up and control by means of defined indicators and to continuously improve processes over time by generating new added value. The following improvement actions were carried out to eliminate specific sources of waste grouped into seven categories (see Table 3). Table 3. Proposed improvement actions. Type of Improvement Comments Categorisation and prioritisation of patients according to seriousness Three different patient categories were defined depending on the seriousness of the illness, the risk of accident due to the symptoms of the illness and the patient's profession (the risk of accident is higher if a professional driver suffers from drowsiness, rather than an administrator). The categories of urgent, preferential and normal were established. The appropriate category is assigned to the patient by sleep unit professionals in the first consultation. Any patient diagnosed as urgent is always given priority in the flow. This action is similar to the triage method used in emergency departments. This method efficiently rations patient treatment when there are insufficient resources for all to be treated immediately. The sleep unit has a large number of patients on waiting lists. Since it is not possible to attend to all patients at once, priorities were established for administering medical care. Drafting technical instructions Technical instructions were drawn up for all activities where the probability of human error occurring was greater. The instructions explained the optimal way to carry out these activities. In the case of the diagnostic test apparatus, step-by-step instructions were drawn up for the positioning of each part of apparatus. Each step contained a small photo with an explanatory text. 
These technical instructions were posted on the wall in the sleep unit and placed in the bags that contained the diagnostic test apparatus.

Action procedure for patients who do not adhere to treatment

An action procedure was drawn up in conjunction with the external care provider company to identify patients who voluntarily underused the CPAP (continuous positive airway pressure). Under this procedure, the external care provider company would report a monthly list of non-adherent patients (including their reasons for not using the CPAP) and the sleep unit would evaluate each particular case. By way of example, for a patient who used the CPAP, but not sufficiently, efforts would be made to re-educate the patient in its use. In the case of a patient who did not use the CPAP at all, or refused to use it despite having been re-educated in its use, sleep unit doctors would have the necessary authority to recall the CPAP.

Categorisation and prioritisation of doctors' activities

The daily activities carried out by unit doctors were recorded over a three-week period. These activities were classified into three categories: key activities that impact patients directly, such as consultations; general activities that impact patients indirectly, such as training sessions for unit staff; and other activities that do not add value for the patient, such as meeting unit visitors. Target percentages for the time dedicated daily to each type of activity were established: 70% for key activities, 20% for general activities, and 10% for the rest. In addition, a proposed weekly schedule for doctors' activities was drawn up.

Computerisation of data capture, storage and analysis

During the case study, an "Access" computer application was developed to capture, store and analyse all data handled within the sleep unit. This includes doctor and patient forms, medical reports, test results and so on.
Data capture, analysis, treatment and storage would be centralised in a single application and database. The patient appointment process was also systemised: each patient was automatically allocated the maximum admissible time within which he/she should be seen (by appointment), in line with the patient's previously assigned category of urgent, preferential or normal. The patients on the waiting list were arranged from most to least urgent according to this maximum admissible time for receiving an appointment. Furthermore, if a patient is not seen within this maximum admissible time, a notice is sent to the person in charge of managing the appointment process.

Development of internal procedures and protocols

Internal procedures and protocols were drafted in order to standardise processes that were carried out by different doctors. This sought to homogenise diagnosis and treatment criteria for patients, reduce the probability of human error, and facilitate the task of estimating productivity, quality and efficiency indices.

Defining indicators that are linked to objectives

Indicators were defined for the appropriate management of each value-adding activity. These indicators were established to measure the following: unit productivity, such as the number of consultations per week and the number of reports drafted per week; quality, in terms of waiting time to access the system, waiting time between processes, total accrued patient waiting time, rate of test repetition, and number of patients not adhering to treatment; and system load, as in the number of patients in the system, the number under treatment compared to the number of diagnostic tests, and the number of patients suffering from or with symptoms of the illness. Depending on the situation, either a target value or an admissible value was established for each indicator.
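The appointment-ordering logic described above (a maximum admissible time derived from each patient's category, a waiting list sorted from most to least urgent by that time, and a notice when it is exceeded) can be sketched as follows. This is a minimal illustration, not the unit's actual Access application; the day limits per category and all identifiers are assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative maximum admissible times per category (in days);
# the real limits used by the sleep unit are not given in the text.
MAX_ADMISSIBLE_DAYS = {"urgent": 15, "preferential": 60, "normal": 180}


@dataclass
class Patient:
    name: str
    category: str          # "urgent", "preferential" or "normal"
    referral_date: date

    @property
    def deadline(self) -> date:
        """Latest date by which the patient should be seen."""
        return self.referral_date + timedelta(days=MAX_ADMISSIBLE_DAYS[self.category])


def order_waiting_list(patients):
    """Arrange patients from most to least urgent by their deadline."""
    return sorted(patients, key=lambda p: p.deadline)


def overdue_notices(patients, today):
    """Return patients whose maximum admissible time has been exceeded,
    i.e. those for whom a notice should be sent to the appointment manager."""
    return [p for p in patients if today > p.deadline]
```

Note that a recently referred urgent patient can overtake a normal patient referred months earlier, which matches the triage behaviour described in the text.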
Finally, the members' effort, commitment, and hard work were recognised in a joint presentation of the results from the experience to the hospital Board, interested parties from other hospital departments, and representatives from the Public Healthcare Innovation Plan. Each working group explained the role they had played in the project and their personal experience. Sleep unit staff expressed their commitment to the project, their responsibility for the KPI (key performance indicator) values, and their current overall vision of the patient flow. They were also satisfied with passing on the knowledge they had acquired and with proposing new aspects for improvement. The closing meeting served to showcase the results obtained through the effort, hard work and selfless involvement of both working groups.

Consolidation and Extension Stage

The follow-up and control of the implemented actions is primarily based on the revision and evaluation of the defined indicators, so as to measure how each one is performing. The values obtained from these indicators help to illustrate the effectiveness of each implemented action, since pre-implementation results can be compared with post-implementation results. The frequency with which data are gathered and processed should allow an updated view of the indicators for further follow-up and control. This in turn allows early detection of any possible deviations in the processes, and any necessary corrective or preventive action to be taken. The computer application that was developed greatly facilitates the follow-up, evaluation and control of indicators. The application can pull up reports on indicators, using a database created from all project data inputted since the application was launched. In this way, indicators can be extracted periodically, at the touch of a button.
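The periodic extraction of indicators from the project database, as described above, can be illustrated with a minimal sketch. The record fields and indicator names here are hypothetical stand-ins for the unit's actual schema; only the types of indicator (waiting time, test repetition rate, productivity) come from the text.

```python
from statistics import mean

# Hypothetical patient records; the field names are illustrative,
# not the actual schema of the unit's Access application.
records = [
    {"consultations": 1, "wait_days_to_test": 30, "test_repeated": False},
    {"consultations": 2, "wait_days_to_test": 45, "test_repeated": True},
    {"consultations": 1, "wait_days_to_test": 20, "test_repeated": False},
    {"consultations": 1, "wait_days_to_test": 25, "test_repeated": False},
]


def kpi_report(records):
    """Compute a few of the indicator types named in the text:
    a quality indicator (waiting time), a repetition rate, and
    a productivity count."""
    return {
        "mean_wait_days_to_test": mean(r["wait_days_to_test"] for r in records),
        "test_repetition_rate": sum(r["test_repeated"] for r in records) / len(records),
        "total_consultations": sum(r["consultations"] for r in records),
    }
```

With all records centralised in one database, a report like this can be regenerated at any time, which is the "touch of a button" behaviour the text describes.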
Understanding variability helps healthcare providers to more accurately model and address opportunities for improvement [72] and is the first step to improving a system [73]. Furthermore, McLaughlin [74] argues that variability should be seen as something to be managed and analysed rather than something to be eliminated entirely. To maintain and further enhance the results, the researchers proposed that the sleep unit manager and the head of the pneumology department should meet periodically with sleep unit staff to follow up the implemented actions and to share ideas on potential improvement areas. In practice, this follow-up process is equivalent to a permanent improvement team. In addition, internal audits were developed to verify the state of the improvements and the key parameter values for patient flow management. In any case, it is not enough for one department to streamline processes and improve service if the rest of the organisation continues to do business as usual [75] without progressing in the lean transformation process. Hence, efforts should be made to extend the experience to the rest of the hospital. In this respect, it is especially important to communicate the sleep unit's achievements to the other hospital departments. Thus, in order to complete the rollout of the methodology to the other units at the hospital, training sessions were held for the rest of the coordinators and managers. The trainers for these sessions were chosen from members of the first two teams. After these sessions, the coordinator for each unit worked individually to explain the methodology and lean principles to their staff, launching specific improvement teams with tasks similar to those of the pilot experience. Finally, once the proposed methodology has been fully deployed, permanent working teams will be adopted to analyse and improve the KPIs in each unit at the hospital, systematically proposing actions for improvement.
The system as a whole contributes to the organisational transformation associated with the implementation of lean management within the strategic objectives set by the hospital.

Results

The applied methodology for lean management implementation was based on the three supporting pillars of processes, personnel involvement and indicators, but also on the adoption of an appropriate working system. The resulting improvement actions led to improvements in the management of patient flow in the sleep unit in terms of quality improvement, cost reduction and productivity. In this regard, the categorisation and prioritisation of patients from the first consultation to the beginning of treatment led to a reduction in waiting time for urgent patients (quality improvement). Waiting time from the first consultation to the diagnostic test showed a reduction of 71.6% with respect to the initial measured values; from the diagnostic test to the start of treatment, the reduction was 81.6% (see Table 4). Developing the computer application to manage waiting lists systemised the ordering of each list according to the assigned timeframe for seeing each patient. The creation of technical instructions, procedures and protocols allowed process standardisation. This reduced the possibility of human error and made processes more predictable, and therefore easier to manage. The standardisation of processes helped to manage their complexity, allowed the acquisition of new skills, and fostered an understanding of the interrelationship between the different processes that constitute patient flow [76]. The follow-up and control of patients who do not adhere to treatment would result in an increase in treatment efficiency, if the action protocol is approved by the hospital managers.
In fact, recalling the CPAP (continuous positive airway pressure) from patients who remain non-adherent after being re-educated in its use would result in a considerable saving of €1.35 per patient per day (cost reduction). Although not fully implemented, the proposed weekly schedule for doctors' activities served to concentrate activities in continuous periods of time, thereby avoiding the previous case of activities being conducted intermittently throughout the day. System computerisation allowed the data required for unit management to be captured and stored which, along with patient traceability, reduced the amount of time unit doctors spent manually filling out forms and drafting reports. This, in turn, eliminated duplication in data collection along the patient flow. As a result of database storage, data analysis and indicator calculation became much simpler and required much less time. Furthermore, defining specific indicators helped to manage the unit according to its needs and resources, and allowed the systematic identification and resolution of process irregularities, thereby eliminating possible waste. Thus, establishing indicators and corresponding target values brought about significant improvements in productivity. At the end of the project, the number of diagnostic tests carried out had increased by 42.8%, and the number of diagnostic reports drafted had increased by 7.7% (see Table 4). Registering and quantifying the number of unsuccessful diagnostic tests helped identify repetitive human errors and repetitive faults in some machines, which were due to a lack of maintenance. By the end of the project, 41.5% of the 158 improvement ideas from the brainstorming session had been implemented, 16% were under review, 11.7% were awaiting analysis, and the remaining 22.3% were discounted for the time being, since their implementation required further investment.
There were other significant results of a more qualitative nature in relation to achieving competitive advantages. The applied methodology encouraged the involvement of personnel in improving patient services. Personnel were motivated and committed to the initiative, as reflected in the satisfaction survey that the improvement team completed at the end of the project. One of the most highly valued aspects in this survey was the possibility of proposing improvement actions and then implementing them. Similarly, many of the improvement team members displayed an interest in participating in more initiatives of this kind. While developing the methodology for lean management implementation in the unit, some aspects caused greater difficulty than others. It is important to note the following: the complexity of patient flow; the hospital's internal organisational restructuring that was undertaken in parallel; the unit staff's day-to-day work, which would often prevent them from dedicating time to the project; the lower involvement and consequently weaker commitment of personnel from outside the unit, such as GPs and the external care provider company; and the poorly developed processes related to managing information in the case of data collection, storage, analysis, follow-up and control.

Discussion

The methodology used in this case study for the preparation and subsequent implementation of lean management gave rise to significant improvements in the efficiency and effectiveness of the entire patient flow. This methodology allowed instant root cause analysis and allowed those participating to feel involved in the change [77]. Sleep unit staff must take credit for their involvement in the definitive introduction of changes and their maintenance and renewal thus far, and the pneumology department should also be thanked for its collaboration.
In particular, the sleep unit manager's participation, commitment and motivation in leading the implementation of lean management must be gratefully acknowledged. Moreover, the first steps in the introduction of change would not have been possible without the involvement of an external change agent. In our methodology, this role was undertaken by the researchers, who explained the need for change, provided unit staff with the necessary tools and training to make change, and instilled in them the importance of their involvement in improving the sleep unit, breaking organisational inertia and restraints on change. Logically, the success of the experience is also based on the commitment and support of the hospital management, who aligned the project's objectives with the global ones; thus, the preliminary stage helped generate a climate of trust and collaboration between the hospital management and the researchers. That climate was maintained and reinforced during the other stages by the conceptual team. In the literature there are few detailed examples illustrating the need for collaboration between the worlds of academia and business in a bid to create and validate knowledge within the sphere of health services and lean management. That is why the action research approach adopted here is relevant from a scientific point of view. Such collaboration and knowledge sharing between the hospital and the authors make it possible to qualify and enhance the individual views of each party involved in the research, which underpins the necessary scientific rigour of the action research approach. This approach represents a valuable contribution to the scientific literature which goes beyond more technical or theoretical approaches to lean management in healthcare.
As was seen in the case study, lean management implementation is not only a matter of analysing each process as an isolated entity, since the greatest complexity and the greatest waste are often found in the links among these processes. Consequently, the greatest potential for improvement lies in those links and functions. By centring the methodology on the processes ("Go to Gemba", redesigning and/or reviewing tasks), improvement and standardisation are facilitated. Standardisation is key when objectively laying the foundations for capacities, productivities and terms, setting targets and identifying deviations. Process observation and documentation proved effective, because they can serve as a real eye-opener for staff members who have never before examined their own processes in this way. Since they are immersed in these processes, staff are not normally able to see the surrounding complexity [78]. For example, an important aspect in the case study, both for improving quality and for improving the effectiveness and efficiency of the service, was the definition of three patient categories in the sleep unit (classification). Not all patients have the same diagnosis or are subject to the same risks; consequently, the medical care they receive throughout the patient flow should be appropriate to the seriousness of their symptoms and their personal circumstances. The selection of the place and time to launch the pilot improvement team (second stage) is also an important element when it comes to planning activities, because it is critical to start with areas, departments or centres that not only ensure success is possible, employing the fewest resources and lean tools and practices that are not very sophisticated, but that also involve workers who are particularly motivated and proactive (in the case study, this was the sleep unit).
The good results obtained in these initial experiences are an excellent calling card and incentive for other areas, departments or centres (in the consolidation-extension stage) when launching new improvement teams in more complex settings that require more sophisticated lean tools or practices. The value of these experiences as promoters of transformation or change can be summed up in the common saying: "actions speak louder than words". Moreover, the "invisible" hierarchical and functional barriers must be broken down. These are traditionally strong in the healthcare sector, and particularly so in hospitals, where there are clearly defined groups (doctors, nurses, administrators, etc.). In order to transform organisations with a lean culture, spaces or forums must be created for integration, exchange and collaboration, which justifies proposing that work be done in a participative way through teams. The new lean culture means that some roles have changed. For example, managers have become teachers, mentors and facilitators rather than simply directors or controllers. This is especially true for the sleep unit manager, who was a key factor in the successful implementation of lean management. Her critical role in the project was brought into sharp focus with the attempted implementation of a proposed weekly schedule for doctors' activities: her refusal to take responsibility for this action meant a problematic implementation, which took longer than expected and ultimately was not fully realised. Another important result is the sustainability of lean management, now that the knowledge has been imparted to unit managers and staff. For example, the sleep unit manager now speaks at conferences explaining this case study's origin, methodology, development, implementation, results and sustainability. Thus, she is acting as a change agent by sharing this knowledge with sleep unit personnel from other hospitals.
In other studies, it has been shown that, following the departure of the consultant, most companies experienced a decrease in improvement. It is crucial, therefore, that the consultant's lean management knowledge and skills are transferred to the organisation, so that once the consultant leaves, the company has the capability to sustain its lean transformation [79]. This all involves simplifying, rationalising, eliminating bureaucracy and so on. In essence, eliminating waste. In order to do this, an internal climate of trust must be created to encourage each person from within each area, department or centre to leave their comfort zone. In many cases, these personal barriers and settled arrangements are stronger in public organisations than in private ones. In short, internal transformation will have been achieved if lean principles can be inserted into the day-to-day activity of the organisation, not just in specific projects with a start and end date. This is also justification for fostering deployment of permanent teams during the consolidation-extension stage. Likewise, when it comes to programming the activities or priorities of the improvement teams, it is interesting to pay attention not only to activities which impact heavily on patient care and the global objectives of the organisation, but also to activities that can facilitate the work of the people involved in the processes. That helps to increase individual satisfaction and motivation levels and actively contributes to the internal transformation towards lean management attitudes and culture. As Gowen III & McFadden [54] argued in their study, the intangible factors of management, such as leadership, people management and partnerships, are crucial in the implementation of lean management. Many healthcare organisations failed in implementing lean principles because they were unable to handle these intangibles. 
In this case study, the intangible factors of unit management were taken into account from the beginning, and the success achieved in implementing actions can be attributed to this. Thus, it is worth noting the satisfaction shown by the members of the various working teams towards the methodology. Such satisfaction promotes staff motivation and commitment to the hospital's strategic goals, actively helping to improve its efficiency and level of service. There are three complementary areas in which this satisfaction is clearest [29]: (1) the opening up of a structured, systematic communication channel for analysing and dealing with problems or improvements linked to the processes people were participating in; (2) the possibility of participating directly in the improvement of their working conditions; and (3) the possibility of receiving feedback on the quality of work undertaken. At the same time, the use of KPIs comprised one of the driving forces behind the development, implementation and, in particular, sustainability of lean management. The KPIs keep lean management alive; if they are neglected, the management system would fail, and re-initiating it would require starting again from the beginning. Furthermore, decision making on management processes should be based on quantified facts that show quantified improvements. If this is not the case, the decisions made may lead to erroneous management and the consequent generation of waste. In this context of KPI deployment, adjustment of the information system is relevant in order to provide or calculate the values of the KPIs. In many cases, such as the one described here, the first teams work with a preliminary information system that acts as a starter, but sooner or later a certain level of "professionalisation" is needed to improve reliability and to connect the KPIs to the organisation's global indicators (with approaches such as the balanced scorecard).
This situation means that some additional resources (not just IT resources) must be foreseen and planned for in order to maintain a widespread lean culture. Given that available resources are always limited, in the worst cases some of the proposed improvements may not be seen as a priority within the global context and could therefore be delayed or even ruled out. When that happens, it must be communicated and explained so as not to cause indifference and demotivation among personnel.

Future Avenues

The analysis of the literature and the methodology proposed in this paper identify lines of applied research for academics and practitioners within the scope of lean management implementation in the health sector. Thus, a lean transformation in any organisation should be addressed with a mid- to long-term perspective. However, the applied experiences in the literature tend to focus on the early stages of this transformation, almost always with an analysis lasting fewer than 24 months (our case study, for example, ran for 8 months). At the same time, a large number of the papers or reviews found in the literature focus on hospitals; however, there are other, more specific centres (for example, health centres or primary healthcare) that, because of their number, care provision profile and size, could require a specific analysis and adaptation of the methodology when implementing lean principles. Likewise, benchmarking (internally in each organisation and externally between them) is a little-explored field and can be used to compare and classify practices, results and indicators related to lean management. Logically, given that these KPIs can differ in their conception and implementation, prior work would be required to standardise them and bring them into line with a broader quantitative baseline. Such a comparative approach could also be used to identify differences depending on country or region, public or private organisation, or the type of area or department.
Limitations

One significant limitation was the study's isolated implementation in a single hospital unit. For this reason, we propose that future research should deal with the implementation of this methodology in other hospital departments and health services, so that it can be validated more widely. Furthermore, the area of the hospital chosen to illustrate the launching stage of our methodology, the sleep unit, required unsophisticated lean techniques and tools. In other applications, the diversity and complexity of these tools and indicators would be greater, which would lead to some variation in the effort required in the methodology when it comes to providing the necessary training and resources. Finally, the ultimate objective of lean management and the proposed methodology is to eliminate waste in the various processes used in health sector organisations; however, some of the improvements (albeit on a small scale) could come into conflict with the basis for medical or healthcare treatment, the good use of available technologies, or even the regulations in force. Logically, in such situations, common sense would have to prevail over any potential for improvement.

Conclusions

Internal management of healthcare is incredibly complex and a vast quantity of data is collected. In general, it is still not possible to easily identify how a hospital is performing in terms of quality, cost and delivery of services, because a great deal of the information gathered is not linked to measuring the efficiency of processes. Thus, it is important to identify what adds value for the patient and what information and activities are necessary in order to provide added value in the best possible way, in line with lean principles. The participative methodology offers guidelines to achieve this by focusing on the generation of added value for the patient and eliminating waste in service provision.
The proposed methodology could be adapted and adopted by any hospital department, in any other hospital, or in other health services in other countries. Lean management's successful implementation relies on a suitable choice of change agent who can foster change, on good management and coordination of actions related to the three pillars of processes, personnel and KPIs, and on the existence of a true leader in the organisation who is responsible for the whole process of change. The advantages of our participative methodology go beyond the important improvement in hospital unit operations, because people's commitment and involvement make it possible to bring the organisation into line with lean principles. Likewise, the methodology is participative and proposes support through teams that include people from different functions and hierarchical levels. This across-the-board structure is very important because it fosters cohesion, breaks down barriers, and generates a culture of teamwork and learning. On the basis of what was developed in this paper, it can be said that the main research question was answered affirmatively. That is, it is possible to define and implement a participative methodology that systematically seeks to redesign processes in health services, by applying the scientific action research approach and under a lean management perspective.
Early stratification of radiotherapy response by activatable inflammation magnetic resonance imaging

Tumor heterogeneity is one major reason for unpredictable therapeutic outcomes, while stratifying therapeutic responses at an early time may greatly benefit the better control of cancer. Here, we developed a hybrid nanovesicle to stratify radiotherapy response by an activatable inflammation magnetic resonance imaging (aiMRI) approach. High Pearson's correlation coefficient R values are obtained between the T1 relaxation time changes at 24-48 h and the ensuing adaptive immunity at day 5 (R = 0.9831), and between the T1 changes and the tumor inhibition ratios at day 18 (R = 0.9308) after different treatments, respectively. These results underscore the role of the acute inflammatory oxidative response in bridging innate and adaptive immunity in tumor radiotherapy. Furthermore, the aiMRI approach provides a non-invasive imaging strategy for early prediction of therapeutic outcomes in cancer radiotherapy, which may contribute to the future of precision medicine in terms of prognostic stratification and therapeutic planning.

Non-invasive approaches to stratify early responses to therapy are of great value to improve cancer patient management. Here the authors design a hybrid nanovesicle-based activatable inflammation magnetic resonance method for the early prediction of response to radiotherapy.

Early prediction of cancer treatment efficacy is of great value to cancer patients 1,2. Although there are many strategies and combination systems available for the deployment of cancer therapy in the clinic, successful cancer control is still underrepresented 3. Mounting evidence indicates that cancer patients have varying degrees of response to different treatments due to tumor heterogeneity, which makes treatment planning imperative considering the fast progressive nature of cancers 4,5.
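The stratification result above rests on computing Pearson's correlation coefficient R between the T1 relaxation time changes and later outcome measures. A minimal sketch of that computation follows, using made-up illustrative data rather than the paper's measurements:

```python
import math


def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Illustrative (hypothetical) data: T1 changes (ms) at 24-48 h after
# treatment vs. tumor inhibition ratios (%) at a later time point.
delta_t1 = [50, 120, 200, 310, 420]
inhibition = [10, 25, 42, 60, 85]
r = pearson_r(delta_t1, inhibition)
```

An R close to 1 across treatment groups is what licenses using the early T1 change as a surrogate for the eventual therapeutic outcome.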
One would expect that therapeutic efficacy could be predicted at an early time of cancer treatment, which would greatly benefit cancer patients in terms of reducing side toxicity from ineffective treatments and, more importantly, earning time for better treatment optimization 6. However, it is technically demanding and challenging to determine whether a given treatment will be effective at an early time, and failing to do so could lead to a devastating outcome if the tumor growth is not controlled before its invasion and metastasis. Biopsy-based biomarker detection methods have been utilized to monitor therapeutic response but are subject to errors and the limitations of invasive tissue collection 2. Therefore, non-invasive and accurate methods enabling insights into therapeutic response are highly desirable for better management of cancer therapy [7][8][9]. Radiation therapy (RT), referring to external-beam X-ray irradiation hereafter, has gained acceptance for treating over 50% of cancer patients in the clinic. RT can lead to DNA damage in tumor cells that mediates the dysregulation of tissue resolution and homeostasis, resulting in the development of late adverse effects (e.g., vasculopathy and hypoxia) 10,11. Unfortunately, effective RT is largely compromised by heterogeneous radiation responses and radioresistance, which vary among cancer types and individuals 12. Therefore, early assessment of RT response from the biological side has received much attention through evaluating vascularity changes 9, mapping pathologic variations 13,14, or quantifying cancer stem cells 15, among many other approaches [16][17][18]. However, targeting the molecular mechanisms of RT response for therapeutic prognosis is still rare. Inflammation is an essential mechanism of innate immune reactions responding to acute stimulation from microbial invasion or tissue injury.
The hallmark of sterile inflammation is the rapid infiltration of polymorphonuclear neutrophils (PMNs), which can produce large amounts of reactive oxygen species (ROS) through the secretion of the bactericidal and tumoricidal enzyme myeloperoxidase (MPO). Recent studies showed that exogenous ROS generation can lead to polarization of antitumor M1 tumor-associated macrophages and recruitment of cytotoxic lymphocytes and T cells 19,20. It is worth noting that PMN infiltration during acute inflammation can augment the local ROS level by up to 20-fold 21,22, and the ROS itself can be a lethal factor to tumor cells 23. More importantly, therapy-induced inflammatory responses contribute to adaptive immune responses through ROS-mediated cell apoptosis and the ensuing activation of immune T cells 24, in which neutrophils may play an important role in bridging the innate and adaptive immunity. Taken together, we hypothesized that ROS generation from radiation-induced acute inflammation may serve as a molecular target for early stratification of RT outcomes in cancer therapy. Recently, ROS-based imaging probes have been extensively studied using positron emission tomography (PET) and optical methods for assessing disease states 25-29. Although PET and optical methods can provide excellent detection sensitivity, the lack of anatomical soft-tissue contrast limits the resolution of spatial distribution in tumor, leaving a major gap in resolving the imaging results for accurate quantification and stratification. Magnetic resonance imaging (MRI) can provide excellent anatomical accuracy in soft tissues; however, conventional MRI fails to meet the criteria of high sensitivity, and an approach to RT response stratification remains elusive. In this work, we propose an activatable inflammation MRI (aiMRI) approach for early stratification of RT response in a quantitative manner (Fig. 1a).
The radiation-induced inflammatory response is responsible for the infiltration of neutrophils and the production of ROS. On the one hand, ROS-mediated cell apoptosis can lead to adaptive immune responses, which may take days to weeks after RT treatment. On the other hand, the inflammatory ROS can specifically oxidize hydrophobic thioethers to hydrophilic sulfones 30, which can be engineered as a ROS-responsive platform for activatable MRI. The quantitative MRI is operated at 24-48 h after RT and is further examined by correlating the T1 relaxation changes with the corresponding tumor inhibition rates. The OFF-ON phenomenon for activatable T1 MRI is leveraged by the interplay between a T2 quencher (Q) and a T1 enhancer (E), which is mainly governed by the quencher's T2 effect and the Q-E distance. Here, we used triblock poly(ethylene glycol)-poly(propylene sulfide)-poly(ethylene glycol) (PEG-PPS-PEG) amphiphilic copolymers to fabricate a nanovesicle (NV) structure containing small-sized iron oxide nanoparticles (IO NPs) in the membrane and gadolinium (Gd) species on the surface, denoted as IO-Gd NVs (Fig. 1b). The self-assembly and disassembly of the IO-Gd NVs confer dual-positive factors to the T1 OFF-ON effect: (1) the quencher's T2 effect is decreased upon disassembly due to the dispersed magnetic field coupling effect 31; (2) the Q-E distance is increased due to the oxidation-induced swelling of polymers equipped with Gd species 32,33. To facilitate quantification, we used MRI relaxation maps to quantify the T1 relaxation time changes in the aiMRI, allowing us to correlate the T1 relaxation changes derived from the aiMRI with the ensuing adaptive immune responses and the tumor growth rates in different treatment models.

Results

The aiMRI nanoprobe. The triblock PEG-PPS-PEG-NH2 copolymers were synthesized through a procedure modified from the literature and characterized by proton nuclear magnetic resonance (NMR) spectroscopy (Supplementary Figs. 1-3).
Self-assembly of the triblock copolymers led to the formation of blank NVs with diameters of around 100 nm in transmission electron microscopy (TEM) images (Fig. 2a). The presence of amine terminal groups on the blank NVs was confirmed by zeta-potential analysis (Supplementary Fig. 4). The membrane thickness of the blank NVs is about 6-8 nm, indicating a uniform lamellar assembly structure. Previously, we showed that small-sized hydrophobic Au NPs can be incorporated into the membrane during the assembly of PPS-PEG NVs 30. Here, we used 5 nm IO NPs as building blocks to attain IO NVs, which were further modified with Gd species to attain IO-Gd NVs (Fig. 2b and Supplementary Fig. 5). We further prepared single-component IO NVs and Gd NVs, which show hydrodynamic diameters similar to those of the blank NVs and IO-Gd NVs (Fig. 2c and Supplementary Fig. 6). IO-Gd NVs with different Fe:Gd ratios were obtained by tuning the feeding amounts of IO NPs and Gd species (Fig. 2d-f). The three IO-Gd NVs with Fe:Gd ratios of 35.5:1, 144:1, and 21:1 (Figs. 2b, d, and e, respectively) show good uniformity in both size and morphology. To investigate the degradation behavior of the blank NVs in response to an inflammatory milieu, we used H2O2 and MPO to mimic the inflammation-mediated oxidative burst, in which MPO converts H2O2 into HClO in the presence of chloride ions. While the oxidation strength of H2O2 is slightly higher than that of HClO 34, the latter has faster oxidation kinetics and has been reported as a promising alternative to H2O2 with greatly enhanced oxidative efficiency 35. We monitored the UV-vis absorption of the blank NVs at different time points after incubation with H2O2 (500 μM), MPO (5 U/mL), and NaCl (500 mM). The results show that the absorption between 400 and 475 nm increased remarkably as early as 10 min after incubation, which may be ascribed to the formation of hypochlorites (Fig. 2g).
Meanwhile, the gradually decreased intensity beyond 500 nm indicates decomposition of the blank NVs due to oxidation and swelling of the membrane. This behavior is comparable to the oxidation of blank NVs incubated at a high concentration of H2O2 (10 mM) without MPO (Supplementary Fig. 7A). In contrast, the blank NVs show negligible changes in absorption intensity when incubated with 500 μM H2O2 without MPO (Supplementary Fig. 7B). The changes of chemical shifts in the proton NMR spectrum also confirmed the successful oxidation of the PPS backbone (Supplementary Fig. 8). These results indicate that MPO can potentiate the efficacy of H2O2 to oxidize PPS units at an inflammation-relevant concentration. We further used TEM to characterize the morphology changes of the blank NVs and IO-Gd NVs after inflammatory oxidation. By taking aliquots from the solutions at different time points, we observed gradual degradation of the blank NVs initiated at around 10 min after incubation (Fig. 2h). Oxidation of the hydrophobic PPS units leads to swelling of the membrane backbone of the NVs, whereas the residual hydrophobic components underwent reverse micellation within 4 h, lasting to 24 h of incubation in the TEM images. This process was also monitored by DLS measurements, which show gradually decreasing hydrodynamic diameters with increasing incubation time (Supplementary Fig. 9). For the IO-Gd NVs, the DLS results show a slight increase of hydrodynamic diameter at 10 min and 1 h of incubation, probably due to micellation of the residual IO NPs. At later time points, precipitation was observed in the IO-Gd NVs, which further indicates the decomposition of the vesicular structures under the artificial inflammatory milieu. The MRI performance of the IO-Gd NVs was evaluated on a 7 T MRI scanner.
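The r1 and r2 relaxivities obtained from such measurements are, by definition, the slopes of the relaxation rate 1/T against contrast-agent concentration, and the phantom activation metric used later in this section is the rate change ΔR1 = 1/T1,post − 1/T1,pre. A minimal numerical sketch with hypothetical phantom values (not the study's data):

```python
import numpy as np

def relaxivity(conc_mM, T_ms):
    """Relaxivity (mM^-1 s^-1): slope of the relaxation rate R = 1/T (s^-1)
    versus agent concentration (mM), from a linear least-squares fit."""
    rates = 1.0 / (np.asarray(T_ms, float) * 1e-3)  # T in ms -> R in s^-1
    slope, _intercept = np.polyfit(np.asarray(conc_mM, float), rates, 1)
    return float(slope)

def delta_R1(T1_pre_ms, T1_post_ms):
    """Relaxation-rate change Delta R1 = 1/T1,post - 1/T1,pre (s^-1)."""
    return 1.0 / (T1_post_ms * 1e-3) - 1.0 / (T1_pre_ms * 1e-3)

# Hypothetical Gd phantom series: concentration (mM) vs measured T1 (ms)
conc = [0.05, 0.1, 0.2, 0.4]
T1 = [1000.0, 555.6, 294.1, 151.5]
print(round(relaxivity(conc, T1), 1))      # slope of 1/T1 vs concentration
print(round(delta_R1(2000.0, 1400.0), 3))  # T1 shortening upon activation
```

The same slope computation applies to r2 with T2 values in place of T1.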
The r1 relaxivity values are inversely related to the Fe:Gd ratio of the different IO-Gd NVs, from 1.13 ± 0.36 to 1.47 ± 0.31 and 6.39 ± 0.49 mM^-1 s^-1 for Fe:Gd ratios of 144:1, 35.5:1, and 21:1, respectively (Fig. 3a). The sharp decrease of the r1 values from 6.39 to 1.47 mM^-1 s^-1 could be due to an emerging magnetic field coupling effect, which is highly dependent on the distance between IO NPs. Interestingly, the r2 relaxivity values of two IO-Gd NVs (Fe:Gd ratios of 144:1 and 35.5:1) and the IO NVs are similar, 174.5 ± 21.3, 195.3 ± 11.5, and 188.4 ± 17.6 mM^-1 s^-1, respectively, which are much higher than that of the IO-Gd NVs with a Fe:Gd ratio of 21:1 (r2 = 113.7 ± 12.7 mM^-1 s^-1) (Fig. 3b). Compared with the r1 value of the Gd NVs (16.3 ± 3.2 mM^-1 s^-1), these results indicate that the clustering effect of IO NPs can 'quench' the T1 relaxivity of adjacent Gd species due to the significantly enhanced T2 shortening effect of the IO-Gd NVs. The r2 values of the single IO NPs and the Gd NVs are 63.5 ± 4.6 and 31.2 ± 1.5 mM^-1 s^-1 (Supplementary Table 1).

Fig. 1 Illustration of the activatable inflammation magnetic resonance imaging (aiMRI). a Radiotherapy (RT)-induced acute inflammatory response leads to reactive oxygen species (ROS) production, which exerts tumor inhibition through ROS-induced cell apoptosis and T-cell activation pathways. The adaptive immune responses usually take days to weeks after RT. The aiMRI is applied to quantify the ROS at an early time (24-48 h) after RT, which is proposed to stratify the tumor inhibition. b The procedure of self-assembly and disassembly of the aiMRI nanoprobe, which is composed of iron oxide nanoparticles (IO NPs), gadolinium (Gd) species, and triblock PEG-PPS-PEG-NH2 polymers, denoted as IO-Gd NVs. The oxidation of hydrophobic thioethers to hydrophilic sulfones leads to swelling of the polymers and decomposition of the IO-Gd NVs.
This procedure confers dual-positive factors to the T1 MRI OFF-ON effect: (i) the quencher's T2 effect is decreased upon disassembly due to the dispersed magnetic field coupling effect; (ii) the quencher-enhancer (Q-E) distance is increased due to the oxidation-induced swelling of polymers equipped with Gd species.

To examine the response to inflammation, we studied the r1 values of the IO-Gd NVs with a Fe:Gd ratio of 35.5:1 (Supplementary Fig. 10). Moreover, the r1 values were also positively correlated with the MPO concentration under the same H2O2 concentration of 500 μM (Supplementary Fig. 11). The r2 values of the IO-Gd NVs slightly decreased after oxidation due to the reverse micellation of the IO NPs (Supplementary Fig. 12 and Table 2). The r1 and r2 values of the Gd NVs show similar trends upon oxidation (Supplementary Fig. 13). In the T1-weighted phantom images, the intact IO-Gd NVs show dark contrast (Fig. 3d). This phenomenon arises from the mechanism of acquiring T1-weighted MRI: the longitudinal (T1) magnetization has to flip down to the transverse plane to be detected, giving rise to the 'quenching' of the T1 contrast by the T2 shortening effect. In this respect, a strong T2 effect may impair the recovery of T1 magnetization during the T1 signal acquisition, reducing the T1 contrast, and vice versa. Upon oxidation, the bright T1 contrast of the IO-Gd NVs appeared with increasing concentration of H2O2 while keeping the amounts of NaCl and MPO constant, which may be attributed to the ROS-responsive decomposition of the IO-Gd NVs and the swelling of the Gd species. The T1 relaxation time maps were further constructed from a series of T1 phantoms with different parameters, allowing us to compare the relaxation changes in a quantitative manner (Fig. 3e, f). Moreover, the quantitative analysis of relaxation changes (ΔR1 = 1/T1,post − 1/T1,pre) indicates nearly linear correlations with the concentration of H2O2 (Fig. 3g).

In vitro and in vivo evaluation of aiMRI.
Cell viability assays show that the blank and IO-Gd NVs have negligible cytotoxicity to both U87MG and 4T1 cells at polymer concentrations up to 800 μg/mL (Supplementary Fig. 14), indicating the good biocompatibility of these samples. To evaluate the feasibility of aiMRI in vivo, we established a mouse sterile inflammation model and conducted T1 MRI using IO-Gd NVs, IO NVs (always OFF), and Gd NVs (always ON) as contrast agents, respectively. Bright contrast was observed in the inflammation spot for mice intravenously injected with IO-Gd NVs and Gd NVs at equivalent doses, whereas mice treated with IO NVs showed negative contrast in T1 MRI at 24 h post injection (p.i.) due to the dominant T2 effect (Supplementary Fig. 15A). The semi-quantitative signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) analyses show that aiMRI using IO-Gd NVs provides 2.4- and 9.1-fold enhancement of the SNR and CNR of mouse inflammatory foci at 24 h p.i., respectively (Supplementary Fig. 15B). These results indicate that the IO-Gd NVs are a good candidate for aiMRI in vivo in terms of high biosafety and output efficiency. To explore the aiMRI in mouse tumors after X-ray irradiation, we performed T1 MRI of subcutaneous U87MG tumors after receiving control (no irradiation), 2, or 8 Gy irradiation using IO-Gd NVs as the contrast agent. The results show that the intravenously injected IO-Gd NVs produced negative contrast in mouse tumors without irradiation (Fig. 4a). The accumulation of IO-Gd NVs led to gradually increased negative contrast in tumor, with a maximum reached at 24 h p.i. The semi-quantitative analysis also reveals that the SNR_tumor decreases by a factor of 3.4 at 24 h p.i. compared with that of the pre-contrast images (Fig. 4b). In contrast, mouse tumors receiving 8 Gy X-ray irradiation show bright contrast at 24 h p.i. with an SNR_tumor of 177%, which slightly decreases to 141% at 48 h p.i. (Fig. 4a, b).
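The semi-quantitative SNR and CNR values above are typically computed from region-of-interest (ROI) means against the background noise; the exact ROI definitions are not given in the text, so the following is a sketch under that common convention, with hypothetical intensity samples:

```python
import numpy as np

def snr(roi, noise):
    """Semi-quantitative SNR: mean ROI intensity over background noise s.d."""
    return float(np.mean(roi) / np.std(noise))

def cnr(roi, reference, noise):
    """CNR: difference of ROI and reference means over background noise s.d."""
    return float((np.mean(roi) - np.mean(reference)) / np.std(noise))

# Hypothetical intensity samples (a.u.) from tumor, muscle, and background ROIs
tumor = np.array([820.0, 845.0, 790.0, 805.0])
muscle = np.array([410.0, 395.0, 420.0, 400.0])
background = np.array([21.0, 18.0, 25.0, 16.0])
print(round(snr(tumor, background), 1), round(cnr(tumor, muscle, background), 1))
```

Ratios of post- to pre-contrast SNR_tumor values give the fold changes (e.g., the 2.4-fold SNR enhancement) quoted in the text.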
The slightly decreased SNR_tumor is possibly due to diffusion of the Gd species after decomposition of the IO-Gd NVs. However, mouse tumors receiving a low-dose X-ray irradiation (2 Gy) show little change in T1 MRI signal and SNR_tumor. The use of Gd NVs and IO NVs as contrast agents resulted in always-ON and always-OFF phenomena in T1 MRI of mouse tumors receiving 8 Gy X-ray irradiation, consistent with the aiMRI results in vitro and in the in vivo mouse inflammation models (Supplementary Fig. 16). We further explored the differences in inflammatory response between mouse tumors receiving different doses of irradiation. ELISA measurements of IL-6 and TNF-α show that the mouse group receiving 8 Gy irradiation produced significantly higher levels of both inflammatory factors in the plasma compared with the group receiving 2 Gy irradiation, especially at the 48 and 72 h post-treatment time points (Supplementary Fig. 17A, B). This trend is also consistent with the IL-6 levels measured in the tumor mass; however, the differences in TNF-α level in the tumor mass between the groups receiving 2 and 8 Gy irradiation are marginal (Supplementary Fig. 17C, D). This phenomenon may be associated with dysfunctional ROS resolution and oxidative DNA damage in the tumor after X-ray irradiation 36,37. The hallmark of tumor inflammation is the infiltration of neutrophils, which in turn secrete the tumoricidal enzyme MPO. Immunofluorescence staining results show that 8 Gy irradiation elicits a significantly higher level of MPO in mouse tumors compared with the control group and the group receiving 2 Gy irradiation (Fig. 4c and Supplementary Fig. 18). The infiltration of neutrophils from peripheral vessels into deep tumor tissue over time was also observed. Furthermore, we investigated the relationships between the early-time inflammatory ROS level and the late-time RT outcomes in U87MG mouse tumor models.
We used quantitative aiMRI to stratify the inflammatory ROS levels at 24-48 h post irradiation in mouse groups receiving 0 (control), 2, or 8 Gy irradiation. Pre- and post-contrast T1-mapping MRI were acquired for each mouse of the three groups at 24 and 48 h after RT treatment, respectively. Representative T1 relaxation time maps of the three groups at the respective pre- and post-contrast time points are presented in Fig. 4d, obtained from the reconstruction of a series of T1 parameters (Supplementary Fig. 19). Moreover, the multi-slice acquisition of MRI enables an anatomical analysis of the T1 relaxation time changes covering the whole frame of the tumor (Supplementary Figs. 20 and 21). These results allow us to record the quantitative T1 MRI changes in tumor. For example, mice receiving 8 Gy irradiation had a remarkably higher T1 relaxation time change (673 ± 183 ms) compared with the control (173 ± 33 ms) and 2 Gy irradiation (264 ± 72 ms) groups (Fig. 4e, *p = 0.031 and **p = 0.0054, one-tailed homoscedastic t-test). The group receiving 8 Gy irradiation shows slightly higher tumor uptake of IO-Gd NVs than the control and 2 Gy groups (Supplementary Fig. 22), possibly due to the enhanced vascular permeability in tumor during the inflammatory response. This result further indicates that the prominent T1 relaxation time changes in mice receiving 8 Gy irradiation are mainly ascribed to the aiMRI rather than other factors. The tumor growth curves after RT show that the group receiving 8 Gy irradiation had significantly lower average tumor growth rates compared with the control groups (Fig. 4f, **p = 0.0032, one-tailed paired t-test). However, the tumor growth rates for individual mice show great variation across all treatment groups due to the heterogeneous RT responses (Supplementary Fig. 23). The mouse body weight and the survival rate were recorded for 20 and 40 days post irradiation, respectively (Supplementary Fig.
24). Representative hematoxylin and eosin (H&E) staining and terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) immunofluorescence staining results show a higher level of cell death in 8 Gy treated mice compared with the control groups (Supplementary Fig. 25). To investigate the correlations between different factors in the RT, we plotted the T1 relaxation time changes against the corresponding tumor volume changes for individual mice of the different groups (Fig. 4g). The results show that the mouse groups receiving different doses of irradiation have convergent correlations with each other, consistent with the averaged tumor growth curves. More importantly, the Pearson's correlation coefficient (R = −0.8001) obtained from the divergent analysis indicates that a higher T1 relaxation time change in tumor after RT is strongly correlated with a smaller tumor volume change at 20 days post treatment. Strong Pearson's R values were also recorded between the irradiation doses, the T1 relaxation changes, and the tumor inhibition rates (Supplementary Fig. 26), indicating the great promise of using aiMRI to stratify the RT outcomes.

Stratification of RT response by aiMRI. Encouraged by the above results, we further investigated how aiMRI can be applied to bridge the innate and adaptive immunity after RT. We studied the aiMRI and the immune response in Balb/c mouse 4T1 tumor models. RT was conducted on four groups: RT only, RT + aLy6G (neutrophil-depleting antibody), RT + G-CSF (granulocyte colony-stimulating factor), and RT + DPI (diphenyleneiodonium, a NADPH oxidase inhibitor). An RT dose of 15 Gy was chosen in the 4T1 tumor model due to the evidenced radiotherapy response 24. Although the mechanism is not clear, it is accepted that a high-dose ablative RT, rather than multiple low doses, is able to reduce the myeloid-derived suppressor cells, thus reversing the immune-suppressive nature of the tumor 38,39.
We acquired the aiMRI for each mouse at pre- and post-contrast time points corresponding to 24 and 48 h after RT treatment, respectively (Fig. 5a-f and Supplementary Fig. 27). The representative T1 mapping results show that the control group had an average T1 relaxation time change of 323 ± 37 ms (Fig. 5g, **P = 0.0014, one-tailed homoscedastic t-test). In addition, the mouse groups treated with RT + aLy6G and RT + DPI had much lower T1 relaxation time changes (273 ± 33 and 364 ± 121 ms, respectively) than the group with RT only (574 ± 62 ms). In contrast, the highest T1 relaxation time changes were recorded in mice treated with RT + G-CSF (1372 ± 221 ms), according to the multi-slice analysis of the aiMRI results (Supplementary Figs. 28 and 29). This phenomenon can be ascribed to the enhanced CD11b+Gr-1+ neutrophil infiltration in tumor by RT and G-CSF stimulation (Supplementary Fig. 30). Moreover, the segregation of immature (CD101−) and mature (CD101+) neutrophil subsets revealed that mouse tumors treated with X-ray + G-CSF contain a significantly higher fraction of mature neutrophils in the tumor, but not in the blood, compared with the other groups (X-ray or G-CSF alone) (Supplementary Fig. 31). These results further indicate that RT-mediated neutrophil infiltration can contribute to the oxidative stress in tumor cells, which can be alternatively quantified by the aiMRI. At the fifth day after treatment, the ROS-mediated tumor cell death in mice treated with RT + G-CSF was 2.3-fold higher than that with RT only (Supplementary Fig. 32). Moreover, the frequency of CD4+CD8+ T cells in the splenocytes of mice examined at day 5 post treatment with RT + G-CSF was much higher (5.91%) than those of the other groups, which ranged from 0.72% to 2.19% (Fig. 5h, i).
In addition, the frequency of Treg cells in the splenocytes of mice treated with RT + G-CSF decreased to the lowest value among all groups (0.67%) at day 5 post treatment, indicating a reversal of the immunosuppressive nature of the tumor (Supplementary Fig. 33). This trend was further confirmed at the end point around day 18 after RT treatment, indicating continuous immune responses throughout the post-treatment period (Supplementary Fig. 34). The tumor growth curves show that mice receiving RT + G-CSF treatment had a significantly lower growth rate and a higher survival rate than the other groups until 20 and 40 days after treatment, respectively (Fig. 6a and Supplementary Fig. 35). The tumor inhibition rate of the RT + G-CSF group (70.7%) is about twofold, fourfold, and sevenfold higher than those of the RT only (34.3%), RT + aLy6G (17.6%), and RT + DPI (10.6%) groups, respectively. Individual mouse tumor inhibition rates were calculated to quantify the correlations between the tumor inhibition rates and the T1 relaxation time changes (Fig. 6b and Supplementary Fig. 36). The H&E staining results indicate negligible systemic toxicity to the major organs of mice in all treatment groups (Supplementary Fig. 37). A Pearson's R value of 0.9308 is derived from the correlations between the individual T1 relaxation time changes at 24-48 h and the corresponding growth inhibition rates at day 18 after RT treatment for all groups (Fig. 6c). This result indicates that the quantitative aiMRI approach is able to stratify the efficacy of RT early in the mouse model, regardless of the treatment method.
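The stratification claim rests on Pearson's correlation between per-mouse T1 relaxation time changes and tumor responses, with tumor volumes obtained from the caliper formula V = width^2 × length/2 described in Methods. The computation can be sketched as follows, with illustrative numbers rather than the study's data:

```python
import numpy as np

def tumor_volume(width_mm, length_mm):
    """Caliper-based tumor volume estimate: V = width^2 * length / 2 (mm^3)."""
    return width_mm ** 2 * length_mm / 2.0

def pearson_r(x, y):
    """Pearson correlation coefficient between paired per-mouse values."""
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative per-mouse data: T1 relaxation time change (ms) at 24-48 h
# versus tumor growth inhibition rate (%) at day 18
dT1 = [250, 310, 420, 560, 900, 1350]
inhibition = [12.0, 18.0, 30.0, 38.0, 55.0, 72.0]
print(round(pearson_r(dT1, inhibition), 2))  # strong positive correlation
print(tumor_volume(4.0, 5.0))                # a 4 x 5 mm tumor in mm^3
```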
A high Pearson's R value of 0.9831 is also obtained from the correlations between the averaged T1 relaxation time changes at 24-48 h and the frequency of CD4+CD8+ cells in splenocytes at day 5 after different treatments (Fig. 6d).

Fig. 5 (caption fragment): The pre- and post-contrast T1 phantom images and the quantitative T1 relaxation maps of mouse tumors after receiving control, RT (15 Gy) only, RT + aLy6G, RT + DPI, and RT + G-CSF, respectively. Yellow arrows indicate mouse tumors. The mouse tumor is magnified in the middle for the RT + G-CSF panel. g Quantitative T1 relaxation time changes in mouse tumors (n = 5 biologically independent mice, **P = 0.0014, one-tailed homoscedastic t-test). h, i Flow cytometry analysis and quantification of the CD4+CD8+ T cells in the splenocytes of mice at 5 days after different treatments (n = 3 biologically independent mice). All error bars indicate mean ± s.d.

Furthermore, we performed the aiMRI study in NADPH oxidase (Nox-2)-deficient mice bearing B16F10 tumors (Supplementary Fig. 38). The results indicated that the Nox-deficient mice had a significantly lower inflammatory response as derived from the aiMRI experiments, which correlated well with the lower tumor inhibition rate either with or without G-CSF treatment at late time (Pearson's R = 0.9007).

Discussion

We established our study on the hypothesis that the acute inflammation-mediated oxidative burst may serve as a molecular mechanism for stratifying the therapeutic response in RT. It is known that inflammation can exert controversial effects on the malignant process, with evidence for both pro-tumor and antitumor roles 40,41. Recently, mounting evidence has suggested that neutrophils alongside inflammation may have a direct effect on regulating the malignant process of cancers 42-44. One of the key features of neutrophil infiltration is the oxidative burst, which occurs concomitantly in tumor after exposure to radiation.
It is important to point out that neutrophils contribute the major source of MPO compared with monocyte-derived macrophages 45: 1.8 mg versus 13 ng per 10^6 neutrophils or macrophages, respectively. A recent work demonstrated that neutrophils play a key role in the RT-induced antitumor effect, in which the enhanced necrotic cell damage and tumor shrinkage were attributed to ROS production 24. The ensuing adaptive immunity after inflammatory ROS production was also confirmed 24. Our study used the aiMRI approach to quantify the RT-induced acute inflammatory ROS at 24-48 h post RT, which provides an insight into using ROS as a targeting mechanism for stratifying the RT response. Our results presented not only convergent correlations between enhanced ROS generation and improved tumor inhibition rate, but also divergent RT responses in different individuals. These results further underscore the necessity of stratifying RT response at an early time for better management of cancer therapy. MRI is a non-invasive and radiation-free method that is widely used in the clinic. The anatomical nature of MRI, especially in soft tissues, provides great opportunity for analyzing tissue structures and functions. Contrast agents are developed to enhance the contrast between an imaging target and the background in MRI, while activatable MRI can further augment the sensitivity and specificity in diagnosis 46. Designing T1-based MRI probes is usually achieved by sensing the distance between a T2 quencher and a T1 enhancer, in which a strong T2 quencher and a short Q-E distance result in efficient T1 signal quenching, and vice versa 28,33. In this study, we engineered both the T2 quencher effect and the Q-E distance to promote the sensitivity of the T1 OFF-ON phenomenon.
The decomposition of the IO-Gd NVs confers dual-positive factors to activate the T1 ON state of the conjugated Gd species: (1) small-sized IO NPs released from the IO-Gd NVs significantly decrease the T2 quenching effect; (2) swelling of the Gd species away from the IO NPs restores their intrinsic T1 effect. Upon stimulation, the Gd species remain conjugated on the polymers, which benefits the T1 relaxivity due to the longer molecular tumbling time compared with that of unbound Gd species generated by conventional cleavage strategies 47,48. In this respect, we are able to use the aiMRI to stratify the ROS level within a biologically relevant range. On this occasion, the aiMRI may be amenable to imaging other inflammatory ROS-relevant diseases, such as atherosclerosis, arthritis, and sepsis. Last but not least, we used quantitative MRI to assess the T1 relaxation time changes for the following reasons: (1) the relaxation changes can be correlated with the inflammatory ROS levels in a quantitative manner; (2) radiologic technologists may benefit from the quantitative aiMRI during radiation treatment planning (e.g., determining the delivery paths and dosages). Ultimately, successful treatment planning may lower the possibility of suffering unnecessary side effects and, more importantly, lead to a higher likelihood of improved treatment outcomes. The situation of MPO deficiency in humans needs to be carefully considered, as it could result in low signal output in our approach. Therefore, further actions such as testing the MPO activity and modulating the therapeutic plan are needed for those cases. To summarize, we developed an aiMRI strategy for early stratification of the inflammatory ROS and the tumor inhibition rates in RT of different mouse models. The nanoprobe is equipped with dual-positive factors for sensing ROS with great sensitivity and specificity through activated T1 MRI in a quantitative manner.
The T1 relaxation time changes at 24-48 h post RT show strong correlations with the ensuing adaptive immune responses and the tumor inhibition rates at 18 days after RT, which provides an insight for early stratification of RT response targeting the acute inflammatory ROS level. Moreover, the aiMRI approach may serve as a general tool to stratify the treatment efficacy of other ROS-generating anticancer strategies. This study may shed new light on cancer diagnosis, prognosis, and treatment planning for precision cancer therapy.

Methods

Synthesis of PEG-PPS-PEG amphiphilic triblock polymers. Poly(ethylene glycol) methyl ether (Mw 750) was used as the source material to obtain PEG-Tosyl and PEG thioacetate according to previously reported procedures. The PEG thioacetate was used to yield PEG-PPS-disulfide pyridine as follows: 200 mg of PEG thioacetate was dissolved in 4 mL of THF in a 10 mL Schlenk flask and degassed with N2. Sodium methoxide (16 mg, 0.5 M) was dissolved in 0.5 mL of MeOH and added to the flask through a syringe. After 30 min at room temperature, propylene sulfide (925 mg, 50 eq.) was injected into the system, which was further degassed with N2 three times. The system was left to react for 45 min, then the end-capping agent disulfide dipyridine (165 mg) was added and the system was left for 24 h. The product was precipitated in ethyl ether, washed with ethyl ether three times, and dried under vacuum overnight. The product, PEG-PPS-disulfide pyridine (400 mg), was then dissolved in THF with the addition of thiol-poly(ethylene glycol)-amine (Mw 1k, 150 mg). The reaction was left for 48 h before adding ethyl ether for precipitation and washing three times. The final product was dried under vacuum overnight and characterized by 1H NMR spectroscopy (Bruker, 300 MHz) in CDCl3.

Preparation of nanovesicles. The PEG-PPS-PEG NVs were prepared using the solvent exchange method. Briefly, 6 mg of PEG-PPS-PEG was dissolved in 1.5 mL of THF.
Deionized water was then added to the solution under vigorous stirring until the appearance of the cloudy point. The solution was left open in a fume hood until complete evaporation of the THF. The solution was then purified by low-speed centrifugation (3000 rpm) for 2 min to remove possible precipitation. The resulting clear solution containing blank NVs was characterized by TEM and DLS measurements. The NVs equipped with IO NPs were obtained using a similar procedure by feeding as-synthesized oleic acid-coated IO NPs (5 nm) during the formation of the blank NVs. The amine-terminated NVs were then reacted with DOTA-NHS in borate buffer (pH = 8.5) for 2 h. Gd chloride at different ratios with respect to the IO NPs was added to the solution at pH = 4.5 to yield the IO-Gd NVs.

MRI measurements. The MRI study was conducted on a 7 T scanner (Bruker). T1 MRI experiments were acquired using a rapid acquisition with relaxation enhancement sequence with variable repetition time (RARE-VTR). T2 MRI experiments were acquired using a multi-slice multi-echo sequence. The phantom samples with different concentrations of IO (Fe ions) and Gd were prepared in solution with the addition of different amounts of H2O2 and MPO as required. The T1 MRI phantom was acquired using the following parameters: echo time = 12.507 ms, effective TE = 12.507 ms, number of experiments = 7, multiple repetition times = 50, 250, 500, 1000, 2000, 4000, 6000 ms, flip angle = 180, RARE factor = 2, number of repetitions = 1, number of averages = 2, matrix = 256 × 256. The T2 MRI phantom was acquired using the following parameters: echo time = 10 ms, effective TE = 10-140 ms in 10 ms increments.

Study on the mouse U87MG tumor model. All animal experiments were performed under the National Institutes of Health Clinical Center Animal Care and Use Committee (NIH CC/ACUC) approved protocol.
The U87MG mouse tumor model was established by subcutaneously injecting 2 × 10^6 cells into the right back flank of mice (athymic nude, 5-6 weeks old). After the tumor size reached around 35-42 mm^3, the mice were randomly divided into three groups (n = 5). The groups were treated with different doses of X-ray irradiation at day 0: 0 (control), 2, and 8 Gy. After 24 h, mice from the different groups were scanned individually by MRI using a T1 mapping sequence and a multi-slice T1-weighted MRI sequence (pre-contrast MRI). The IO-Gd NVs were then injected intravenously at a dose of 4 μmol [Gd]/kg mouse body weight. At another 24 h after injection of the contrast agents, the post-contrast MRI was acquired using the same T1 mapping sequence and multi-slice T1-weighted sequence as for the pre-contrast MRI. The data were analyzed using the NIH-developed software ImageJ. The tumor size and body weight were recorded every 2 days after each treatment until 20 days post irradiation, and the tumor volumes were calculated by the equation V = width^2 × length/2. The survival rate was recorded for 40 days post irradiation.

Study on the Balb/c mouse 4T1 tumor model. All animal experiments were performed under the National Institutes of Health Clinical Center Animal Care and Use Committee (NIH CC/ACUC) approved protocol. The 4T1 mouse tumor model was established by subcutaneously injecting 1 × 10^6 cells into the right back flank of mice (athymic nude, 4-5 weeks old). After the tumor size reached around 50-60 mm^3, the mice were randomly divided into five groups (n = 8 for each group; n = 5 for tumor growth monitoring and n = 3 for checking the immune responses).
The groups were treated at day 0 as follows: control; RT only (15 Gy); RT + anti-Ly-6G (neutrophil-depleting antibody, 200 μg per mouse, intraperitoneal injection at 24 h before RT); RT + G-CSF (granulocyte colony-stimulating factor, 3 μg per mouse, intradermal injection after RT, once per day for four days); and RT + DPI (diphenyleneiodonium, an NADPH oxidase inhibitor, 50 μg per mouse, intraperitoneal injection after RT). At 24 h post RT, aiMRI was conducted on mice from the different groups (n = 5) using a T1 mapping sequence and a multi-slice T1-weighted MRI sequence (pre-contrast MRI). The IO-Gd NVs were then injected intravenously at a dose of 4 μmol [Gd]/kg mouse body weight. At another 24 h after injection of the contrast agents, the post-contrast MRI experiments were acquired using the same sequences as for the pre-contrast MRI. The data were analyzed using the NIH-developed software ImageJ. The tumor size and body weight were recorded every 2 days after each treatment until 18 days post irradiation, and the tumor volumes were calculated by the equation V = width^2 × length/2. The survival rate was recorded for 40 days post irradiation. The spleens and tumors from the different groups (n = 3) were dissected at day 5 and at the end point (day 18) after the X-ray irradiation. The isolated splenocytes and tumor cells were stained with different antibodies, and the analysis was conducted by flow cytometry. Neutrophils from tumor cells were stained with APC anti-mouse Ly-6G/Ly-6C (Gr-1) and PE/Cy5 anti-mouse/human CD11b. Treg cells were stained following the protocol of the mouse Treg flow kit (FOXP3 Alexa Fluor 488/CD4 APC/CD25) (Biolegend).

Reconstruction of the aiMRI maps. All the simulations were performed with home-written programs [49] in MATLAB 2018a (MathWorks, Natick, MA, USA). In the phantom experiments, where the signal in some voxels reached the noise floor at large TEs, a signal correction for Rician noise was performed.
The quantitative T1 maps were calculated from the multi-TR signals by non-linear least-squares fitting of the data with the following equation:

M(TR) = M0 [1 − exp(−TR/T1)],

where M(TR) is the signal intensity at each TR and M0 is a free fitting variable equal to M(TR = +∞).

Statistical analysis. Student's t tests were used for evaluating differences between groups. No samples were excluded from the analysis except where specifically noted. Quantitative data are expressed as means ± s.d. (standard deviation). Statistical significance is indicated as *P < 0.05, **P < 0.01, and ***P < 0.001.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
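The voxel-wise fitting step above can be sketched as follows. This is a minimal, dependency-free stand-in for the non-linear least-squares fit performed in the MATLAB programs: it grid-searches T1 and solves the optimal M0 in closed form for each candidate. The grid-search strategy and all numbers are illustrative assumptions, not the authors' implementation.

```python
# Saturation-recovery T1 fit: M(TR) = M0 * (1 - exp(-TR/T1)).
# Grid search over T1 with closed-form least-squares M0 per candidate
# (a hypothetical stand-in for the non-linear solver used in MATLAB).
import math

def t1_model(tr, m0, t1):
    # Signal intensity at repetition time TR (saturation recovery)
    return m0 * (1.0 - math.exp(-tr / t1))

def fit_t1(trs, signal, t1_grid):
    """Return (M0, T1) minimizing the sum of squared residuals."""
    best = None
    for t1 in t1_grid:
        f = [1.0 - math.exp(-tr / t1) for tr in trs]
        # For fixed T1, the optimal M0 minimizing sum (s_i - M0*f_i)^2
        m0 = sum(s * fi for s, fi in zip(signal, f)) / sum(fi * fi for fi in f)
        sse = sum((s - m0 * fi) ** 2 for s, fi in zip(signal, f))
        if best is None or sse < best[0]:
            best = (sse, m0, t1)
    return best[1], best[2]

# The seven repetition times from the phantom protocol (ms)
TRS = [50.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0, 6000.0]
```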
Design of Type-1 and Interval Type-2 Fuzzy PID Control for Anesthesia Using Genetic Algorithms

This paper presents automatic drug administration for the regulation of the bispectral (BIS) index in the anesthesia process during clinical surgery by controlling the concentration targets of two drugs, namely propofol and remifentanil. To realize the automatic drug administration, real clinical data are collected from 42 patients for the construction of patients' models consisting of pharmacokinetic and pharmacodynamic models describing the dynamics reacting to the input drugs. A nominal anesthesia model is obtained by taking the average of the 42 patients' models for the design of the control scheme. Three PID controllers are employed, namely a linear PID controller, a type-1 (T1) fuzzy PID controller, and an interval type-2 (IT2) fuzzy PID controller, to regulate the BIS index using the nominal patient's model. The PID gains and membership functions are obtained using a genetic algorithm (GA) by minimizing a cost function measuring the control performance. The best trained PID controllers are tested under different scenarios and compared in terms of control performance. Simulation results show that the IT2 fuzzy PID controller offers the best control strategy for regulating the BIS index, while the T1 fuzzy PID controller comes second.
Introduction

Anesthesia is a reversible state in which a person temporarily loses consciousness for the purpose of undergoing surgery without pain. The general process of anesthesia consists of induction, maintenance, emergence, and recovery [1]. The depth of anesthesia (DOA) represents the level of consciousness [2], which is to be controlled throughout the anesthesia process with respect to various control targets such as steady-state error, settling time, and overshoot. In general, control strategies can be categorized into two classes: open-loop control and closed-loop control. In open-loop control, anesthetists manually adjust the drug dosage based on knowledge and experience to maintain the DOA, assisted by some clinical indices of the patient. In closed-loop control, the drug dosage is automatically adjusted according to indices of the DOA, which makes the control input continuous and responsive [3]. Closed-loop control is also expected to avoid over- and under-dosage and to suppress the adverse effects of interindividual differences [4]. Despite its advantages, the stability of closed-loop control needs to be ensured because the process is automated without supervision [5].

To achieve stabilization based on closed-loop control, the following components are required [6]: 1) a patient model whose output is an index of the DOA; 2) a controller for stabilization, such as a proportional-integral-derivative (PID) controller. Therefore, to begin with, an estimated patient model is required to represent real patients, supporting the control design and performance evaluation. A widely employed mathematical model consists of a linear pharmacokinetic (PK) model and a nonlinear pharmacodynamic (PD) model [7]. The PK model uses the drug dosage as the input and the drug concentration as the output. The PD model then takes the drug concentration as the input and outputs the index of the DOA.
As stated above, the control input of the overall patient model is the drug dosage. According to the type of drug, anesthesia can be classified into inhalational and intravenous anesthesia. Inhalational anesthesia has been used since the mid-19th century, with, for instance, nitrous oxide, isoflurane, and halothane as commonly used drugs [8]. In contrast, intravenous anesthesia, with propofol and remifentanil as commonly used drugs [9], has the following advantages [10]: separate provision of anesthesia from ventilation, reduced atmospheric pollution, rapid and clear-headed recovery, and so on. While inhalational anesthesia is still frequently used for children, intravenous anesthesia is becoming more and more popular due to its rapid and safe transition [11].

The main difficulties involved in modeling are the determination of the PD model parameters and the selection of the output index. In general, the parameters of the PK and PD models need to be determined beforehand. For the PK model, parameters can be estimated depending on the sex, age, and weight of the patient. Nonetheless, for the PD model, it is not possible to estimate the parameters for a given patient in advance. Accordingly, the controller is required to be robust over a domain of PD model parameters [12]. Although PK model parameters vary between patients and PD model parameters even change within one patient, a general index can be designed to evaluate the DOA for all patients [13]. The bispectral (BIS) index [14] is an extensively accepted index for measuring the DOA. It is a univariate dimensionless parameter from 0 to 100, which was devised from a large set of electroencephalogram (EEG) data [15].
With regard to controllers for stabilization, several control problems arise in the closed-loop control of anesthesia: stability, which is the basic control objective; robustness, which overcomes the uncertainty of PD model parameters, measurement noise, surgical stimulation, and so forth; and adaptiveness, which makes the controller adaptive to different patients rather than only one patient. Commonly used controllers include PID controllers, model-based controllers, and knowledge-based controllers [6].

To close the loop of the control system, the classical PID controller is a natural option due to its successful applications in other areas. For stabilization, it is necessary to tune the parameters of the PID controller for a certain patient model. While the tuning process is empirical, some tuning schemes have been developed to guarantee robustness [16] [17]. Combined with patient model identification from the induction phase of anesthesia [18], adaptiveness can further be achieved. In addition, combined with a genetic algorithm (GA), the parameters of PID controllers can also be optimized online [19].

Despite the various causes of changes in the PK and PD model parameters, a model-based controller relies on the current model, which reflects the patient's current pharmacological behavior. For this reason, the patient model needs to be updated throughout the anesthesia process. This online adaptation can be achieved by several approaches such as the Kalman filter algorithm [20], Bayesian-based adaptive control [21], and an adaptive genetic fuzzy clustering algorithm [19]. Alternatively, the variability of the PK and PD models between individuals, as well as surgical stimulation and the anesthetic-analgesic interaction, can be explicitly considered offline such that stability is mathematically guaranteed [22]. Additionally, a Lyapunov-based adaptive controller was developed to attain partial asymptotic regulation [23].
Unlike model-based controllers, knowledge-based controllers do not require a known mathematical model. In fuzzy logic controllers, for example, decisions are made based on fuzzy rules predefined by expert knowledge and experience. Compared with linear PID controllers and model-based controllers, knowledge-based controllers are easier to implement, requiring neither PID parameter tuning nor mathematical derivation. A hybrid control scheme, namely fuzzy-PID control, was developed to combine the merits of both control strategies [24]. Moreover, a multivariable neural-fuzzy controller was proposed to simultaneously administrate both propofol and remifentanil [25]. However, one problem of knowledge-based controllers is that the interaction of each piece of knowledge makes the controller less transparent and makes it difficult to achieve adaptive control [6]. To deal with this problem, a direct adaptive interval type-2 (IT2) fuzzy logic controller was proposed for multivariable anesthesia systems [26], and a genetic fuzzy logic controller was developed to adjust fuzzy rules using GA [19].
Due to the advantages of closed-loop control, in this paper we aim to implement closed-loop control of an anesthesia model. The performance of the controllers is ensured by simulation and optimization. For the anesthesia model used in this paper, an existing anesthesia model structure is utilized, while the parameters of the PD model are specifically obtained from clinical data collected from 42 patients. Following the same design procedure, these tailor-made controllers can be redesigned for other patients as long as clinical data have been collected. In the anesthesia model, the co-administration of both propofol and remifentanil is investigated. To realize the drug administration, we propose two groups of strategies: two fuzzy PID controllers, and one fuzzy PID controller with scaling factors. For each group of strategies, a linear PID controller, type-1 (T1) fuzzy PID controllers, and IT2 fuzzy PID controllers are designed. To draw a distinction from existing IT2 fuzzy PID controllers, we combine IT2 fuzzy PID controllers with GA. All PID gains, scaling factors, and parameters of the membership functions are optimized by GA in an offline manner subject to a performance index (cost function) which quantifies the performance of the controllers. A BIS training profile, which sets different local targets for regulation considering the real anesthesia situation, is employed for training purposes. The trained PID control strategies are then tested by a testing profile to verify their performance under unseen working conditions. Comparisons are made among all controllers to demonstrate the characteristics of each control strategy.
The following sections are organized in this sequence: in Section 2, the multivariable anesthesia model used in this paper is described; in Section 3, the control background of the linear PID controller, T1 fuzzy PID controllers, and IT2 fuzzy PID controllers is introduced; in Section 4, we design all the control strategies and present the overall procedure and necessary information for the simulation; in Section 5, simulation results are provided and comparison and analysis are carried out; finally, in Section 6, a conclusion is drawn.

Multivariable Anesthesia Modeling

In this section, the anesthesia model with two drugs is introduced. As shown in Figure 1, the anesthesia model consists of the target controlled infusion (TCI) system, the PK model, and the PD model, where Ce_prop and Ce_remi are the effect-site concentrations, and the subscripts prop and remi stand for propofol and remifentanil, respectively. Details of each module are described in the following subsections. The clinical protocol for obtaining the parameters of the PD model is also described.

Pharmacokinetic Modeling

The PK model is applied to capture the relation from the drug infusion rate and the amount of drugs to the plasmatic concentration Cp(t) and the effect-site concentration Ce(t). In this paper, we utilize the PK models from Marsh [27] and Minto [28] for propofol and remifentanil, respectively. The three-compartment structure of these PK models is presented in Figure 2, which can be formulated by the following dynamic system:

dx1(t)/dt = −(k10 + k12 + k13) x1(t) + k21 x2(t) + k31 x3(t) + u(t),
dx2(t)/dt = k12 x1(t) − k21 x2(t),
dx3(t)/dt = k13 x1(t) − k31 x3(t),
dCe(t)/dt = ke0 (Cp(t) − Ce(t)),

where u(t) is the drug infusion rate; x1(t), x2(t), and x3(t) are the amounts of drug in the central and the two peripheral compartments; each kij is a rate constant for the distribution and elimination of the anesthetic drug; Cp(t) is the plasmatic concentration in the central compartment; and ke0 is a scalar describing the time delay of the plasmatic-effect-site drug concentration equilibration.
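The three-compartment dynamics with an effect-site compartment can be sketched numerically as follows. This is a minimal forward-Euler simulation; the rate constants, central volume, and time step are illustrative placeholders, not the Marsh/Minto values used in the paper.

```python
# Illustrative forward-Euler simulation of a three-compartment PK model with
# a first-order effect-site compartment. All parameter values are placeholders.
def simulate_pk(u, dt=1.0, k10=0.119, k12=0.112, k13=0.042,
                k21=0.055, k31=0.0033, ke0=0.26, v1=15.9):
    """u: list of infusion rates (one value per time step).
    Returns (Cp, Ce) traces sampled at each step."""
    x1 = x2 = x3 = ce = 0.0  # compartment amounts and effect-site concentration
    cp_trace, ce_trace = [], []
    for r in u:
        cp = x1 / v1  # plasmatic concentration in the central compartment
        cp_trace.append(cp)
        ce_trace.append(ce)
        # Inter-compartment transfer, elimination, and infusion input
        dx1 = r - (k10 + k12 + k13) * x1 + k21 * x2 + k31 * x3
        dx2 = k12 * x1 - k21 * x2
        dx3 = k13 * x1 - k31 * x3
        dce = ke0 * (cp - ce)  # plasma/effect-site equilibration
        x1 += dt * dx1
        x2 += dt * dx2
        x3 += dt * dx3
        ce += dt * dce
    return cp_trace, ce_trace
```

With a constant infusion, the plasmatic concentration rises first and the effect-site concentration follows with a lag, matching the delay role of ke0.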
Pharmacodynamic Modeling

After Cp and Ce are obtained from the PK model, the PD model maps Ce to the BIS value, which reflects the anesthetic condition of the patient. In general, a PD model for a single drug is established by a sigmoid Emax model known as the Hill curve. Based on the Hill curve, a response surface model was proposed by Minto et al. [30] to characterize drug interaction for multiple drugs. The PD model representing propofol and remifentanil can be formulated as follows [31]:

BIS(t) = E0 − Emax · U(t)^γ / (1 + U(t)^γ),

where E0 is the clinical effect when no drug is infused; Emax is the maximum drug effect; γ is the steepness of the concentration-response curve; U(t) is the combined normalized drug potency; θ is the ratio of the two drugs defined in Equation (3); and the C50 values, β, and γ are patient-dependent parameters which are estimated from clinical data as explained in the following subsection.

Target Controlled Infusion System

The TCI system transfers the plasmatic concentration targets Cpt to the corresponding drug infusion rates. Practically, the infusion can be realized by Alaris PK syringe pumps, which incorporate the TCI system. These pumps are able to infuse an adequate amount of drug according to a predefined Cpt. Furthermore, they can predict the realized concentration based on the PK model.

The TCI system consists of a bolus-elimination-transfer (BET) infusion scheme designed from the PK model so that the Cpt is achieved. The initial bolus, the loading dose (LD), is calculated according to Equation (6):

LD = Cpt · V1,

where V1 is the volume of the central compartment represented in Figure 2. The LD bolus ensures the Cpt is achieved; however, it needs to be followed by a continuous infusion r(t), defined by Equation (7), to ensure Cpt is maintained despite elimination and compartmental transfer of the drug [32]:

r(t) = Cpt · V1 · (k10 + k12 e^(−k21 t) + k13 e^(−k31 t)),

where t is the continuous time and each kij is a rate constant for the distribution and elimination of the anesthetic drug as defined in Section 2.1.
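The BET scheme just described can be sketched as follows, using the classical loading-dose and decaying-infusion formulas. The rate constants and central volume in the example are illustrative placeholders, not the Marsh/Minto values.

```python
# Sketch of the bolus-elimination-transfer (BET) infusion scheme:
# a loading bolus LD = Cpt * V1 reaches the target, then a decaying infusion
# r(t) = Cpt * V1 * (k10 + k12*exp(-k21*t) + k13*exp(-k31*t))
# maintains it against elimination and inter-compartment transfer.
import math

def loading_dose(cpt, v1):
    # Bolus that brings the central compartment to the target concentration
    return cpt * v1

def bet_rate(t, cpt, v1, k10, k12, k13, k21, k31):
    # Continuous infusion rate at time t after the bolus
    return cpt * v1 * (k10 + k12 * math.exp(-k21 * t)
                       + k13 * math.exp(-k31 * t))
```

As t grows, the transfer terms decay and the infusion settles at the elimination-matching rate Cpt · V1 · k10.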
TCI systems are also able to deal with Cpt changes. When Cpt increases, an initial additional loading dose (ADDLD) is given, and Equations (6) and (7) are replaced by modified versions (Equations (8) and (9)) to maintain the new target NCpt, where τ is the time of the Cpt change. The new continuous infusion rate formula takes into account the amount of drug already present in the peripheral compartments, which will be redistributed.

In the case when the Cpt decreases, no additional bolus is necessary; instead, the infusion of anesthetic drug is halted until NCpt is achieved in the central compartment through the drug elimination represented in the PK model. Once this NCpt is achieved, the continuous infusion restarts according to Equation (9), where τ is the time at which NCpt is achieved in the central compartment through the elimination process.

Clinical Protocol

The parameters of the PD model in this paper are determined from clinical data obtained from 42 elderly patients who underwent major vascular surgery of the lower limb. The information of these patients is summarized as follows (in the format "mean ± standard deviation"): 35 males and 7 females, age 72 ± 8 years, height 169 ± 10 cm, and weight 73 ± 14 kg.

The process of collecting data from these patients is briefly described here. Before induction, all sensors were placed and a radial arterial line was inserted to receive baseline readings on all clinical monitors. Two Alaris PK syringe pumps were employed to implement total intravenous anesthesia (TIVA) with propofol [27] and remifentanil [28]; the concentration targets were adjusted by anesthetists according to the readings of different clinical apparatuses to maintain the BIS value within [40, 60] for an appropriate DOA. Among these readings, cardiac output (CO) was monitored and stabilized at or near the pre-induction level by anesthetists through all periods of the surgery.
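The PD mapping whose patient-dependent parameters are estimated from these clinical data can be sketched as follows. For illustration, the response surface is reduced to a single normalized potency combining the two effect-site concentrations; the C50 values and steepness are placeholders, not fitted patient parameters.

```python
# Hedged sketch of the sigmoid-Emax (Hill) pharmacodynamic mapping from
# effect-site concentrations to a BIS value. The interaction surface is
# simplified to an additive normalized potency U; all parameters are
# illustrative, not the values estimated from the 42 patients.
def bis(ce_prop, ce_remi, e0=98.0, emax=98.0, gamma=2.0,
        c50_prop=4.0, c50_remi=8.0):
    u = ce_prop / c50_prop + ce_remi / c50_remi  # combined normalized potency
    effect = (u ** gamma) / (1.0 + u ** gamma)   # Hill curve in [0, 1)
    return e0 - emax * effect
```

With no drug infused the model returns E0, and at a combined potency of U = 1 the effect is half of Emax, as the Hill form requires.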
PID Controllers

In this section, three types of PID controllers, namely the linear PID controller, the T1 fuzzy PID controller, and the IT2 fuzzy PID controller, are introduced. These three types of PID controllers are employed to regulate the output BIS index of the anesthesia model.

Linear PID Controller

A linear PID controller is shown in Figure 3, which consists of three elements, namely the proportional, integral, and derivative blocks. The output of the linear PID controller in discrete time is given as

u(t_k) = K_P e(t_k) + K_I Σ_{j=0}^{k} e(t_j) + K_D [e(t_k) − e(t_{k−1})],

where t_k, k = 0, 1, 2, …, denotes the sampling instants; e(t_k) is the regulation error; and K_P, K_I, and K_D are the proportional, integral, and derivative gains, respectively, to be determined.

Type-1 Fuzzy PID Controller

In view of the linear PID controller [33], as the proportional, integral, and derivative gains are constant, it is not able to handle a highly nonlinear system well. This motivates the use of the T1 fuzzy PID controller [34] [35], whose gains change according to the operating domain. By applying different sets of gains in different operating domains, a more appropriate PID controller is employed to deal with the nonlinear system, resulting in an improvement of control performance.

A T1 fuzzy PID control system is shown in Figure 4, which consists of a T1 fuzzy PID controller and a patient's model (detailed in Figure 1) connected in a closed loop. Unlike the linear PID controller, which has a set of constant gains, the T1 fuzzy PID controller has a fuzzy inference system providing a set of feedback gains through a reasoning process according to the operating condition.
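The discrete-time PID law of Section 3.1 can be sketched as a small stateful controller; the gains in the example are arbitrary illustrative values, not the trained ones.

```python
# Minimal discrete PID: u(t_k) = KP*e(t_k) + KI*sum_{j<=k} e(t_j)
#                                + KD*(e(t_k) - e(t_{k-1})).
class DiscretePID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.acc = 0.0     # running sum of errors (integral term)
        self.prev_e = 0.0  # previous error (derivative term)

    def step(self, error):
        self.acc += error
        u = (self.kp * error
             + self.ki * self.acc
             + self.kd * (error - self.prev_e))
        self.prev_e = error
        return u
```

In the fuzzy PID variants discussed next, the three constant gains are simply replaced at every step by the outputs of fuzzy inference systems.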
The behavior of the fuzzy inference system is governed by a set of fuzzy rules of the following format:

Rule i: IF x_1(t) is M_1^i AND … AND x_Ω(t) is M_Ω^i THEN y(t) = y_i,

where M_j^i, j = 1, 2, …, Ω, is the fuzzy term corresponding to the linguistic variable x_j in the i-th rule; Ω is a positive integer; y(t) is the output of the fuzzy inference system; and y_i is the singleton membership function corresponding to the i-th rule. The inferred output is given as

y(t) = Σ_{i=1}^{p} w_i(t) y_i,    (12)

where p > 0 denotes the number of rules; w_i(t) = [Π_{j=1}^{Ω} μ_{M_j^i}(x_j(t))] / [Σ_{k=1}^{p} Π_{j=1}^{Ω} μ_{M_j^k}(x_j(t))] is the normalized grade of membership; and μ_{M_j^i}(x_j(t)) is the grade of membership corresponding to the fuzzy term M_j^i.

The output of the fuzzy inference system in Equation (12) is employed to replace the gains of the linear PID controller, turning it into a T1 fuzzy PID controller. More precisely, three fuzzy inference systems are required to implement a T1 fuzzy PID controller. The outputs of the three fuzzy inference systems are employed as K_P, K_I, and K_D. Consequently, the PID gains are no longer constant but depend on the operating condition characterized by the membership functions.

Interval Type-2 Fuzzy PID Controller

Type-2 (T2) fuzzy sets [36]-[39] demonstrate a superior capability for handling uncertainties compared with T1 fuzzy sets. Uncertainties are captured by the lower and upper membership functions, which form the footprint of uncertainty (FOU). A T2 fuzzy inference system can be considered as a set of an infinite number of T1 fuzzy inference systems. Consequently, a T2 fuzzy inference system is able to outperform a T1 fuzzy inference system in terms of reasoning and generalization capability. In general, the defuzzification process for general T2 fuzzy sets is computationally demanding. By using IT2 fuzzy sets [40], the computational demand can be significantly reduced.
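The fuzzy machinery of Sections 3.2-3.3 can be sketched compactly: triangular membership functions with singleton consequents for the T1 inference of Equation (12), and, for the interval type-2 case, an exhaustive switch-point search that reproduces the Karnik-Mendel type-reduction result for a small rule base. Breakpoints, weights, and consequents below are illustrative, not trained values.

```python
# T1 inference: normalized weighted average of singleton consequents.
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def t1_output(x, rules):
    """rules: list of ((a, b, c), y_i); implements Equation (12)."""
    ws = [tri(x, *mf) for mf, _ in rules]
    s = sum(ws)
    return sum(w * y for w, (_, y) in zip(ws, rules)) / s if s else 0.0

# IT2 type reduction: with interval firing strengths [w_lo_i, w_hi_i] and
# consequents ys sorted ascending, the endpoints y_l, y_r of the reduced set
# are attained at a switch point where weights jump between bounds.
def km_type_reduce(ys, w_lo, w_hi):
    n = len(ys)
    def wavg(ws):
        return sum(w * v for v, w in zip(ys, ws)) / sum(ws)
    y_l = min(wavg([w_hi[i] if i <= k else w_lo[i] for i in range(n)])
              for k in range(-1, n))
    y_r = max(wavg([w_lo[i] if i <= k else w_hi[i] for i in range(n)])
              for k in range(-1, n))
    return y_l, y_r
```

The final crisp output of the IT2 system is then the midpoint (y_l + y_r) / 2, as in Section 3.3; the iterative KM algorithms compute the same endpoints more efficiently for larger rule bases.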
By employing interval fuzzy sets in the T1 fuzzy inference system of Equation (12), it becomes an IT2 fuzzy inference system. The behavior of an IT2 fuzzy inference system is described by a set of rules of the following format:

Rule i: IF x_1(t) is M̃_1^i AND … AND x_Ω(t) is M̃_Ω^i THEN y(t) = y_i,

where the lower and upper grades of membership of each IT2 fuzzy term M̃_j^i are given by its lower and upper membership functions, respectively. Using the center-of-sets type reducer, the inferred output of the IT2 fuzzy inference system is given as the interval [y_l(t), y_r(t)], where y_l and y_r can be obtained using the Karnik-Mendel (KM) algorithms [41]. The final defuzzified output is given by

y(t) = (y_l(t) + y_r(t)) / 2.

The IT2 fuzzy PID control system [42] can also be represented by Figure 4, in which the PID gains are given by the outputs of three IT2 fuzzy inference systems.

Simulation Design

In this section, the simulation environment is presented. First, we design the training profile and the testing profile which all controllers need to deal with, and describe the purpose of the selected target profiles. Then, the different control strategies are provided for comparison. For the fuzzy control strategies, the approach to determining the fuzzy rules is offered. Finally, the procedure of using GA for optimization is described, and a performance index is designed as the cost function for the optimization.

Target Profiles

A training profile, as shown in Figure 5, which is a series of BIS values required for an appropriate DOA, is employed for the training of the PID control strategies using GA. It defines the target BIS values in different periods that a PID control strategy has to achieve. After the training, a testing profile, as shown in Figure 6, is employed to verify whether the trained PID control strategies are able to control the BIS value to reach the targets subject to different operating conditions.
Clinically, the induction process starts with BIS(0) = 98. Since the induction period is short compared with the maintenance period, there is not much difference during induction in terms of control performance between the various control strategies. Thus, we use two linear PID controllers for the two drugs to drive the BIS from 98 to 50. The PID gains are predefined by trial and error, and it is guaranteed that BIS(1000) reaches 50. As for the recovery process, since it is not possible to take the drug out of the human body, ceasing the controllers is the only and fastest option. There will be no overshoot either, because the target BIS is the maximum value. Therefore, all controllers perform the same at this stage. It is not necessary to design controllers for the recovery process, which is thus not included in the target profiles. It is noted that although the induction process is not under comparison either, we cannot ignore it. The reason is that it is difficult to find all the model parameters describing the anesthetized state at BIS = 50. On the other hand, the parameters for the initial awake state at BIS = 98 are known, and it is easy to start from this initial condition.
Control Strategies

In this paper, we aim to compare different control strategies to achieve better control performance. All six control strategies are listed in Table 1. These six cases can be separated into two groups: the two-controller Cases 1 to 3 and the one-controller Cases 4 to 6. In the two-controller cases, we control each drug by an independent PID controller. In the one-controller cases, a single PID controller provides one control output, which is scaled by two scaling factors to produce the concentration targets of the two drugs; the two drug inputs are therefore constrained by the ratio of these two factors. Theoretically, this constraint leads to conservativeness and makes the performance worse than that of the two-controller cases. Nevertheless, the one-controller cases have a smaller number of parameters to be determined, resulting in lower computational demand, faster convergence in the training process, and lower implementation cost for the PID control strategies. In each of these two groups, we have the following control strategies: linear PID controllers, T1 fuzzy PID controllers, and IT2 fuzzy PID controllers. For the fuzzy PID controllers, the block diagrams of BIS index regulation using two controllers and using one controller with scaling factors are shown in Figure 7 and Figure 8, respectively.
Fuzzy Rules

The T1 and IT2 fuzzy PID controllers are discussed in Subsections 3.2 and 3.3, respectively. While the number of fuzzy rules and the shape of the membership functions are predefined, the parameters of the input and output membership functions are optimized by GA. In this paper, the shape of the membership functions is defined as triangular for simplicity. The triangular IT2 membership functions are shown in Figure 9. For each input IT2 membership function, the lower and upper membership functions are characterized by seven points, p_1 to p_7, which are to be optimized by GA. As for the T1 fuzzy PID controller, each input membership function is characterized by three points, p_1 to p_3, which are to be optimized by GA. For both T1 and IT2 fuzzy PID controllers, the output membership functions are T1 and IT2 singleton membership functions whose values are to be determined by GA.

In fact, from the determination of the membership functions and fuzzy rules, we can find the relation between these control strategies: solutions from linear PID controllers can be implemented by T1 fuzzy PID controllers, and solutions from T1 fuzzy PID controllers can be implemented by IT2 fuzzy PID controllers. In other words, the linear PID controller is a subset of the T1 fuzzy PID controller, and the T1 fuzzy PID controller is a subset of the IT2 fuzzy PID controller. By adding some constraints on the membership functions and the consequents of the fuzzy rules, IT2 fuzzy PID controllers can be reduced to T1 fuzzy PID controllers, and T1 fuzzy PID controllers can be reduced to linear PID controllers.

Three rules are employed for each fuzzy inference system, corresponding to the linguistic terms N, Z, and P. More rules can be used to partition the universe of discourse for a better result. However, this will lead to slower convergence of the training and more computational burden. Additionally, the rate of change of the error can also be treated as another linguistic variable together with the error itself.
Likewise, this will cause more fuzzy rules and parameters, which increases the computational burden and defers convergence. From the above discussion, triangular membership functions and three rules are employed for both T1 and IT2 fuzzy PID controllers. In the GA optimization, the triangular shape and the ordering of the membership functions of the three rules should be guaranteed. For the ordering, specifically, the same point in the membership functions corresponding to the linguistic terms N, Z, and P should be in ascending order (except p_7). Another condition that needs to be ensured is that at least one rule is fired for every input value in (−∞, +∞). It can be found that solutions from the first approach can be implemented by the second approach; in other words, the second approach is more general than the first. Hence, we adopt the second method. For that reason, rules N and P use two shoulder-shaped membership functions, which can be treated as a special case of triangular membership functions. With this approach, it is guaranteed that at least one rule is fired for any input, and these membership functions are employed in the optimization.
Parameters Optimization

The PID gains, scaling factors, and membership functions are optimized by GA subject to a cost function reflecting the control performance. Due to the disparity between individual trainings, GA is run 10 times for each case of the PID control strategies shown in Table 1. The best set of parameters for each control strategy from each run of GA is recorded for further comparison, analysis, and practical application. Statistical information, including the worst, mean, and best costs and the standard deviation over the 10 runs, is collected. Among the 10 runs for each PID control strategy, the best set of parameters given by the best cost is used to implement the corresponding PID controller. During the optimization, the lower and upper bounds (LB and UB) of the PID gains and scaling factors are as listed in Table 2, which were determined by trial and error for good control performance. Due to the large number of variables and the highly nonlinear cost function (defined in Subsection 4.5), especially for the fuzzy PID controllers, GA may not be able to reach the global optimal solution. To speed up the training process, we define the initial population for the fuzzy PID controllers based on our knowledge of the PID control strategies. It is known that the linear PID controller is a subset of the T1 fuzzy PID controller, and the T1 fuzzy PID controller is a subset of the IT2 fuzzy PID controller. It is thus reasonable that the best set of PID gains obtained by GA for the linear PID controllers is employed as the initial PID gains of all rules for the T1 fuzzy PID controllers. As a result, initially, the T1 fuzzy controller is equivalent to a linear PID controller. Similarly, the best set of PID gains and membership function parameters obtained for the T1 fuzzy PID controller is employed as the initial set for the IT2 fuzzy PID controller. That is to say, we utilize the "knowledge" of the PID control strategies such that GA starts with a considerably good initial
condition. In this way, although there is still the same number of variables to be trained, GA only needs to exploit the additional parameters and new structures (fuzzy rules and membership functions) provided by the fuzzy PID controllers in order to obtain a better cost value.

Performance Index

The performance index is used to judge whether the control objective is achieved and to show the merits of the various control strategies. Different performance indices can be selected, such as settling time, overshoot, steady-state error, mean absolute error (MAE), and mean square error (MSE). For the GA optimization, a well-defined performance index is required as the cost function. All parameters of the controllers are optimized according to the cost value provided by the cost function. In this paper, we present an index J mainly based on MAE, defined in Equation (26). In Equation (26), the first term is the MAE, which aims at minimizing the difference between the current BIS value and its target. Unlike settling time, overshoot and steady-state error, MAE and MSE record the error information over the whole simulation period, which reflects more comprehensive properties of the performance. The reason we choose MAE instead of MSE is that MSE amplifies the effect of large errors, so the optimization tries to minimize large errors, which results in oscillation. On the other hand, MAE keeps the original weights on large and small errors, leading to a mild and smooth response. The three terms following the first term of Equation (26) are the rates of change of the drug signals (including Cp_remi(t)), which are designed to reduce oscillation for a gentle progression. The last term is to reduce the difference between the concentrations of the two drugs, because they are equally important for different purposes during the anesthesia. A serious bias toward either of the two drugs will not provide adequate anesthesia.
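As a rough illustration of this kind of cost (not a reproduction of the paper's exact Equation (26)), an MAE-based index with rate-of-change and drug-balance penalties could be sketched as follows; the signal names and weight values are assumptions.

```python
import numpy as np

# Sketch of an MAE-based cost in the spirit of Equation (26). The exact
# terms and weights of the paper are not reproduced here, so the signal
# names and lambdas below are illustrative assumptions.
def cost(bis, bis_target, u_prop, u_remi, lambdas=(1000.0, 1.0, 1.0, 10.0)):
    l1, l2, l3, l4 = lambdas
    mae = np.mean(np.abs(bis - bis_target))        # tracking error (MAE)
    smooth_p = np.mean(np.abs(np.diff(u_prop)))    # rate-of-change penalty
    smooth_r = np.mean(np.abs(np.diff(u_remi)))    # rate-of-change penalty
    balance = np.mean(np.abs(u_prop - u_remi))     # drug-balance penalty
    return l1 * mae + l2 * smooth_p + l3 * smooth_r + l4 * balance

# Perfect tracking with constant, equal drug signals gives zero cost.
bis = np.full(10, 50.0)
u = np.ones(10)
assert cost(bis, 50.0, u, u) == 0.0
```

Note how MAE (rather than MSE) keeps a constant weight per unit of error, so a single large deviation is not squared into dominating the index.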
Simulation Results

In this section, we implement the simulation of the control of anesthesia using the anesthesia model in Figure 1. The different control strategies in Table 1 are applied to regulate the BIS using the training profiles shown in Figure 5 for training, and their control performance is verified using the testing profiles shown in Figure 6. The PID gains and the parameters of the scaling factors and membership functions are optimized by GA according to the cost function in Equation (26). Comparisons of performance are made between the six cases. It should be noted that the simulation was carried out in discrete time. The PK model in Equation (1) was discretized under the zero-order-hold (ZOH) assumption, and the implemented TCI system has also been adjusted to operate with the discrete PK model.

A real-coded GA available from the MATLAB Global Optimization Toolbox is employed for the training. The control parameters of GA are listed in Table 3. We define the parameters in Equation (26) with λ1 = 1000. It is worth mentioning that although the weight λ1 is not the largest, the MAE is still the main contribution to the total cost. The bounds of the parameters in Table 2 are adopted for the training process of GA. By running GA 10 times for each case in Table 1, we obtain the statistical information of the cost J shown in Table 4. Referring to Table 4, among Cases 1 to 3, Case 3 offers the best cost and Case 2 comes second, which coincides with the theory. The same rank can be found among Cases 4 to 6.
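The ZOH discretization mentioned above can be sketched for a generic linear state-space model as follows; the matrices and sampling period are illustrative, not the paper's PK parameters, and a simple truncated-series matrix exponential stands in for a library routine.

```python
import numpy as np

# Sketch: zero-order-hold (ZOH) discretization of a continuous-time linear
# state-space model x' = A x + B u, the same assumption used to discretize
# the PK model. The matrices below are illustrative, not PK parameters.

def expm_taylor(M, terms=30):
    # Truncated Taylor series for the matrix exponential; adequate for the
    # small ||M * T|| values typical of short sampling periods.
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

def zoh_discretize(A, B, T):
    # Augmented-matrix trick: expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm_taylor(M * T)
    return Md[:n, :n], Md[:n, n:]

# Scalar sanity check: for x' = a x + u, Ad = e^{aT} and Bd = (e^{aT} - 1)/a.
A = np.array([[-0.5]])
B = np.array([[1.0]])
T = 0.1
Ad, Bd = zoh_discretize(A, B, T)
assert abs(Ad[0, 0] - np.exp(-0.05)) < 1e-9
assert abs(Bd[0, 0] - (np.exp(-0.05) - 1.0) / -0.5) < 1e-9
```

In practice one would use a library routine for the matrix exponential; the augmented-matrix construction itself is the standard way to obtain both Ad and Bd in one step.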
Comparing Cases 1 and 4, the two-controller case is better than the one-controller case, which is also confirmed by comparing Cases 2 and 5, and Cases 3 and 6. The reason is that the one-controller case has a constraint on the ratio of the control signals between the two drugs, while the two-controller case does not have such a constraint; the one-controller case is a subset of the two-controller case. Although the one-controller case performs worse, it has fewer parameters to be determined and thus reduces the computational demand and implementation cost of the PID control strategy.

Apart from the "Best" cost, the "Std" decreases from the linear PID controllers to the IT2 fuzzy PID controllers, except in Case 4. This indicates that convergence becomes easier for the fuzzy PID controllers. The reason is that we use the best set of parameters of the previous cases as the initial populations of the following training. It also indicates that further improvement becomes more and more difficult, and thus the 10 runs offer similar results. The "Std" for the two-controller cases is larger than for the corresponding one-controller cases, which implies that convergence for the two-controller cases is harder due to the larger number of variables to be trained.

The best sets of parameters and membership functions are shown in Tables 5-14, and the corresponding membership functions are exhibited in Figures 10-13. From the obtained membership functions, it can be summarized that the trained break points indicate an estimated domain over which the PID controllers work, suggesting that the fuzzy blending should start from around ±20. For the IT2 membership functions of the IT2 fuzzy PID controllers in Figures 11 and 13, it was found after training that the lower and upper membership functions are very close to each other, such as the membership functions of N and Z in Figure 11. The values of the points associated with the IT2 membership functions are shown in Tables 9 and 14.
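The warm-start scheme described above (seeding each GA with the best parameters of the simpler controller) can be sketched roughly as follows; the population size, jitter and clamping details are illustrative assumptions rather than the toolbox's actual mechanics.

```python
import random

# Sketch of warm-starting a GA population: the best parameter vector of the
# simpler controller (e.g., linear PID) seeds the initial population for the
# richer controller (e.g., T1 fuzzy PID). Details are illustrative.
def seeded_population(best_prev, pop_size, lb, ub, jitter=0.05):
    pop = [list(best_prev)]  # keep the previous optimum unchanged
    for _ in range(pop_size - 1):
        individual = []
        for x, lo, hi in zip(best_prev, lb, ub):
            span = hi - lo
            v = x + random.uniform(-jitter, jitter) * span
            individual.append(min(max(v, lo), hi))  # clamp to bounds
        pop.append(individual)
    return pop

pop = seeded_population([1.0, 0.5, 0.1], pop_size=20,
                        lb=[0.0, 0.0, 0.0], ub=[10.0, 10.0, 10.0])
assert len(pop) == 20 and pop[0] == [1.0, 0.5, 0.1]
assert all(0.0 <= v <= 10.0 for ind in pop for v in ind)
```

Since the richer controller reduces to the simpler one at the seed point, the seeded individual already attains the previous best cost, so the GA can only improve on it.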
By applying the best sets of parameters to the training profile of BIS, the time responses of BIS and the corresponding drug concentration information are shown in Figures 14-19 for Cases 1 to 6, respectively. The corresponding results for the testing profile of BIS are exhibited in Figures 20-25. As long as the BIS values stay within the regions bounded by the lower and upper green lines, the performance is well acceptable.

It can be seen from the figures that all cases of PID control strategies succeed in tracking the BIS targets and maintaining the BIS values. In terms of control performance, while the linear PID controllers have a smaller steady-state error, the fuzzy PID controllers have less overshoot and a shorter settling time. Since there is no rigorous requirement for a small steady-state error, the fuzzy PID controllers are preferable in that they are able to keep the BIS values within the regions bounded by the green lines with less overshoot and a shorter settling time. Both the training and testing profiles demonstrate the same characteristics for the corresponding control strategies. Recall that a common controller with two linear PID controllers and predefined parameters is used for the induction process in all cases; the proposed PID control strategies kick in at the start of the maintenance process. The switching point between the induction and maintenance processes occurs at 1000 seconds, which results in a small oscillation.
The lower panels of Figures 20-25 show the behavior of the drug concentrations. It can be seen that both Cp and Ce of propofol and remifentanil are adjusted by the controllers according to the target profiles. More importantly, they are adjusted in advance to decrease the overshoot. This is mainly because all controllers are based on the PID control strategy, whose abilities are inherited. In addition, the total amount of remifentanil is larger than that of propofol, which illustrates that remifentanil is more favorable to the stabilization of BIS than propofol. This might be due to the faster action time of remifentanil compared to propofol. Although the total amount of remifentanil is larger, the cost function takes the difference between the amounts of the two drugs into consideration so that a serious bias toward remifentanil is avoided. Even though a slight bias toward remifentanil can be noticed, overall the propofol and remifentanil Ce reached are within acceptable clinical boundaries, even when the BIS target was set at 20, a value below the safe interval of 40 to 60. Overall, from a clinical point of view, the results and performance obtained are acceptable, considering the lack of significant overshoot, the stability of the controller response and the ability to achieve the desired targets in a short time with clinically acceptable Ce. The controller also provided the expected response when the value of the BIS target increases: it stops the infusion of the anesthetic drugs and restarts the infusion only in order to avoid an overshoot.
When the PID control strategies with the best sets of parameters (offering the best cost J) are employed to regulate the BIS value using the testing profile in Figure 6, their cost values are listed in Table 15. Their rank is identical to the rank of the cost values from the training profile, indicating that the advantage of the fuzzy PID controllers over the linear PID controllers exists not only for the training profile, but also for various target profiles.

Conclusion

In this paper, drug administration for anesthesia has been realized by various PID control strategies. We have first constructed a multivariable anesthesia model with propofol and remifentanil based on 42 patients' clinical data. Simulation has been conducted to regulate the output BIS value governed by this model using six control strategies, including linear PID controllers, T1 fuzzy PID controllers and IT2 fuzzy PID controllers. These six strategies are separated into two groups, namely two controllers and one controller with scaling factors, to handle the co-administration of the two drugs. The parameters of these controllers have all been optimized by GA subject to a performance index that quantitatively measures the control performance. To make the control task demanding, target profiles have been designed for BIS regulation, and different target profiles have been utilized to test and verify the performance of the controllers. It has been demonstrated that the IT2 fuzzy PID controllers offer the best performance, with the T1 fuzzy PID controllers coming second. In the future, further investigation can be carried out to control a general and non-parametric multivariable anesthesia model rather than a model with parameters obtained from a specific group of patients.

Figure 1. A block diagram of the multivariable anesthesia model.
Figure 2. Structure of a three-compartmental PK model with effect-site compartment [29].
Figure 3. A block diagram of the linear PID controller.
Figure 4. A block diagram of the fuzzy PID control system.
Figure 6. Testing profile.
Figure 7. BIS index regulation using two fuzzy PID controllers.
Figure 8. BIS index regulation using one fuzzy PID controller with scaling factors.
Figure 9. An example of IT2 membership functions. Dashed line: lower membership function. Dotted line: upper membership function. Gray area: footprint of uncertainty.
Figure 10. Membership functions for two T1 fuzzy PID controllers. Dashed line: membership function N. Dotted line: membership function Z. Solid line: membership function P.
Figure 11. Membership functions for two IT2 fuzzy PID controllers. Dashed line: lower membership functions. Dotted line: upper membership functions. Solid line: the shoulder of the membership functions.
Figure 12. Membership functions for one T1 fuzzy PID controller with T1 fuzzy scaling factors. Dashed line: membership function N. Dotted line: membership function Z. Solid line: membership function P.
Figure 13. Membership functions for one IT2 fuzzy PID controller with IT2 fuzzy scaling factors. Dashed line: lower membership functions. Dotted line: upper membership functions. Solid line: the shoulder of the membership functions.
Figure 14. BIS and drug concentration for the training profile by two PID controllers.
Figure 15. BIS and drug concentration for the training profile by two T1 fuzzy PID controllers.
Figure 16. BIS and drug concentration for the training profile by two IT2 fuzzy PID controllers.
Figure 17. BIS and drug concentration for the training profile by one PID controller with scaling factors.
Figure 18. BIS and drug concentration for the training profile by one T1 fuzzy PID controller with T1 fuzzy scaling factors.
Figure 19. BIS and drug concentration for the training profile by one IT2 fuzzy PID controller with IT2 fuzzy scaling factors.
Figure 20. BIS and drug concentration for the testing profile by two PID controllers.
Figure 21. BIS and drug concentration for the testing profile by two T1 fuzzy PID controllers.
Figure 22. BIS and drug concentration for the testing profile by two IT2 fuzzy PID controllers.
Figure 23. BIS and drug concentration for the testing profile by one PID controller with scaling factors.
Figure 24. BIS and drug concentration for the testing profile by one T1 fuzzy PID controller with T1 fuzzy scaling factors.
Figure 25. BIS and drug concentration for the testing profile by one IT2 fuzzy PID controller with IT2 fuzzy scaling factors.
Table 1. Six cases of PID control strategies.
Table 2. Lower and upper bounds of parameters.
Table 3. Control parameters of GA.
Table 4. The cost J from running GA 10 times.
Table 5. Best set of parameters for two PID controllers.
Table 6. Best set of parameters for two T1 fuzzy PID controllers.
Table 7. Best set of membership functions for two T1 fuzzy PID controllers.
Table 8. Best set of parameters for two IT2 fuzzy PID controllers.
Table 9. Best set of membership functions for two IT2 fuzzy PID controllers.
Table 10. Best set of parameters for one PID controller.
Table 11. Best set of parameters for one T1 fuzzy PID controller with T1 fuzzy scaling factors.
Table 12. Best set of membership functions for one T1 fuzzy PID controller with T1 fuzzy scaling factors.
Table 13. Best set of parameters for one IT2 fuzzy PID controller with IT2 fuzzy scaling factors.
Table 14. Best set of membership functions for one IT2 fuzzy PID controller with IT2 fuzzy scaling factors.
Table 15. The cost J for the testing profile.
Creatio Continua and Quantum Randomness

Some thinkers in the Christian and Islamic traditions assert that God doesn't only create the universe ex nihilo, but that he also continuously recreates the universe in order to preserve its existence. This chapter will discuss randomness vis-à-vis the doctrine of continuous creation as understood in both religious traditions. We argue that the doctrine of continuous creation, in the version held by both Christians and Muslims, would preclude ontic quantum randomness. The reason is that in the doctrine of continuous creation, God is ultimately and meticulously responsible for the existence of objects and properties at every single moment.

While the doctrine of creation out of nothing (creatio ex nihilo) may not have a direct impact on the issue of randomness, the doctrine of continuous creation (creatio continua) surely does.1 This paper will discuss what we call The Common View of the doctrine of continuous creation, which we offer as a common denominator between the Christian and Islamic traditions.2 It will also discuss whether there is a place for ontological quantum randomness in the universe if the doctrine of continuous creation is true. Ontological randomness is different from epistemic randomness. The latter has to do with human cognition and its limitations: events appear to be random because our minds do not have the necessary information to understand why things happen the way they do. By contrast, ontological randomness is independent of human cognition and concerns a causal nexus of entities that is deprived of efficient or final causation. In this paper, we argue that The Common View of the doctrine of continuous creation would preclude ontological randomness. For clarity, let's first distinguish two interpretations of the doctrine of continuous creation.
First, "the doctrine of continuous creation qua recreation" (CC Rec) says that continuous creation is conceptually equivalent to continuous conservation, but interprets continuous creation as continuous recreation. On this interpretation, objects constantly go out of existence and come into being by God's continuous ex nihilo recreation. Second, "the doctrine of continuous creation qua sustenance" (CC Sus) also says that continuous creation is conceptually equivalent to continuous conservation, although it rejects the claim that objects continuously vanish and are being recreated by God. In CC Sus, continuous creation is merely continuous sustenance, without repeated ex nihilo creation. We also stipulate CC Rec/Sus as a blanket term for "continuous creation," which can be specified further into CC Rec or CC Sus.

1 Pannenberg suggests that the concept of divine providence has three aspects: conservation, concurrence, and government (Pannenberg 1988, 8-9). In some theological traditions, conservation is considered equivalent to continuous creation. For this reason, the doctrine of continuous creation is very relevant to the issues of randomness and providence.

2 This paper will not discuss the steady-state theory of Bondi and Gold, which was discounted by the discovery of the cosmic microwave background radiation that favors the Big Bang theory. This theory is sometimes called the "continuous creation" theory (Bondi and Gold 1948; Karimi 2011). For a panentheist-idealist version of the doctrine of continuous creation, which will not be considered here, see Schultz and D'Andrea-Winslow 2017. Karl Svozil uses the term "creatio continua" to refer to an "indeterministic" generation process that results in quantum randomness (Svozil 2016, 28). Svozil's usage of "creatio continua" is not how we understand the phrase in this paper.

Definition 1.1 (CC Rec) God continuously recreates everything ex nihilo in successive instants.
Objects continuously come into being and go out of existence. Continuous creation is not simply continuous sustenance.

Definition 1.2 (CC Sus) God continuously creates everything, but objects don't continuously come into being and go out of existence. Continuous creation is simply continuous sustenance and not the ex nihilo recreation of objects.

Definition 1.3 (CC Rec/Sus) God continuously creates everything, but "continuous creation" can be interpreted as either continuous ex nihilo recreation (CC Rec) or mere continuous sustenance (CC Sus).

We also need to note that when we use the word "conservation" without further specification, we use it as a general term, without specifying it as conservation qua continuous recreation or conservation qua mere sustenance.

The Christian Traditions

The Christian tradition has a long history of the doctrine of continuous creation, especially during the medieval and early modern periods. The doctrine has also found its way into contemporary analytic philosophy of religion.

Nicolas Malebranche

Malebranche (1638-1715) argues that God's continuous creation (CC Rec/Sus) ranges over both (a) the existence and (b) the determinate properties of objects, including their spatiotemporal coordinates. His argument for (a), namely for the continuous creation (CC Rec/Sus) of objects, is the argument from dependence: since creatures are metaphysically dependent on their creator, there isn't a possible world in which creatures exist but their creator (per impossibile) no longer does. For Malebranche, continuous creation (CC Rec/Sus) also ranges over the determinate properties of objects, such as their location.4 His argument is that because the universe and its complete features are immediately created by God in every successive instant, there couldn't be a previous (or a subsequent) time-slice in which the properties of an object are determined by God other than in the very instant the universe is created (Miller 2011, 5).
Instead, the imparting of all of the objects' determinate properties must be done by God simultaneously with the objects' coming into being, that is, at the moment of creation. It is simply inconceivable, according to him, that a chair exists unless it exists "somewhere, either here or elsewhere" (Dialogues VII.VI).5

René Descartes

Descartes (1596-1650) is another author who believes in the doctrine of continuous creation (CC Rec/Sus). The Cartesian argument for continuous creation (CC Rec/Sus) is an argument from the impotence of objects to persist on their own over time. In the Third Meditation, he argues that an extended body does not have the power to ensure its existence at a future time. Because time is divisible into countless parts which are completely independent of one another, there is simply no guarantee that an object at time t1 can assure its existence at time t2, where t2 > t1. The existence of an object, then, must be caused by something external, "which as it were creates me afresh at this moment-that is, which preserves me" (Descartes 1984, 33). To continue existing, at every successive instant, objects must be recreated by God. Descartes concludes that conservation and creation differ only by a distinction of reason. Nevertheless, we do not have a way of deciding between the literal and non-literal readings of Descartes' aforementioned sentence; that is, we don't actually know whether Descartes embraces CC Rec or CC Sus.

Jonathan Edwards

Like Descartes, Edwards (1703-1758) believes that existences are ontically bound to a particular time and place. Because of this, Edwards strongly argues that things cannot cause their own existence at a later time; hence their existence at a later time must be immediately caused by an external agent, namely, the Creator.
What is absent in Malebranche and unclear in Descartes, however, is Edwards' explicit claim that objects continuously come into being and vanish, only to be recreated by God with gradual changes. In other words, Edwards clearly endorses continuous creation qua recreation (CC Rec).6 Since created objects (sometimes called "nature") are impotent to sustain their own existence, God must literally recreate them at every successive instant of their entire life span. Moreover, God not only recreates their existence, but also their "properties, relations, and circumstances" (Edwards 1970, 403).

The Islamic Traditions

There are several groups and thinkers in the Islamic traditions,7 but our focus will primarily be the Ash'arite school of thought, because they famously offer a unique combination of occasionalism and atomism, both of which will be useful for thinking about our continuous creation model. We primarily discuss and utilize the works of al-Juwayni (1028-1085) and his student, al-Ghazālī (1058-1111), both of whom are well-known in the Ash'arite household.8 Each of them wrote treatises that explicate the Ash'arite doctrine: al-Juwayni wrote A Guide to Conclusive Proofs for the Principles of Belief (Al-Juwayni 2000) and al-Ghazālī wrote Moderation in Belief (Al-Ghazālī 2013).

6 Compare this view to McCann and Kvanvig's, which suggests that the doctrine of continuous creation doesn't need to mean that the universe always appears anew at every moment of its existence (McCann and Kvanvig 1991, 590). McCann and Kvanvig, however, still think that God determines both the essential and accidental properties of objects (597). Another Christian historical figure besides Edwards who believes in the literal, "strong" view of the continuous recreation of bodies (CC Rec) is Leibniz's contemporary Pierre Bayle (Anfray 2019). One can also find a similar view in the early Leibniz's doctrine of transcreation (White 2000).

7 For excellent references on the various schools, see Wolfson 1976; Jackson 2014; Taftazani 1950; Muhtaroglu 2017; Frank 1966.

8 A caveat needs to be pointed out. In the contemporary literature there is a healthy debate over al-Ghazālī's metaphysical worldview. Some maintain, as we do provisionally in this article, that al-Ghazālī was an Ash'arite, while others argue he was a covert Neoplatonist. To familiarize oneself with the advocates and references for either side, see footnote 8 in Malik 2019.

The Ash'arite Worldview

The initial bifurcation in their worldview is between the Creator and the world, that is, anything other than God, where the former is a necessary being and the latter is entirely (and radically) contingent. The Ash'arites provide a systematic taxonomy of the metaphysics of the world. First, the world is divided into atoms (jawhar9) and modes ('arad). Atoms are indivisible, self-subsisting, space-occupying (mutahayyiz) units, while modes are properties that inhere in atoms. These properties include things like color, taste, odor, life, and death.10 Modes cannot exist on their own; they need a locus in which to manifest themselves, which is why they subsist in atoms. In effect, atoms are simply indivisible scaffolds, similar to the frame of a building under construction. When atoms aggregate into various combinations, they form a body (jism).11 Second, the Ash'arites classify four states or manners of being (akwan): (1) movement, for example rotational or translational; (2) rest, where an entity remains in the same position for two or more moments of time; (3) combination or aggregation of atoms or bodies; and (4) separation of atoms and bodies (Al-Juwayni 2000, 11; Al-Ghazālī 2013, 27).12 This forms the basic ontology of the Ash'arites upon which everything else is built.13

9 In the kalamic texts, the word "jawhar" is sometimes used to refer both to the atom and to a body.

10 These may seem archaic, which they are.
It seems that macroscopic properties have been imposed on the microscopic world, something with which modern physics may not agree. For example, the textures of macroscopic entities can be rough, but atoms themselves aren't "rough." By contrast, atoms have various properties specific to them which aren't visible in the macroscopic world, e.g., spin, charm, and strangeness.

11 There is a disagreement among the mutakallimun over how many atoms need to combine to make a body.

12 Al-Ghazālī doesn't discuss the akwan in the same way that al-Juwayni does, but he has a discussion about rest and motion in general; see al-Ghazālī 2000, 31-33.

13 For more information on this paradigm, see Erasmus 2018, 53-63; Sabra 2009; and MacDonald 1927. One could ask where the soul, angels, and demons fit in this scheme, since they aren't material entities. To our knowledge, neither al-Juwayni nor al-Ghazālī explicitly addresses the concern.

Motivation and Justification

There are two main reasons why the Ash'arites hold such a worldview. First, the Ash'arites are strongly opposed to anything that contradicts the laws of logic, because God is bound by them; that is, God can do everything except the logically impossible. This is the full extent of God's capabilities (Al-Ghazālī 2000, 175). This is important because it explains how and why the Ash'arites take atomism seriously. They hold atomism, or more broadly a discrete-based worldview, to avoid infinities in creation, as these can lead to contradictions, and God cannot produce contradictions. A common example is the comparison of the seed and the mountain. If both of these very different entities can be divided into smaller and smaller entities ad infinitum, it would result in an infinite regress, which would be illogical as it suggests that both the seed and the mountain have the same (infinite) number of parts (McGinnis 2018).
Accordingly, there must be a limit to nature and a fundamental unit which forms the basic building block of the natural world.14 For similar reasons, they hold discrete interpretations of space and time (Arthur 2012; Bulgȇn 2018; Altaie 2016, 17). On this point, al-Ghazālī outlines three problems with an actual infinity. The first issue he raises is that an actual infinity can never end, so to say that an infinity has passed is conceptually problematic (al-Ghazālī 2013, 37). The second problem is related to the counting of celestial properties with the passage of time: if an infinity has passed, then it makes no sense to say that a celestial body has rotated an even or an odd number of times; in effect, counting in temporal terms would be useless (37). The third problem al-Ghazālī raises is, pace Cantor, the existence of different sizes of infinities (38). Given these issues with actual infinity, "neither in space nor in time can there be an infinity of extension, nor an infinity of subdivision, nor an infinity of succession" (MacDonald 1927).

The second reason for holding atomism is that atoms are the linking point between the natural world and God's power: if the very basic entities are under the command of God, then by extension everything else is.15

14 It is along these lines that al-Ghazālī makes a distinction between atom and mode. A mode cannot subsist in another mode, because otherwise that second mode would need another mode, and so on to infinite regress. There must be a stopping point, a locus within which a mode resides, which is the subsisting atom (Al-Ghazālī 2013, 34).

15 Unless, of course, God designed a bottom-up kind of emergence.
Though this isn't a tight logical argument, as God could still be in absolute control of everything in the absence of particles, it just goes to show how thoroughgoing the Ash'arites are in making sure that nothing escapes God's control.

It should be stressed that occasionalism is a predominant theme among the Ash'arites.16 The reason for this is that they don't accept autonomy or agency existing outside of God's control, as this is seen to be theologically problematic (Jackson 2014; Farfur 2010). Everything outside of God relies on Him, while He relies on no one. Furthermore, God doesn't decide what to do from one moment to another. Rather, because God is outside of time and space, God has already "decided" past, present, and future in a single timeless act. So God has already fully determined each moment's existential posits, properties, relations, and circumstances.

Implications

The Ash'arite worldview has two very interesting implications. First, if everything exists momentarily in discrete time, and if everything in creation is contingent, that is, it could equally have existed or not existed, there must be a necessary being that wills it into existence over nonexistence (murajjih), and vice versa, at each moment in time (Al-Juwayni 2000, 12; Al-Ghazālī 2013, 42).17 So it is not just that atoms and their modes are sustained from one moment to another; rather, everything is brought into existence from nothing and then annihilated into nothing in the next moment, that is, CC Rec. Reality, then, is fundamentally ephemeral. This contrasts with mere conservationism (CC Sus), in which God merely sustains existences while nature/creation has some independence (Moad 2018), and with process theology, in which God evolves alongside nature in manifesting the development of nature/creation (Ruzgar 2016). In short, Ash'arite metaphysics sees the entire universe being continuously recreated, like the refresh rate of a computer screen.18

Second, nothing in the world is intrinsically necessary. Since God is literally recreating every atom and mode everywhere in each time-slice, it follows that there are no internal potentialities in things. It is simply God's will that manifests reality moment by moment.
That is why al-Ghazalı̄ (and Ash'arites in general) famously denies any kind of inherent necessity in creation, be it in the form of passive powers (Al-Ghazalı̄ 2000, 170) or active ones (166). That said, according to the Ash'arites God has chosen to manifest certain nomological laws (sunnatAllah), but there is nothing refraining Him from changing those laws if He chose to do so for the sake of performing miracles. So it is not impossible for God to convert a staff to a snake or split the seas or perform any other kind of nomological change, since they are all under the realm of logical possibilities.

The Common View

Both the Christian and Islamic traditions provide resources for constructing "The Common View of the Continuous Creation," which consists of five theses:

Footnote 18: Here it should be noted that there was (and still is) a debate about whether it was the atom and the modes that were being recreated or just the modes. For example, Altaie (2016, 17), a contemporary scholar of physics and kalam, seems to suggest that atoms are recreated alongside modes. By contrast, Ibn 'Arabi, a famous classical Sufi theologian, is quite against the idea of substances as discussed by the Ash'arites, as pointed out by Koca (2017). Regardless of what one makes of this debate, the key thing to remember here is that, at the very least, modes cannot endure more than a moment and are recreated. Interestingly, Adi Setia (2006), a contemporary scholar of Islamic intellectual thought, provides an explanation of this: the atomism of the mutakallimun cannot but be thought of as a conceptual limit.

Footnote 19: Kim calls this sort of divine causation "vertical determination." He then uses the phrase "Edwards's Dictum" to describe the incompatibility between God's causation and creaturely

4. The Bottom-Up Thesis: God's creation of objects includes the creation of all of the objects' properties or modes.

5.
The Determinacy Thesis: God immediately wills the determinate properties or modes of objects.

We will now assess the consequences of this view for the issue of randomness.

Conservation Without Determinacy

It is possible to adopt The Conservation Thesis, but not The Determinacy Thesis. That is, it is possible to think that God conserves the world without determining every single property that created objects have. At most, The Conservation Thesis entails The Partial Determinacy Thesis, which says that some properties of an object x are instantiated in virtue of x's existence being sustained. In the same way, one might think that God only conserves the existence of the world without determining all of the countless properties that created objects have.

The Equivalence Thesis

The Equivalence Thesis says that there is no difference between God's act of continuous creation (CC Rec/Sus) and his act of continuous conservation.20 It is possible to adopt The Equivalence Thesis without embracing The Edwards-Ash'arite Thesis, and to claim that continuous creation is merely continuous sustenance, but not continuous recreation. This would result in CC Sus. Kvanvig and McCann, for example, think that there is equivalence between conservation and continuous creation. They claim that what is proposed in The Edwards-Ash'arite Thesis should be rejected, and suggest that the use of the word "creation" in the phrase "continuous creation" is a terminological infelicity (Kvanvig and McCann 2005, 15).

The Edwards-Ash'arite Thesis

The Edwards-Ash'arite Thesis, or CC Rec, might run into a problem on the assumption that God is timeless. If we agree with Malebranche that there is a single and undivided act of creation, in what sense does God continuously or repeatedly recreate (CC Rec) the world? Does The Edwards-Ash'arite Thesis require the assumption that God is actually in time instead of outside of time?
If one assumes that God is in time, the doctrine of continuous creation (CC Rec) seems to be unproblematic. God can simply recreate the world in each time-slice. If God is timeless, however, the continuous recreation (CC Rec) of the world would look different, perhaps with space and time themselves being part of a creation that constantly goes in and out of existence. At any rate, the problem about God's timelessness and its relation to God's creating act is not a problem unique to the doctrine of continuous creation (CC Rec), but one for the doctrines of creation and divine action in general. Even if one doesn't subscribe to the doctrine of continuous recreation (CC Rec), one has to explain how a timeless God acts in the temporal world. Another worry with The Edwards-Ash'arite Thesis is that it denies the persistence of objects and personal identity through time. The intuition is that an object never persists since at every moment it immediately vanishes. Quinn points out that this is problematic because if humans don't persist, they do not perform actions at all, which is contrary to both common sense and theistic orthodoxy (Quinn 1983, 63-67). One other significant worry here is that if there is no personal identity through time, then there is no conservation happening. If an object were to vanish, by definition, it would not have been conserved. If this were the case, then one couldn't equate continuous creation (CC Rec) with conservation. The claim that the doctrine of continuous creation (CC Rec) precludes personal identity through time assumes either of the following premises. First, an object that has vanished cannot come into being again. Second, even if it could, it would not be identical to the vanished object. One way to address this issue is to adopt the exdurantist or stage theory view of persistence, in which objects persist in stages of short-lived entities (Haslanger 2003, 321).
In exdurantism, objects persist in virtue of having counterparts in other temporal stages. In each stage of its life, the object (i.e., a counterpart) is wholly present, although it comes into and goes out of existence at every successive instant. If identity requires that an object must continually exist through time, then exdurantism must bite the bullet and say that persistence doesn't require identity. As such, proponents of continuous creation (CC Rec) must say that although conservation requires persistence, it doesn't require identity. However, instead of conceding that identity is absent in exdurantism, perhaps it is better to say that only a less robust notion of identity is required for persistence, which is a theory of identity that doesn't require seamless continuity in existence. Two analogies might be useful here. First, a film roll has many frames, which depict objects. The objects depicted in one frame are strictly speaking neither identical to nor spatiotemporally continuous with the objects depicted in another frame. When the film is played, however, viewers (and the exdurantists) see that objects persist and maintain personal identity over time. In this way, there is no robust ontological identity in objects, but only in the perception or the mind of the viewers. This view is sometimes called "the cinematographical view" of identity.22 Second, a patch of color persists through time but doesn't maintain identity due to the constant supply of photons needed.23 A person, upon seeing a color patch, can say that the color patch persists over time. In reality, nevertheless, the color patch is constantly refreshed as new photons enable it to be perceived. In these ways, exdurantism can maintain persistence with identity, although the notion of identity is less restrictive. One immediate question that might arise here would be of moral responsibility. How are persons morally responsible if they persist without a robust sense of identity?
The cinematic image would be helpful to invoke again. Human beings that are recreated continuously are morally responsible for their actions to the extent that villainous film characters are in a sense responsible for their actions, although they are in reality a bundle of different film frames. Admittedly, the philosophical problems discussed earlier are indeed difficult to address satisfactorily. The solutions require the acceptance of controversial intuitions about change, rigid designation, existential inertia, identity, and moral responsibility. However, we contend that seeing The Edwards-Ash'arite Thesis as a stage view can still be metaphysically defensible.

Footnote 21 (fragment): ... 1970, 403). This seems to point us toward the stage theory of objects. Crisp himself says in his later piece that for Edwards, objects exdure (Crisp 2018, 12).

Footnote 22: See Bergson 1911, 304-311 and Crisp 2016, 203-204.

Footnote 23: See Edwards 1970, 404 and Descartes 1984, 254-255.

The Bottom-Up and the Determinacy Theses

The Bottom-Up Thesis says that God creates not only x, but also x's modes or properties. The Bottom-Up Thesis, however, is not explicit about whether God meticulously determines the properties of x. The Determinacy Thesis strengthens The Bottom-Up Thesis by asserting that God not only creates the properties of objects, but also meticulously determines them. On this view, theistic determinism thoroughly reigns in the world. Every property of objects is determined by God, including the properties of quantum particles.

Continuous Creation and Quantum Mechanics

The intersection of The Common View of the doctrine of continuous creation (CC Rec) and randomness includes the issue of the determinateness of properties. Suppose that God recreates the universe at every successive instant. Suppose that he also wills the determinate properties of each object, including its physical properties.
How, then, would The Common View explain quantum weirdness, in which particles sometimes are neither somewhere nor nowhere, but in a superposition? If The Common View is true, then the place of ontological randomness in the universe is questionable. Although there are two kinds of interpretations of quantum mechanics, namely, deterministic and indeterministic ones, in the following we will only review the indeterministic Copenhagen interpretation. Under the Copenhagen indeterministic interpretation of quantum mechanics, the wavefunction and its probabilistic character provide a complete specification of a quantum state (Bohr 1935).24 On this view, however, one can't always say that a particle has a determinate location or momentum, particularly when it is in a superposition. According to Heisenberg's uncertainty principle, a system cannot simultaneously possess perfectly precise values of position and momentum. Heisenberg contends that there is something inherently indeterminate in the system. What is happening, then, when God recreates (CC Rec) the world at every successive instant, including when some particles are in a superposition? Two metaphysical strategies are available here to maintain that God recreates (CC Rec) objects and properties. The first strategy is to see quantum properties as vague properties. Some philosophers think that there are problems in individuating macro-objects such as mountains and forests, for one doesn't always know where a mountain or a forest begins and where it ends. Sometimes there is no way to tell which trees can function as the exact boundaries of mountains and forests. The boundaries, in other words, are vague. The vagueness problem doesn't only plague objects, but also properties. Might quantum objects have vague properties? Lowe argues that quantum indeterminacy with respect to electrons is an example of real ontic vagueness in the properties of quantum objects, such as positions and momenta (Lowe 1994, 114).
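For reference, the Heisenberg uncertainty relation invoked above is standardly written as follows (this is background physics in conventional notation, not a formula taken from the paper):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where \Delta x and \Delta p are the standard deviations of position and momentum, and \hbar is the reduced Planck constant. The relation holds for any quantum state, which is why no state can assign perfectly precise values to both quantities at once.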
Bokulich suggests that ontic vagueness in the properties of quantum particles obtains because they lack "space-time trajectories" and value definiteness in their positions, momenta, and so forth (Bokulich 2014, 463ff.). Unlike classical particles, quantum particles do not always have a determinate location, a definite spin, or an exact charge. This is especially true when they are in an entangled or superposed state. Perhaps some indeterminate quantum properties are vague properties and indeed display genuine ontic indeterminacy.25 The second strategy to maintain that God recreates (CC Rec) objects and properties in quantum events is to posit holism in the quantum states. Holism is the view that "a whole is something more than the sum of its parts, or has properties that cannot be understood in terms of the properties of the parts" (Maudlin 2013, 46). Perhaps quantum states are irreducible to any smaller parts. Unlike the first strategy, the recognition of holism doesn't require one to posit vague properties in the quantum state. Rather, the quantum state itself has properties that might be determinate, although the properties of its parts are themselves unspecifiable or even lacking a physical state (58). Juxtaposed with the doctrine of continuous creation (CC Rec), one can say that at some instants, particles become a part of a holistic quantum state. While no one could know for sure the nature of quantum states, God would determine that some quantum states would contain a certain number of particles whose properties are not specifiable further. It is true that The Determinacy Thesis says that God wills that particles have determinate properties, but it doesn't follow from this that those determinate properties can be more determinate than being superposed in quantum state Q. Adopting both the second strategy and The Common View might offer an explanation of how God acts in quantum events, given the measurement problem in quantum mechanics.
Von Neumann identifies two processes by which quantum states evolve: the indeterministic and probabilistic process when a measurement occurs (Process 1) and the deterministic process in accordance with the Schrödinger wave equation (Process 2) (Neumann 1955, 417-418). In von Neumann's "orthodox" interpretation of quantum mechanics, the wavefunctions would "collapse" or "jump" during observations or measurements of particles, and the quantum states would undergo a process change from deterministic (Process 2) to indeterministic (Process 1) evolution.26 The measurement problem is a problem because one doesn't know how and when a wavefunction collapse would occur. Let's situate the measurement problem within the current discourse of how "non-interventionist divine action" (assuming the necessitarian reading of the natural laws) plays a role in quantum events. Saunders lists the four possibilities of such divine action:

1. God alters the wavefunction between measurements.
2. God makes God's own measurements on a given system.
3. God alters the probability of obtaining a particular result.
4. God controls the outcome of measurement. (Saunders 2002, 149ff.)

Saunders argues that these four possibilities are unsatisfactory because they require some kind of intervention on God's part (156).27 We argue that adopting The Common View and positing quantum holism can provide a better account of non-interventionist divine action in quantum events. To begin with, the strategy of positing quantum holism is not the same as altering the wavefunction between measurements or manipulating probability distributions (options 1 and 3).

Footnote 26: The designation of Process 1 as indeterministic and Process 2 as deterministic follows Everett's exposition of the universal wavefunction in Everett 1973, 3.

Footnote 27: See different responses to Saunders' critique in Wildman 2008, 162ff.
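For reference, the deterministic Process 2 evolution is governed by the time-dependent Schrödinger equation, shown here in standard notation (background physics, not an equation drawn from the paper itself):

```latex
i\hbar \,\frac{\partial}{\partial t}\Psi(x,t) \;=\; \hat{H}\,\Psi(x,t)
```

where \Psi(x,t) is the wavefunction and \hat{H} is the Hamiltonian operator of the system. Given an initial state, this equation fixes the state at all later times, which is what makes Process 2 deterministic in contrast to the probabilistic Process 1.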
In contrast to option 1, God doesn't have to change the wavefunction to ensure that the outcome accords with his will. The reason is that in continuous creation (CC Rec), he can simply pick the initial quantum state to ensure it evolves into some later state via the Schrödinger equation (Process 2). In contrast to option 3, accepting quantum holism is not claiming that God quirkily manipulates the probability distribution of a quantum state to achieve a certain outcome. In continuous creation (CC Rec), God only needs to create a certain holistic quantum state. Afterward, God will be able to determine the outcomes that fall within the probability distribution in the quantum state. The strategy is not the same as option 2 either, because no measurements of the particle properties are made by God in the quantum state. Quantum holism concedes that some parts of the system are unspecifiable not because there is a lack of knowledge on anyone's part, but because of the very holistic character of the state. Option 4 is only half-right. The strategy does say that God controls the outcomes of measurements, because God creates the universe (CC Rec) at every successive instant. Saunders is worried about this option, saying that if it is taken, then we need to move ontologically backward, namely, from the outcomes of measurements to the determination of the ontological probability of the quantum state (Saunders 2002, 154-155). But The Common View can escape this worry. First, in God's continuous creation (CC Rec), the determination of the quantum state and its probability distribution at time t1 is independent of the outcome at time t2, where t2 > t1. Second, the outcome is entirely according to God's determination and needn't violate any regular scientific laws, because God can ensure that the outcome falls within the acceptable probability range (Tracy 2008, 273). God can work within a physical parameter, such as wave amplitude, that God himself has established.
The strategy of falling back metaphysically on quantum holism, then, fulfills Saunders' requirements for a reasonable "non-interventionist special divine action," namely that the quantum probabilities must:

1. be ontologically prior to the measurement and thus represent some feature of the system in question; and
2. be modifiable by God without an intervention in the quantum wavefunction itself (which evolves deterministically under the Schrödinger equation). (Saunders 2002, 154)

We argue that embracing The Common View and adopting quantum holism allow us to fulfill the first requirement, since in God's continuous creation (CC Rec), there is no need for a proleptic manipulation of the quantum wavefunction to match the outcomes. The probabilities can ontologically be determined before the measurements. The second requirement to modify probabilities is not applicable, since in continuous creation (CC Rec), God can determine the outcome of the measurements simply to be within the range of the logical probability distribution in the preceding quantum state. However, from the physicist's point of view, there can still be epistemic randomness.

Objections and Replies

One objection to our discussion would be to point out the obvious fact that the Christian and Muslim thinkers discussed in this paper lived before modern physics arrived. Why bother with juxtaposing the doctrine of continuous creation (CC Rec) with quantum randomness? Isn't this an anachronistic endeavor? Indeed, The Determinacy Thesis was proposed by these thinkers before the quantum revolution. However, the theological position under scrutiny is still relevant: that God is the meticulous ruler of everything. It is important to show that traditional theological understanding can be reconciled with the new sciences.
While we appreciate that the sciences keep evolving, it is important to observe how theology stands in their light, as well as to ponder the theological implications of the new sciences (Vanney 2015, 751). Another objection would be to point out that our solution to reconciling continuous creation (CC Rec) with quantum randomness (under the standard interpretations) is simply to move the problem to a different metaphysical plane, namely, by saying that ontic quantum randomness is precluded because there are new metaphysical entities such as vague properties and holistic quantum states. In response, we argue that both the notions of vague properties and quantum states are independent of the issue of randomness itself. Our task is to provide ways of reconciling the doctrine of continuous creation (CC Rec) with quantum randomness, which we have already done with this plausible metaphysical apparatus. It is also important to underline the fact that quantum states are weird and that the behaviors of particles might appear to be metaphysically beyond determinacy simply because they have an uncharted ontology. A third objection might be that adopting quantum holism, in which some properties such as spin, position, and momentum are momentarily inapplicable to particles, is questionable. How is it the case that an electron has a position at time t1, but at a later time t2, when it is in a quantum state, locative properties can't be attributed to it? To answer this objection, one should remember that the assertion about the inapplicability of properties such as spin and momentum to particles when they are in a quantum state is already commonly accepted in physics. It is simply a weird quantum phenomenon. However, perhaps we can persuade the reader to consider that there is a sense in which objects momentarily lose their property applicability. Imagine a ball that is thrown vertically up toward the sky.
In the initial half of the trajectory it has the property of moving up. There is a point, namely the vertex, when the ball will momentarily stop and lose the property of moving before it starts moving down due to gravitational pull. There is nothing odd here. Also, the doctrine of the resurrection of the body might require that persons in the intermediate state do not have any spatial location while still existing. While these two analogies may not be satisfactory, they can be a starting point for further discussion.

Conclusion

Our paper finds common ground between orthodox Christian and Islamic thoughts on the doctrine of continuous creation (CC Rec). We reconcile the doctrine with the issue of quantum randomness and argue that if the doctrine is correct, there can't be ontic quantum randomness in this world. Under the standard indeterministic interpretations, ontic quantum randomness is precluded if we buy into either the notion of vague properties or quantum holism, both of which are plausible metaphysical concepts to utilize. Lastly, we have also suggested that embracing The Common View of continuous creation (CC Rec) provides a way to show that a non-interventionist special divine action is possible in quantum events, for God would be able to determine both the wavefunction of quantum states and the outcomes of the measurement that fall within a reasonable probability range.28
Telerehabilitation for community-dwelling middle-aged and older adults after musculoskeletal trauma: A systematic review

Background: Musculoskeletal trauma at midlife and beyond imposes a significant impact on function and quality of life. Rehabilitation is key to support early and sustained recovery. There are frequent barriers to attending in-person rehabilitation that may be overcome by recent advances in technology (telerehabilitation). Therefore, we conducted a systematic review of published evidence on telerehabilitation as a delivery mode for adults and older adults with musculoskeletal

Introduction

The global burden of musculoskeletal trauma is substantial [1]. Home-based rehabilitation has the potential for maximizing recovery after discharge from hospital with musculoskeletal trauma, such as hip fracture [2]. There are barriers to delivery of publicly-funded home rehabilitation [3,4], possibly because it is resource intensive, and/or because of a chronic shortage of clinicians [5], especially in rural communities [6]. Telerehabilitation is a promising delivery-mode innovation because it could minimize barriers to providing health care management. It is defined as "the provision of rehabilitation services at a distance using telecommunications technology as the delivery medium" ([7], page 217) and can be delivered via a number of different modes, such as telephone, video (webcam, video-conferencing), mobile apps, web-based platforms, etc. The field of telerehabilitation for managing health conditions such as heart disease [8] and stroke [9,10] is growing. However, less is known about this delivery mode for the prevention or management of impairments or disability after musculoskeletal trauma (i.e., from falls and fractures) in adults at midlife and older.
Although a limited number of reviews are available [11,12], the previous literature did not report or synthesize evidence on factors such as feasibility, generalizability, adherence, or behavior change techniques employed, all key elements that are important to understand for future delivery of this mode of rehabilitation. Previous reviews included studies with participants who had musculoskeletal conditions such as osteoarthritis, and although this is also an important focus, it is distinct from the experience of adults and older adults who have an acute (unexpected) trauma such as fracture. Further, the populations may be different. Based on population-level data, patients who had an elective total hip replacement, compared with patients who had surgery for low-trauma hip fracture, were younger, had fewer co-morbidities, and included more men [13]. These contextual factors may or may not present different challenges for delivering care remotely, but they signal the need to provide an evidence synthesis specific to adults and older adults with musculoskeletal trauma. Therefore, the aim of this systematic review was to provide a synthesis of available evidence on telerehabilitation for adults aged 50 years and older who sustained musculoskeletal trauma. We anticipate that this new knowledge will extend current evidence on the management of recovery after musculoskeletal trauma, and highlight gaps in evidence to inform further research agendas for this all too common health condition for adults and older adults.

Protocol and registration

We completed a systematic review following the guidelines for conducting and reporting established by the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) [14]. Prior to starting the review, we registered the title and methods on PROSPERO (CRD42017083447). Our review question was: What is the effectiveness of telerehabilitation for community-dwelling adults and older adults with musculoskeletal trauma?
Systematic review team members

Our review team has representation from the academic and clinical communities, with members from Australia, Canada, Germany and Spain. In particular, our review team included registered and/or practicing clinicians working in musculoskeletal health (MCA, CLE, TKG, DL, LM, PAV) who provided an important perspective for guiding the review process. Our team also includes a health psychologist (LF) who has a significant track record using behavior change theories. Many of the authors have previously published numerous systematic reviews (MCA, CLE, AC, LF).

Eligibility criteria (concepts)

We only included peer-reviewed publications, and studies that represented the following concepts. Population: Community-dwelling adults (50 years and older) with musculoskeletal trauma. Interventions: Telerehabilitation using apps, computer, telephone, videophone, videoconference, webcam, webpage, or similar media. Comparator: Usual care (in-person rehabilitation) or no rehabilitation. Type: Randomized controlled trials (RCT), controlled clinical trials, controlled before-and-after studies, interrupted time series, and feasibility/pilot or implementation studies.

Information sources and searches

One author (MCA) developed the search strategy and the other co-authors reviewed it for completeness and comprehensiveness. We searched the following electronic databases for all years until June 23, 2018: Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Database of Systematic Reviews, Embase, Medline (Ovid and PubMed), PsycINFO, and SportDiscus. Figure 1 is the search strategy used for Ovid MEDLINE. We also searched ClinicalTrials.gov and the WHO clinical trial registry (June 30, 2018) and conducted a focused search in Google Scholar for the following keywords in the title only: (1) fracture AND (telephone OR interview OR interviewing OR telerehabilitation) NOT protocol; and (2) fracture AND telephone OR telerehabilitation.
We uploaded the identified citations into Covidence (Covidence systematic review software, Veritas Health Innovation, Melbourne, Australia), which removed duplicate references. For all studies accepted at the full-text level, we conducted a forward citation search, and reviewed the reference lists. For the reference lists, one author (MCA) screened them and excluded any webpages, reviews, and references to methods/outcome measures, before uploading the remaining citations to Covidence. We included literature from all years and in all languages. If a publication was written in a language other than English, we requested an English version from the authors, or used Google Translate and resources within the team.

Study selection (Level 1, Level 2)

Following the search of the target databases (and removal of duplicates), two of four authors (MCA, CLE, AC, PAV) independently screened titles and abstracts of studies (Level 1), using Covidence. A third author (LF) reviewed any conflicts and made the final decision at Level 1. We repeated this process for the full-text articles (Level 2), with two of five co-authors (MCA, CLE, AC, LF, TKG) independently screening each full-text article. If we were uncertain whether an intervention was considered telerehabilitation, we contacted the first author for confirmation.

Data collection process

We extracted the following information for included studies: Author, year, country, setting, injury, recruitment (including sampling frame) and retention, population, intervention (including mode of delivery), behavior change techniques, outcomes, and findings. We contacted authors, if necessary, to obtain additional information. One author extracted data (MCA) and four authors confirmed data extraction (PAV, CLE, TKG, LM).

Risk of bias assessment (internal validity)

We used the Cochrane Risk of Bias tool [15] to assess the quality of the included RCTs.
Two of three authors (MCA, TKG, PAV) independently reviewed each included study, using Covidence to record their decisions. No author reviewed an included study that they authored, and final decisions were based on consensus between reviewers. We included all RCTs, regardless of the determined risk of bias, but planned to conduct a sensitivity analysis to determine the effect of excluding studies with higher risk assessments, if possible. For the category of blinding of personnel and participants, we used the Cochrane Handbook (Chapter 8) to guide our evaluation of risk of bias [16]. Specifically, we did not want to unfairly judge studies based on challenges with blinding group allocation from participants and personnel, a known challenge for rehabilitation trials [17]. Therefore, two authors discussed whether knowledge of group allocation would cause a substantial change in behavior in one group over the other (e.g., high risk of bias).

Generalizability (external validity)

We reviewed included studies to search for sampling frame, recruitment and retention, and overall description of included study participants to determine the generalizability of the findings [18].

Summary measures

Our a priori outcomes of interest were independent living, quality of life, falls, fractures, adverse events, mobility, balance, physical function and capacity, physical activity and sedentary behavior, fear of falling, and implementation factors: Feasibility, adherence, and behavior change techniques (BCT).

BCTs

We proposed to list the BCTs used within each study. If study authors provided a published list of BCTs, we included them in the overall description; otherwise, two authors (MCA, LF) independently reviewed each study and identified possible BCTs using the taxonomy established by Michie and colleagues [19]. We calculated Cohen's kappa statistic [20] to estimate the inter-rater reliability of identified BCTs used within the interventions.
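As an illustrative sketch (this is not code from the review, and the example ratings are hypothetical), Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance from each rater's marginal frequencies:

```python
# Minimal sketch of Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance-expected agreement.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    # (assumes p_e < 1, i.e., at least two categories are used).
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 candidate BCTs (1 = present, 0 = absent).
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # prints 0.8
```

On the Landis and Koch benchmarks the review cites implicitly (0.41-0.60 moderate, 0.61-0.80 substantial), a value of 0.8 would count as substantial agreement, matching the κ range the review reports.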
Synthesis of findings

A priori, we proposed to conduct a meta-analysis if data were available and it was appropriate, following established guidelines (e.g., assess statistical and clinical heterogeneity, choose a random- or fixed-effects model based on the available data, conduct sensitivity analyses). If a quantitative synthesis was not appropriate, we planned to provide a narrative summary of the findings based on the population, intervention, and results. If possible, we proposed to conduct subgroup analyses for women/men and/or different delivery modes for interventions (e.g., telephone, online video).

Managing bias and potential conflict of interest

Throughout the review process, team members strived to reduce unconscious bias. We registered our protocol prior to starting, and followed standard systematic review guidelines with two reviewers who independently adjudicated potential publications at Levels 1 and 2 and assessed study risk of bias. If we were uncertain of a study design or intervention, we emailed the corresponding author for clarification. In addition, no author of an included study assessed its risk of bias. This was an unfunded study and the authors stated no known conflicts of interest.

Study selection

We conducted a comprehensive systematic search for evidence across several databases, including all years and languages. Figure 2 presents the flow diagram of citations reviewed at Levels 1 and 2, with reasons for excluding citations at Level 2.

Risk of bias

We included only the four RCTs [22][23][24][25] in the assessment. They had predominantly low or unclear risk of bias across items. For determining the risk of bias related to blinding of participants and personnel, we judged that the study by Di Monaco and colleagues (single phone call) [22] would not initiate a substantial change in behavior.
In contrast, for the remaining three studies it was difficult to untangle whether the lack of blinding would increase the risk of bias because of the design of the studies and interventions. Figures 3A and 3B summarize risk of bias overall and for individual studies.

Generalizability of findings

Almost all of the studies (5/6) were conducted in older adults recovering from hip fracture. All participants were community-dwelling and did not have dementia or low cognition scores; most participants were older white women. Only three studies reported their sampling frame, with recruitment rates ranging from 19-74% of all possible participants and 38-94% of all eligible participants. Participant retention ranged from 71-100% (Table 2).

Implementation factors and BCTs

All interventions were completed in six months or less, but few studies reported a detailed description of the intervention and its implementation. Only the two pre-post studies reported on participants' satisfaction. Only one study [23] listed the intervention BCTs. There was moderate to substantial agreement between the raters (κ = 0.57-1.0) [27] for adjudication of BCTs in the remaining five studies. Of the 93 behavior change techniques [19], 28 different techniques were identified within the included interventions (Table 3). The majority of identified BCTs belonged to the clusters goals and planning, repetition and substitution, and social support. Interventions were complex, and the number of identified techniques ranged from 4 to 17 BCTs per study. The most popular BCTs included in the interventions were behavioral practice (83.3%), credible source (83.3%), and unspecified social support (83.3%), followed by instruction on how to perform the behavior (66.7%).

Synthesis of results

There was heterogeneity in outcome variables within the included studies, precluding combining data quantitatively; therefore, we conducted only a narrative summary of the identified evidence.
For example, physical activity was measured in two RCTs [24,25]; however, one study used self-report [25] and the other used activity monitors [24], with different outcomes. Overall, studies ascertained the feasibility of remote delivery via telephone and computer/online-based modes for the target population, and some studies observed significant differences between groups (favoring the intervention). For example, based on two studies [21,26], there were statistically significant pre-post differences noted for exercise self-efficacy [21], function [21,26], physical activity [21], and quality of life [21]. In the four RCTs [22][23][24][25], there were significant between-group differences in only two trials, for exercise self-efficacy [25], falls self-efficacy [24], physical activity [24,25], and quality of life [24] (Table 4).

Discussion

Globally, there is a large and growing population of adults and older adults who sustain musculoskeletal trauma each year [1]. Rehabilitation and resumption of usual activity are imperative for recovery and avoidance of further deterioration. Although delivering health care remotely using technology has increased considerably in the past two decades [28], we highlight a gap in knowledge for musculoskeletal trauma rehabilitation. In this systematic review, we identified only six studies that met the inclusion criteria. The existing evidence is based predominantly on telephony as the delivery mode, with limited generalizability beyond predominantly older community-dwelling women. An encouraging observation was the internal validity of the RCTs, and the use of behavior change theory and techniques within some studies. However, due to limited evidence and the heterogeneity of the outcome measures, we cannot make recommendations at present. A key take-home message of this review is the apparent gap in evidence for telerehabilitation as a delivery mode after musculoskeletal trauma.
This is in contrast to other clinical areas, such as stroke [10,29,30], cardiopulmonary [31][32][33], joint replacement [34,35], and multiple sclerosis [36,37]. Despite evidence on telerehabilitation for adults with chronic orthopaedic conditions [12], it is important to discern "what works for whom and under what circumstances" [38]: implementation factors such as the delivery mode for the intended population are important for the delivery, uptake, and long-term sustainability of an intervention. It is not entirely clear why there are fewer published studies in this area, but it may be related to the sudden and unexpected nature of traumatic events and/or the diversity of the population at risk [39]. Another consideration is that post-discharge pathways for routine rehabilitation may not be clearly defined for adults and older adults with musculoskeletal trauma [40]. Alternatively, hip fracture, for example, occurs later in life (the average age of participants in the included studies ranged from 75-82 years), and there may be a (mis)perception that online resources pose challenges [41]. However, older adults are a growing segment of the population using online resources: approximately two-thirds of older adults are online, and many have internet access at home [42]. Despite barriers to remote care (e.g., cost of internet connection, security of online clinical discussions), access to online resources could support families and caregivers. For example, Nahm and colleagues developed a caregivers' online resource centre and discussion boards for hip fracture recovery [43]. There was limited support for some telerehabilitation interventions to increase physical activity, quality of life, and self-efficacy in older adults after hip fracture and proximal humerus fracture.
However promising, these results should be viewed with caution, especially as the evidence was limited mostly to older community-dwelling women without significant cognitive impairment who have access to communication tools. Nonetheless, the collective evidence highlights the feasibility of delivering care remotely (mostly via telephone) after musculoskeletal trauma. In addition, the review highlights the use of an in-person clinical connection to build initial rapport prior to providing care remotely (e.g., in hospital, or when attending rehabilitation or follow-up orthopaedic appointments) as part of the recruitment strategy. More detailed reporting on the implementation of telerehabilitation, such as using the Template for Intervention Description and Replication (TIDieR) checklist [44], is one way to encourage translation of the key ingredients for successful delivery and uptake of an intervention and to support future replication into practice. A strength of some of the interventions was the use of behavior change theory and theory-based BCTs to support delivery and uptake of the interventions. Overall, we noted that studies were complex and used many BCTs (i.e., behavioral practice, credible source, social support). These "active ingredients" may have been important implementation factors. A recent systematic review of internet interventions for changing behavior noted that studies using more BCTs observed larger effect sizes [45]. However, it remains to be determined which BCTs (or combinations thereof) were effective, and to determine the effect of delivery modes that use video, real-time observation, and remote monitoring (via wearable sensors) for this population. We acknowledge strengths and limitations of the systematic review process and the evidence identified. Within the review process, we strived to be as comprehensive as possible and included studies from all years and in all languages.
We used established guidelines to conduct the review and included three novel elements: BCTs, a description of the generalizability of the evidence, and other implementation factors. However, the interventions and outcomes were too heterogeneous, precluding meta-analyses. We are not able to draw firm conclusions, as there was limited evidence (based on only two RCTs using different types of interventions) that care delivered remotely may support an increase in participants' physical activity, quality of life, and self-efficacy. This review signals the need for more interventions to test the effectiveness of telerehabilitation following musculoskeletal trauma. In particular, data are lacking for middle-aged adults, men, and across ethnicities, languages, cognitive abilities, and the socioeconomic and health literacy spectrum. In conclusion, based on this systematic review of published peer-reviewed literature, we identify a gap in knowledge for telerehabilitation for adults at midlife and older who experience musculoskeletal trauma. The existing evidence is a robust base from which to build clinical knowledge and to develop, test, and implement innovations. Future directions should consider using behavior change theory and behavior change techniques, and include detailed information for replication. Taken together, this review indicates the need for more studies to test telerehabilitation following musculoskeletal trauma.

Funding

This study was unfunded, but we acknowledge the support of the Centre for Hip Health and Mobility.
Two Distinct Soil Disinfestations Differently Modify the Bacterial Communities in a Tomato Field

Abstract: Reductive soil disinfestation (RSD) and soil solarization (SS) were evaluated based on environmental factors, the microbiome, and suppression of Fusarium oxysporum in a tomato field soil. Soil environmental factors (moisture content, electric conductivity, pH, and redox potential (RP)) were measured during soil disinfestation. All factors were more strongly influenced by RSD than by SS. 16S rRNA amplicon sequencing of RSD- and SS-treated soils was performed. The bacterial communities were taxonomically and functionally distinct depending on the treatment methods and periods, and were significantly correlated with pH and RP. Fifty-four pathways predicted by PICRUSt2 (third level in the MetaCyc hierarchy) were significantly different between RSD and SS. Quantitative polymerase chain reaction demonstrated that both treatments equally suppressed F. oxysporum. The growth and yield of tomato cultivated after the treatments were similar between RSD and SS. RSD and SS shaped different soil bacterial communities, although the effects on pathogen suppression and tomato plant growth were comparable between treatments. The existence of pathogen-suppressive microbes other than Clostridia, previously reported to have an effect, was suggested. Comparison between RSD and SS provides new aspects of unknown disinfestation patterns and the usefulness of SS as an alternative to RSD. This study compared the RSD and SS soil disinfestation methods based on soil environmental factors and tomato plants cultivated after the treatments.

Introduction

Increasing food production is one of the most important challenges to meet the global population growth of the 21st century. Chemical fertilizers and synthetic pesticides have been used for stable crop growth, pest control, and yield increase worldwide [1].
In conventional agriculture, pest control depends on chemical pesticides; for example, soil fumigants such as chloropicrin, metam-sodium, and dazomet are used for broad-spectrum pest control [2][3][4]. However, conventional agricultural systems simultaneously degrade the global environment (e.g., land, water, biodiversity, and climate) and consume a tremendous amount of natural resources [5]. Organic farming, a management system limiting chemical fertilizers and synthetic pesticides, is a promising solution for these negative impacts. A meta-analysis comparing organic and conventional farming showed that organic farming has positive environmental impacts per unit of area, but not necessarily per unit of product [6]. Nevertheless, organic farming has been shown to improve soil fertility and reduce global warming potential [7]. Although organic farming is considered more environmentally friendly than conventional farming, it produces lower crop yields than conventional farming.

Soil Disinfestations and Sampling

Field experiments were performed in a plastic multispan greenhouse (4 m × 24 m × 6 m) located in Kyoto Prefecture, Japan (35.0° N, 135.4° E), where the soil type was red-yellow with clay accumulation. Three plots were designed for RSD (Plots 1-3) and three for SS (Plots 4-6), for a total of six plots (size, 1.2 m × 20 m per plot). Both treatments were started on 6 August 2019. Wheat bran (2% N and 40% C, estimated available nitrogen and carbon supply; Tamagoya-Shoten, Ibaraki, Japan) was applied at 1.2 kg m−2 for RSD, and BLOF compost (0.9% N and 20.7% C, estimated available nitrogen and carbon supply; Japan Bio Farm Ltd., Nagano, Japan) was applied at 2.5 kg m−2 for SS. The initial (estimated) volumetric moisture content (MC) was adjusted to about 40% for RSD and about 30% for SS by irrigation, and the soil surface was covered with plastic films.
The temperature in the greenhouse was controlled by an environmental controller so as not to exceed 50 °C in the daytime; otherwise, the temperature was left uncontrolled. Changes in air temperature are shown in Supplemental Figure S1; the mean temperature during this period was 29.8 °C. The films were removed on 27 August 2019, and the temperature was changed to ambient temperature (mean of 30 °C). Soils were sampled on 6 August 2019 (before the start); on 13, 16, 20, 24, 27, and 30 August and 3, 6, 10, and 13 September 2019 (during the treatment period); and on 13 November 2019 and 22 January 2020 (during the tomato cultivation period). Soils were collected from three different points with a core 17 mm in diameter at a depth of 0-15 cm from the surface in each plot for one sample. These soils were used for the measurement of environmental factors and for DNA extraction.

Environmental Factor Measurement

The soils sampled during the treatment period were dried at 60 °C for 48 h. MC, electric conductivity (EC), and pH were measured. MC was calculated by subtracting the weight of the dry soil from the weight of the moist soil and dividing by the weight of the dry soil. A 1:5 (w/v) ratio of dried soil and water was used for measuring EC and pH with a LAQUAtwin EC-11 conductivity meter and a LAQUAtwin pH-11 pH meter, respectively (HORIBA Ltd., Kyoto, Japan). In addition, redox potential (RP) was continuously monitored by an FV-702 RP sensor (Fujiwara Seisakusho Ltd., Tokyo, Japan).

DNA Extraction and 16S rRNA Amplicon Sequencing

DNA was extracted from the soils sampled on 6, 13, 20, and 27 August, 3 and 10 September, and 13 November 2019 and 22 January 2020, using the DNeasy PowerSoil Kit (QIAGEN K.K., Tokyo, Japan). The DNA concentration was measured with a Qubit dsDNA HS Assay Kit and a Qubit 2.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA).
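The gravimetric moisture-content arithmetic described under Environmental Factor Measurement (moist mass minus dry mass, divided by dry mass) can be sketched as follows; the sample masses are illustrative only, not measurements from the study:

```python
def moisture_content(moist_g: float, dry_g: float) -> float:
    """Gravimetric moisture content: water mass per unit of dry-soil mass."""
    if dry_g <= 0:
        raise ValueError("dry mass must be positive")
    return (moist_g - dry_g) / dry_g

# Illustrative: 140 g of moist soil that dries to 100 g has MC = 0.40 (40%).
print(moisture_content(140.0, 100.0))  # -> 0.4
```

Note that this is a gravimetric ratio; the ~40%/~30% targets quoted for RSD and SS in the text are volumetric estimates set by irrigation.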
Polymerase chain reaction (PCR) amplification of the V4 region of bacterial 16S rRNA was performed using KOD FX Neo (TOYOBO, Osaka, Japan) in technical triplicate with the following primer set: 515F (5′-ACACTCTTTCCCTACACGACGCTCTTCCGATCTGTGCCAGCMGCCGCGGTAA-3′) and 806R (5′-GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCTGGACTACHVGGGTWTCTAAT-3′). PCR was conducted according to the following program: 94 °C for 2 min and 20 cycles of 98 °C for 10 s, 50 °C for 30 s, and 68 °C for 30 s. The PCR products were purified using AMPure XP (Beckman Coulter, Danvers, MA, USA), and the purified DNA was used as a template for PCR amplification for MiSeq (Illumina, San Diego, CA, USA) adapter attachment. This amplification was performed using KOD FX Neo in technical duplicate with primers provided by FASMAC Co., Ltd. (Kanagawa, Japan), according to the following program: 94 °C for 2 min and 10 cycles of 98 °C for 10 s, 59 °C for 30 s, and 68 °C for 30 s. The PCR products were purified as described above. Equal amounts of DNA were mixed and applied for 2 × 250 bp paired-end sequencing using MiSeq at FASMAC. The acquired sequence data have been registered in the DNA Data Bank of Japan Sequence Read Archive (DRA012344).

Microbiome Data Analysis

16S rRNA amplicon sequence data were analyzed in the QIIME2 environment, version 2019.7 [38]. The raw sequences were processed using the q2-dada2 plugin in QIIME2 to trim bases other than the 21st to 200th of all paired reads and to construct error-corrected amplicon sequence variants (ASVs) from the trimmed reads [39]. Taxonomic assignment of ASVs was performed using the naïve Bayes classifier pretrained on the Silva 16S rRNA database, release 132 [40,41]. The obtained ASV sequences were aligned using MAFFT [42]. Phylogenetic trees were constructed using FastTree2 [43].
The core-metrics-phylogenetic pipeline in the q2-diversity plugin within QIIME2 was used to calculate the α- and β-diversity indices of the sampled soils with a subsampled ASV dataset of 19,000 sequences per sample. Bacterial metagenome functional profiles were predicted from the obtained ASV sequences and ASV profiles using the q2-picrust2 plugin [44] with the MetaCyc database of metabolic pathways and enzymes [45]. The pathways predicted by PICRUSt2 were grouped into parent classes based on the MetaCyc pathway hierarchy, version 23.5 (released on 18 December 2019; Supplemental Table S1), and used for subsequent analysis.

Quantification of F. oxysporum in Soils

DNA from the soils sampled on 6, 13, 20, and 27 August and 3 and 10 September 2019 was used as template for real-time PCR to quantify F. oxysporum in soils. PCR was performed using KOD FX Neo with the following primer set: CLOX1 (5′-CAGCAAAGCATCAGACCACTATAACTC-3′) and CLOX2 (5′-CTTGTCAGTAACTGGACGTTGGTACT-3′) [46]. The PCR program was as follows: 98 °C for 2 min and 50 cycles of 98 °C for 10 s, 60 °C for 10 s, and 68 °C for 30 s, followed by ramping from 65 °C to 95 °C at 0.1 °C s−1. The absolute abundance of F. oxysporum was estimated using a calibration curve constructed from a dilution series of plasmids carrying cloned fragments of the specific gene.

Tomato Cultivation and Evaluation of Growth and Yield

Cherry tomato (Solanum lycopersicum cv. Benisuzume; Institute for Horticultural Plant Breeding, Chiba, Japan) was cultivated after the soil disinfestations. The seeds were sown in pots, and the seedlings were grown in the greenhouse for about 50 days. They were planted into the field on 23 September 2019, after the disinfestations, and grown with two leaders at 1.7 plants m−2. Plant growth was measured on 16 and 30 October and 13 and 27 November 2019. The length of the main stem was defined as the distance from the shoot apex to the soil surface.
The perimeter of the main stem was measured at 15 to 20 cm from the shoot apex. The leaf area was defined as the average of the products of width and length of the first upper and lower expanded leaves at 50 cm from the shoot apex of the main stem. The length of the internode was defined as the average of internode lengths from the first inflorescence with flowers on the main stem to the first upper and lower inflorescences. All items were measured in five plants per plot. Ripe fruits from more than 100 plants for each treatment were harvested and weighed to determine the fruit yield per unit area.

Statistical Analysis

Welch's t-test was carried out using the t.test function of the R package "stats". Tukey's test was performed using the TukeyHSD function of the R package "multcomp". Hierarchical clustering was performed using the complete linkage method with the hclust function in the R package "stats". Permutational multivariate analysis of variance (PERMANOVA) was performed with the adonis function in the R package "vegan" [47]. Correlation analysis between soil bacterial communities and soil properties was performed by the Mantel test using the weighted UniFrac (WUF) distance matrix and the Euclidean distance matrix of environmental factors in Plots 2 and 5; the R package "vegan" was used with 10,000 permutations (* p < 0.05; ** p < 0.01). ANOVA-like differential expression (ALDEx2) analysis [48,49] was performed to detect differentially abundant taxa at the class level and pathways predicted by PICRUSt2 between RSD and SS with the R package "ALDEx2" [false discovery rate (FDR) < 0.01, Welch's t-test corrected by the Benjamini-Hochberg method]. Taxa of low abundance (mean relative abundance < 0.1%) were filtered out for the ALDEx2 analysis.
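The Mantel test used in the Statistical Analysis section correlates the unrolled upper triangles of two distance matrices and assesses significance by permuting sample labels. The study used the R package "vegan"; the following dependency-free Python sketch only illustrates the underlying procedure:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def mantel(d1, d2, n_perm=999, seed=0):
    """Mantel test for two full symmetric distance matrices (lists of lists).
    Returns (r, one-sided p) obtained by permuting the sample labels of d2."""
    n = len(d1)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    v1 = [d1[i][j] for i, j in idx]
    v2 = [d2[i][j] for i, j in idx]
    r_obs = pearson(v1, v2)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)  # relabel samples, keeping distances intact
        v2p = [d2[perm[i]][perm[j]] for i, j in idx]
        if pearson(v1, v2p) >= r_obs:
            hits += 1
    # +1 correction counts the observed statistic among the permutations.
    return r_obs, (hits + 1) / (n_perm + 1)
```

In the study's setting, d1 would be the WUF distance matrix of the bacterial communities and d2 the Euclidean distance matrix of the environmental factors.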
Fluctuation of Environmental Factors during Soil Disinfestations

To assess soil disinfestation methods without chemical fertilizers and pesticides, the effects of RSD and SS in a cherry tomato greenhouse were compared. Both treatments were initiated by adding organic substrates and water and covering the soil with plastic films. Four parameters (MC, EC, pH, and RP) were monitored for 6 weeks. MC reached about 40% and 30% after the addition of water in RSD and SS, respectively, and then decreased after 4 weeks in both conditions. MC in RSD remained continuously higher than that in SS during the test period (Figure 1A). In both conditions, EC decreased to 0.2 mS cm−1 within 1 week after initiation and returned to the initial levels (0.5-0.6 mS cm−1) after 5 weeks. EC in RSD was generally lower than that in SS (Figure 1B). pH increased from 6.4 to 7.4 in 3 weeks and then decreased to 6.6 in RSD; in SS, it increased from 6.4 to 7.2 in 1 week, was maintained for 18 days, and then decreased to 6.9 (Figure 1C). Soil anaerobicity is described as follows: aerobic (RP > +300 mV) and anaerobic [moderately reductive (+300 to 0 mV), reductive (0 to −200 mV), and highly reductive (< −200 mV)] [50]. RP decreased to −200 mV after 10 days in RSD, was maintained for 3 weeks, and then rapidly returned to the level at the start (about +600 mV); in SS, it decreased to +250 mV within 1 week, was maintained for 2 weeks, and then returned to the level at the start 3 weeks later (Figure 1D). Based on these observations, RSD and SS formed highly and moderately reduced conditions, respectively. The measurement of soil parameters revealed that RSD and SS had different influences on the soil environmental factors.

Changes in Bacterial Communities by Soil Disinfestations

To evaluate the influence of RSD and SS on bacterial communities, 16S rRNA amplicon sequencing of soils during the treatments was conducted.
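The anaerobicity bands quoted from [50] in the environmental-factor results translate directly into a small classifier; a sketch, with boundary values assigned per the quoted intervals (the interval endpoints at exactly +300, 0, and −200 mV are an interpretation, since the source bands do not state which class owns them):

```python
def redox_class(rp_mv: float) -> str:
    """Classify soil anaerobicity from redox potential (mV), per the
    bands quoted in the text [50]."""
    if rp_mv > 300:
        return "aerobic"
    if rp_mv > 0:
        return "moderately reductive"
    if rp_mv >= -200:
        return "reductive"
    return "highly reductive"
```

Applied to the reported traces, RSD's plateau around −200 mV falls at the reductive/highly-reductive boundary, while SS's +250 mV plateau is moderately reductive, matching the paper's characterization.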
Both weighted and unweighted UniFrac (WUF and UUF)-based principal coordinate analysis (PCoA) showed variation of the bacterial communities depending on the treatment methods and periods (PERMANOVA, p = 0.001; Figure 2). A differential abundance test was performed using ALDEx2 analysis, and nine bacterial taxa at the class level were significantly different between RSD and SS (FDR < 0.01; Supplemental Table S2). Three classes, Bacilli and Clostridia of Firmicutes and Deltaproteobacteria of Proteobacteria, were more abundant in RSD than in SS, and six classes, Acidimicrobiia of Actinobacteria, Chloroflexia and Gitt-GS-136 of Chloroflexi, Gemmatimonadetes and the S0134 terrestrial group of Gemmatimonadetes, and Verrucomicrobiae of Verrucomicrobia, were more abundant in SS than in RSD (Supplemental Figure S2 and Table S2). The Mantel test was used to assess the relationships between soil bacterial communities and environmental factors; it showed that soil RP and pH were significantly correlated with the bacterial communities (p < 0.05; Figure 3). These results suggested that the changes in bacterial communities differ depending on the soil disinfestation method and that the fluctuation of environmental factors affects the communities.
Changes in Bacterial Pathway Composition by Soil Disinfestations

The predicted pathway composition of the bacterial metagenomes in the disinfested soils was inferred from the 16S rRNA gene sequences using the PICRUSt2 pipeline, developed from PICRUSt1 to predict the functional potential of a bacterial community [44,51]. PICRUSt2 predicted 445 metabolic pathways for the data, and, for simplicity of interpretation, these pathways were grouped into parent classes based on the MetaCyc pathway hierarchy (Supplemental Table S1). The top level (eight categories) and the third level (176 categories) in the class hierarchy were used for subsequent analysis. Principal component analysis (PCA) of the functional profiles at the third and eighth levels (most divided) showed a difference in the pathway compositions depending on the methods and periods of disinfestation along the first and second axes, respectively (Figure 4; Supplemental Figure S3). Based on functional categories at level 1 of the MetaCyc pathway hierarchy, the function "Biosynthesis" accounted for about 70%, and "Degradation/Utilization/Assimilation" and "Generation of Precursor Metabolite and Energy" accounted for about 15% (Figure 5A). ALDEx2 analysis showed that 54 categories at level 3 of the MetaCyc pathway hierarchy were significantly different between RSD and SS (FDR < 0.01; Supplemental Table S3). Focusing on periods when environmental factors were especially different between RSD and SS (Figure 1), three categories belonging to "Degradation/Utilization/Assimilation" peaked 2 weeks later in RSD, whereas the other eight categories peaked 1 to 3 weeks later in SS (Figure 5B). Intriguingly, the category "Siderophore Biosynthesis", belonging to "Biosynthesis", was enriched in SS, particularly in soils 2 weeks later (Figure 5B).
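The grouping of predicted pathways into parent MetaCyc classes described above amounts to a keyed sum over a hierarchy map; a sketch with hypothetical pathway names and abundances (not the study's data):

```python
from collections import defaultdict

def roll_up(pathway_abundance, parent_of):
    """Sum per-pathway abundances into parent classes.

    pathway_abundance: {pathway: abundance}
    parent_of: {pathway: parent class} (one level of the MetaCyc hierarchy)
    Pathways missing from the map are pooled under 'unclassified'.
    """
    totals = defaultdict(float)
    for pwy, ab in pathway_abundance.items():
        totals[parent_of.get(pwy, "unclassified")] += ab
    return dict(totals)

# Hypothetical abundances and one-level hierarchy mapping:
ab = {"GLYCOLYSIS": 0.2, "TCA": 0.3, "PWY-6151": 0.5}
parent = {"GLYCOLYSIS": "Generation of Precursor Metabolite and Energy",
          "TCA": "Generation of Precursor Metabolite and Energy"}
```

Repeating the roll-up at each level of the hierarchy yields the top-level (eight-category) and third-level (176-category) profiles used in the analysis.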
These results implied that the potential for siderophore production is higher in SS than in RSD.

Difference of F. oxysporum Density between Two Soil Disinfestations

To compare the effects of the disinfestation methods on plant pathogens, the tomato pathogen F. oxysporum was quantified by PCR during the RSD and SS treatments. F. oxysporum decreased over time in both RSD and SS.
It was reduced to 6.8% of the initial amount in RSD and was below the limit of quantification in SS 5 weeks later ( Figure 6). There is no significant difference between RSD and SS on the abundance of F. oxysporum, although these practices differentially affect the soil environmental factors and soil bacterial communities. To compare the effects of disinfestation methods on plant pathogens, a tomato pathogen F. oxysporum was quantified with PCR during RSD and SS treatments. F. oxysporum decreased over time in both RSD and SS. It was reduced to 6.8% of the initial amount in RSD and was below the limit of quantification in SS 5 weeks later ( Figure 6). There is no significant difference between RSD and SS on the abundance of F. oxysporum, although these practices differentially affect the soil environmental factors and soil bacterial communities. Effects of Soil Disinfestation on Soil Bacterial Communities and Tomato Growth during Cultivation It is unclear how long the effects of disinfestations last. To analyze whether the difference of bacterial communities between RSD and SS was maintained during the cherry tomato cultivation period, bacterial communities in soils sampled during the tomato cultivation period after these treatments were analyzed. WUF-and UUF-PCoA showed a clear distinction of the bacterial communities between RSD and SS (PERMANOVA, p = 0.019 for WUF and p = 0.003 for UUF; Figure 7). In contrast, the bacterial communities of both soils were not altered between 14 and 24 weeks later (PERMANOVA, p = 0.058 for WUF and p = 0.237 for UUF; Figure 7). These data indicated that the effects of RSD and SS continue even after 6 months. Effects of Soil Disinfestation on Soil Bacterial Communities and Tomato Growth during Cultivation It is unclear how long the effects of disinfestations last. 
Differences in bacterial communities between RSD and SS during the tomato cultivation period were thus confirmed. The effects of RSD and SS on the growth of cherry tomatoes during cultivation were also investigated. The length and perimeter of the main stem and the length of internode on October 30, and the length of the main stem on November 27, were significantly different between RSD and SS (Table 1). Fruit yield was slightly higher in SS (5.64 kg m−2) than in RSD (5.23 kg m−2) but was not statistically tested because the cultivation was conducted in singlicate.

Discussion

To improve the profitability of organic farming, pathogens must be controlled effectively without pesticides so that plants grow healthily. Fusarium wilt is a major soil-borne tomato disease caused by FOL [52] and is difficult to prevent without pesticides.
Both RSD and SS strongly suppressed the abundance of F. oxysporum. The suppressive effect of RSD was previously reported for several pathogens, including Agrobacterium tumefaciens, F. oxysporum, Ralstonia solanacearum, Rhizoctonia solani, Pythium spp., and Verticillium dahliae [53][54][55][56][57][58]. Pathogen suppression mediated by RSD is thought to involve various chemical and biological mechanisms acting in a complicated manner [18,19]. The decrease in RP coincided with pathogen suppression [15], but the reductive condition alone was not lethal for F. oxysporum [18]. The results of this study suggested that even a moderately reduced condition is enough to suppress F. oxysporum and that the sequential chemical and biological modulations are essential. Another possible mechanism for pathogen control is the accumulation of toxic organic acids mediated by the anaerobic decomposition of organic substrates added at the start of disinfestation, accompanied by a pH decrease during RSD [59]. However, this study, like some previous works, observed a pH increase together with pathogen suppression during RSD [60][61][62]. Nevertheless, reported pH fluctuations during RSD differ depending on the added organic substrates, the initial pH at the start, and the geographic location, although the treatments showed pathogen suppression [63,64]. Thus, it remains unclear why the pH fluctuations during RSD differ among studies and which pathogens are suppressed by organic acids. However, under the conditions set in this study, the observed suppression of F. oxysporum would be caused by factors other than organic acid generation with a pH decrease. The quantity, diversity, and community of soil-borne microbes are important indicators of soil quality and play key roles in plant growth and defense [24,65]. RSD treatment formed bacterial and fungal communities different from those of untreated soil, as shown by culture-dependent and culture-independent investigations [66].
At the phylum level of bacteria, members of Firmicutes (from both classes Clostridia and Bacilli) increased in the RSD-treated soil [25][26][27][67]. Because RSD makes soil reducing/anaerobic, it is reasonable that anaerobic microbes, such as Clostridia, increase. In accordance with these findings, the relative abundance of the class Clostridia tended to be inversely proportional to soil RP (Supplemental Figure S2B). Regarding the involvement of microbes in pathogen suppression during RSD treatment, specific strains of Clostridia and Bacilli from treated soils could have a disease-control effect [30,32]. The families Ruminococcaceae, Lachnospiraceae, and Clostridiaceae belonging to Clostridia increased during RSD and produced toxic compounds against plant pathogens [68][69][70]. Time-series analysis of the soil microbiome and metabolome during RSD demonstrated that the composition of Firmicutes continuously changes throughout the treatment period and that the population dynamics of Clostridium spp. were correlated with temporal changes in the abundances of organic acids, p-cresol, and methyl sulfides with antimicrobial activities [71]. These previous studies argue that Clostridia are responsible for suppressing pathogens. In this study, however, the suppressive effects on F. oxysporum were similar between RSD and SS, although Clostridia did not increase in SS as much as in RSD (Supplemental Figure S2B), suggesting that microbes other than Clostridia may also be involved in the suppression. Indeed, the relative abundances of some groups belonging to the bacterial order Sphingobacteriales and the fungal order Sordariales also increased under RSD, and these were negatively correlated with disease incidence [60]. In addition to the pathogen-control effect, the abundances of the nitrogenase and ammonia monooxygenase genes related to nitrification increased in RSD-treated soil [68,72], suggesting an improvement of nutrient availability by RSD.
Recently, SS was reported to mobilize organic nitrogen and thereby increase plant growth [34]. The comparison between RSD and SS revealed that SS might be involved in siderophore production. This result suggests the possibility of plant growth promotion by SS, although there was no significant difference in the effects of RSD and SS on cherry tomato plant growth. Furthermore, a legacy effect on the bacterial community was confirmed 3 months after treatment [73], in accordance with the observation of distinct bacterial communities between RSD and SS even during the tomato cultivation period. Thus, this study suggests that the effects of soil disinfestation on soil microbiota last through the cultivation period in organic farming.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/agronomy11071375/s1, Figure S1: Air temperature observed outside the greenhouse, Figure S2: Relative abundances of bacterial taxa in the soil samples under reductive soil disinfestation (RSD) and soil solarization (SS), Figure S3: Principal component analysis (PCA) of the pathway composition (eighth level in MetaCyc hierarchy) predicted by PICRUSt2 in the soil bacterial metagenomes under ( ) reductive soil disinfestation (RSD) and ( ) soil solarization (SS), Table S1: MetaCyc pathway hierarchy (version 23.5), Table S2: Relative abundances of bacterial classes (mean relative abundance > 0.1%) in the soil samples under reductive soil disinfestation (RSD) and soil solarization (SS), Table S3: Relative abundances of the predicted pathways in the soil samples under reductive soil disinfestation (RSD) and soil solarization (SS).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: All the data are available in the paper.
AUTOMATIC BUILDING DETECTION BASED ON SUPERVISED CLASSIFICATION USING HIGH RESOLUTION GOOGLE EARTH IMAGES

This paper presents a novel approach to detect buildings by automating the training-area collection stage of supervised classification. The method is based on the fact that a 3D building structure should cast a shadow under suitable imaging conditions. Therefore, the methodology begins with detecting and masking out the shadow areas using the luminance component of the LAB color space, which indicates the lightness of the image, and a novel double-thresholding technique. Further, the training areas for supervised classification are selected by automatically determining a buffer zone on each building whose shadow is detected, using the shadow shape and the sun illumination direction. Thereafter, by calculating the statistics of each buffer zone collected from the building areas, the improved parallelepiped supervised classification is executed to detect the buildings. Standard-deviation thresholding is applied to the parallelepiped classification method to improve its accuracy. Finally, simple morphological operations are conducted to remove noise and increase the accuracy of the results. The experiments were performed on a set of high resolution Google Earth images. The performance of the proposed approach was assessed by comparing its results with reference data using well-known quality measures (Precision, Recall and F1-score) to evaluate the pixel-based and object-based performances. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.4% and 85.3% overall pixel-based and object-based precision performances, respectively.
INTRODUCTION

Automatic building detection from monocular aerial and satellite images has been an important issue in many applications such as the creation and update of maps and GIS databases, change detection, land-use analysis and urban monitoring. Owing to rapidly growing urbanization and municipal regions, automatic detection of buildings from remote sensing images is an active field of research. Building detection, extraction and reconstruction have been studied in a very large number of works; reviews of several techniques can be found in (Mayer, 1999; Baltavias, 2004; Unsalan and Boyer, 2005; Brenner, 2005; Haala and Kada, 2010). Considering the type of data used for building detection, such as multispectral images, nDSM, DEM, SAR and LiDAR datasets, the existing methods can be categorized into two groups: 1- building detection using datasets that provide 3D information, and 2- building detection from monocular remote sensing images. This study is devoted to the autonomous detection of buildings from monocular optical Google Earth images. Therefore, a brief discussion of the previous studies that used single optical image datasets to automatically detect buildings is given first. The studies in the monocular context used region-growing methods, simple models of building geometry, edge and line segments, and corners (Tavakoli and Rosenfeld, 1982; Herman and Kanade, 1986; Hueratas and Nevatia, 1988; Irvin and Mckeown, 1989) to detect buildings. Shadow areas arise from the height of the buildings and the illumination angle of the sun in optical remote sensing images, and they give important clues about the location of the buildings. First, Hueratas and Nevatia (1988) used shadows to derive the sides and corners of buildings. Then, Irvin and McKeown (1989) predicted the shape and the height of the buildings using shadow information.
To extract buildings from aerial images through boundary grouping, Liow and Pavalidis (1990) used shadow information to complete the boundary-grouping process. Furthermore, shadow information was used as evidence to verify initially proposed methods (McGlone and Shufelt, 1994; Lin and Nevatia, 1998). Besides, Peng and Liu (2005) proposed a new method based on models and context, guided by the shadow-cast direction, which is computed without using either the illumination direction or the viewing angle. Recently, some methods have been proposed based on classification to detect and extract buildings from remote sensing imagery. Supervised classification and the Hough transformation were used by Lee et al. (2003) as a new method to extract buildings from Ikonos imagery. They illustrated that their proposed model largely depends on the supervised classification method to obtain an accurate and detailed set of building roofs. Furthermore, Inglada (2007) used support vector machine (SVM) classification of geometric image features to detect man-made objects in high resolution optical remote sensing imagery. He utilized only the original bands of the SPOT 5 satellite images for training the SVM. Then, additional bands such as NDVI, nDSM, and several texture measures were used for finding the building patches (San and Turker, 2014). With these additional bands, the accuracy of the building detection method increased by about ten percent. Tanchotsrinon et al. (2013) proposed a method integrating texture analysis, color segmentation and neural classification techniques to detect buildings from remote sensing imagery. Graph theory was first used to detect buildings in aerial images by Kim and Muller (1999). They used linear features as vertices of a graph and shadow information to verify the building appearance.
Then, Sirmacek and Unsalan (2009) utilized graph-theoretical tools and the scale invariant feature transform (SIFT) to detect urban-area buildings from satellite images. Ok et al. (2013) proposed a new approach for the automated detection of buildings from single very high resolution optical satellite images using shadow information with an integration of fuzzy logic and the GrabCut partitioning algorithm. Thereupon, Ok (2013) increased the accuracy of this previous work by using a new method to detect shadow areas (Teke et al., 2011) and developing a two-level graph-partitioning framework to detect buildings. In this paper, a fully automatic method is proposed to detect buildings from single high resolution Google Earth images. First, a novel shadow detection method is conducted using the LAB color space and double-thresholding rules. Thereafter, training samples are collected considering the illumination direction and the shadow-area information. An improved parallelepiped classification method is applied to classify the image pixels into building and non-building areas. Finally, simple morphological operations are executed to increase the accuracy.

METHODOLOGY

The proposed automatic building detection using supervised classification has three main steps (Fig. 1).

Step 1: Shadow detection based on a novel double-thresholding technique. Shadows occur in regions where the sunlight does not reach directly due to obstruction by some object such as a building. In this paper, we propose a novel double-thresholding technique to detect shadow areas from a single Google Earth image. In order to detect shadow information automatically, we convert the image from the RGB to the LAB color space. Since the shadow regions are darker and less illuminated than their surroundings, it is easy to extract them in the luminance channel, which gives lightness information.
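A minimal sketch of this double-thresholding idea, assuming 8-bit luminance values flattened to a plain list: a coarse dark cut-off keeps candidate pixels, then Otsu's method splits the darkest mode (shadow) from dark vegetation inside that candidate set. The cut-off value of 90 and the synthetic data are illustrative, not the paper's actual parameters.

```python
def otsu_threshold(values, levels=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of a (roughly bimodal) histogram."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]            # background weight (<= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground weight (> t)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def shadow_mask(luminance, coarse=90):
    """Double thresholding: keep pixels darker than the coarse
    threshold, then apply Otsu inside that dark set to separate
    true shadow (darkest mode) from dark vegetation."""
    dark = [v for v in luminance if v < coarse]
    t = otsu_threshold(dark)
    return [v < coarse and v <= t for v in luminance]
```

With synthetic luminance values around 10 (shadow), 70 (dark vegetation) and 180 (ground), only the darkest mode survives the second threshold.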
Indeed, the information of the luminance channel is utilized because of its capability to separate objects with low and high brightness values in the original image. Consequently, we set a default, relatively coarse threshold in the range of 70-90 for our 8-bit images (256 levels). Utilizing this threshold allows shadow areas to be detected; however, some vegetation regions are simultaneously detected inaccurately because of their low luminance values. To separate the vegetation and shadow areas from each other, we utilize Otsu's automatic gray-level thresholding (Otsu, 1975), which is very effective in separating a bimodal histogram distribution. Although some true shadow pixels are mistakenly eliminated, this has little effect on the accuracy of our building detection (Fig. 2b). In addition, we use some simple morphological operations to remove the shadow areas that are smaller than building shadows. In this way, we protect our algorithm from the negative effects of tree shadows in the next steps.

Step 2: Supervised classification. Supervised classification is a process of categorizing pixels into a number of data classes based on their values, which are extracted from training sites identified by an analyst. Manual collection of training areas by an expert makes this a non-automated method of categorizing data. Since we aim to detect the buildings automatically from the satellite images, our proposed method should be provided with training areas selected in an automated way. In this study, shadow evidence is used to overcome this limitation toward automatic supervised classification. Then, an improved parallelepiped supervised classification is conducted to classify the image into building and non-building areas.

1- Automatic collection of training areas. Training areas should be well-representative of their class.
Besides, shadows are features that can be easily detected as the darkest areas in the image, which gives a robust clue of the buildings. Therefore, we collected the training areas with respect to the illumination angle and the shadow areas detected in Step 1, by composing a buffer zone considering the shadow shapes and sizes. Indeed, each buffer zone has the same length as the shadow edges adjacent to the building, and it is five pixels in width. Since training areas collected immediately adjacent to the shadow edges cannot be a good representation of that building class, and might contain shadow pixels, the buffer zone is shifted 3 pixels toward the inside of the building with regard to the illumination angle (Fig. 2c).

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-3, 2014, ISPRS Technical Commission III Symposium, 5-7 September 2014, Zurich, Switzerland.

2- An improved parallelepiped classification. The parallelepiped classifier uses the class limits and stores range information for all the classes to determine whether a given pixel falls within a class or not. For each class, the minimum and maximum values are used as a decision rule to classify the image pixels as buildings; subsequently, the unclassified pixels are assigned to the non-building areas. The parallelepiped classification method has the following disadvantages: 1- The decision range defined by the minimum and maximum values may be unrepresentative of the spectral classes they in fact represent. 2- It performs poorly when the regions overlap because of high correlation between the categories. 3- Pixels remain unclassified when they are not in the range of any class. In order to overcome these limitations, we propose a new thresholding method based on the standard deviation of the classes, in which the pixels assigned inaccurately as buildings due to these limitations are removed according to this threshold.
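The min/max decision rule with a standard-deviation cut can be sketched as follows. This is a simplified reading of the method: tightening each per-band range to mean ± k·σ is my assumption about how the standard-deviation threshold is applied, and the RGB training triples are hypothetical.

```python
def train_parallelepiped(samples, k=1.0):
    """Build per-band [min, max] decision ranges from training pixels,
    tightened by a standard-deviation cut (mean +/- k * sigma) so that
    unrepresentative extremes do not stretch the box."""
    bands = list(zip(*samples))          # one tuple of values per band
    box = []
    for b in bands:
        n = len(b)
        mean = sum(b) / n
        sd = (sum((x - mean) ** 2 for x in b) / n) ** 0.5
        lo = max(min(b), mean - k * sd)  # never wider than the data range
        hi = min(max(b), mean + k * sd)
        box.append((lo, hi))
    return box

def classify(pixel, boxes):
    """A pixel is labeled 'building' if it falls inside ANY training box;
    otherwise it stays unclassified, i.e. non-building."""
    return any(all(lo <= v <= hi for v, (lo, hi) in zip(pixel, box))
               for box in boxes)
```

Because every buffer zone contributes its own box and the boxes are simply unioned, overlapping building classes merge into one building label, matching the behaviour described above.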
Furthermore, the overlapped classes, which represent regions of building values, do not affect the final results because all training areas are merged and represent the same building class. Moreover, in the standard parallelepiped classification, the remaining unclassified pixels are a serious problem when the whole image must be classified and the training areas represent all the features in the entire image. However, this behaviour is not a disadvantage for our proposed method; on the contrary, it makes the parallelepiped classifier ideal for our purpose, because we collect training areas only from the building regions instead of all the features in the images. Due to the lack of information about features other than building areas in the image, we can only use a classification method that assigns pixels to the specific predetermined training areas and keeps the other pixels, which belong to non-building features, unclassified. In this study, after collecting the training areas and removing the noise by a standard-deviation thresholding process, the minimum and maximum values of each training area are calculated to determine the decision ranges of that building class. Consequently, all the pixels in the image are classified according to these ranges; indeed, the pixels inside the ranges are labeled as building and the others as non-building.

Step 3: Post-processing and finalizing the results. Although the detected building regions reveal the features of interest, they still contain many false-alarm areas; therefore, a set of morphological image-processing operations such as openings, closings and fillings is applied to the single binary image that is the output of the classification.
The opening operation generally smoothes the contour of an object, breaks narrow strips, and eliminates stray foreground structures that are smaller than the predetermined structuring element; therefore, larger structures remain. On the other hand, the closing operation not only tends to smooth sections of contours, but also fuses narrow breaks and long thin gulfs, eliminates small holes, and fills gaps in the contour. Despite the gap filling of these morphological operations, they are not effective in filling larger holes. Therefore, we used the morphological filling operation to overcome this problem at the beginning of the post-processing operations (Fig. 2d).

3-1- Image Datasets. We tested our automatic building detection method on seven high resolution Google Earth images which have three bands (RGB) and were acquired from different sites in Ankara, Turkey. The images were selected specifically to represent diverse building characteristics such as the sizes and shapes of the buildings, their proximity, and the different color combinations of building roofs. The test images are shown in Fig. 3, and the detected buildings are provided in the second column for each test image.

3-2- Accuracy Assessment Strategy. The final performance of the proposed automated building detection method is evaluated by comparing the results with reference data generated manually by a qualified human operator. In this study, we utilized both pixel-based and object-based quality measures. Initially, all the pixels in the image are classified into four classes as follows: 1- True Positive (TP): both the manual and automated methods classified the pixel as building. 2- False Positive (FP): the automated method classified the pixel as building but the reference did not. 3- False Negative (FN): the reference classified the pixel as building but the automated method did not. 4- True Negative (TN): both methods classified the pixel as non-building. With |.| denoting the number of pixels assigned to each class, Precision = |TP| / (|TP| + |FP|) (Eq. 1), Recall = |TP| / (|TP| + |FN|) (Eq. 2), and the F1-score, which combines Precision and Recall into a single score, is F1 = 2 · Precision · Recall / (Precision + Recall) (Eq. 3). The object-based performance of the proposed method has been tested using the measures given in Eqs. (1)-(3).
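The pixel-based measures of Eqs. (1)-(3) are straightforward to compute from two binary masks; a minimal sketch (the tiny four-pixel masks are illustrative only):

```python
def pixel_scores(predicted, reference):
    """Pixel-based Precision, Recall and F1 from two binary masks
    (True = building), following the TP/FP/FN definitions above."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    fp = sum(p and not r for p, r in zip(predicted, reference))
    fn = sum(r and not p for p, r in zip(predicted, reference))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For object-based evaluation, the same three formulas are reused, with TP/FP/FN counted over detected objects using the 60% overlap rule described in the text instead of over individual pixels.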
To do that, we classify a detected building object as TP if it has at least 60% pixel-overlap ratio with a building object in the reference data. We classify a detected object as FP if it does not coincide with any of the building objects in the reference data. In addition, the FN class is assigned to a detected object when it corresponds to a reference object with an overlap under 60%. The object-based Precision, Recall and F1-score values for each test image were then computed.

3-3- Results and Discussion. The detection results of the proposed method are illustrated in Fig. 3. Visual interpretation of the results shows that the developed method is robust, detecting most of the buildings without producing too many FN pixels in images that include buildings with diverse roof colors, textures, shapes, sizes and orientations. In addition to the visual illustration, the numerical results of the proposed method are listed in Table 1, which also supports these findings. With regard to the pixel-based evaluation, the overall mean ratios of precision and recall are computed as 88.4% and 71.7%, respectively. Further, the calculated pixel-based F1-score over all test images is 71.7%, which indicates promising results for such a diverse and challenging set of test data. Moreover, for the object-based evaluation, the overall mean ratios of precision and recall are calculated as 85.3% and 87.2%, respectively, and these results correspond to an overall object-based F1-score of 84.8%. Considering the complexity and the various conditions in the test images involved, this is a reliable pixel-based automatic building detection performance. According to the numerical results in Table 1, the lowest pixel-based precision ratio (49.2%) is produced by test image #7.
The reason for this poor pixel-based performance in comparison with the other test images is the proximity of the spectral reflectance values of the buildings and the background. Test image #3 produces the lowest object-based precision ratio, 50%, due to big differences in contrast between the two sides of the buildings according to the illumination angle. However, it produces a pixel-based precision ratio of 90%, which shows a robust result in terms of pixel-based performance. In Fig. 3, the results for test image #2 show the efficiency of our proposed method in detecting buildings from dense urban areas where the buildings are very close to each other; it produces high object-based precision, recall and F1-score ratios of 87.1%, 85.9% and 86.5%, respectively. Test images #1, #3, #4 and #7 are representative examples of the various colors, shapes and sizes of the buildings detected by our automatic building detection method, resulting in fairly good performance in both pixel-based and object-based assessments. Based on the quantitative and qualitative evaluations discussed, we can deduce that the proposed building detection method works fairly well and has robust performance despite such diverse and challenging test images.

CONCLUSION AND FUTURE WORKS

The majority of building detection methods have one or more limitations in the automatic detection of buildings. There may be restrictions on the density of building areas, such as urban, sub-urban and rural areas. In addition, there are limitations related to the shape, color and size of the buildings. To overcome most of these problems, we proposed a novel approach. This method can detect buildings without being influenced by their geometric characteristics. Moreover, it provides automatic training-area collection to seed supervised classification methods.
In this study, a novel shadow detection method based on double thresholding of RGB images is proposed, and the parallelepiped classification model is improved to detect building regions. The method still has some difficulty separating non-building from building areas when they have similar spectral values. However, we believe that our method will be of great help for large-scale building detection applications in the future. As future work, satellite images that offer a NIR band in addition to the RGB bands will be used to improve the accuracy of the shadow detection results. In addition, the image-processing operations will be enriched in order to boost the detection accuracy.
A serological, retrospective study in reindeer on five different viruses

Serological investigations on different cervidae have indicated the presence of several viruses common in domestic ruminants (Thorsen et al., 1977; Lawman et al., 1978; Elazhary et al., 1981; Dieterich, 1981; Kocan et al., 1986; Diaz et al., 1988; Kokles et al., 1988; Liebermann et al., 1989). In addition, viruses known to be present in domestic ruminants, or related to such, have been isolated from cervidae (Romvary, 1965; Nettleton et al., 1980; Boros et al., 1985; Nettleton et al., 1986; Rockborn et al., 1990). Pathological lesions have only in a few instances been connected with the isolation of a virus (Romvary, 1965; Diaz et al., 1988; Weber et al., 1982; Rockborn et al., 1990), although mucosal disease-like and other lesions have been reported (Richards et al., 1956; Feinstein et al., 1987; Diaz, 1988; Steen et al., 1989).
In the present study a serological retrospective investigation was carried out in semidomesticated reindeer on five different viruses. Serum samples were obtained from two woodland herds (Vastra Kikkijaure and Angesa) and two mountain herds (Tannas and Umbyn). The samples emanated from the years 1973, 1974, 1975, 1977 and 1982. Altogether 50 randomly selected clinically healthy animals were tested (Table 1). The animals from Tannas were kept at the National Veterinary Institute, Stockholm, close to domestic ruminants. Investigations were performed by screening for antibodies against parainfluenza-3 virus (PI-3), bovine herpesvirus type 1 (BHV-1), which cross-reacts with reindeer herpes isolates (Rockborn et al., 1990), bovine viral diarrhoea virus (BVDV) and mammalian reovirus types 1 and 2. Antibody screening was performed using the standard ELISA tests of our institute.
Antibody titres against PI-3 were found in all herds and in more than half of the investigated animals (54 %).Titers against BHV-1 were found in 14 animals (28 %) from three herds and against BVDV in three animals (6 %) from two herds (Table 1).All animals were seronegative against the two reoviruses. The results obtained were in accordance with earlier reports (Elazhary, 1981;Dieterich, 1981;Rehbinder et al, 1985).The keeping of reindeer close to domestic ruminants seems not to have affected the results). Antibodies against reovirus types 1 and 2 were not found in this study but have been reported from fallow deer (Dama dama), red deer (Cervus elaphus), roe deer (Capreolus capreolus) and sika deer {Cervus nippon) (Lawman et al, 1978). It seems evident that reindeer, as well as other deer, may be exposed to and infected with viruses without showing signs of clinical diseases (Thorsen et al, 1977;Rockborn et al, 1990) but under certain circumstances at least some of these viruses may produce disease or Table 1.Antibody titres in reindeer from four different herds.contribute to disease outbreaks of a multifactorial genesis (Rockborn et al, 1990). In Table 2 are listed some known relations between the investigated agents and some maladies of a multifactorial genesis common in domestic ruminants and possibly present in reindeer. The connection between herpesvirus infection and outbreaks of necrobacillosis in reindeer (Fig. 1) is stated by Rockborn et al.> (1990).Rehbinder and Nordkvist (1983), however, pointed out that anything that causes abrasions or other injuries to the oral mucosa may contribute to outbreaks of necrobacillosis.In this respect also BVDV can be regarded as a possible primary causative agent of necrobacillosis in reindeer. 
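The seroprevalence percentages reported above follow directly from the counts out of 50 tested animals; a quick sketch of the arithmetic (the PI-3 count of 27 is back-calculated from the reported 54 % and is an assumption, since the absolute count is not stated):

```python
# Seroprevalence arithmetic behind Table 1 (illustrative sketch; the
# PI-3 positive count of 27 is inferred from 54 % of 50, not stated directly).
def seroprevalence(positives, tested):
    """Return seroprevalence as a percentage."""
    return 100.0 * positives / tested

TESTED = 50  # randomly selected clinically healthy animals
counts = {"PI-3": 27, "BHV-1": 14, "BVDV": 3, "reovirus 1/2": 0}

for virus, n_pos in counts.items():
    print(f"{virus}: {seroprevalence(n_pos, TESTED):.0f} %")
```

With the counts given in the text, this reproduces the reported 28 % (BHV-1) and 6 % (BVDV) figures.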
In reindeer, severe outbreaks of pasteurellosis have been reported (Magnusson, 1913; Brandt, 1914; Skjenneberg, 1957; Nordkvist and Karlsson, 1962; Kummeneje, 1976). Predisposing factors such as environmental stress, parasites and viral infections have been considered. In this context neither PI-3, herpesvirus nor BVDV may be eliminated as parameters of a multifactorial genesis (Jubb et al., 1985). Keratitis in reindeer is also a disease of multifactorial genesis, mainly caused by different management factors, such as stress, dust, corneal abrasions etc. (Rehbinder, 1977). In farmed deer, herpesvirus of Cervidae type 1 (CHV-1) is known to be involved in outbreaks of ocular disease (Nettleton et al., 1986) and thus the possibility that herpesvirus plays a role in outbreaks of ocular disease in reindeer cannot be excluded. Both BHV-1 and BVD viruses are well known as causative agents in abortions and perinatal mortality in cattle (Jubb et al., 1985), but the role of herpesvirus and BVDV in abortions in reindeer is unknown. In addition, in reindeer, abortions are considered to take place under stressful situations (Skjenneberg and Slagsvold, 1968; Rehbinder, 1975) and thus again a multifactorial genesis cannot be ruled out. Cataracts (Fig. 2) may occur in cattle as a result of BVDV and BHV-1 infections (Williams and Gelatt, 1981a; Williams and Gelatt, 1981b). Cataracts are infrequently seen in reindeer (Fig. 2) and the etiology is unknown. Still, several questions remain unanswered in this context, such as: what is the significance of PI-3, BHV-1 and BVDV in the herds examined? Are reindeer capable of transmitting these agents to domestic ruminants, either by direct contact or via blood-feeding insects? Transmission of BVDV by blood-feeding flies has been reported (Tarry et al., 1991) and reindeer are exposed to heavy attacks from mosquitos and biting flies (Kadnikov, 1989).
Are reindeer acting as reservoirs for these viral agents? Are reindeer capable of transmitting these agents to other cervidae? Are the serologically detected viral agents identical with viruses of domestic ruminants, or are they only cross-reacting strains, such as the reindeer herpesvirus type 1 strain which cross-reacts with bovine herpesvirus type 1 (IBR) and probably does not infect cattle (Ek-Kommonen et al., 1982)? Conclusions: The present observations indicate that PI-3, BHV-1 and BVDV cause natural infections in reindeer herds in Sweden. Viral infections may play an important role in inducing various disease complexes. The significance of these viruses is, however, not yet understood. Hence, further investigations, e.g. virus isolation attempts and transmission experiments, ought to be undertaken in order to clarify the role and significance of these viruses in disease outbreaks in reindeer.

(Table 1 note: animals kept at the National Veterinary Institute, Stockholm.)

Fig. 1. Tongue from reindeer which suffered from oral necrobacillosis.
v3-fos-license
2016-05-04T20:20:58.661Z
2014-09-01T00:00:00.000
10577215
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.4274/jcrpe.1404", "pdf_hash": "f7de7e2242d20fb10373f904279e2d86745dd2f1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2528", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "f7de7e2242d20fb10373f904279e2d86745dd2f1", "year": 2014 }
pes2o/s2orc
One Base Deletion (c.2422delT) in the TPO Gene Causes Severe Congenital Hypothyroidism Objective: Congenital hypothyroidism (CH) is the most common neonatal endocrine disorder and mutations in the TPO gene have been reported to cause CH. Our aim in this study was to determine the genetic basis of CH in two affected individuals coming from a consanguineous family. Methods: Since CH is usually inherited in autosomal recessive manner in consanguineous/multi-case families, we adopted a two-stage strategy of genetic linkage studies and targeted sequencing of the candidate genes. First, we investigated the potential genetic linkage of the family to any known CH locus using microsatellite markers and then screened for mutations in linked-gene by Sanger sequencing. Results: The family showed potential linkage to the TPO gene and we detected a deletion (c.2422delT) in both cases. The mutation segregated with disease status in the family. Conclusion: This study demonstrates that a single base deletion in the carboxyl-terminal coding region of the TPO gene could cause CH and helps to establish a genotype/phenotype correlation associated with the mutation. The study also highlights the importance of molecular genetic studies in the definitive diagnosis and accurate classification of CH. Introduction Congenital hypothyroidism (CH) is the most common neonatal endocrine disorder with an incidence of 1/3500 live births. It causes mental retardation and growth delay unless a timely and proper treatment is introduced (1). Molecular genetics analyses facilitate definitive diagnosis and accurate classification of CH and might also describe patient-specific targets for alternative treatment of the disease. About 2% of CH is familial and to date, 11 causative genes have been described for the pathogenesis of inherited CH (2). Table 1 shows the details of all these loci and associated clinical phenotypes. 
Some of these genes are associated with primary thyroid dysgenesis (3,4) and some with thyroid dyshormonogenesis (TDH) (5). Currently, there are seven genes known to cause congenital TDH which encode proteins involved in thyroid hormone biosynthesis (6). Major steps in thyroid hormone synthesis include oxidation and covalent linkage of iodide to tyrosine residues of thyroglobulin (TG), catalysed by the thyroid peroxidase (TPO) enzyme (7). This latter reaction requires hydrogen peroxide (H2O2) as the final electron acceptor (8,9). Therefore, the generation of H2O2 is a critical step in the synthesis of thyroid hormones (10). A defect in the system that generates H2O2, resulting in CH, has been reported previously (11,12,13). Two dual oxidases (DUOX1 and 2) have recently been identified as the components of the thyroid H2O2-generating system (14,15). TPO is a thyroid-specific heme peroxidase localised in the apical membrane of thyrocytes and plays a central role in thyroid hormone biosynthesis. Mutations in TPO causing permanent CH are mostly inherited in an autosomal recessive fashion and to date, more than 60 distinct mutations have been described in this gene (8,9). To investigate the genetic background of CH, our group of investigators has developed a two-tier strategy combining genetic linkage studies and full sequencing of candidate genes in familial cases and to date has identified several mutations in different CH genes (16,17,18,19,20,21,22,23,24,25,26,27,28). In the current study, we aimed to determine the genetic cause of CH in a consanguineous family with two affected siblings. Here, we report a homozygous one-nucleotide deletion (c.2422delT) in the TPO gene detected in both cases and its associated clinical phenotypes. Methods The genetics of CH reported in our previous studies (18,19,20,21,22,23) were investigated in two new cases born to a consanguineous Turkish family.
When the older sister was first diagnosed at the age of nine months, she had growth retardation. Her hormone levels before treatment were thyroid stimulating hormone (TSH): 720 µIU/mL (normal range: 0.3-5), total thyroxine (T4) <0.9 µg/dL (normal range: 6.6-17.2), free T4 (fT4) <0.09 ng/dL (normal range: 0.9-2.3), total triiodothyronine (T3) <0.17 ng/mL (normal range: 1.05-3.45) and free T3 (fT3) <0.2 pg/mL (normal range: 2-5). This patient is currently 12 years old and has severe mental retardation. Her younger sister was diagnosed much earlier, on the eighth day of birth and had thyroid enlargement detected by thyroid scintigraphy. Hormone levels in this second patient were TSH: 860 µIU/mL (normal range: 0.3-5), total T4 <0.7 µg/dL (normal range: 6.6-17.2), fT4 <0.12 ng/dL (normal range: 0.9-2.3), total T3 <0.15 ng/mL (normal range: 1.05-3.45) and fT3 <0.2 pg/mL (normal range: 2-5). This patient is now eight years old and her development is normal. Hypothyroid phenotype is permanent in both cases and they require continuous T4 treatment. The parents and a healthy sister are all free of any signs or symptoms of hypothyroidism. Informed consent was obtained from the family and venous blood samples were collected from all family members. All procedures performed were in accordance with the Declaration of Helsinki and the study was approved by relevant IRBs/Ethics Committees. DNA was extracted by using standard methods and stored at -20 °C until analysed. Potential Linkage Analysis First we performed linkage analysis to all 11 known CH loci in all family members with microsatellite markers (Table 1). Fluorescent labelling of one oligonucleotide of each primer pair enabled the sizing of PCR products in a capillary electrophoresis machine by the use of GeneMapper v4.0 software suite (Applied Biosystems, Warrington, UK). By combining genotypes for each microsatellite marker, we constructed haplotype tables for each family member. 
As autosomal recessive inheritance was assumed in consanguineous families, homozygosity of a particular haplotype for a locus in cases, accompanied by heterozygosity of the same haplotype in both parents, was taken as suggestive of linkage to that locus. Direct Sequence Analysis of the TPO Gene The TPO gene was sequenced by conventional Sanger sequencing and the primer sequences and PCR conditions are available upon request. PCR products were size-checked on 1% horizontal agarose gels and cleaned up using MicroCLEAN (Microzone, Haywards Heath, UK) or gel-extracted using the QIAquickTM Gel Extraction kit (Qiagen, Crawley, UK). The purified PCR products were sequenced in both forward and reverse directions using the ABI BigDye Terminator v3.1 Cycle Sequencing kits on an ABI Prism 3730 DNA Analyzer (Applied Biosystems, Warrington, UK). Analysed sequences were then compared with the reference sequence. Results Haplotype tables were constructed for each family member by combining the scores for each marker to observe the segregation of the genotype along with the disease status. The linkage analysis using these tables indicated a potential linkage to the TPO locus in the family, i.e. both CH cases were homozygous for a disease-associated haplotype, while both parents and the healthy sister were all heterozygous for the same haplotype (Figure 1). Assuming an autosomal recessive inheritance model, which is the most likely pattern in consanguineous families, these results suggested that the disease-associated haplotype segregated with the disease status in the family. Therefore, we proceeded to sequence the coding region (and flanking sequences) of the TPO gene in all members of the family. Direct sequencing analysis revealed a homozygous deletion of a T nucleotide in codon 808 of the TPO gene (c.2422delT) in both affected siblings, which results in a frameshift mutation and leads to an early stop codon in exon 14 of the gene (p.Cys808AlafsX24).
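The segregation criterion used above — affected cases homozygous for the disease-associated haplotype while both parents carry it heterozygously — can be sketched in a few lines (the haplotype labels below are illustrative, not the study's actual marker data):

```python
# Sketch of the autosomal recessive segregation check described in the text.
# Genotypes are pairs of haplotype labels; "A" marks the putative
# disease-associated haplotype (made-up example data).

def suggests_linkage(cases, parents):
    """True if all affected cases are homozygous for the disease haplotype
    and both parents are heterozygous carriers of it."""
    cases_ok = all(g == ("A", "A") for g in cases)
    parents_ok = all(sorted(g) != ["A", "A"] and "A" in g for g in parents)
    return cases_ok and parents_ok

# Both affected siblings homozygous, both parents heterozygous carriers
print(suggests_linkage([("A", "A"), ("A", "A")], [("A", "B"), ("A", "C")]))
```

A locus failing this pattern (e.g., a heterozygous case, or a parent lacking the haplotype) would be excluded from further sequencing under the assumed recessive model.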
The parents and the unaffected sister all carried the mutation in the heterozygous state, which is consistent with the linkage results (Figure 2). Codon 808 is located in exon 14 of the 17-exon TPO gene and this mutation is expected to result in a truncated protein product which will lack the carboxy-terminal catalytic domains. This in turn could render the TPO protein completely non-functional. Discussion It is well known that CH, if untreated, may lead to severe developmental delay, and genetic defects have long been implicated in the etiology of the disease. Currently, improving genetic analyses provide a powerful tool to unravel the pathogenesis of the disease and, as the number of causative genes grows, the underlying molecular mechanisms become clearer in an increasing number of patients. This is especially important for TDH as it is often inherited in an autosomal recessive manner, where both parents usually are healthy carriers of a mutation in a particular causative gene. To date, 11 causative genes have been described for CH, seven (TPO, TG, NIS, PDS, IYD, DUOX2, DUOXA2) of which are associated with the TDH phenotype and encode proteins involved in the biosynthesis of thyroid hormones (6). We developed a two-tier strategy, combining genetic linkage and sequencing techniques, to investigate the mutations in all causative CH genes, but our efforts were most fruitful with the TDH phenotype. This strategy is still cost-effective compared to full sequencing of all known genes, as some of these genes are considerably large. We have recently also been engaged in developing a new testing strategy based on next generation sequencing (NGS) technology covering all causative CH genes in one set. As the prices of NGS are constantly decreasing, in the near future it might be feasible just to sequence all known genes in all CH patients.
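The way a single-base deletion such as c.2422delT shifts the reading frame and creates a premature stop codon can be illustrated with a toy coding sequence (the sequence and the minimal codon table below are illustrative only, not the actual TPO cDNA):

```python
# Minimal codon table covering only the codons used in this toy example.
CODONS = {"ATG": "M", "GTA": "V", "ACC": "T", "TAA": "*", "TGA": "*"}

def translate(cds):
    """Translate codon by codon, stopping at the first stop codon."""
    peptide = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODONS[cds[i:i + 3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

wild_type = "ATGGTAACCTGA"               # ATG GTA ACC TGA -> "MVT"
mutant = wild_type[:3] + wild_type[4:]   # delete one base -> frameshift

print(translate(wild_type))  # MVT
print(translate(mutant))     # M  (premature stop right after the frameshift)
```

The deletion moves a downstream TAA into frame, truncating the toy peptide — the same mechanism by which c.2422delT is expected to delete the carboxy-terminal catalytic domains of TPO.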
In the present study, exploiting the same strategy, we delineated the genetic background of the disease in a consanguineous family and detected a homozygous TPO deletion (c.2422delT) in both affected siblings which segregated with the disease status in the family, i.e. both obligatory carrier parents and one unaffected sibling were all heterozygous for the same mutation. This mutation was first described by Bakker et al (29) as pathogenic and associated with total iodine organification defect. These authors have concluded that this mutation entirely abolishes the function of the TPO enzyme. The severe phenotype in their patients was also evidenced by very low plasma thyroid hormone concentrations (both of T3 and T4) and highly elevated TSH levels. Radioiodine uptake and perchlorate discharge test results for our cases were not available, but the very low thyroid hormone levels associated with very high levels of TSH in our patients are in line with their observations and confirm the severe phenotype caused by c.2422delT mutation. Therefore, it would be plausible to suggest that there is a firm genotype/phenotype relationship associated with this TPO mutation. In conclusion, we state that in these two patients, the CH was caused by c.2422delT TPO mutation and that this mutation is associated with severe CH. Our study contributes to the establishment of a firm genotype/phenotype relationship associated with this mutation. Molecular genetic studies as such would allow the description of exact etiology and pathogenic mechanism of the disease in particular patients.
v3-fos-license
2021-07-30T13:14:38.847Z
2021-07-30T00:00:00.000
236504070
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2021.704473/pdf", "pdf_hash": "5bd6d2a9ed8bdb807d0117decd5d93e2a6488140", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2530", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "5bd6d2a9ed8bdb807d0117decd5d93e2a6488140", "year": 2021 }
pes2o/s2orc
Clean Label Trade-Offs: A Case Study of Plain Yogurt Consumer demand for clean label has risen in recent years. However, clean label foods with simple and minimalistic ingredient lists are often expensive to produce and/or may possess less desirable sensory qualities. Accordingly, understanding consumer preferences regarding the clean label trend would be of great interest to the food industry. Here we investigate how ingredient lists and associated sensory quality descriptions may influence consumer preferences using a hypothetical choice experiment. In particular, we test the impacts of four common stabilizers (carrageenan, corn starch, milk protein concentrate, and pectin) and textural characteristics on preferences and willingness to pay for plain yogurt. A total of 250 yogurt consumers participated in the study. The results of a mixed logit analysis suggest that clean labeling significantly increases the likelihood of consumer choice, while poor texture reduces consumer choice. More importantly, the negative impact of poor texture seems to be less significant for clean label yogurts compared to that for yogurts with longer ingredient lists. Among all stabilizers, corn starch in particular has a significant negative impact on consumer choice. The estimated average consumer willingness to pay for clean labels is between $2.54 and $3.53 for 32 oz yogurt formulations. Furthermore, clean labels minimize the negative impact of textural defects with consumers willing to pay an estimated premium of $1.61 for the family size yogurt with a simple ingredient list. Results of latent class modeling reveal two classes of consumers with similar patterns of demand who prefer clean labels and, on average, would rather purchase a yogurt with a textural defect than opt out of purchasing a yogurt entirely. Implications for the food industry are discussed. 
INTRODUCTION In recent years, consumers have increasingly demanded specific dietary and nutritional characteristics in their foods [e.g., reduced sugar, freedom from artificial preservatives; (1)]. This shift in consumer preference has resulted in a strong push in the food industry to remove certain ingredients through reformulation (2). Common ingredients targeted for removal include those that are synthetically derived (e.g., Red 40, artificial flavors) and have long, "chemical-sounding" names [e.g., carrageenan, methyl crystalline cellulose; (3)]. Although these ingredients are deemed safe by regulatory agencies, they are perceived as harmful by consumers due to their lack of familiarity (4,5) and risk perception of chemicals (6,7). While many factors likely play roles in the demand for "clean label" foods, existing research suggests that health and sustainability concerns, for example, motivate consumers to seek such products (8)(9)(10). Additional research investigating the clean label trend suggests that consumers prefer short ingredient lists that contain familiar, minimally processed ingredients (11,12). Accordingly, food companies have made great efforts in reformulating their products to achieve cleaner labels (2,13). The move away from highly-processed ingredients in the food industry can be seen as companies across the retail landscape strive to clean up their labels. In the United States, major food companies began cleaning up their ingredient lists around 2010 (3). For example, Hershey's began reformulating their products by replacing their sugar beet-derived sugar, a crop grown primarily from genetically modified (GMO) seeds, with sugar cane-derived sugar in 2015 (14). A year later, Campbell launched their clean label line of soups called "Well Yes!", which contained no artificial flavors, colors, or modified starches (15).
Today, cleaner labels are ubiquitous across multiple food categories, including bakery, soft drinks, snacks, prepared soups, and dairy products (16,17). Ingredient blacklists compiled by influential retailers, such as Kroger and Whole Foods, are one source of criteria for companies striving to develop cleaner labels (18). Common clean label reformulation efforts involve either complete removal of undesirable ingredients or their replacement with more natural alternatives. The latter process is generally expensive and time-consuming as those ingredients are often more costly, and the resulting products often possess less desirable sensory characteristics compared to their original counterparts (19,20). In particular, the ingredients that are considered undesirable by consumers are often designed or modified to maximize their functionality within a food. Thus, replacement of these ingredients (e.g., modified corn starch) with natural alternatives (e.g., native corn starch) can result in an increase in ingredient usage rates, increased production costs, and/or potentially poor sensory characteristics (21)(22)(23)(24). The alternative of complete ingredient removal often has similar challenges (25). Although consumers state a preference for cleaner labels, consumers' behaviors and actions sometimes contradict their preferences (26), especially when other factors are involved. Arguably, the two most important factors might be sensory characteristics and price of the product in question. Sensory attributes such as flavor, texture, and appearance are commonly identified as product characteristics of high importance to consumers (27)(28)(29). For example, consumers are unwilling to compromise "taste" for health benefits in functional foods (30,31). Price is another factor that impacts purchase behavior. Streletskaya et al.
(32) showed that price increases through taxes have the potential to reduce purchase of unhealthy foods leading to reduced intake of certain undesirable nutrients (i.e., calories, cholesterol, etc.). However, in some situations (particularly for higher income consumers) price may have less of an impact on demand for food compared to non-economic factors (33). Thus, while the costs to produce a clean label food increase relative to its original formulation and the increased costs are passed on to the consumer, it is unclear what premium consumers may be willing to pay for clean label foods. Furthermore, while consumers might, on average, have a higher willingness to pay for clean label products, this price premium might not be high enough to cover the costs of reformulation, similarly to premia and cost dynamics of organic foods (34,35). The tradeoffs between label cleanliness, sensory characteristics, and price are of particular interest to companies considering reformulation, as it is unclear how these factors influence each other. Yogurt is a food product category where significant reformulation efforts have been made to satisfy consumer demand for clean label (36). Reformulation efforts have targeted eliminating ingredients such as artificial coloring agents, chemical preservatives, and modified starches (37)(38)(39)(40). For sensory characteristics, creamy mouthfeel, and smooth appearance seem to be critical in yogurt (41,42), along with a lack of or minimal syneresis [i.e., expulsion of liquid whey from the yogurt (white mass); (43)(44)(45)(46)]. To achieve such sensory characteristics, stabilizers and thickeners are commonly used in yogurt products (47). Most common food stabilizers and thickeners are various polysaccharides (48) such as carrageenan, corn starch, and pectin. Milk and whey protein concentrate are also commonly used in yogurt because they can add higher protein content while modulating thickness (49) and improving texture (50). 
Despite the common usage of stabilizers and thickeners in food, research has shown that consumers are not particularly knowledgeable about them (51). More importantly, a recent study suggests that stabilizers and thickening agents are perceived as generally unnatural by yogurt consumers, compared to other ingredient categories such as sugars, preservatives, and coloring agents (52). Consequently, the food industry has put forth great efforts to replace highly processed stabilizers or to remove them entirely from their products (17,53,54). The overarching goal of this study is to look at consumer demand for clean label while considering related sensory characteristics and price changes. To achieve this goal, we employ a hypothetical choice experiment, which is commonly used to examine how product characteristics affect consumer product choice. When defining "clean label" for the purpose of this study, we focused on four different stabilizers/thickening agents (carrageenan, corn starch, milk protein concentrate (MPC), and pectin) that are commonly used in yogurt manufacturing. We have created different ingredient lists for plain yogurts that range from just cultured pasteurized milk to yogurts that also included all four stabilizers, and formulations in between. Of note, the stabilizers used in this study were selected based on the results of a recent survey conducted by Maruyama et al. (52), ranging from relatively natural (pectin, corn starch) to relatively unnatural (MPC, carrageenan). As our study examines the potential tradeoffs consumers might be willing to make between different yogurt characteristics, sensory characteristics and price were considered as other key factors. Consumers Plain yogurt consumers were recruited through an existing pool of consumers from the Center for Sensory and Consumer Behavior. Additionally, flyers advertising the study were posted around the campus and electronic advertisements were sent through a university email newsletter.
In order to qualify for the study, all respondents were required to fill out a screening survey. The inclusion criteria for the study participation were: (1) fluency in written and oral English, (2) age between 18 and 65 years, (3) a consumer of family-sized tubs of plain yogurt at a frequency of at least once every other week, and (4) not employed or involved in the food and beverage industry. The last criterion was set to ensure that only the responses of lay consumers were captured. Qualified respondents were invited to participate in the study. Written consent was obtained from each respondent prior to participation in the study. A total of 629 consumers expressed interest in our study and filled out the screening survey. Of those interested, only 336 met the inclusion criteria outlined above and a total of 250 consumers participated in the choice experiment due to scheduling availability. The study protocol was approved by the university's institutional review board (IRB-2019-0187). Ingredient List The ingredient lists of yogurts on the market were reviewed to determine which stabilizers were used in yogurt formulations. Additionally, industry experts in yogurt manufacturing were consulted. Four different stabilizers/thickening agents were chosen as ingredients of interest for this study: carrageenan, corn starch, MPC, and pectin. Carrageenan is derived from a type of red seaweed called Irish moss (55) and commonly used in synergy with other stabilizers to prevent syneresis and improve texture (56). While previous research (57)(58)(59)(60) has questioned the safety and toxicity status of carrageenan, numerous reviews (61)(62)(63)(64) and studies (65-67) support carrageenan's safety status. Corn starch, a starch-based polysaccharide, is a common household ingredient that is used to thicken soups, sauces, and fruit preparations. In general, native starches are considered to be clean label by researchers (68)(69)(70). 
MPC is a milk-derived ingredient obtained through membrane filtration of fluid milk. Similar to corn starch, it increases viscosity and minimizes syneresis in yogurt (71). In addition to its functionality as a stabilizer and thickener, MPC also serves as a means of protein fortification, making it appealing to manufacturers looking to market high protein yogurts (71). Pectin is well-known in the food industry as a gelling agent and it functions as a stabilizer in yogurt and acidified dairy beverages (72). In dairy applications, it can be used as a stand-alone stabilizer or in combination with other stabilizers or thickeners. It is commercially derived from the cell walls of plants and primarily sourced from apples and citrus fruit (73). From a cost standpoint, pectin is more expensive than other stabilizers, but it is considered the stabilizer of choice for yogurts positioned as natural (74). Texture Characteristics Two possible texture characteristics were given as choices: overall good texture, or a texture with defects. A yogurt free of textural defects was described as, "Consumers generally perceive the texture of this yogurt to be good and free of defects (e.g., smooth, creamy)," while a yogurt possessing some defects was described as, "Consumers generally perceive the texture of this yogurt to have some defects (e.g., grainy, lumpy)." Price The yogurts in the hypothetical choice experiment were offered at $2.99, $4.49, and $7.36 per 32 oz family size tub. These price levels were based on the retail prices of commercially available yogurts found in local grocery stores. Choice Experiment The experimental design is based on standard practice (75) using a D-efficient design for a generic unlabeled format generated by Stata's create package. To minimize the cognitive burden on our respondents, a block design was incorporated into our experiment. One of two blocks, each with 11 choice sets, was randomly presented to the respondent, as described below.
Each participant was invited to attend one 25-min moderated session on campus. A cheap talk script was delivered at the start of the experiment, highlighting that even though the choices made in the experiment were hypothetical, respondents should carefully consider their yogurt preferences and budget constraints. Cheap talk scripts have been shown to be an effective tool for mitigating hypothetical bias (76). Each choice set was presented to respondents individually, and respondents were instructed to consider each choice set on its own merits. All respondents made selections in 11 choice sets (see Figure 1 for an example of a choice set). Following the choice experiment, the participants were directed to rate their agreement with a series of different statements on a 10-point scale anchored at 1 indicating strong disagreement and 10 indicating strong agreement with the statements presented. Specifically, respondents were asked, "When I purchase yogurt I care about [taste/flavor/texture, price, ingredient list, the presence of milk protein concentrate, the presence of pectin, the presence of corn starch, and the presence of carrageenan]." These statements were aimed at assessing the importance of quality (taste/flavor/texture), price, ingredients, and the presence of MPC, corn starch, pectin, and carrageenan. After rating the statements presented, respondents were instructed to fill out a brief socio-demographic questionnaire. Demographics The general socio-demographic characteristics of our sample population are summarized in Table 1. The mean age of our sample was 39 years old (18-65 years old; SD: 13.44 years old); 76% were female; 79% were Caucasian; 59% reported annual household incomes below $75,000 and 24% reported incomes of $100,000 or greater. Thirty-seven percent of respondents held bachelor's degrees, 32% held master's degrees, 9% held professional/doctoral degrees, and 21% of respondents held less than a bachelor's degree.
Sixty-four percent of respondents reported living in 2-4 person households, and 26% reported grocery shopping for children 18 years old or younger within their households. These consumer demographic characteristics are similar to those reported in peer-reviewed research examining yogurt consumers (34,(77)(78)(79)(80)(81). Lastly, 92% of our sample reported being primary shoppers within their households. Primary shoppers were defined in our study as individuals who are responsible for at least half of all household grocery purchases.

Mean Statement Ratings

Mean ratings and standard errors of agreement with attribute-related statements are summarized in Table 2. On average, Quality, specified as flavor, texture, and/or appearance, was rated as the most important attribute, followed by Price, Ingredients, Corn Starch, Carrageenan, Milk Protein Concentrate, and Pectin.

Model

Following common practice, we use an alternative specific mixed logit model (82) to analyze the choice experiment responses, with utility specified as:

U_{njt} = \beta_n' x_{njt} + e_{njt}    (1)

where x_{njt} is the vector of observed choice attributes of alternative j in choice set t, \beta_n is the vector of parameters of interest that is unobserved for each decision maker n and varies in the population with density f(\beta|\theta), where \theta are the parameters of the distribution of \beta in the population, assumed to be triangular for the price-of-yogurt coefficient, and the random term e_{njt} is independent and identically distributed (IID) extreme value type 1. The mixed logit probability takes the standard form when the density f(\beta) is continuous:

P_{njt} = \int L_{njt}(\beta) f(\beta) \, d\beta    (2)

where L_{njt}(\beta) = \exp(\beta' x_{njt}) / \sum_k \exp(\beta' x_{nkt}) is the conditional logit probability. We then assume a discrete mixing distribution, modifying (2) to reflect a latent class model. The choice probability for the latent class model takes the following form:

P_{njt} = \sum_{m=1}^{M} s_m L_{njt}(\beta_m)    (3)

where there are M segments and the share of the population in segment m is s_m. The latent class model enables segmentation of consumer choices into different "classes" according to preference patterns.
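A minimal numerical sketch of the latent class probability structure described above (the class shares match the ∼38%/62% split reported later in the paper, but the attribute coefficients are invented purely for illustration):

```python
import math

def logit_probs(utilities):
    """Conditional logit choice probabilities for one choice set."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def latent_class_prob(x_alternatives, class_betas, class_shares, chosen):
    """P(chosen) = sum over classes m of s_m * L(chosen | beta_m)."""
    prob = 0.0
    for beta, share in zip(class_betas, class_shares):
        utils = [sum(b * x for b, x in zip(beta, alt)) for alt in x_alternatives]
        prob += share * logit_probs(utils)[chosen]
    return prob

# Two alternatives described by (price, clean_label) attributes.
alts = [(4.49, 1.0), (2.99, 0.0)]
betas = [(-0.5, 1.9), (-0.4, 1.7)]   # invented class-specific coefficients
shares = (0.38, 0.62)                # class shares as reported in the paper
p = latent_class_prob(alts, betas, shares, chosen=0)
```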
The number of classes in the model was specified based on a comparison of AIC and BIC criteria across models with up to four consumer classes, resulting in a two-class model being identified as the best fit. While the mixed logit model with alternative specific constants provides a general characterization of consumer preferences and allows for parameter heterogeneity for price, the latent class logit model segments participants into distinct classes, allowing additional insights into consumer demand for clean label yogurts. Both models are estimated using Stata 15, with standard errors clustered at the participant level to accommodate the fact that each participant made 11 choices. In the following results section, estimated impacts of our parameters of interest are reported as odds ratios, which are the exponentiated forms of the parameter coefficients. Additionally, we can use coefficients from our logit models to estimate willingness to pay (WTP) for statistically significant parameters. WTP estimates are obtained using the following formula:

WTP = -\beta_{attribute} / \beta_{price}

where \beta_{attribute} is the estimated coefficient for the significant parameter of interest (i.e., clean label, poor texture, etc.), and \beta_{price} is the estimated price coefficient.

Alternative Specific Mixed Logit With Core Yogurt Attributes Model (Specification 1)

In our base model specification (Specification 1), a mixed logit was used to model the impact of price and individual ingredients (i.e., carrageenan, corn starch, etc.) on the likelihood of choosing a yogurt. The impact of each ingredient is modeled using dummy variables. Dummy variables are binary categorical explanatory variables, which take on a value of 1 if a specific condition is met (i.e., pectin is present, corn starch is present, etc.) and 0 otherwise (83). Results of Specification 1 reveal an odds ratio of 0.60 (p < 0.001) for the price parameter (see Table 3 and Table 4).
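The odds-ratio and WTP conversions can be sketched numerically. Since β = ln(OR) in a logit model, reported odds ratios can be turned back into coefficients; note that the pairing of values below mixes specifications purely for arithmetic illustration, so the result is not one of the paper's reported WTP estimates:

```python
import math

def odds_ratio(coef: float) -> float:
    """Odds ratio is the exponentiated logit coefficient."""
    return math.exp(coef)

def wtp(beta_attribute: float, beta_price: float) -> float:
    """Willingness to pay: -beta_attribute / beta_price."""
    return -beta_attribute / beta_price

beta_price = math.log(0.60)    # price OR from Specification 1
beta_defect = math.log(0.44)   # poor-texture OR from Specification 2
# A negative WTP means consumers would pay that amount to avoid the attribute.
estimate = wtp(beta_defect, beta_price)
```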
We introduce a new parameter, poor texture, which allows us to control for yogurts possessing a textural defect. Poor texture has a negative impact on choosing a yogurt, with an odds ratio of 0.44 (p < 0.001). Therefore, holding all parameters equal, consumers are less likely to purchase a yogurt with a known textural defect relative to a yogurt for which no textural defect is known, and they are willing to pay, on average, $1.83 to avoid a textural defect.

Alternative Specific Mixed Logit Model, Texture, and Clean Label Controls (Specification 3)

In our third specification, we model the impact of price, individual ingredients, textural defects, and clean labeling [an ingredient list free of added ingredients (i.e., stabilizers)] on the odds of choosing a yogurt. The new dummy variable, clean label, controls for clean label ingredient lists (i.e., our baseline ingredient list, which contains only cultured pasteurized milk). Inclusion of such a control is important for our analysis, as it teases out consumer preference for a clean ingredient list. This allows us to further unravel how each tested ingredient impacts consumer purchase behavior beyond a general preference for a clean, simple ingredient list. This model specification reveals, again, similar impacts for price (0.58) and poor texture (0.29) (both p < 0.001). Thus, a textural defect (i.e., poor texture) significantly decreases the odds of purchasing a yogurt, and on average, consumers are willing to pay a premium of $2.26 to avoid a textural defect. The parameter clean label, which allows us to control for yogurts that have clean ingredient lists, was found to have an odds ratio of 6.78 (p < 0.001). Therefore, holding all other characteristics equal, consumers are 6.78 times more likely to purchase a yogurt with a clean label than a yogurt without a clean label. Based on its WTP estimate, consumers are willing to pay an average premium of $3.53 for a 32 oz yogurt with a clean label.
Looking beyond a clean label, we find that MPC and pectin are no longer statistically significant (see Table 3). Our results suggest consumers are less likely to purchase a yogurt containing corn starch (0.49, p < 0.001) or carrageenan (0.83, p < 0.05), and they are, on average, willing to pay an estimated $1.33 and $0.34, respectively, to avoid them in yogurt (see Table 4).

Alternative Specific Mixed Logit Model, Texture, and Clean Label Interactions (Specification 4)

Our final mixed logit model specification (Specification 4) examines not only how price, ingredients, clean labels, and texture impact consumer choices for yogurt, but also includes an interaction between clean labels and poor texture. The interaction in this specification allows us to examine whether the impact of a textural defect depends on whether a yogurt is clean label. Consistent with the results of our previous specifications, we find that consumers are less likely to purchase a yogurt with a textural defect (0.26, p < 0.001; see Table 3) and, on average, are willing to pay $2.83 more to avoid bad texture (see Table 4). The clean label odds ratio is 3.31 for this model, which is relatively large. Thus, based on this model, consumers are more likely to purchase a yogurt if it has a clean label, and are willing to pay an average premium of $2.54 for a clean label based on the parameter's WTP estimate. Looking at the interaction, the odds ratio for clean label × poor texture is 2.16 (p < 0.001), suggesting that even with a known textural defect, a clean label on a yogurt increases the odds of a purchase. Hence, consumers may be willing to accept a textural defect in a yogurt if it has a clean label, and we estimate that they might be willing to pay an average premium of $1.61 for a clean ingredient list.
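Because odds ratios combine multiplicatively, the reported Specification 4 estimates imply a net effect for a clean-label yogurt that also carries a textural defect, relative to a non-clean, defect-free baseline; a small check of that arithmetic:

```python
# Reported odds ratios from Specification 4.
OR_POOR_TEXTURE = 0.26
OR_CLEAN_LABEL = 3.31
OR_INTERACTION = 2.16

# Net odds for a clean-label yogurt with a textural defect, relative to a
# non-clean, defect-free baseline: the three ratios multiply.
net_odds = OR_POOR_TEXTURE * OR_CLEAN_LABEL * OR_INTERACTION
```

The product exceeds 1, consistent with the text's suggestion that a clean label can outweigh the penalty of a known defect.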
Lastly, beyond clean labels, we find that consumers were less likely to purchase a yogurt containing corn starch (0.46, p < 0.001), and they are willing to pay, on average, $1.67 more to avoid it in their yogurt. No other ingredient-specific parameters were found to be statistically significant in this particular model.

Latent Class Model

Results of our latent class model fit with two classes are displayed in Table 5. This analysis examines the differential impact of price, individual ingredients, clean labels, and textural defects on the odds of selecting a yogurt for two different consumer groups. Class 1 comprises ∼38% of our sample. For consumers in this class, both price and corn starch were found to have negative impacts on the odds of choosing a yogurt, with odds ratios of 0.51 and 0.43, respectively (p < 0.001). Consumers in this class are less likely to purchase a yogurt containing corn starch and, on average, are willing to pay $1.26 to avoid it (see WTP estimates in Table 6). Similar to the results of our Specification 4 mixed logit, none of the other three stabilizers were found to have a significant impact on the odds of choosing an alternative for Class 1 consumers. Also similar to the results of our mixed logits, Class 1 consumers are more likely to purchase a yogurt if it has a clean label, and they are willing to pay a premium of $3.81, on average, for a clean ingredient list. The odds ratio for a clean label is 13.08 for these consumers (p < 0.001). Segmentation analysis revealed similar results for the impact of textural defects. Consumers are more likely to purchase a yogurt with good texture (odds ratio of 24.05, p < 0.001) than a yogurt with poor texture (odds ratio of 6.45, p < 0.001). However, the odds ratios for both poor and good textures exceed 1, suggesting that despite the documented importance of sensory qualities in food, consumers are more likely to purchase a yogurt with textural defects than to opt out of the purchase entirely.
Class 1 consumers are willing to pay a premium of $4.72 for a yogurt free of textural defects and $2.77 for a yogurt with a defect. Using the difference between the WTP estimates for the good and poor texture parameters, we can estimate the average premium that consumers are willing to pay to avoid textural defects. Here, Class 1 consumers are willing to pay an average of $1.95 to avoid textural defects in their yogurt. The remaining and larger portion of our sample (62%) belongs to Class 2. Similar to Class 1, both price and corn starch were found to have negative impacts on the odds of selecting an alternative, with odds ratios of 0.67 and 0.56, respectively (p < 0.001; see Table 5). WTP estimates reveal that, on average, consumers in this class are willing to pay a premium of $1.42 to avoid corn starch in yogurt (see Table 6). The odds ratio for clean label was 5.50 (p < 0.001); thus, clean labels are also important for consumers in this class, and they are, on average, willing to pay a premium of $4.22 for a yogurt with a clean ingredient list. Considering the texture parameters for Class 2, the odds ratios for good and poor texture were 96.74 and 29.3, respectively. The textural attribute WTP estimates for this class reveal that Class 2 consumers are willing to pay a premium of $11.32 for a yogurt free of textural defects and $8.36 for a yogurt with a defect. In line with Class 1, consumers in Class 2 would rather purchase a yogurt with some level of textural defect than not purchase a yogurt at all. However, to avoid textural defects, consumers in Class 2 are willing to pay an average premium of $2.96.

DISCUSSION

Previous research has supported that naturalness (84) and food qualities (27)(28)(29) are important to consumers. Others have shown that labeled attributes (i.e., organic) can also impact consumers' perception of product quality (42,85).
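The per-class premium to avoid a defect is simply the gap between the two texture WTP estimates; checking the reported figures:

```python
def premium_to_avoid_defect(wtp_good: float, wtp_poor: float) -> float:
    """Average premium to avoid a textural defect for one class."""
    return wtp_good - wtp_poor

class1_premium = premium_to_avoid_defect(4.72, 2.77)    # Class 1 estimates
class2_premium = premium_to_avoid_defect(11.32, 8.36)   # Class 2 estimates
```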
However, to the best of our knowledge, this study is the first to examine the interactive impacts of ingredients and potential quality (textural) defects on consumer choice of foods. Additionally, the current study evaluates the impact of ingredients, considered both "clean" and conventional, on willingness to pay for yogurt. Specifically, we examined the demand for clean labeled plain yogurt using a computerized in-person choice experiment. Our experimental design included 35 hypothetical yogurts presented in 11 choice sets, each consisting of two yogurt options and an opt-out "no purchase" option. Our results reveal some interesting findings. First, we find that a clean label (i.e., an ingredient list lacking added stabilizers and/or thickening agents) increases the odds of choosing a plain yogurt compared to those containing one or more stabilizers. For a 32 oz family-size plain yogurt, the average willingness-to-pay premium for a clean label is between $2.54 and $3.53. Our analysis consisted of four mixed logit model specifications that increased in complexity. In our base specification (Specification 1), all four stabilizers were found to have a significant, negative impact on consumer choice. However, as we added parameters to tease out the impact of a clean label ingredient list and textural defects using a step-wise approach, we uncovered differences in consumer preferences for the ingredients we tested. Our most exhaustive specification (Specification 4) revealed that, beyond a clean label, corn starch was the only ingredient tested that consumers may be specifically avoiding in plain yogurts, over and above a general preference for a label with a minimum number of ingredients. This finding may be worth considering for yogurt companies that currently produce plain yogurts with corn starch, particularly those looking to clean up their labels.
We estimate that, on average, consumers may be willing to pay an additional $1.33 to $2.34 (per 32 oz) to avoid corn starch on a yogurt label. Maruyama et al. (52) reported that corn starch was rated by consumers as more natural than both MPC and carrageenan. The combined findings reported here and the previous results of Maruyama et al. (52) suggest that while corn starch may be perceived as a relatively natural and familiar ingredient in general, its presence may not be considered acceptable in the context of plain yogurt. Consumers may consider a yogurt that requires thickening to be of lower quality, resulting in the observed lower WTP. As other thickeners and their functions are less familiar to consumers (52), they are avoided as part of the demand for a minimal ingredient label, rather than avoided specifically. The exact mechanism and the potential explanations for the observed behavior around corn starch in yogurt warrant further investigation. It is worthwhile to note that these findings may not directly apply to flavored yogurts. Additional research focusing on consumer demand for clean labeled flavored yogurts is recommended. Second, this study found that a textural defect decreases the odds of purchasing a yogurt. Based on our mixed logit analysis, the average premium to avoid a textural defect may fall between $1.83 and $2.83. While segmentation analysis did not reveal much variation in consumer preferences, it suggested that consumers are more likely to purchase a yogurt with a textural defect than to opt out of a purchase entirely. One caveat with respect to the impact of textural defects is the limited description of the defect presented in our experiment. In our study, the only textural defect examples were the descriptors "grainy" and "lumpy" [the full description was given as "Consumers generally perceive the texture of this yogurt to have some defects (e.g., grainy, lumpy)"].
These descriptors were presented together on a single attribute level; thus, we cannot untangle whether one defect (i.e., grainy) is more impactful than the other (i.e., lumpy). It is possible that consumers would respond differently to these defects, as well as to some others. Examples of other yogurt textural defects include weak and firm/gel-like yogurt body (86). A weak body defect is characterized by a runny, liquid-like texture. On the other hand, a firm/gel-like defect is characterized by excessive firmness, which impacts how easily a yogurt compresses and melts away in the mouth during consumption. We recommend further research that examines the specific impacts of types of textural defects on consumer acceptance and willingness to pay for clean label yogurt. Third, we examined the impact of textural defects in relation to a clean ingredient list. The interaction included in the fourth mixed logit model specification revealed that a clean label on a yogurt might temper the negative impact of a textural defect. This finding may offer reassurance to companies that have not been successful in producing clean label yogurts with ideal texture; consumers may actually be willing to accept a less-than-ideal texture in yogurts with clean ingredient lists. We estimate the average willingness to pay for a clean label yogurt with a poor texture to be a premium of $1.61. Additional insights into consumer preferences were collected from our respondents in a questionnaire administered directly following the choice experiment. Recall that participants rated their agreement with a series of statements designed to capture the importance of various yogurt-related attributes (see Table 2). A 10-point scale was used, anchored at 1 (strong disagreement) and 10 (strong agreement) with each statement presented. This allowed us to capture the average stated preference for the attributes tested.
On average, the highest rated attribute was yogurt quality, defined by characteristics such as taste and texture. Quality was followed by price and ingredients. Of note, the attribute "ingredients" refers to ingredients in general and does not specify the ingredients (i.e., stabilizers) tested in our study. Interestingly, while ingredients were rated as being relatively important to consumers, the presence of corn starch, carrageenan, MPC, or pectin was not rated as very important. These seemingly conflicting findings in our questionnaire demonstrate some of the limitations of stated-preference studies. While choice experiments also rely on stated preferences, eliciting preferences through our experiment allowed us to examine the impact of these attributes in the context of an ingredient list and in the presence or absence of textural defects. In doing so, we find that, in the case of plain yogurt, having a clean label matters, and consumers are willing to accept some level of textural defects in a clean label yogurt.

CONCLUSIONS

Price and quality are important attributes for consumers, but in our choice experiment, consumers displayed a clear preference for clean labels, specifically, a minimal ingredient list. The results of our hypothetical choice experiment reveal that, on average, consumers may be willing to pay between $2.54 and $3.53 for a clean label on a plain 32-ounce container of yogurt, and that the negative impact of textural defects can be attenuated by the positive effect of a clean label. Our latent class modeling revealed two consumer classes with similar preference patterns that would, on average, prefer to purchase yogurts with textural defects rather than opt out of purchasing yogurt entirely.
Altogether, this presents important implications for policy makers who are considering policies that reduce producers' ability to use particular ingredients (due to health or environmental concerns): consumers are willing to accept some textural deficiencies in return for a cleaner label. This is good news for food manufacturers as well, as reformulation of food products is often associated with performance challenges. Additionally, our results suggest that a marginal approach to cleaning up a food label might not be appreciated by consumers for all ingredients. Specifically, our results suggest that removing just one thickener from a longer ingredient list would only have a positive impact on consumer demand in the case of corn starch, and would not lead to an appreciable increase in consumer WTP for other ingredients. Our results are a step in the direction of examining the complex issues of clean labels, product sensory performance, and consumer demand. However, the use of hypothetical scenarios can introduce hypothetical bias into respondent choices (87), highlighting a limitation of the study. Hypothetical bias often results in inflated estimates of willingness to pay in stated preference valuation studies such as a choice experiment (88). To mitigate bias, a cheap talk script was delivered to each respondent prior to starting the experiment. Cheap talk scripts have been used as a tool for mitigating bias in choice experiments (89) and have been shown to be effective in obtaining more reliable estimates (90). As a next step in examining the impact of varying ingredients and sensory characteristics on consumer preferences, we recommend a revealed preference valuation study, such as an experimental auction, where consumers can actually taste yogurts and evaluate textural defects prior to placing a bid and potentially parting with money for real products.
Considerations for such an auction would be procuring or making yogurts with ingredient lists and sensory attributes of interest.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Oregon State University IRB board. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

SM, NS, and JL contributed to conception and design of the study. SM and NS performed the statistical analysis and wrote the first draft of the manuscript. JL wrote sections of the manuscript. All authors contributed to manuscript revisions, read, and approved the submitted version.

FUNDING

This research was supported through funding provided by BUILD Dairy.
Peer Review of "A Machine Learning Explanation of the Pathogen-Immune Relationship of SARS-CoV-2 (COVID-19), and a Model to Predict Immunity and Therapeutic Opportunity: A Comparative Effectiveness Research Study"

Introduction

Asymptomatic patients who are infected with SARS-CoV-2 have neither clinical symptoms nor abnormal chest imaging. However, these patients have the same infectivity as infected patients with symptoms [1]. Moreover, adult asymptomatic patients have been found to have the same viral loads as symptomatic patients [2]. Studies have shown that age appears to influence whether an infected person is susceptible to illness. Those under the age of 20 years have approximately half the morbidity probability of those over the age of 20 [3]. This lower probability of becoming ill from SARS-CoV-2 infection is especially interesting because young children have been found to have 10 to 100 times the viral load of older children and adults, yet disproportionately remain asymptomatic [4]. Three questions about SARS-CoV-2 have remained unanswered, and this study suggests answers to each. First, which immunological variables are statistically significant, and how important is each in predicting asymptomatic status? Second, which of those variables, if any, have a strong negative correlation, or relationship, with disease severity (ie, levels that are significantly higher in asymptomatic patients than in symptomatic patients)? And third, is there an algorithmic or formulaic model of prognostic biomarkers that can accurately predict morbidity (who will be asymptomatic if infected, and who is at risk of more severe symptoms and disease progression), and why?

Methods

This study was based on secondary data published as a supplement in Nature Medicine in June 2020 [14]. Therein, immunological factors were measured in 74 patients in the Wanzhou District of China.
They were diagnosed as SARS-CoV-2 positive by reverse transcriptase-polymerase chain reaction (RT-PCR) in the 14 days before observations were recorded. The median age of the 37 asymptomatic patients was 41 years (range 8-75 years); 22 were female and 15 were male. For comparison, 37 RT-PCR test-positive patients were selected and matched to the asymptomatic group by age, comorbidities, and sex [14]. In this study, five machine learning algorithms (machine learning being a kind of artificial intelligence employing robust, brute-force statistical calculations) were applied to a data set of 74 observations of 34 immunological factors in order to attempt three things: (1) to develop a model to accurately predict which patients will be asymptomatic or symptomatic if infected with SARS-CoV-2; (2) to determine the relative importance of each immunological factor; and (3) to determine if there is any level of a subset of immunological factors that can accurately predict which patients are likely to be immune or resistant to SARS-CoV-2. Minitab 19, version 19.2020.1 (Minitab LLC), was used to calculate means, 95% CIs, P values, and two-sample t tests of statistical significance. Correlation coefficients were also computed in Minitab via Spearman rho, since the data were distributed nonparametrically. A second classification and regression tree (CART) algorithm was also applied in Minitab to cross-validate decision tree results from Rattle in R. Minitab's CART methodology was initially described by Stanford University and University of California, Berkeley researchers in 1984 [15]. The Rattle library, version 5.3.0 (Togaware), in the statistical programming language R, version 3.6.3 (CRAN), was used to apply five machine learning algorithms (a decision tree, extreme gradient boosting [XGBoost], a linear logistic model [LLM], a random forest, and a support vector machine [SVM]) to learn which model, if any, could predict asymptomatic status, and how accurately.
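The Spearman rho screening described above can be sketched without statistical packages by ranking each variable (averaging tied ranks) and applying Pearson's formula to the ranks; this is a generic illustration, not the Minitab implementation:

```python
def ranks(values):
    """Average ranks (1-based), handling ties by averaging."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))
```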
Rattle randomly partitioned the data to select and train on 80% (n=59) of observations, validate on 10% (n=7), and test on 10% (n=7). Two evaluation methods were used: (1) plots of linear fits of the predicted versus observed categorization; and (2) a pseudo-R2 measure calculated as the square root of the correlation between the predicted and observed values. Pseudo-R2 results were evaluated twice, each time using for evaluation the data that had been held back by random selection during partitioning, and the two accuracy findings were averaged for the final results. Rattle's rpart decision tree was also used to identify whether any levels of one or more immunological factors could accurately diagnose someone as asymptomatic (ie, via rules). The decision tree results reported here used 20 and 12 as the minimum number of observations necessary in nodes before a split (ie, minimum split). The trees used 7 and 4 as the minimum number of observations in a leaf node (ie, minimum bucket). The random forest analysis in Rattle began by running a series of differently sized random forest algorithms, ranging from 50 to 500 decision trees, to learn the optimum number of trees to minimize error. Each random forest considered a minimum of six variables, which was closest to the square root of the number of statistically significant variables (ie, 34). The lowest error rate occurred at approximately 200 decision trees. The five machine learning models and CART classification trees were run both including and excluding SCGF-β, to identify whether there were alternative prognostic biomarkers and levels in the immune profile that could accurately classify and predict SARS-CoV-2 immunity.

Results

In total, 34 of the 53 immunological factors (64.2%) were indicated as statistically significant by P values <.05 from a Spearman rho correlation. Of those 34 factors, 31 were statistically significant with P values <.01.
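The pseudo-R2 evaluation (square root of the correlation between predicted and observed values) can be sketched as follows; the predicted probabilities here are invented for illustration:

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def pseudo_r2(predicted, observed):
    """Square root of the predicted/observed correlation, as described;
    negative correlations are floored at zero for the sketch."""
    return math.sqrt(max(pearson(predicted, observed), 0.0))

preds = [0.9, 0.2, 0.8, 0.1]   # hypothetical predicted class probabilities
obs = [1, 0, 1, 0]             # observed asymptomatic (1) / symptomatic (0)
score = pseudo_r2(preds, obs)
```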
Conversely, 35.9% of the 53 immune factors had no statistically significant association with whether a patient was asymptomatic or symptomatic to SARS-CoV-2 (see Table 1). When SCGF-β was included in the machine learning analysis, two algorithms predicted and classified SARS-CoV-2 immunity or resistance (as evidenced by being asymptomatic) with 100% accuracy: a decision tree and XGBoost. When SCGF-β was excluded, a random forest algorithm predicted and classified SARS-CoV-2 asymptomatic and symptomatic cases with 94.8% AUROC (area under the receiver operating characteristic) curve accuracy (95% CI 90.17%-100%) (see Table 2). Notably, both the rpart decision trees and CART classification trees independently identified three prognostic biomarkers at specific levels that could classify asymptomatic and symptomatic cases with 95%-100% accuracy. When SCGF-β was included, all asymptomatic cases had levels >127,656.8, while all symptomatic cases had levels <127,656.8 (Figure 1). When SCGF-β was excluded, as a type of contingency analysis to better understand prognostic biomarker levels in other factors, IL-16 accurately classified asymptomatic cases (>44.59) and symptomatic cases (<44.59) in 90.4% of the cases. In the remaining 9.6% of cases where IL-16 was >44.59, all had macrophage colony-stimulating factor (M-CSF) >57.13 (Figure 2). Two-sample t tests, interquartile ranges, outliers, and levels between asymptomatic and symptomatic patients were computed for the four statistically significant factors with the highest positive and negative correlation coefficients, to ordinally rank factors by their correlation coefficients (Figure 3).

Principal Findings

While it has been speculated that stem cells may play a role in resistance to SARS-CoV-2 and other zoonoses, prior research has focused on different stem cell involvement than SCGF-β [16][17][18].
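The reported tree rules can be restated directly as a classifier. The thresholds are taken from the text; the ordering of the IL-16 and M-CSF checks is one reading of the contingency rule and should be treated as a sketch:

```python
def classify(scgf_beta=None, il16=None, mcsf=None):
    """Apply the reported biomarker thresholds; returns a predicted label."""
    if scgf_beta is not None:
        # With SCGF-beta available, a single split suffices.
        return "asymptomatic" if scgf_beta > 127656.8 else "symptomatic"
    # Contingency rules with SCGF-beta excluded.
    if il16 is not None and il16 > 44.59:
        return "asymptomatic"
    if mcsf is not None and mcsf > 57.13:
        return "asymptomatic"
    return "symptomatic"
```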
Previous research has also established that stem cells can inhibit viral growth by expressing IFN-γ-stimulated genes, and they have been particularly effective against influenza A H5N1 virus and the resulting lung injuries [19,20]. Stem cell therapy has been hypothesized as a treatment for SARS-CoV-2; however, until now, no record in the literature has been specific as to which factors may influence SARS-CoV-2 infections, favorably or unfavorably, or to what degree [21]. Researchers have recently found that symptomatic patients generally have a more robust immune response to SARS-CoV-2 infection, culminating in cytokine storms in the worst cases. Conversely, asymptomatic patients have been found to have a weaker immune response [14]. Because infections are causal to immune response, of particular interest in this study were the most impactful immune-related variables that negatively correlated with symptomatic status (ie, variables that were greater for asymptomatic patients than for symptomatic patients) (marked with a superscripted "c" in Table 1). This paper's overarching importance is the identification of immunological factors for diagnoses, treatments, and preclinical prophylactic immune-based approaches to SARS-CoV-2 in the first 7 months of a pandemic that experts now opine will last decades [22]. Immunostimulant approaches are especially valuable because, unlike antivirals and vaccines, they may be given later in the course of the disease to optimize outcomes [21]. The primary importance of this work is machine learning algorithmic models that can predict with high accuracy whether someone, once infected, will be asymptomatic or symptomatic from SARS-CoV-2. This knowledge gives clinicians new tools to identify, in advance, populations who appear to be at higher risk of danger from the virus. Such tools, especially once reproduced in a more extensive study, may also inform policy decisions as to who needs to shelter in place.
Finally, because of the scale of this pandemic and practical constraints as to how many vaccination doses can be manufactured and how quickly this can be done, such tools may become valuable in prioritizing vaccine administration to those in greatest need because they have a higher biological and immunological risk. This work's secondary importance is a description of the cytokine and chemokine profile that is associated with asymptomatic or symptomatic SARS-CoV-2 infections. It enables a better understanding of the pathogen-immune relationship. These profiles provide insights into the biological pathways critical for SARS-CoV-2 progression. As one example, stem cells secrete multiple factors that regulate immune cells and modulate them to restore tissue homeostasis. These results suggest that higher levels of SCGF-β (stem cell growth factor-beta) may better control immune responses to prevent the more robust reactions universally associated so far with highly symptomatic patients and, further, prevent high-morbidity and high-mortality cytokine storms. A better understanding of the pathogen-immune relationship may enable researchers to prevent and treat patients with SARS-CoV-2 infection more effectively with therapeutics currently untested and unused. This knowledge may also extend to similar zoonotic coronaviruses in the future. The tertiary importance of this work is identifying three immune factors, and the precise levels at which they appear to be prognostic biomarkers, as to whether someone, once infected with SARS-CoV-2, will be immune or resistant, as demonstrated by being asymptomatic or not. These insights also suggest new candidates for therapeutic research focused on the relatively newly identified and ill-understood SCGF-β and its role in the immunological process.
The quaternary importance of this work is further proof that machine learning methods can accurately and quickly identify critical elements of disease dynamics that accelerate understanding and improve outcomes during pandemics. Moreover, it is an example of how a "dry" data science laboratory can link to clinical or "wet" laboratory science for real-world applications.

Limitations

This study has several limitations. First, it is unknown from the data set how many days passed between exposure to the virus and immunological testing, or whether it was universally the same number of days. Second, because immune profiles are temporally sensitive, ideally, several tests would have been taken over several days, which did not occur (R Jankord, PhD, July 22, 2020). Third, immunological signaling and processing are multifactorial and complex. Therefore, it is unclear why SCGF-β levels are categorically high in asymptomatic patients and low in symptomatic patients, or whether they are causal to SARS-CoV-2 response. Fourth, combinatorial and sequential analysis of these immunological elements may be an important future research area to optimize therapeutic research outcomes. Fifth, at least one study in a leading journal, The Lancet, found that Chinese SARS-CoV-2 case data may have been misreported by as much as 400% [23]. That study, and much higher case and fatality numbers in over 200 countries, have created distrust and skepticism of SARS-CoV-2-related data originating from China. Future research could ameliorate these limitations and focus on a more extensive study group to attempt to reproduce the results. Moreover, a prospective case-control study of patients with decreased SCGF-β levels and supplementation that was protective against SARS-CoV-2 severity and symptoms would be invaluable validation.
Conclusion

One implication of these findings is that if we can predict the 80% of society who may be immune or resistant to SARS-CoV-2, or asymptomatic, it may profoundly impact public health intervention decisions as to who needs to be protected and by how much. If, for example, 80% of the shelter-in-place orders and the resultant dramatic reduction in economic and social activity could have been prevented by accurately predicting who is at low risk of infection, the economic benefits alone may have been valued in US$ trillions. The second implication of these findings is evidence that elevated levels of SCGF-β, IL-16, and M-CSF may have a causal relationship with SARS-CoV-2 immunity or resistance, and may have utility as diagnostic determinants to (1) inform public health policy decisions to prioritize and reduce shelter-in-place orders to minimize economic and social impacts; (2) advance therapeutic research; and (3) prioritize vaccine distribution to benefit those with the greatest need and risks first.
A comparison of methods for the non-destructive fresh weight determination of filamentous algae for growth rate analysis and dry weight estimation

The determination of rates of macroalgal growth and productivity via temporal fresh weight (FW) measurements is attractive, as it does not necessitate the sacrifice of biomass. However, there is no standardised method for FW analysis; this may lead to potential discrepancies when determining growth rates or productivity and make literature comparison problematic. This study systematically assessed a variety of lab-scale methods for macroalgal FW measurement for growth rate determination. Method efficacy was assessed over a 14-day period as impact upon algal physiology, growth rate on the basis of FW and dry weight (DW), nitrate removal, and maintenance of structural integrity. The choice of method is critical to both accuracy and inter-study comparability of the data generated. In this study, it was observed that the choice of protocol had an impact upon the DW yield (P values = 0.036-0.51). For instance, those involving regular mechanical pressing resulted in a >25% reduction in the final DW in two of the three species studied when compared to algae not subjected to any treatment. This study proposes a standardised FW determination method employing a reticulated spinner that is rapid, reliable, and non-destructive and provides an accurate growth estimation.

Introduction

Macroalgae encompass a phylogenetically diverse range of macroscopic plants, mainly of marine origin. They are key constituents of marine ecosystems and are a commercially and environmentally valuable natural resource. For instance, algae are renowned for their potential as a feedstock for renewable bioenergy and are already mass cultivated for the food and phycocolloid industries. Furthermore, they may be grown for wastewater amelioration purposes or bio-prospected for value-added products (Fleurence 1999; Zemke-White and Ohno 1999; Hafting et al.
2012; Borowitzka 2013; Schiener et al. 2015). The increased realisation of the commercial potential of macroalgae as a direct product or as a feedstock for further processes has necessitated the optimisation of current practices and the development of a range of new tools and cultivation approaches (Griffiths et al. 2016). Furthermore, determination of the impact of abiotic and biotic conditions on biomass productivity during an experimental timeline requires the development of a set of standardised methods, which allows comparisons to be made between both treatments and experiments. Determining algal biomass productivity through its temporal growth rate is one of the most fundamental aspects of algal research in biological, environmental, and engineering fields. For example, monitoring algal growth of taxa, including Cladophora, in Integrated Multi-Trophic Aquaculture (IMTA), where they perform a key bioremediation role (de Paula Silva et al. 2008), or for potential biomass applications such as bioenergy (Lawton et al. 2013), is critical to assessing both performance and productivity. Yet, there remains no standardised approach for determining growth. The primary parameters to consider when quantifying biomass are reproducibility, reliability, and applicability. Other desirable facets of a quantification method include ease of use/speed and minimal/no damage to the biomass, where the latter issue is especially relevant for the accurate assessment of growth rates. Errors associated with determining total biomass, or growth rate, can lead to inaccuracies in estimating productivity and economic potential, as well as difficulties with literature comparison. It is therefore important to standardise procedures that are both accurate and reliable.

[Electronic supplementary material: the online version of this article (doi:10.1007/s10811-017-1157-8) contains supplementary material, which is available to authorized users.]
Furthermore, the method deployed has to be applicable for the species being studied as macroalgae, and algae in general, have varied phenotypes and growth habits. These characteristics effectively dictate the approach that may be applicable. In most cases, this is straightforward: for instance, many micro-algal taxa are unicellular and their growth can be quantified by counting the cells in a given volume of water, e.g. using either a haemocytometer, a Coulter counter (Guillard and Sieracki 2005; Marie et al. 2005), or alternatively by methods employing absorbance (Das et al. 2011) or light scattering (Yamaoka et al. 1978). For multicellular algae, these approaches are unsatisfactory as optical methods require a uniform suspension of material so that a linear relationship with biomass (weight or cell number) may be determined. In contrast, for large species of seaweed, such as members of the Laminariales, changes in biomass can be determined by temporally measuring the length of the fronds, which can reach more than 60 m in length (Kain 1982; Bold and Wynne 1985; Dean and Jacobsen 1986; Hepburn and Hurd 2005). However, the morphology of the thalli of some macroalgal species can be quite varied, ranging from simple blades to more structurally complex forms made up of parenchyma and corticated filaments (Hurd et al. 2014). Therefore, determining the biomass of species with variegated or multifarious thalli can be complex. A commonly employed method to determine growth rate is based on the dry weight (DW) of the organism, usually achieved by drying in an oven, freeze drier, or by the sun (Mata et al. 2010; Sharma et al. 2013). Although this approach is reliable, simple, and reproducible, the drawback is that it involves sacrificing the whole of the biomass sample, making the determination of growth rates impossible over a time course. When assessing temporal growth using DW, the problem of sacrificing samples can be overcome by utilising multiple replicates.
However, this approach has its own constraints and pitfalls, such as the time taken to ensure the sample has fully dried, a requirement for a large working area and other resources, as well as potential limitations in the availability of biological material where sacrificing material would compromise the accuracy of the experiment. Image analysis is a possible option as a method to determine yields and growth rates. This approach has previously been successfully applied to macroalgae, where individual Cladophora filaments on agar have been measured temporally using light microscopy (de Paula Silva et al. 2008). However, from a practicality perspective, this approach is better suited for screening projects involving individual filaments. Macleod et al. (2016) used alternative imaging software for the analysis of biofouling coverage on buoys as proxies for renewable energy structures in the marine environment. This approach, although simple and time efficient, could only provide data on coverage and not on the biomass of the adhering flora and fauna. Although imaging software is becoming a lot more powerful, making these techniques more readily applicable, there are still constraints, including the time and resources required for analysis. Additionally, the three-dimensional and often fractal nature of seaweed makes determination of any correlation between each image and its corresponding DW, or productivity, challenging. In many environmental and applied studies, both mass and growth rates of macroalgae are expressed as fresh weight (FW) (Gordon and McComb 1989; Peckol and Rivers 1995; Rivers and Peckol 1995). Despite the fact that FW is widely assessed, no standardised method has been agreed and thus comparisons between different studies are challenging. For instance, the FW of the chlorophyte Cladophora has been determined employing a variety of approaches, the most common of which is drying with a sorbent material, i.e.
filter paper (Robinson and Hawkes 1986; Planas et al. 1996; Pinowska 2002; Lamai et al. 2005). However, variations in the material used, application time, and pressure will inevitably lead to differences in the volume of water removed. In some studies, FW is mentioned but no method of determination is reported (Ozimek et al. 1991; Choo et al. 2004; Lawton et al. 2013). On this basis, it may not be feasible, or valid, to draw conclusive inferences when comparing data from different studies. Another key consideration is the morphology of algal species. Filamentous algae are multicellular, multifarious, and often quite fragile (Robinson 1983), which makes accurate growth rate and biomass quantification problematic. If the viability of the algae is impaired during fresh weight quantification, for instance due to excessive pressure or dehydration, this may have major implications on the accuracy of any assessments of subsequent growth. A promising, yet seldom employed, method with seemingly low mechanical impact involves dewatering filamentous algae using a reticulated spinner (RS). In their respective in situ studies on ecology and IMTA, both Peckol et al. (1994) and de Paula Silva et al. (2008) employed this approach to remove excess water from Cladophora. However, these studies did not detail the number or duration of iterations with the reticulated spinner, thus making a comparison difficult due to the possibility of non-standardisation in approach. Furthermore, FW was only assessed at the beginning and end of each experiment lasting 10-14 days, and the daily growth rate inferred from the two data points. This non-intrusive method could have the potential to be used periodically during an experiment to determine growth rate, with little-to-no physiological detriment to the organism, thus providing a higher degree of resolution to productivity data.
The ability to accurately determine FW and productivity is the cornerstone of any algal research, and the development and use of a robust, standardised lab-scale method is an absolute necessity. This study aimed to investigate the suitability of a variety of methods for the FW determination of filamentous macroalgae employing model strains of Cladophora and Spirogyra. This is the first study of its kind to make a concerted effort to assess FW methodologies in terms of reliability and reproducibility, as well as their biological impact in terms of viability, growth, and nutrient uptake. Furthermore, the objective was to adopt a dewatering technique that is a good indicator of DW and has no detrimental impact upon the algae over a time course, therefore maintaining their original experimental purpose.

Materials and methods

The marine taxa were cultivated in 250 mL Guillard's F/2 medium (see http://www.ccap.ac.uk/pdfrecipes.htm), based on artificial seawater at 33.5 g L−1 (Instant Ocean, Nemo's World, UK) (Guillard and Ryther 1962). The freshwater isolate S. varians was grown in 250 mL of Jaworski's Medium (see http://www.ccap.ac.uk/pdfrecipes.htm). The cultures were incubated in an illuminated shaker (Sartorius Stedim Biotech, Germany) at 24°C, under an 18:6 h (light/dark) photoperiod with 30-40 μmol photons m−2 s−1 of photosynthetically active radiation (PAR 400-700 nm) (LM-100 Light Meter, Amprobe, Germany) at 100 rpm. After a 7-day acclimation period, 35.7 mg FW sub-samples, determined employing the beaker + reticulated spinner (B+RS) method described in Table 1, were inoculated into triplicate 100 mL flasks containing 50 mL of the experimental media and then incubated as outlined above for 14 days. Samples were aseptically removed in a laminar flow cabinet (MSC Advantage, Thermo Scientific) five times over the 14-day growth period for FW and nutrient determination.
Fresh weight determination

Seven different techniques for algal FW determination were assessed in this study. The different biomass dewatering methods involved centrifugation with a reticulated spinner, gently blotting with filter paper, agglomeration using a perforated crucible, and pressing between microscope slides, or a combination of the above. The methods used are described in detail in Table 1. During the 14-day incubation period, the algal biomass was removed a total of five times and the different methods applied, followed by gravimetrical weighing with an analytical balance (PS-60, Fisher Brand, UK) for FW determination. After each assessment, the algal samples were transferred back to their original flasks and returned to the standardised cultivation regime.

Optimisation of reticulated spinner FW determination

Some of the methods tested removed excess water from algal biomass by centrifugation using a small Chef'n salad spinner (Electronic Supplementary Material), referred to from hereon as the reticulated spinner (RS). This operates using a lever, which, when pressed, rotates an internal basket. The basket has a diameter of 370 mm, with elliptical or circular perforations of maximal and minimal sizes of 18.5 mm × 3 mm to 3 mm × 3 mm, respectively. The optimal duration of dehydration using the reticulated spinner was determined for all algal species. Initially, samples were measured using the B method, as described in Table 1, and then sequentially spun in 15 s intervals, for a total of 120 s, with the FW determined after each step by weighing using an analytical balance (PS-60, Fisher Brand, UK).

Dry weight determination

After 14 days, all algal samples were harvested and rinsed with deionised water to remove extracellular salts and nutrients. Excess water was removed using the B+RS method for C. coelothrix and C. parriaudii or the perforated crucible + reticulated spinner (PC+RS) method for S. varians (Table 1), and the samples were frozen.
The frozen algal biomass was then freeze-dried overnight (Modulyo 4K freeze dryer), or until a <5% variation in final mass was achieved. The lyophilised biomass was weighed gravimetrically using an analytical balance (PS-60, Fisher Brand, UK) to determine its DW.

Microscopy

The effect of the procedures on gross cellular morphology was examined using an inverted microscope (Eclipse TE2000-U, Nikon, UK). After 14 days, samples were mounted on a microscope slide with a small volume of growth medium, to avoid desiccation. Filaments on the periphery of the culture were selected for ease of visualisation and were examined under a 100× objective lens. Images were captured using a CoolSNAP HQ2 camera (Photometrics) assisted by MetaMorph® Microscopy Automation and Image Analysis Software (Molecular Devices).

Residual nutrient determination

The concentration of nitrate in the culture media was measured for each of the five sampling days, as well as day 0. Soluble nitrate was measured by ion chromatography (883 Basic IC Plus, Metrohm, UK), equipped with a peristaltic pump, an 863 Compact Autosampler, a Metrosep A Supp 5 250/4.0 mm column, and an 850 Professional IC conductivity detector. The eluent employed was 3.2 mM sodium carbonate and 1 mM sodium bicarbonate per L of dH2O. An MSM Suppressor, operated at 10 MPa, was used to suppress the eluent, using 0.1 M H2SO4, 0.1 M oxalic acid, and 5% (v/v) acetone per L of dH2O as the regenerant. Blanks and internal standards were analysed periodically to ensure the accuracy of the method.

Optimised method: validation of the temporal FW/DW relationship

To ensure that FW growth rates determined with the optimal method from Table 1 were an accurate measurement of biomass growth, the constancy of the relationship between FW and DW growth rates was determined. A total of 15 flasks of each algal species were inoculated and incubated under the standard regime as outlined above.
The algal biomass in these flasks was harvested on days 0, 3, 5, 10, and 14 following the B+RS method for Cladophora sp. or the PC+RS method for S. varians (Table 1) and the FW determined gravimetrically using an analytical balance (PS-60, Fisher Brand, UK). Three flasks of each algal species were subsequently sacrificed for the determination of their DW, as outlined above. Growth rates for FW and DW were determined according to the formula prescribed by Yong et al. (2013):

SGR (% day−1) = [(Wt/W0)^(1/d) − 1] × 100

where Wt and W0 are the final and initial mass and d is the time (days).

Statistical analysis

All experiments were performed in triplicate and the experimental error was calculated and expressed as one standard deviation (SD). The significance of difference in the DW yield of macroalgal samples periodically subjected to a variety of dewatering methods was obtained by one-way ANOVA with Tukey's post hoc analysis (P < 0.05; n = 3). Pearson correlation coefficients, r, were used to assess the temporal relationship between FW and DW. All statistical analysis was performed using Minitab Statistical Software version 17.

Results and discussion

In this study, a variety of methods, described in Table 1, were assessed for the determination of the FW of three species of filamentous macroalgae: C. coelothrix, C. parriaudii, and S. varians (Electronic Supplementary Material).

Table 1 (excerpt). Method, abbreviation, and description:
- Beaker + filter paper (B+FP): B, followed by gently pressing the biomass with GF/F filter paper (FP), then weighed gravimetrically.
- Beaker + reticulated spinner + filter paper (B+RS+FP): B+RS, followed by gentle pressing of the biomass with GF/F filter paper, then weighed gravimetrically.
- Beaker + cavity microscope slide (B+MS)a: B, followed by placing the biomass between two cavity microscope slides (MS) to remove excess water, then weighed gravimetrically.
- Perforated crucible (PC)a: cultures were poured through a perforated crucible (Coors Gooch crucible), then weighed gravimetrically.
- Perforated crucible + reticulated spinner (PC+RS)a: PC, followed by reticulated spinner centrifugation with optimised time (see section below), then weighed gravimetrically.
- Positive control (+C): the positive control was only weighed at the end of the experiment; therefore, it remained unperturbed during the experimental period.
a Employed with S. varians

These three species have differing physical appearances and growth characteristics. Cladophora coelothrix grows quite slowly in tightly knit "clusters" with thick cell walls, whereas C. parriaudii grows quickly in a loose skein. Cladophora has been described as an "ecological engineer": they are a robust, bloom-forming species and have shown high removal rates of nutrients and heavy metals (Deng et al. 2006; Deng et al. 2009; de Paula Silva et al. 2012; Zulkifly et al. 2013; Liu and Vyverman 2015). Furthermore, they are resistant to grazers (Zulkifly et al. 2013), making them strong candidate species for wastewater bioremediation (de Paula Silva et al. 2012). Spirogyra varians has a central core of biomass, from which helical-shaped filaments grow toward the water surface. These filaments are very fragile, tending to fragment when disturbed. The three species were selected as model organisms to explore the applicability of dewatering methods across a range of phenotypes. Systematic measurement of FW, final DW, FW/DW ratio, and NO3− uptake, together with microscopic image analysis, was used to ascertain the viability, growth, and metabolic activity of the algae periodically subjected to the different harvesting methods.

Optimisation of the reticulated spinner

Some of the harvesting methods tested employ a reticulated spinner, which has the ability to rapidly remove extracellular water from filamentous algae and hence facilitate accurate FW determination.
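The specific growth rate calculation prescribed by Yong et al. (2013) and used in the Methods can be sketched in a few lines. This is a minimal illustration; the function name and the example masses are assumptions, not data from this study:

```python
# Specific growth rate (% per day) from an initial and a final mass,
# following SGR = [(Wt/W0)**(1/d) - 1] * 100 (Yong et al. 2013).
# The same formula applies to fresh weight (FW) or dry weight (DW) series.

def specific_growth_rate(w0: float, wt: float, days: float) -> float:
    """Return the specific growth rate in % per day."""
    if w0 <= 0 or wt <= 0 or days <= 0:
        raise ValueError("masses and duration must be positive")
    return ((wt / w0) ** (1.0 / days) - 1.0) * 100.0

# Hypothetical example: a 35.7 mg FW inoculum doubling over 14 days.
print(round(specific_growth_rate(35.7, 71.4, 14), 2))  # 5.08 (% per day)
```

Because the formula is multiplicative, computing it from FW is valid as a proxy for DW growth only while the FW/DW ratio stays constant, which is exactly the relationship the validation experiment above was designed to check.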
In order to ensure a consistent level of water removal, the operation of the reticulated spinner was standardised. The FW of the three algal species was determined after each 15 s spin, up to a maximum duration of 120 s (Fig. 1). There was a reduction in the overall weight corresponding to 77-81% of the original wet weight, irrespective of the species studied. This indicated the potential applicability of the method to a wide range of filamentous taxa. The majority of water removal, i.e. 61-68%, occurred within the first 15 s. This was followed by a reduction in the rate of weight change, with minimal further water removal after 90 s of operation, corresponding to a reduction in mass of up to 75-80%. Additional spinning, beyond 90 s, resulted in a further reduction in mass of less than 1.5% for all species tested. A spinning time of 90 s was therefore adopted for the reticulated spinner. It is recommended that a similar approach is employed when implementing and standardising this method for different algal taxa, varying amounts of algal biomass, or when cultivating in very different conditions, such as extremes of salinity (Angell et al. 2015).

Dewatering efficiency for the tested methods

Although DW is an accurate measure of biomass, its determination necessitates the sacrifice of the culture. However, FW determination is non-destructive and can be reiterated across a time series to give high-resolution productivity data. In this study, cultures were harvested periodically over a 14-day period to obtain FW, and on the final day, DW was also determined and an FW/DW ratio obtained (Fig. 2). This ratio would be expected to inform how strong an indicator of biomass a particular dewatering method is: the lower the ratio, the more efficient the dewatering method should be. However, the size of the error bars will also indicate how reproducible each method is, therefore providing a more accurate and consistent measure of actual productivity.
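The standardisation criterion described above (adopt the shortest cumulative spin duration after which all further spinning removes less than 1.5% of the initial mass) can be sketched as follows. The mass series below is hypothetical and only mimics the shape of the reported curves:

```python
# Pick the shortest cumulative spin time after which all remaining spinning
# removes less than `threshold` (as a fraction of initial mass) in total,
# mirroring the 90 s plateau criterion described in the text.

def plateau_spin_time(times, masses, threshold=0.015):
    """times: cumulative spin durations (s); masses: FW (mg) after each spin."""
    initial = masses[0]
    for t, m in zip(times, masses):
        # mass still to be removed by all subsequent spins, as a fraction
        remaining_loss = (m - masses[-1]) / initial
        if remaining_loss < threshold:
            return t
    return times[-1]

# Hypothetical series: rapid loss in the first 15 s, plateau after 90 s.
times = [0, 15, 30, 45, 60, 75, 90, 105, 120]
masses = [100.0, 35.0, 30.0, 27.0, 25.0, 23.5, 22.0, 21.2, 20.8]
print(plateau_spin_time(times, masses))  # 90
```

A calibration like this would need to be repeated per species and biomass quantity, as the text recommends, since the plateau point depends on how much interstitial water the thallus holds.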
Of the methods tested, B and PC required the least mechanical effort, but they resulted in high FW/DW ratios for the three species, ranging between 26-44 and >60, respectively, for Cladophora sp. and S. varians (Fig. 2). Although one would anticipate that these methods would not result in any physical damage to the alga, and therefore have no deleterious effects on metabolic function or growth, they did have a high degree of error that was associated with the greater volume of unpredictable water carry-over, making these methods unsuitable for implementation. Conversely, beaker + filter paper (B+FP) and beaker + reticulated spinner + filter paper (B+RS+FP) have FW/DW ratios of <10 for S. varians and <4 for both species of Cladophora, with low error throughout. These methods involved lightly pressing the biomass with an absorbent filter paper (Table 1) and resulted in the highest removal of water from the biomass (Fig. 2). The B+RS method, which removes water centrifugally, also achieved a high and consistent degree of residual water removal, with ratios of 6.3 (± 0.3) and 8.6 (± 0.2) for C. coelothrix and C. parriaudii, respectively. Methods B+RS, beaker + cavity microscope slide (B+MS), and PC+RS all had a similar degree of water removal when employed with S. varians: ratios were 20.3 (± 0.05), 18.6 (± 6.1), and 19.3 (± 4.2), respectively. The final DW obtained for the different harvesting methods is shown in Fig. 3. The +C corresponds to biomass grown and harvested without any additional dewatering procedures being applied and hence acted as a positive control. Variations in the final DW were observed for the different methods adopted, which indicated that there was an impact on the algal growth. The DW obtained for the different treatments of C. coelothrix ranged between 13.13-16.43 mg and no statistically significant differences in the yield were observed (P value = 0.51).
This species grows in tightly knit "clusters" and has a basal cell wall thickness of up to 15 μm, which may make it resistant to mechanical damage (Leliaert and Coppejans 2003). Although the choice of FW method was not significant for C. parriaudii (P = 0.102), the DW yield was reduced from 15 mg in the positive control to 11.3 and 9.6 mg when employing methods B+FP and B+RS+FP, respectively. As shown in Table 1 and Fig. 2, methods that involve pressing the biomass with absorbent filter paper tend to have a low and reproducible FW/DW ratio, indicating good water removal. However, high levels of water removal and a low growth indicated that damage occurred during the sampling and FW determination procedures. The less dense growth habit exhibited by C. parriaudii may make it more susceptible to physical damage when harsher dewatering approaches were applied. For instance, commonly used large-scale harvesting techniques, such as centrifugation and cross-flow membrane filtration, can exert large amounts of shear stress that can damage and lyse micro-algal cells (Chen et al. 2011; Bilad et al. 2013). In the case of S. varians, the choice of FW method had a significant impact upon the DW yield (P value = 0.036). It was observed that the biomass of S. varians was prone to fragmentation when disturbed, and obtaining sufficient biomass to ascertain FW was problematic for several of the methods employed. For instance, disintegrating into a suspension of short filaments meant that the algae would tend to pass through the apertures of the reticulated spinner and were challenging to gently blot with a piece of filter paper. The use of a cavity microscope slide (B+MS) was intended to reduce filament loss and to minimise damage caused by actively blotting or from effects of desiccation. A pre-collection step, involving pouring the contents of the flask through a perforated crucible (PC in Table 1), was incorporated into the harvesting protocol for this alga.
The apertures were small enough to retain most of the biomass and agglomerate it, allowing it to then be subjected to a further dewatering method with the reticulated spinner. As can be observed (Fig. 3), the PC method had no obvious impact on the biomass levels of S. varians obtained when compared to the control. However, implementing the PC step prior to utilising the RS method increased the DW yield from 10.8 to 12.4 mg, compared to employing the B+RS technique alone.

[Fig. 2 caption: Final fresh weight to dry weight ratio of C. parriaudii, C. coelothrix, and S. varians, grown for 14 days (100 rpm, 24°C, light intensity of 30-40 μmol photons m−2 s−1, 18:6 h L/D photoperiod), periodically harvested, and dewatered following the methods: beaker (B), beaker + reticulated spinner (B+RS), beaker + filter paper (B+FP), beaker + reticulated spinner + filter paper (B+RS+FP), beaker + cavity microscope slide (B+MS), perforated crucible + reticulated spinner (PC+RS), and perforated crucible (PC). More detailed descriptions of each method can be found in Table 1 (n = 3, error bars denote 1 SD).]

[Fig. 3 caption: Final dry weight (DW) of C. parriaudii, C. coelothrix, and S. varians, grown for 14 days (100 rpm, 24°C, light intensity of 30-40 μmol photons m−2 s−1, 18:6 h L/D photoperiod), periodically harvested, and dewatered following the methods: beaker (B), beaker + reticulated spinner (B+RS), beaker + filter paper (B+FP), beaker + reticulated spinner + filter paper (B+RS+FP), beaker + cavity microscope slide (B+MS), perforated crucible + reticulated spinner (PC+RS), and perforated crucible (PC). More detailed descriptions of each method can be found in Table 1 [n = 3 (except S. varians "+C", n = 8), error bars denote 1 SD]. For each species, means that do not share a letter are significantly different from one another, P < 0.05.]

Variation in the FW/DW ratio is dependent upon the growth conditions. For instance, Angell et al.
(2015) found that the FW/DW of Ulva ohnoi was greatest when cultivated in low to optimal salinities and lowest when exposed to high salinity. This difference in ratio was most likely caused by a change in osmotic potential. Care should be taken when determining FW/DW across a range of environmental variables or cultivation conditions. However, this is not the case in the present study, as the algae were grown under the same conditions. The FW/DW ratio was also found to depend upon the dewatering methods applied. Those involving spinning (B+RS, B+RS+FP, and PC+RS) or blotting with filter paper (B+FP, B+RS+FP) will result in a lower FW/DW ratio than those that apply minimal pressure, such as pouring through a perforated crucible (PC). Furthermore, the FW/DW ratio obtained and its degree of error will also depend upon the species or morphology of the alga to which it is applied. For instance, the FW/DW values varied between species using the same method due to differences in water retention, both intra- and extracellularly. Finally, the DW yield is also species specific. The choice of dewatering method will have minimal impact upon robust cultures with thick cell walls or protective growth habits, such as C. coelothrix. In contrast, fragile species like S. varians are more strongly influenced by the choice of dewatering method, with more stringent methods compromising the viability of the culture. Furthermore, S. varians requires a pre-collection step to ensure the minimisation of biomass losses, which would further reduce the DW yield.

Physiological assessment

The reduced DW yields observed for some of the species may be due to the viability of the biomass being compromised as a result of the different protocols employed. Images of the harvested algae subjected to methods +C, B+RS or PC+RS, and B+FP were taken to ascertain whether the algae showed any physical damage (Fig. 4a-i).
Healthy, undamaged filaments were observed in the positive control treatment for all three species (Fig. 4a, d, and g). The filaments were considered to be phenotypically normal as they exhibited the characteristic large breeze-block type cells, with typical green colouration throughout the cells. Furthermore, C. coelothrix displayed some branching, indicative of growth (Fig. 4a). Cladophora cultures that were periodically harvested using the B+RS method (Fig. 4b, e) and S. varians harvested using the amended PC+RS method (Fig. 4h) were similar in appearance to the positive controls, with only some superficial damage visible for C. coelothrix. In contrast, algal cultures periodically harvested using the B+FP treatment (Fig. 4c, f, and i) displayed obvious damage, with their cellular contents having been expelled and with chloroplasts observed in large, often discoloured conglomerates attached to the outside of the cell wall. Although the absorbent filter paper removed superficial water, it was assumed to have caused some shear or mechanical stress upon the organism in the process. The greater parity in the FW/DW ratio for methods employing a filter paper (Fig. 2) was potentially due not only to the removal of superficial and interstitial water; this approach may also have removed intercellular fluid, resulting in cellular injury, as the cells diminished in size and were in some cases devoid of contents. The image analysis evidencing the presence or absence of mechanical or physical damage to the algal cellular morphology is in agreement with the corresponding DW data (Fig. 3). The methods employed to determine FW growth might not be appropriate if they adversely impact upon the viability of the cells. Although methods B+FP and B+RS+FP offer a good estimation of the DW yield, this comes at a cost. In addition to a reduction in the DW yield, visual imaging indicated that the B+FP technique clearly damaged all cultures tested.
Given the methodological similarity between B+FP and B+RS+FP, it may be inferred that employing the B+RS+FP method results in comparable levels of cellular damage. On the other hand, B+RS and PC+RS produced biomass yields similar to the +C for all three species, whilst providing an accurate estimation of DW and with negligible obvious damage to the algae. Disparities in nutrient uptake were observed depending on the FW assessment approach employed, further indicating that under some treatment regimes, physiological damage had occurred (Fig. 5). For all species tested, the positive control cultures demonstrated a high capacity to remove NO3− from the media, with ∼45, 55, and 65% removal for C. coelothrix, C. parriaudii, and S. varians, respectively (Fig. 5a-d). In general, cultures subjected to the mildest dewatering methods (Table 1/Fig. 2) demonstrated the highest nitrate removal capacity: B with 59% and B+RS with 45-62% removal for both species of Cladophora, whereas 76 and 95% NO3− removal was observed for S. varians with methods PC and PC+RS, respectively. Conversely, algae subjected to protocols that featured mechanical pressing (B+FP, B+RS+FP, and B+MS) were amongst those with the lowest rates of nutrient uptake. This indicated that the harsher methods had a detrimental effect on algal metabolism. This was further exemplified by the discrepancy in nutrient uptake between algae subjected to the different FW methods after day 2, which became increasingly pronounced with each successive harvest. Conversely, in comparison with the +C, algae treated using the B+RS or PC+RS protocols had similar nutrient uptake capabilities for all species tested. This suggests that these harvesting methods have little-to-no impact on the physiological integrity of the organism. This aspect is particularly important for small-scale algal systems where routine sampling is required and sampled algae are returned to the cultivation system.
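The nitrate-removal percentages compared above can be reproduced from media concentrations with simple arithmetic. The sketch below shows the calculation; the concentrations are illustrative values, not the study's measurements.

```python
# Percent nitrate removal relative to the starting concentration in the media.
# Concentrations here are illustrative, not the study's data.

def percent_removal(c_initial, c_final):
    """Percent of nitrate removed from the media between two time points."""
    return 100.0 * (c_initial - c_final) / c_initial

# e.g. media starting at 100 mg/L NO3- drawn down to 45 mg/L -> 55% removal
print(percent_removal(100.0, 45.0))  # → 55.0
```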
The results obtained in this study may be partially explained by the differences in morphology and algal growth strategy of the taxa studied. Members of the genus Cladophora are characterised by their multi-nucleate cells arranged in either branched or unbranched filaments. Their cell wall is primarily composed of highly crystalline cellulose I (Bold and Wynne 1985; Hoek et al. 1995). As previously mentioned, C. coelothrix (see http://www.ccap.ac.uk/our-cultures.htm) typically grows in floating clusters or mats, which are tightly wound (Electronic Supplementary Material). This characteristic provides mechanical protection to the cells, and it was noted in this study that C. coelothrix was largely unaffected by the FW determination methods employed. In contrast, C. parriaudii (see http://www.ccap.ac.uk/our-cultures.htm) tends to grow in a loose skein, with filaments that grow rapidly outwards into any vacant space; this growth strategy means that the younger, less robust filaments are likely to be more susceptible to mechanical damage (Electronic Supplementary Material). This was observed in this study (Figs. 3, 4f, and 5b), where a reduced DW yield, physical damage, and a reduced metabolic capability/nutrient sequestration were observed in cultures subjected to the more stringent dewatering methods. Less mechanically stressful treatments, such as B+RS, were better suited for this species. Spirogyra are almost exclusively found in freshwater and are characterised by growing in unbranched filaments with an intracellular helical ribbon of chloroplasts (Whitton 1999). Spirogyra varians (see http://www.ccap.ac.uk/our-cultures.htm) grows as a benthic mass, with filaments intertwined in a helical arrangement growing toward the water surface (Electronic Supplementary Material). The filaments are thin, fragile, and readily fragment when agitated or swirled (Chapman and Chapman 1973; Hoek et al. 1995). This propensity for the colony to disintegrate meant that many of the approaches employed were unsuitable owing to the frailty of its filaments.

Fig. 4 Plates of C. coelothrix (a-c), C. parriaudii (d-f), and S. varians (g-i) taken with an inverted microscope after a 14-day growth trial (100 rpm, 24°C, light intensity of 30-40 μmol photons m−2 s−1, 18:6 h L/D photoperiod) with frequent harvesting using different methods described in Table 1: positive control (+C) (a, d, and g), beaker + reticulated spinner (B+RS) (b, e), beaker + filter paper (B+FP) (c, f, and i), and perforated crucible + reticulated spinner (PC+RS) (h). W denotes the cell wall, CL indicates the chloroplasts, P is the pyrenoid, and CY highlights the multi-nucleate cytoplasm that contains pyrenoids, chloroplasts, and vacuoles.

Investigating the temporal relationship between FW/DW under optimal harvesting conditions

In comparison with methods for DW determination, B+RS and PC+RS are rapid, are less energetically expensive to perform, and are non-destructive to the algal sample. In order to ensure that FW measurements using B+RS and PC+RS were reliable indicators of DW (Fig. 6a, c, and e) and consequently of biomass growth (Fig. 6b, d, and f), the relationship between FW and DW was determined for a 14-day incubation period. It was noted that there is a strong positive relationship between the FW and DW mass: Pearson correlation coefficients were determined as r = 0.871, 0.948, and 0.954 for C. coelothrix, C. parriaudii, and S. varians, respectively, with P values of <0.001 and with low error throughout. Interspecies variation in biomass growth rate can be clearly determined using the B+RS and PC+RS methods. The initial "dip" in growth observed for S. varians (Fig.
6e, f) was assumed to be caused by the fragmentation of the colony and incomplete retention of the biomass on the perforated crucible. One of the purposes of this study was to assess the feasibility of using non-destructive FW measurements to determine macroalgal growth rates, instead of using sacrificial DW measurements. The FW/DW ratios of 6.3, 8.6, and 19.3 for C. coelothrix, C. parriaudii, and S. varians, respectively (Fig. 2), were applied to the temporal FW measurements from Fig. 6 in order to predict the temporal DW for each species and compare it against the actual DW yields (all determined for identical culture conditions). This allows the accuracy of the FW method to be demonstrated.

Fig. 5 The temporal removal of nitrate from the media by C. coelothrix (a), C. parriaudii (b), and S. varians (c, d). Their growth was assessed periodically using different protocols: beaker (B), beaker + reticulated spinner (B+RS), beaker + filter paper (B+FP), beaker + reticulated spinner + filter paper (B+RS+FP), beaker + cavity microscope slide (B+MS), perforated crucible + reticulated spinner (PC+RS), and perforated crucible (PC) (Table 1). Nitrate was measured in the media using ion chromatography (n = 3 [except S. varians +C, where n = 8], error bars denote 1 SD).

Fig. 6 The FW, DW, predicted DW, and rates of FW and DW growth of the three species of algae: C. coelothrix (a, b), C. parriaudii (c, d), and S. varians (e, f). The temporal relationship between FW and DW (a, c, and e) was assessed using Pearson's correlation coefficients, r. On each harvest day, triplicate flasks were harvested and the algal biomass was dewatered using either the optimised beaker + reticulated spinner (B+RS) method for Cladophora species or the perforated crucible + reticulated spinner technique (PC+RS) for S. varians (Table 1). The DW was attained by freezing the samples overnight followed by overnight lyophilisation (n = 3, error bars denote 1 SD).
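The prediction step described above amounts to dividing each temporal FW measurement by the species-specific FW/DW ratio (6.3, 8.6, or 19.3, from Fig. 2) to obtain a predicted DW. A minimal sketch, using illustrative fresh weights rather than the study's data:

```python
# Species-specific FW/DW ratios reported in the text (from Fig. 2).
FW_DW_RATIO = {
    "C. coelothrix": 6.3,
    "C. parriaudii": 8.6,
    "S. varians": 19.3,
}

def predict_dw(fresh_weights_mg, species):
    """Predict dry weight (mg) from a series of fresh weights (mg)."""
    ratio = FW_DW_RATIO[species]
    return [fw / ratio for fw in fresh_weights_mg]

# Illustrative fresh weights (mg) over a growth trial, not the study's data
fw_series = [30.0, 60.0, 95.0]
print(predict_dw(fw_series, "C. parriaudii"))
```

The predicted DW series can then be plotted against measured DW (as in Fig. 6) to check the accuracy of the FW method.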
The results indicated that the B+RS or PC+RS methods can be used to estimate the DW yield of filamentous macroalgal species across time. In addition, the growth rates of FW and DW are comparable. This further demonstrated that the values are closely related and that the prescribed FW methodology can be used as a strong estimation of DW productivity. The constancy of the FW to DW relationship, irrespective of species, backed up with statistical evidence, demonstrates that a reticulated spinner is a reliable and accurate method for generating samples for FW determination and consequently DW estimation. Moreover, the ability to accurately assess productivity between species means this approach can be a useful tool for a variety of scientific applications, including experimental growth screening.

Conclusions

This is the first study to systematically assess a range of dewatering approaches to determine the FW of filamentous macroalgae at lab scale using effectiveness, reliability, practicality, and biological and physical impact as factors. The results demonstrate differences in the effectiveness of a variety of dewatering methods and the physical and metabolic implications at both species and genus levels. This study proposes a method involving a reticulated spinner that is rapid, robust, inexpensive, and easily implemented or standardised for other algal taxa or amounts of biomass. This method marries high accuracy in biomass assessment, due to excellent dewatering capabilities, with negligible impact upon algal performance, assessed as growth, nitrate removal, and structural integrity. Further studies are required for the scaling up of this method for larger cultures at pilot and full scale, which can include assessing and standardising the application of a gentle spinning cycle using a washing machine (Mata et al. 2016).
A Quantitative Comparison of Multispectral Refraction Topography and Autorefractometer in Young Adults

Purpose: The purpose of this study was to evaluate the measuring consistency of central refraction between multispectral refraction topography (MRT) and autorefractometry.

Methods: This was a descriptive cross-sectional study including subjects in Sun Yat-sen Memorial Hospital from September 1, 2020, to December 31, 2020, ages 20 to 35 years with a best corrected visual acuity of 20/20 or better. All patients underwent cycloplegia, and the refractive status was estimated with an autorefractometer, an experienced optometrist, and MRT. We analyzed the central refraction of the autorefractometer and MRT. The repeatability and reproducibility of values measured using both devices were evaluated using intraclass correlation coefficients (ICCs).

Results: A total of 145 subjects ages 20 to 35 (290 eyes) were enrolled. The mean central refraction of the autorefractometer was −4.69 ± 2.64 diopters (D) (range −9.50 to +4.75 D), while the mean central refraction of MRT was −4.49 ± 2.61 D (range −8.79 to +5.02 D). Pearson correlation analysis revealed a high correlation between the two devices. The intraclass correlation coefficient (ICC) also showed high agreement. The intrarater and interrater ICC values of central refraction were more than 0.90 for both devices and conditions. At the same time, the mean central refraction of the experienced optometrist was −4.74 ± 2.66 D (range −9.50 to +4.75 D). The intraclass correlation coefficient of central refraction measured by MRT and subjective refraction was 0.939.

Conclusions: The results revealed that autorefractometry, the experienced optometrist, and MRT show high agreement in measuring central refraction. MRT could provide a potential objective method to assess peripheral refraction.
INTRODUCTION

Myopia is by far the most common refractive error and a dominant cause of visual impairment globally (1), with a prevalence of 10-30% among adults in many countries and 80-90% among young people in some parts of East and South-East Asia (2, 3). Myopia of −6.00 diopters (D) or more severe is called high myopia and often causes visual impairment due to complications such as posterior staphyloma, choroidal neovascularization, retinal detachment, and so on (4). Reducing the incidence of high myopia and improving quality of life are the goals of myopia prevention and treatment. Animal experiments have provided details about myopia: hyperopic defocus increases axial elongation, while myopic defocus decreases axial elongation (4-11). Retinal peripheral visual signals, which are basically the sum of regional signals, can control central refractive development independently of central visual experience. The effectiveness of optical defocus in changing axial elongation depends on the degree of retinal defocus (12, 13). There are generally four methods to evaluate eccentric refractive errors (14): subjective eccentric refraction (15), wavefront measurements with a Hartmann-Shack sensor (16), streak retinoscopy (17), and photorefraction with a power refractor (14). However, these methods can only detect a small area of the retina and cannot accurately detect the peripheral defocus of each region of the retina. Further, the process has high requirements for patient cooperation, and it is cumbersome, time-consuming, and difficult to adapt to clinical practice (18, 19). MRT is a new instrument using multispectral imaging technology (MSI). MSI is an emerging technology based on imaging and spectroscopy; it grew out of remote sensing technology as an analytical tool and can obtain information on the measured target simultaneously in the spectral and spatial dimensions.
MRT can detect the refraction of each part of the retina within a range of 30° at the posterior pole, especially the refraction of the fovea of the macula. The use of different technologies in the MRT and the other devices discussed above may result in differences in measurements. Because treatment centers use different topographic devices, differences in such measurements might lead to differences between diagnostic or treatment centers. Therefore, we evaluated agreement between MRT and the autorefractometer to determine whether they can be used interchangeably. The data offered by each device should remain consistent across repeated measurements so that the results can be used in research. Hence, we evaluated the repeatability of the devices' measurements to determine their effectiveness and reliability.

METHODS

The present research was a descriptive cross-sectional study, and it was conducted in accordance with the tenets of the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Local ethical approval (SYSEC-KY-KS-2021-061) was obtained from the Ethics Committee of Sun Yat-sen Memorial Hospital at Sun Yat-sen University, Guangzhou, China. The medical records of consecutive patients in Sun Yat-sen Memorial Hospital from September 1, 2020 to December 31, 2020 were reviewed.

Table 1 (excerpt): myopia, high (< −6.00 D): 93; hyperopia, low (+0.25 to +3.00 D): 10; medium (+3.25 to +5.00 D): 2; high (> +5.00 D): 0. SD, standard deviation.

The inclusion criteria were as follows: (1) subjects ages 20 to 35 years, (2) subjects with a best corrected visual acuity of 20/20 or better, (3) subjects with MRT results, and (4) subjects with refraction results from an autorefractometer and experienced optometrists.
Exclusion criteria were as follows: (1) intraocular pressure higher than 21 mmHg, (2) a history of ocular diseases or previous ocular surgery that may influence refraction or axial length, such as corneal and lens diseases, and (3) a history of corneal contact lens wear, such as orthokeratology. The refractive errors of all eyes were measured by an autorefractometer (AR-360A, NIDEK Co., Ltd, Japan), an experienced optometrist, and MRT (version 1.0.5T05C; Thondar, Inc.). Thirty minutes before examination, a cycloplegic agent (one drop of 0.5% tropicamide with 0.5% phenylephrine hydrochloride; Sinqi Pharmaceutical Co. Ltd., Shenyang, China) was applied 3 times (with 5 min between each application). The mean of three consecutive autorefraction readings was taken as the refractive error value measured by the autorefractometer. MRT measurements use MSI technology for central refraction measurement. To compare the two devices, the central refraction values from the two devices were analyzed. All examinations were performed by the same experienced doctor. Statistical analysis was performed using SPSS (version 23). The intra-class correlation coefficient (ICC) and repeated-measures analysis of variance (ANOVA) were used to evaluate the repeatability of the equipment. Pearson's correlation coefficient, the paired t-test, and Bland-Altman plots were used to compare the two devices. A value of P < 0.05 was taken to indicate statistical significance.

RESULTS

This study enrolled 290 eyes of 145 subjects. Baseline characteristics of the subjects are shown in Table 1. All 290 eyes were measured by one technician, and all values were collected for further analysis. The mean central refraction measured by the autorefractometer was −4.69 ± 2.64 D, while that measured by MRT was −4.49 ± 2.61 D. We then analyzed the results of the MRT measurement and subjective refraction. The mean subjective refraction was −4.74 ± 2.66 D, with a range of −9.50 to +4.75 D.
The mean of the difference between the central refraction obtained by the MRT measurement and the experienced optometrist was 0.26 ± 0.87 D, and its 95% confidence interval was −1.44 to +1.95 (Figure 3). The intra-class correlation coefficient of central refraction measured by MRT and subjective refraction was 0.939 (Figure 4, P < 0.001).

DISCUSSION

The study demonstrates that the central refraction obtained by autorefractometer devices, experienced optometrists, and MRT shows high repeatability and reproducibility. Our results indicate that MRT is a valid and safe method for measuring central refractive error in healthy eyes, particularly for mild myopia. Furthermore, the values showed a high correlation between the two devices. Compared with the autorefractometer or the experienced optometrist, values measured by MRT showed a statistically significant shift toward hyperopia. This difference is about 0.20 D (compared with the autorefractometer) to 0.26 D (compared with subjective refraction). It suggests that the accommodation reflex may still have played a role in these participants. We consider the MRT test a more powerful tool to measure the full hyperopic refractive error. To date, current research suggests that the peripheral retina also plays an important role in controlling the growth of the eye and the development of refractive errors. Peripheral hyperopic defocus of the retina is one of the causes of myopia. If the defocus degree of the retina, especially the peripheral defocus, can be measured effectively and accurately, it will be helpful in preventing myopia. The reason for using objective methods to test peripheral defocus is to try to find a "gold standard" comparable to how conventional subjective refraction is used. An ideal screening test should be perfect in specificity, sensitivity, and positive predictive value, but no existing screening method achieves this level of accuracy.
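The interval reported above corresponds to Bland-Altman 95% limits of agreement, i.e. mean difference ± 1.96 × SD of the paired differences (0.26 − 1.96 × 0.87 ≈ −1.44; 0.26 + 1.96 × 0.87 ≈ +1.95). A minimal sketch of the calculation, using illustrative paired refractions rather than the study's data:

```python
# Bland-Altman 95% limits of agreement for paired measurements.
import statistics

def limits_of_agreement(a, b):
    """Return (mean difference, lower LoA, upper LoA) for paired values."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Illustrative central refractions in diopters, not the study's data
mrt = [-4.25, -3.50, -5.00, -2.75]
subjective = [-4.50, -3.75, -5.25, -3.25]
print(limits_of_agreement(mrt, subjective))
```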
The current methods used to measure peripheral refraction are more difficult to evaluate due to poor retinal image quality, optical aberration, and low retinal resolution, which may result in insufficient retinal image sampling (14). MRT is a new instrument using MSI that can accurately measure the refraction of each part of the retina, and in a sense, it can replace autorefractometer measurement. However, because the device is a new technology, its accuracy must be compared with the traditional gold standard. Autorefractometers have been used for several decades. They are used in optometric practice all around the world, primarily as a starting point for ophthalmologists or optometrists to assess subjective refraction (20). Autorefractometers are currently the gold standard for testing refraction of the central retina. In young adults, most of the time, we would use cycloplegic refraction to detect refractive errors, which is also the gold standard now. Hence, if the central refraction measured by MRT and autorefractometers is consistent under cycloplegia, we can assume that the MRT accurately reflects the level of refraction of each part of the retina. MRT is a rapid, accurate, and noninvasive refractometer, and it has excellent specificity and sensitivity. The data of our experiment, under cycloplegic conditions, confirmed that compared with traditional refractometers, MRT can accurately measure central refraction, and the results are closely related to those of autorefractometers (Pearson correlation coefficient test, P < 0.001). To our knowledge, this is the first report of the consistency between MRT and the autorefractometer.

ICC table notes: two-way random effects model in which both individual effects and measure effects are random; the estimator is the same whether or not the interaction effect is present; intraclass correlation coefficients use an absolute agreement definition.
Nevertheless, our experiment has limitations. The subjects of this experiment were Asian individuals ages 20-35 who were treated at Sun Yat-sen Memorial Hospital, and no other ethnic groups were involved. Therefore, further experiments are needed to prove whether this instrument is suitable for other populations. The subjects included in our study were mostly myopic patients and a few hyperopic patients; there were no patients with high hyperopia, because myopic patients make up the greatest proportion in China. In the future, we will collect the results of hyperopic patients, especially those with high hyperopia, to clarify the accuracy of the instrument. In conclusion, the results revealed that autorefractometry and MRT show high agreement in measuring central refraction. MRT can accurately reflect the refraction of the retina. It can therefore be used as a potential objective method to measure peripheral defocus of the retina.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Sun Yat-sen Memorial Hospital at Sun Yat-sen University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

AUTHOR CONTRIBUTIONS

YLi, ZY, and YZ contributed to conception and design of the study. ZY, RZ, and JW organized the database. YLi and ZL performed the statistical analysis. YLi wrote the first draft of the manuscript. RZ, ZY, ZL, and YZ wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Addressing Bioethical Implications of Implementing Diversion Programs in Resource-Constrained Service Environments

Abstract: The opioid epidemic demands the development, implementation, and evaluation of innovative, research-informed practices such as diversion programs. Aritürk et al. have articulated important bioethical considerations for implementing diversion programs in resource-constrained service environments. In this commentary, we expand and advance Aritürk et al.'s discussion by discussing existing resources that can be utilized to implement diversion programs that prevent or otherwise minimize the issues of autonomy, non-maleficence, beneficence, and justice identified by Aritürk et al.

Drug overdose deaths in the U.S. continue to increase at an alarming rate.1 This epidemic demands developing, implementing, and evaluating innovative, research-informed practices such as diversion programs.2 However, when advancing biomedical practice, it is critically important to consider bioethical implications. Aritürk et al. have articulated important considerations for implementing diversion programs in resource-constrained service environments.3 Like many popular initiatives requiring intentional systemic change, the theoretical framework outpaces the operational framework by years, if not decades.
Aritürk et al.'s identification and explanation of essential considerations related to unavailable, inappropriate, and inaccessible resources guide practitioners, policy-makers, researchers, community members, and others working to innovate and advance practice aimed at addressing substance abuse and related issues of community health and safety. We expand and advance Aritürk et al.'s discussion of these important considerations by discussing existing resources that can be utilized to implement diversion programs that prevent or otherwise minimize, as much as possible given existing resources, the issues of autonomy, non-maleficence, beneficence, and justice identified by Aritürk et al.

Implementation Frameworks

As Aritürk et al. point out, implementing feasible and potentially effective police and justice system-led diversion programs to address substance abuse while avoiding undesired negative consequences is challenging. Implementation frameworks provide useful guidance for designing effective implementation plans. There are a multitude of useful implementation frameworks and related resources. For example, RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance)4 is a commonly used planning and evaluation framework.5

Diversion Program Frameworks and Support

In addition to the general guidance provided by implementation frameworks, there are resources that provide specific guidance and support for the implementation of sustainable, research-informed, and effective diversion programs. Entities such as the Police, Treatment, and Community Collaborative (PTACC);7 the Police Assisted Addiction & Recovery Initiative (PAARI);8 and TASC9 offer leadership, advocacy, and education to support diversion programs. They also provide supportive resources such as TASC's decision-making tool to guide the development of pre-arrest diversion programs.10
Emerging research to help guide practice, in general and in such a way as to address challenges identified by Aritürk et al., includes a national survey to assess diversion programs;11 research examining outcomes of Seattle's pre-arrest law enforcement assisted diversion (LEAD) program;12 a process evaluation of the development and implementation of the Tucson Police Department's pre-arrest diversion program;13 and research examining its feasibility, acceptability, and outcomes.14 Use of these resources can help avoid and/or address the challenges identified by Aritürk et al.

Evidence-Based Practices (EBPs)

Addressing substance abuse in the community via diversion programs necessitates the collaboration of police, courts, and treatment providers. These entities need to collaborate to facilitate the identification by police and the courts of people with substance abuse issues and the connection of these individuals with treatment providers. As such, diversion programs provide the opportunity to holistically implement EBPs into multiple systems.
To prevent or address the challenges identified by Aritürk et al., the development and implementation of diversion programs should focus on the strategic, planned implementation of EBPs. Evidence-based and validated screening tools, such as the UNCOPE,15 which identifies individuals at high risk for substance misuse who would benefit from treatment interventions, are relevant for police, courts, and treatment providers. Evidence-based, validated assessments, such as the Global Appraisal of Individual Needs (GAIN),16 are critical to identifying and responding appropriately to individual needs for substance misuse treatment and co-occurring needs. EBPs such as motivational interviewing17 and peer support models18 to encourage engagement in substance misuse treatment can be implemented by police, court personnel, and treatment providers. Other EBPs, particularly cognitive-behavioral therapy (CBT) and CBT-based treatment models like The Seven Challenges, are commonly used in substance abuse treatment to effectively address substance abuse and co-occurring issues.19 Memoranda of understanding (MOUs) indicating partnering entities' commitment to implementing EBPs and to agreed-upon processes of collaboration can help support ongoing diversion program implementation. MOUs between treatment providers and court-led supervised diversion programs can include negotiated limited reporting of program participant substance misuse from treatment providers to the courts, to address bioethical considerations raised by Aritürk et al. as well as to facilitate self-disclosure in treatment20 and, perhaps as a result, support client perception of trustworthy therapeutic relationships for all clients regardless of racial/ethnic minority status.21
21 Many professional entities and governmental funding initiatives support the implementation of EBPs.

Community Responsive Approach

Of primary importance, diversion programs should be informed by people with lived experience, as programs for people who have substance abuse disorder should be implemented with their active participation. Relevant to challenges identified by Aritürk et al., developers of diversion programs should engage in community-based participatory research 27 to identify substance abuse treatment and related needs of the community and to take a research-informed approach to direct culturally and population appropriate action to address health disparities related to access to affirming and effective substance abuse treatment and related services. The use of peer support models and the diversification of the police, justice system, and treatment provider workforce to reflect more accurately the demographic characteristics of the community population also ensure that diversion programs are informed by the community, not just a subset of it, as well as by people with lived experience, in a culturally respectful and relevant manner. The importance of measurement and evaluation cannot be overstated. With multiple systems partners aligning and integrating common goals and outcomes, the risk of unintentionally causing harm or maleficence can be mitigated through thoughtful, cooperative, and consensual data capture and analysis of data through a lens of equity. Both process and impact evaluations should be developed to
intentionally address historical concerns of disparity in healthcare and criminal justice institutions. Efficiency or cost-benefit evaluations can capture policymakers' attention system-wide, including legislators at the local, state, and federal levels, which can prove beneficial in propagating community-based diversion efforts with fiscal and statutory support.

Timeliness of Advancing Diversion Programs

Pre-arrest diversion represents a systemic change to deeply entrenched healthcare and criminal justice norms. As Aritürk et al. point out, ethical care for populations affected by substance abuse and mental illness cannot occur without changing these systems, a change that requires innovation, creativity, and courage. Social system infrastructure and governance are notorious for the inertia of the status quo, with few having the moral courage to push against conventional reasoning, settling for the same results that come with the same effort. Scalable efforts, with programs designed with the capacity at hand, can have an impact. Evaluation of the impact can elicit further change, with program success breeding interest and interest breeding greater capacity.

Practitioners and researchers can capitalize on the growing awareness and acceptance of diversion programs, particularly pre-arrest, unconditional models of diversion that deflect those afflicted away from the criminal justice system and into the healthcare system, to advance positive systemic change. The acceptance of pre-arrest diversion programs has reached the highest level in the US: it is codified in the White House's National Drug Control Strategy 28 and SAMHSA's strategic plan. 29 Consequently, it is easier to advance pre-arrest diversion within the current political and social context than in previous contexts, a situation that should be exploited for the benefit of community health and safety.
Too many people are dying, too many people from marginalized communities in particular, and too many are going to jail and prison for simply suffering from an untreated illness. Diversion is an investment in these communities, which have historically experienced disinvestment. Diversion, particularly pre-arrest diversion without supervision, is an alternative to traditional criminal justice responses that destigmatizes mental illness and substance abuse in a meaningful and intentional way while saving lives. Care should be taken to design and implement diversion programs that, as Aritürk et al. advocate for, "promote health and reduce harms while preserving the dignity and autonomy of justice-involved individuals with behavioral health needs." RE-AIM provides guidance on translating research into action for sustainable implementation of effective evidence-based interventions in community and other settings. With the continuing development of the field of implementation science, there are also several reference books that provide guidance and expertise from leaders in the field, offering resources to resource-limited settings. These professional entities include, for example, the American Society for Evidence-Based Policing (ASEBP), 22 the Rx and Illicit Drug Summit, 23 and the Evidence-based Practices Resource Center of the Substance Abuse and Mental Health Services Administration (SAMHSA).

The Journal of Law, Medicine & Ethics, 52 (2024): 76-79. © The Author(s), 2024. Published by Cambridge University Press on behalf of the American Society of Law, Medicine & Ethics.
A case of myocardial infarction caused by obstruction of a drug-eluting stent during the perioperative period

We report a patient who developed drug-eluting stent (DES) thrombosis induced by discontinuation of dual antiplatelet therapy (DAPT) and who subsequently had a massive surgical site bleed caused by restarting heparin and DAPT during the perioperative period. An 85-year-old man visited a local hospital complaining of dyspnea. He was diagnosed with laryngeal cancer and was scheduled for a total laryngectomy. Preoperative examinations showed an anteroseptal myocardial infarction. A DES was placed at segment 6 of the coronary artery and DAPT was initiated 27 days before surgery. After admission to our hospital, DAPT was replaced with unfractionated heparin. On the day of the operation, heparin was discontinued, and a tracheotomy, total laryngectomy, and right hemi-thyroidectomy were performed. While the patient was recovering from anesthesia, ischemic ST elevation appeared. Cardiac catheterization revealed complete obstruction of the DES by a white thrombus. After recanalization, heparin and DAPT were restarted, and bleeding occurred. The next day, total blood loss was 2755 mL and surgical hemostasis was performed. Because the patient's serum creatine kinase value was elevated at the cessation of heparin, anticoagulation with unfractionated heparin could not have prevented platelet thrombosis. Therefore, we should have performed the tracheostomy to secure the patient's airway under DAPT or aspirin-only therapy a month after the DES implantation, and performed the laryngectomy and right hemi-thyroidectomy five months after the first surgery. This case provides serious warnings of perioperative major adverse cardiac events induced by discontinuation of DAPT: unfractionated heparin was an insufficient safeguard against platelet thrombosis, and perioperative massive bleeding was induced by restarting antiplatelet and anticoagulation therapy.
In addition, a series of human errors, in which the cardiologist chose a DES regardless of the scheduled total laryngectomy, antiplatelet therapy was discontinued shortly after DES placement, and the surgical staff failed to share the elevated serum CK and CK-MB values, caused life-threatening complications.

Background

Obstruction of a drug-eluting stent (DES) during the perioperative period is a possible and potentially lethal complication of the procedure. To prevent the obstruction of a DES, dual antiplatelet therapy (DAPT), consisting of aspirin and a P2Y12 receptor inhibitor, should be continued for at least a year after DES placement [1]. Therefore, treatment with a bare-metal stent (BMS), which needs DAPT for at least a month [1], and/or plain old balloon angioplasty (POBA) is recommended for patients who have ischemic heart disease and are anticipating non-cardiac surgery [1]. In patients undergoing emergency surgery within a month of DES placement, assessing the risk of bleeding or DES obstruction is difficult. Herein, we report a patient who developed both DES thrombosis and a massive surgical site bleed during the perioperative period.

Case presentation

An 85-year-old man with preserved normal cognitive function complained of progressive dyspnea and visited a local hospital, where he was diagnosed with laryngeal cancer and scheduled for a total laryngectomy. A preoperative electrocardiogram and echocardiography showed anteroseptal myocardial infarction without symptoms. Twenty-seven days before the laryngectomy, the cardiologist at the local hospital placed a DES at segment 6 (#6) of the anterior descending coronary artery and initiated DAPT, comprising 100 mg of aspirin and 75 mg of clopidogrel, even though the cardiologist knew the patient was scheduled for laryngectomy. The patient was then admitted to our hospital 10 days before the laryngectomy. Preoperative echocardiography showed anteroseptal hypokinesis and a left ventricular ejection fraction of 36 %.
A 12-lead electrocardiogram showed a slight ischemic ST elevation in leads V1-3 and an abnormal ST-T in leads aVL and V2-6 (Fig. 1). Cardiologists at our hospital assessed that the patient's myocardium perfused by #6 had no function and, from the echocardiographic findings, that obstruction of the DES would have little effect on cardiac function. The cardiologists started 400 U/h of unfractionated heparin as a substitute for DAPT 6 days before the laryngectomy. Five days before the laryngectomy, we were consulted about the patient and warned the surgeons about the risks associated with discontinuation of DAPT. Six hours before the laryngectomy, heparin was discontinued. At the cardiologist's direction, the surgeons checked the serum creatine kinase (CK), CK-MB, and activated partial thromboplastin time (aPTT) values three hours before the laryngectomy. An hour before the laryngectomy, a central clinical laboratory staff member noticed the abnormal CK and CK-MB values (Table 1), which were reported to one of the surgeons by phone. That surgeon failed to inform us about the abnormal values. The patient entered the operating room in a wheelchair. He showed no significant changes on 3-lead electrocardiogram and did not complain of chest pain. Oxygenation and 3 mg/h of nicorandil were started, and an arterial line was placed. Then, tracheotomy was performed under regional anesthesia with supplementation of fentanyl. After tracheal intubation via the tracheostomy, general anesthesia was induced and maintained with propofol (2.5-1.3 μg/mL), remifentanil (0.33-0.13 μg/kg/min), fentanyl (total: 0.3 mg), and rocuronium. Circulation was supported by continuous nicorandil and phenylephrine, and bolus ephedrine. A total laryngectomy and right hemi-thyroidectomy were performed. The duration of the surgery was 211 min, and blood loss during surgery was 7 mL. Little change was observed in the 3-lead electrocardiogram during surgery.
After the completion of surgery, anesthetic agents were discontinued. During recovery from anesthesia, the patient's heart rate and arterial pressure rose to 100/min and 190/80 mmHg, respectively, and 0.5 mV of ischemic ST elevation was confirmed in lead II of the electrocardiogram. Therefore, the intravenous nicorandil was increased to 6 mg/h, and bolus nicardipine (0.4 mg) and landiolol (10 mg) were administered to reduce blood pressure and heart rate. The 12-lead electrocardiogram showed an ischemic ST elevation in leads V1-5 (Fig. 2), and the patient indicated chest pain with gestures after recovery. A bolus of 10 mg of morphine and continuous 1 μg/kg/min of isosorbide dinitrate were administered, and we quickly called the cardiologist, who immediately decided to perform percutaneous coronary intervention. The patient was transferred to the cardiac catheterization laboratory under sedation with dexmedetomidine (0.4 μg/kg/h). To our regret, we only then discovered the elevated CK and CK-MB values. At the laboratory, complete obstruction of the DES was observed (Fig. 3), and aspiration of the white thrombus, caused by a subacute thrombosis (SAT), was performed. After recanalization of the DES (Fig. 4), the patient's chest pain and the ST elevation in the electrocardiogram disappeared, and 400 U/h of unfractionated heparin was restarted. After that, the patient was transferred to the intensive care unit (ICU), where 100 mg of aspirin and 3.75 mg of prasugrel were administered through a feeding tube. Then, bleeding from the surgical site began and the patient's hemodynamic status gradually deteriorated. Therefore, heparin, aspirin, and prasugrel were stopped. On the morning of the 1st postoperative day (POD), the patient's aPTT was still markedly prolonged (Table 1), and the total blood loss had reached 2755 mL. In addition, 1120 mL of red blood cells (RBCs) and 480 mL of fresh frozen plasma (FFP) were transfused.
Accordingly, surgical hemostasis and transfusion of RBCs (920 mL) and FFP (480 mL) were performed. Aspirin and prasugrel were restarted on the 3rd POD. Hemorrhage and re-occlusion of the DES did not recur, and the patient was discharged, ambulatory, on the 34th POD.

Discussion

The 2014 ACC/AHA guideline recommends that, for patients who require PCI but are scheduled for elective noncardiac surgery in the subsequent 12 months, balloon angioplasty or BMS implantation followed by 4 to 6 weeks of DAPT are reasonable strategies [1]. For patients with a DES who must undergo urgent surgical procedures that mandate the discontinuation of DAPT, however, it is reasonable to continue aspirin if possible and restart the P2Y12 inhibitor as soon as possible in the immediate postoperative period [1,2]. One study showed that the incidence of major adverse cardiac events (MACE) was significantly higher (10-15 %) during the first 30 days after stent implantation [3]. In our case, placement of a DES was the primary issue for the patient, who urgently required tracheostomy because of airway narrowing. The discontinuation of antiplatelet therapy shortly after the myocardial infarction and DES placement induced the SAT, which was the secondary issue. Bridging with anticoagulants, such as low-molecular-weight heparin, after interruption of DAPT during the perioperative period is advised [2]. In Japan, unfractionated heparin, which lacks supporting evidence and is used empirically, is also advised [4]. However, the newer perioperative guideline recommends continuing DAPT in patients undergoing urgent noncardiac surgery during the first 4 to 6 weeks after BMS or DES implantation unless the risk of bleeding outweighs the benefit of stent thrombosis prevention [1]. In addition, there has been no evidence demonstrating the efficacy of bridging with anticoagulants using heparin. In this case, we performed anticoagulation therapy using unfractionated heparin.
Nevertheless, the patient's serum CK-MB value was elevated at the time of cessation of heparin despite a slightly prolonged aPTT (Table 1). The entire surgical staff failed to share the elevated serum CK and CK-MB values at that point and failed to cancel the surgery, which was the third issue. A white thrombus in the DES was confirmed at the time of recanalization. Although the preoperative aPTT was not markedly prolonged, anticoagulant therapy with 400 U/h of unfractionated heparin could not have prevented platelet thrombosis in this case. It has been reported that the cumulative incidence of bleeding 30 days after surgical procedures was significantly higher in patients with DAPT than in patients with single or no antiplatelet therapy [5]. In this case, the first surgery was performed after discontinuation of both antiplatelet and anticoagulation therapy. Therefore, surgical hemostasis was easily obtained and not strictly performed, and bleeding only became obvious after restarting heparin and DAPT. If the surgery had been performed under continuous DAPT, however, similar or more massive bleeding might have occurred in the perioperative period. Because the patient had received a DES, we should have first performed the tracheostomy, with strict hemostasis, to secure the airway under continued DAPT or aspirin alone a month after the DES implantation. Then, six months after the DES implantation, we should have performed the laryngectomy and right hemi-thyroidectomy [1]. Because the patient might have had mild impairment in expressing chest pain due to progressive laryngeal cancer and/or might have felt little pain due to nerve damage from the old myocardial infarction, he may have complained of less chest pain at the time of visiting the local hospital and entering the operating room, which could have induced misjudgment. Thus, a series of human errors occurred and caused life-threatening complications.
The human errors were as follows: the cardiologist at the local hospital chose a DES instead of a BMS or POBA regardless of the scheduled total laryngectomy; antiplatelet therapy was discontinued shortly after the myocardial infarction and DES placement, the root cause of this incident; and the surgical staff failed to share the elevated serum CK and CK-MB values. After this case, we proposed creating a hospital task force for updating and promoting the perioperative PCI, antiplatelet, and anticoagulation procedures. This task force consisted of representatives from all surgical departments, anesthesiology, cardiology, gastroenterology, pharmacy, and the hospital medical safety management office. The task force then announced the updated procedures in our hospital. We are now exploring a method of enhancing the detection of laboratory test abnormalities shortly before surgery.

Conclusion

Because the patient had airway stenosis with a recent placement of a DES, we should have first performed the tracheostomy, with strict hemostasis, to secure the airway under continued DAPT or aspirin alone a month after the DES implantation. Then, six months after the DES implantation, we should have performed the laryngectomy and right hemi-thyroidectomy. This case provides serious warnings of perioperative MACE induced by discontinuation of DAPT: 400 U/h of unfractionated heparin was an insufficient safeguard against platelet thrombosis, and perioperative massive bleeding was then induced by restarting antiplatelet and anticoagulation therapy. In addition, a series of human errors, in which the cardiologist at the local hospital chose a DES regardless of the scheduled total laryngectomy, antiplatelet therapy was discontinued prematurely, and the surgical staff failed to share the elevated serum CK and CK-MB values, caused life-threatening complications. Anesthesiologists should disseminate information about the perioperative risks and management of PCI and DAPT to surgeons and cardiologists more widely.
Consent

Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Exploring Carbon Mineral Systems: Recent Advances in C Mineral Evolution, Mineral Ecology, and Network Analysis

Large and growing data resources on the spatial and temporal diversity and distribution of the more than 400 carbon-bearing mineral species reveal patterns of mineral evolution and ecology. Recent advances in analytical and visualization techniques leverage these data and are propelling mineralogy from a largely descriptive field into one of prediction within complex, integrated, multidimensional systems. These discoveries include: (1) systematic changes in the character of carbon minerals and their networks of coexisting species through deep time; (2) improved statistical predictions of the number and types of carbon minerals that occur on Earth but are yet to be discovered and described; and (3) a range of proposed and ongoing studies related to the quantification of network structures and trends, relation of mineral "natural kinds" to their genetic environments, prediction of the location of mineral species across the globe, examination of the tectonic drivers of mineralization through deep time, quantification of preservational and sampling bias in the mineralogical record, and characterization of feedback relationships between minerals and geochemical environments with microbial populations. These aspects of Earth's carbon mineralogy underscore the complex co-evolution of the geosphere and biosphere and highlight the possibility for scientific discovery in Earth and planetary systems.

INTRODUCTION

Minerals, including carbon-bearing phases, are the oldest available materials from the ancient history of our planet and other bodies in our solar system; they record information about their genetic environments and any subsequent weathering and alteration they underwent, offering a glimpse of ancient environments through deep time.
In this work, we describe some of the important carbon mineral data resources and outline a number of new advances in data-driven discovery in carbon and other mineral systems, including new insights from mineral network visualizations and statistical modeling. We also preview upcoming studies and directions of research related to the diversity and distribution of mineral species; statistical modeling of complex, multidimensional data objects and their underlying trends; tectonic drivers of mineralization; characterization of relationships between microbial populations, their expressed protein functions, and the geochemical environment; quantification of preservational and sampling bias present in the mineralogical record; and a number of predictive algorithms, including those that predict formational environments of minerals as well as the location of previously unknown mineral localities. Carbon minerals are particularly compelling for multidimensional analysis due to their diverse range of bonding behaviors, paragenetic modes, mineral properties, and ages. Carbon minerals are some of the first condensed phases formed in a solar system and among the hardest materials known, yet carbon minerals are also some of the latest occurring and most ephemeral crystalline phases. Carbon has the ability to behave as a cation, anion, or neutral atom, allowing bonding with itself and 80+ other elements, with a variety of bonding coordination numbers including 2, 3, and 4, and valence states of −4, +2, and +4 (Hazen et al., 2013a). Many of the first crystals formed in our cooling solar system were refractory carbon-bearing phases, including diamond (C) (Lewis et al., 1987), graphite (C) (Amari et al., 1990), and moissanite (SiC) (Zinner et al., 1987).
Carbon and its mineral phases are intrinsically linked to organic and biological processes: biomineralization is responsible for a significant portion of rhombohedral carbonates on Earth's surface, and organic minerals make up nearly 15% of the 411 known carbon mineral species (as of June 2019; rruff.info/ima). Carbon's widely varying character offers a fascinating opportunity to employ rapidly developing advanced analytics and visualization techniques to characterize its complex, multivariate systems and answer previously inaccessible questions at the interface of Earth, planetary, and life sciences.

MINERAL DATA RESOURCES

The International Mineralogical Association List of Mineral Species

The International Mineralogical Association (IMA) list of mineral species (RRUFF.info/IMA) is part of the RRUFF Project (Lafuente et al., 2015), a mineral library and series of databases with the goal of providing robust, diverse mineralogical data, including high-quality chemical, spectral, and diffraction data; the IMA list of approved mineral species; the American Mineralogist Crystal Structure Database (AMCSD; RRUFF.geo.arizona.edu/AMS/amcsd.php); mineral locality age information (see "Mineral Evolution Database" section below); and other mineral properties (see "Mineral Properties Database" section below). The IMA list allows users to search the over 5400 (as of June 2019) mineral species by name, chemical composition, unit-cell parameters and crystallography, crystal structure group, paragenetic mode, and the availability of ancillary data, including crystal structure files in the AMCSD or direct RRUFF Project analyses. This database also provides useful information about each mineral species, including composition, oldest known age, crystal structure group, and unit-cell parameters along with corresponding compositions, all of which can be downloaded in a number of machine-readable file formats.
Lastly, this page offers links to a number of related informational pages and websites, including the Handbook of Mineralogy (Anthony et al., 2003), measured data in RRUFF Project databases, crystal structure files in the AMCSD, mineral locality information at Mindat.org (see "Mindat.org" section below), and age and locality data in the Mineral Evolution Database (MED).

The Mineral Evolution Database (MED)

The Mineral Evolution Database (MED; RRUFF.info/Evolution; Golden et al., 2016; Prabhu et al., 2020) was created to support mineral evolution and ecology studies: studies that examine and characterize spatial and temporal mineral diversity and distribution in relation to geologic, biologic, and planetary processes (Hazen et al., 2008, 2011, 2013b,c, 2014a,b, 2016, 2017a,b, 2019b; Hazen and Ferry, 2010; McMillan et al., 2010; Golden et al., 2013; Hazen, 2013, 2018, 2019; Grew and Hazen, 2014; Zalasiewicz et al., 2014; Grew et al., 2015; Hystad et al., 2015a,b, 2019a; Liu et al., 2017a,b, 2018b; Ma et al., 2017; Morrison et al., 2017b; Glikson and Pirajno, 2018). The MED contains mineral locality and age information extracted from primary literature and the mineral-locality database, Mindat.org. As of 14 June 2019, 15,906 unique ages for 6,253 directly dated mineral localities, documenting 810,907 mineral-locality pairs and 194,090 mineral-locality-age triples, are available in the MED. Specific to the 411 known carbon-bearing phases, there are 8,635 dated carbon mineral localities, 94,677 carbon mineral-locality pairs, and 20,773 dated carbon mineral-locality pairs available in the MED, as of June 2019. These data have been assembled and documented to maximize the accuracy and transparency of age associations, which include data on specific mineral formations, mineralization events, element concentrations, and/or deposit formations.
The MED interface allows many sorting and displaying options, including sorting by age or locality name, as well as displaying all of the queried minerals at a given locality or displaying a line of data for each mineral-locality pair. These data are available for download directly from the MED (RRUFF.info/Evolution) with various file format options.

The Mineral Properties Database (MPD)

The Mineral Properties Database (MPD; Morrison et al., 2017b; Prabhu et al., 2019a) was created with the goal of better understanding the multidimensional, multivariate trends amongst mineral species and their relationships to geologic materials, preservational and sampling biases, and geologic, biologic, and planetary processes. At present, this database contains dozens of parameters, including age, color, redox state, structural complexity (Krivovichev, 2012, 2013, 2016, 2018; Hazen et al., 2013b; Grew et al., 2016; Krivovichev et al., 2017, 2018), and method of discovery associated with copper, uranium, and carbon minerals. Ongoing efforts are in place to expand to minerals of each element of the periodic table. These data, coupled with those of the MED, offer the opportunity to study changes in redox conditions through deep time and are the basis for mineral network analysis studies (Morrison et al., 2017b; Perry et al., 2018; Hazen, 2019; Hazen et al., 2019b). This database will be publicly available through the RRUFF Project on the Open Data Repository platform (ODR; opendatarepository.org). The ODR interface will maximize the flexibility with which users view, explore, subset, and download data of interest.

Mindat.org

Hudson Institute of Mineralogy's Mindat.org is an interactive mineral occurrence database with a wealth of information on mineral localities around the globe, as well as Apollo Lunar samples and meteorites.
At present, Mindat houses nearly 300,000 mineral localities, with >1.2 million mineral-locality pairs and nearly one million mineral photographs. A large majority of the mineral occurrence information available on Mindat.org is from published literature, but users can also add localities, mineral-locality pairs, photographs, and references. The MED directly interfaces with Mindat, harnessing and incorporating the huge amount of mineral locality data held there. It has been an important resource for scientific research and discovery: many studies on the diversity and distribution of minerals on Earth's surface have relied in part on Mindat mineral locality information (Hazen et al., 2008, 2011, 2013b,c, 2014a,b, 2016, 2017a,b, 2019b; Hazen and Ferry, 2010; McMillan et al., 2010; Golden et al., 2013; Hazen, 2013, 2018, 2019; Grew and Hazen, 2014; Grew et al., 2015; Hystad et al., 2015a,b, 2019a; Liu et al., 2017a,b, 2018b; Ma et al., 2017; Morrison et al., 2017b). These studies include those of Carbon Mineral Ecology detailed below.

Global Earth Mineral Inventory (GEMI)

The Global Earth Mineral Inventory (GEMI 1 ) is a Deep Carbon Observatory (DCO) data legacy project born out of the diverse data types collected in conjunction with the DCO's broad range of scientific driving questions (Prabhu et al., 2019a). Specifically, Prabhu et al. (2019a, 2020) aimed to support and facilitate scientific discovery by merging and integrating DCO data products, such as the MPD and MED, into a digestible, accessible, and user-friendly format for exploration, statistical analysis, and visualization.
Therefore, GEMI is a faceted, searchable knowledge graph or network in which each node represents a feature of the MED and MPD, allowing users to explore, query, and extract the specific subset of data or combinations of data necessary for their research goal.

1 https://dx.deepcarbon.net/11121/6200-6954-6634-8243-CC

CARBON MINERAL ECOLOGY

Statistical approaches are particularly useful in characterizing surface and near-surface environments, where biology and reaction kinetics play a major role in mineral formation and stability, as opposed to the dominance of thermodynamics in the subsurface. Mineral ecological studies employ the MED and Mindat.org to examine and characterize the spatial diversity and distribution of mineral species on planetary bodies (Hystad et al., 2015a,b; Grew et al., 2017; Liu et al., 2017a, 2018a). "Mineral species" in this case are those recognized by the IMA Commission on New Minerals, Nomenclature and Classification (CNMNC), which often does not account for subtle variations in chemistry or formational processes (see section "Natural Kind Clustering"). Previous studies have found that minerals on Earth's surface follow a distinct trend, a "Large Number of Rare Events" (LNRE) frequency distribution, in which most mineral species are rare, occurring at fewer than five geologic localities, and only a few species are very common (Hystad et al., 2015a,b). The discovery of an LNRE frequency distribution across all mineral systems on Earth enabled the modeling of accumulation curves and, thereby, the prediction of the number of missing or previously unknown mineral species that occur on Earth but have yet to be discovered. Carbon minerals are no exception to the LNRE trend, and Hazen et al. (2016) explored their ecology, discovering that in addition to the 400 known carbon mineral species, there were likely at least 145 more species awaiting discovery. Hazen et al.
(2016) delved into the likely candidates of missing species, generating accumulation curves for carbon minerals with and without oxygen, hydrogen, calcium, and sodium. They predicted that, of the 145 as-yet undiscovered carbon minerals, 129 would contain oxygen, 118 would contain hydrogen, 52 would contain calcium, and 63 would contain sodium. This study led to the Carbon Mineral Challenge (mineralchallenge.net), a DCO initiative to engage scientists and collectors in finding and identifying the missing carbon phases. As of June 2019, the Carbon Mineral Challenge boasts 30 new mineral species approved by the IMA, a number of which were predicted in Hazen et al. (2016). At the time of the initial mineral ecology studies, it was understood that the models and the predictions based upon them were to be treated as lower limits of the estimate of missing mineral species. This is due, in part, to sampling bias toward well-crystallized, colorful, or economically valuable specimens. An additional constraint is the advent of new, unforeseen technology that can identify and distinguish minerals at increasingly finer scales. While it is difficult to predict the next technological advance, we can attempt to develop better models to make predictions on our existing data. With this in mind, Hystad et al. (2019a) developed a new Bayesian technique for modeling mineral frequency distributions and predicting the number of undiscovered mineral species on Earth's surface. Hystad et al. (2019a) updated the prediction of the number of missing mineral species on Earth from the previous minimum estimate of 6394 (Hystad et al., 2015a,b) to an increased estimate of 9308, with a 95% posterior interval of (8650, 10,070). Note that this new, higher value is still a low estimate due to the unknowns of future technology. Here, we apply the Poisson lognormal model of Hystad et al.
(2019a) to the currently known 411 carbon mineral species and their 50,095 localities (with 94,677 mineral-locality pairs, 22% of which have associated ages, as of June 2019). Figure 1A illustrates the carbon mineral frequency distribution with a Poisson lognormal LNRE model overlaid in blue. The frequency distribution is used to generate an accumulation curve (Figure 1B), which models the expected number of carbon mineral species as a function of the number of localities characterized and thereby predicts the expected number of carbon mineral species currently present on Earth's surface, many of which remain undiscovered. The new estimate of carbon mineral diversity on Earth is 993, with a 95% posterior interval of (759, 1268), up from the former prediction of 548 carbon mineral species.

Network Analysis

Network analysis, a subfield of graph theory (Otte and Rousseau, 2002; Clauset et al., 2004; Newman, 2006; Kolaczyk, 2009a; Abraham et al., 2010; Newman and Mark, 2010), is particularly useful for visualizing many variables in a multidimensional system in a digestible and meaningful way, particularly when the questions rely on the interrelationships of many entities and their properties, as is the case in mineralogical systems in the context of Earth and planetary processes. Networks are composed of nodes (or vertices) representing entities and edges (or links) between the nodes symbolizing a relationship between two connected nodes. Nodes can be sized, shaped, colored, etc. according to any variables of interest. Likewise, edges can be directed, colored, texturized, or adjusted in thickness to represent any parameter of choice, and the length of edges can be scaled in proportion to the strength of the connecting variable.
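Returning to the mineral ecology workflow behind Figure 1: given a table of mineral-locality records, the empirical LNRE frequency spectrum and a resampling-based accumulation curve can be sketched in a few lines. The records below are invented for illustration, and the Poisson lognormal model fit itself (which requires specialized statistical tooling) is omitted:

```python
import random
from collections import Counter

# Hypothetical (mineral, locality) records; the real MED holds ~94,677
# such pairs for carbon minerals.
records = [
    ("calcite", "loc1"), ("calcite", "loc2"), ("calcite", "loc3"),
    ("aragonite", "loc1"), ("aragonite", "loc2"),
    ("siderite", "loc2"),
    ("whewellite", "loc3"),  # rare: known from one locality
    ("moolooite", "loc4"),   # rare: known from one locality
]

def frequency_spectrum(records):
    """Number of species occurring at exactly k localities (the LNRE spectrum)."""
    per_species = Counter(mineral for mineral, _ in set(records))
    return Counter(per_species.values())

def accumulation_curve(records, n_trials=200, seed=0):
    """Mean species count after sampling n localities, for n = 1..all."""
    rng = random.Random(seed)
    by_loc = {}
    for mineral, loc in records:
        by_loc.setdefault(loc, set()).add(mineral)
    localities = sorted(by_loc)
    curve = []
    for n in range(1, len(localities) + 1):
        total = 0
        for _ in range(n_trials):
            sample = rng.sample(localities, n)
            total += len(set().union(*(by_loc[loc] for loc in sample)))
        curve.append(total / n_trials)
    return curve

print(frequency_spectrum(records)[1])   # 3 species occur at exactly one locality
print(accumulation_curve(records)[-1])  # sampling every locality finds all 5 species
```

With the full MED table in place of `records`, the spectrum feeds the LNRE model fit, and the fitted curve is extrapolated toward N → ∞ to estimate as-yet undiscovered diversity.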
With all of these options, it is possible to display upward of eight variables within one network. Network renderings are projections from N-1 dimensional space (where N is the number of different mineral species) into two or three dimensions, although the multidimensionality is preserved in the original data object and therefore in any statistical metrics derived from the network data. Network metrics fall into two categories, the first of which are "local" metrics that describe the role and significance of individual nodes in a network. Local metrics include degree, which is the number of links connected to a given node, and betweenness, a measure of the number of geodesic (shortest) paths that pass through a given node. The second category comprises "global" metrics, which are used to evaluate overall trends within a network and allow for comparison of different networks, such as networks of minerals of different elements, from different environments or planetary bodies, or a series of networks over a time interval.

FIGURE 1 | (A) The frequency distribution of carbon minerals on Earth's surface. The x-axis, "frequency class," is the number of localities at which a mineral occurs. The y-axis is the number of mineral species that occur at exactly the corresponding frequency class (i.e., nearly 100 carbon mineral species occur at exactly one locality). The blue line represents the Poisson lognormal LNRE model. (B) Accumulation curve for the mean number of carbon mineral species versus the number of localities sampled, N, based on the Poisson lognormal LNRE model. Today, there are 411 known carbon mineral species based on N = 92,466 sampled localities. As N approaches infinity, the median number of predicted carbon mineral species is 993 with a 95% posterior interval of (759, 1268).

Frontiers in Earth Science | www.frontiersin.org
Global metrics include density, which is the number of links divided by the number of possible links (i.e., a measure of the interconnectedness of a network), and centralization, a measure of how central a network's "most central" node is relative to how central all the other nodes are (i.e., indicating whether there are many highly interconnected nodes or a few key "broker" nodes). Additionally, there are a number of network modularity and community detection algorithms, which allow users to determine whether there are distinct groups within their network and which nodes belong to those groups. With further exploration, users can determine what characteristics are shared within each group and/or between groups. Furthermore, random forest or decision tree algorithms can offer insight into the relative importance or weight of each characteristic to the network partitioning.

Mineral Network Analysis

Mineral network analysis, which is a powerful approach to exploring complex multidimensional and multivariate systems, facilitates a holistic, integrated, higher-dimensional understanding of Earth and planetary systems (Morrison et al., 2017b; Hazen et al., 2019a,b). The renderings of Fruchterman-Reingold force-directed (Fruchterman and Reingold, 1991; Csardi and Nepusz, 2006) mineral coexistence networks herein are of two types: unipartite and bipartite. Interactive, manipulatable versions of these networks, including node labels, can be found at https://dtdi.carnegiescience.edu/node/4557.
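The local and global metrics described above can be computed directly from an edge list. A minimal sketch with an invented co-occurrence graph (degree centralization follows Freeman's formulation; betweenness is omitted for brevity):

```python
from collections import defaultdict

# Invented co-occurrence edges between mineral species (unweighted).
edges = [("chromite", "magnetite"), ("chromite", "uvarovite"),
         ("chromite", "eskolaite"), ("magnetite", "uvarovite")]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Local metric: degree = number of links touching a node.
degree = {node: len(nbrs) for node, nbrs in adj.items()}

# Global metric: density = realized links / possible links.
n = len(adj)
density = 2 * len(edges) / (n * (n - 1))

# Global metric: Freeman degree centralization -- how dominant the most
# connected node is, normalized by the star graph's maximum, (n-1)(n-2).
max_deg = max(degree.values())
centralization = sum(max_deg - d for d in degree.values()) / ((n - 1) * (n - 2))

print(degree["chromite"], round(density, 2), round(centralization, 2))
```

A chromite-like hub drives centralization upward, whereas a densely interlinked, Cu-like graph drives density upward with low centralization, which is the contrast the text draws between the Cr and Cu networks.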
Unipartite Mineral Networks

In the unipartite networks (Figures 2-4), each node represents a mineral species; the nodes are sized according to the frequency of occurrence of each species and colored according to chemistry, paragenetic mode, or structural complexity; the edges represent co-occurrence (which may or may not correspond to an equilibrium assemblage) of two mineral species at a locality on Earth's surface, and their length is scaled inversely proportional to their frequency of co-occurrence (i.e., when two species occur together more frequently, they are closer together in the graph). Note that while the nodes of Figures 2-4 are colored according to various parameters (e.g., composition, paragenetic mode), those parameters were not coded into the network layout, meaning that the network topology and any trends are strictly a function of mineral co-occurrence. A number of interesting trends can be observed in the topologies of unipartite mineral co-occurrence networks. Firstly, the copper (Cu) networks show a high density and low centralization; in the Cu network colored by chemistry (Figure 2A), there is strong chemical segregation in which sulfides (red nodes) cluster together, as do sulfates (yellow nodes) and Cu minerals containing oxygen but no sulfur (blue nodes) (Morrison et al., 2017b; Hazen et al., 2019a,b). This chemical segregation results in chemical trend lines throughout the graph, including sulfur fugacity, fS2, increasing from the bottom of the graph (oxides) to the top (through sulfates and into sulfides), and oxygen fugacity, fO2, increasing from the top left (sulfides) to the bottom (sulfates and oxides). For any variable that exhibits an embedded trendline, that trend can be used to predict the value of said variable for any node in which the value is unknown. In the case of chemical variables in mineral networks representing equilibrium assemblages, this could allow for the extraction of thermochemical parameters.
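The unipartite co-occurrence graphs described above are derived from locality assemblages: every pair of species reported at the same locality contributes to an edge weight, which a force-directed layout then turns into edge length. A minimal sketch with invented assemblages:

```python
from collections import Counter
from itertools import combinations

# Invented locality -> observed mineral assemblage.
localities = {
    "locA": {"malachite", "azurite", "cuprite"},
    "locB": {"malachite", "azurite"},
    "locC": {"cuprite", "tenorite"},
}

# Edge weight = number of localities where two species co-occur; a
# force-directed layout draws heavier (more frequent) edges shorter.
cooccur = Counter()
for assemblage in localities.values():
    for pair in combinations(sorted(assemblage), 2):
        cooccur[pair] += 1

print(cooccur[("azurite", "malachite")])  # co-occur at 2 localities
```

Sorting each assemblage before pairing keeps the edge keys canonical, so (azurite, malachite) and (malachite, azurite) accumulate into a single undirected edge.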
Secondly, Figure 2B renders the Cu network with nodes colored by crystal structural complexity (Hazen et al., 2013b; Krivovichev, 2013, 2016, 2018; Krivovichev et al., 2017, 2018). Structural complexity is a mathematical measure for evaluating the symmetry and chemical complexity of a mineral's crystal structure and IMA-approved ideal chemical formula, and converting that complexity into information, measured in bits. Krivovichev et al. (2018) hypothesize that the crystal structural complexity exhibited by minerals has increased through deep time, with the simplest structures existing at the earliest stages of mineral evolution and structures becoming increasingly complex moving into the modern day. In this network, there is segregation resulting in a trendline from the simplest crystal structures to moderately complex structures. The most complex structures are few and scattered throughout the network, an unexpected trend that begs further investigation, alongside the question of whether age of first occurrence plays a role in the structural complexity trends observed in Figure 2B. Thirdly, the chromium (Cr) network (Figure 3) has a very low density and high centralization, with the mineral phase chromite having the highest centrality (Morrison et al., 2017b; Hazen et al., 2019a,b). The most notable feature of the Cr co-occurrence network is its strong clustering by paragenetic mode, indicating that formational environment and mode is the strongest driver of Cr mineral co-occurrence. Lastly, Figure 4 illustrates the changes in carbon mineral co-occurrence through deep time. The earliest known carbon minerals are few and form a dense, highly interconnected network with low centralization.
Through time into the modern day, the density decreases slightly while the centralization becomes significantly more pronounced, forming two lobes of carbon mineral populations connected by a few key nodes of high centrality, beginning as early as 799 Ma and becoming very distinct at 251 Ma, contemporaneous with the end-Permian mass extinction. These two lobes comprise different populations of carbon mineral chemistry, with the left lobe containing a much higher proportion of organic carbon minerals and hydrous phases containing transition elements, lanthanides, and/or actinides, and the right lobe having a higher frequency of anhydrous phases lacking transition elements, lanthanides, or actinides. This unexpected trend and its underlying geologic or biologic implications are the subject of further study.

FIGURE 3 | Unipartite chromium mineral network. Force directed, unipartite, chromium (Cr) mineral network rendering. Nodes represent Cr mineral species, sized according to their frequency of occurrence and colored according to their paragenetic mode. Edges represent co-occurrence of mineral species at localities on Earth's surface; edge length is scaled inversely proportional to frequency of co-occurrence.

FIGURE 4 | Evolving unipartite carbon mineral networks. Force directed, unipartite, carbon mineral network renderings. Nodes represent carbon mineral species, sized according to their frequency of occurrence and colored according to composition. Edges represent co-occurrence of mineral species at localities; edge length is scaled inversely proportional to frequency of co-occurrence. Each network represents a cumulative time bin in order to illustrate the changes in carbon mineralogy on Earth's surface through deep time.

Bipartite Mineral Network

In the bipartite network rendering (Figure 5), the set of colored nodes represents carbon mineral species, sized by their frequency of occurrence and colored according to the age of the oldest known occurrence (Hazen, 2019; Hazen et al., 2019a). The other set of nodes, in black, represents the localities at which the carbon minerals occur, sized proportionally to their carbon mineral diversity (i.e., the number of mineral species found at a locality). The edges between nodes signify that a mineral occurs at a locality. Mineral bipartite diagrams illustrate many relationships between carbon minerals and their locations on Earth's surface. The first surprising feature of the network is the "U-shaped" (or "vase-shaped" in 3D; see "Advanced Mineral Network Visualizations" section below) locality node distribution. This topology provides a striking visual representation of mineral ecology, specifically the LNRE frequency distribution in which there are a few very common species (such as calcite and aragonite), but most species are rare. In the network graph, the most common minerals fall at the bottom of the locality "U," the frequency of occurrence quickly falls off moving up and out of the locality "U," ultimately radiating outward and around the locality nodes where the majority of carbon minerals lie, most of which have small radii (i.e., they occur at very few localities). Another related feature clearly visible in the rendering is that rare mineral species tend to occur at localities rich in other rare species, as opposed to localities dominated by the more common species. This is visible at the individual node level, but also in the overall topology of the network: the mineral diversity of the localities, and therefore the size of the locality nodes, decreases from top to bottom as the network trends from rarer mineral species into more common mineral species. This trend gives researchers exploration targets in the search for new, rare mineral species: localities already known to host other rare mineral species. This qualitative observation can be parlayed into a quantitative method, specifically affinity analysis (see "Affinity Analysis" in the future directions section below), for predicting new locations of existing mineral species, predicting which minerals are likely to occur but have not been reported at a given locality, and possibly making predictions on the most likely locations for finding new mineral species. Additionally, an embedded timeline can be observed in the carbon mineral-locality network topology. The nodes of Figure 5 are colored according to the age of first occurrence; however, their ages were not coded into the network layout, meaning that the network topology is strictly a function of mineral-locality occurrence. Despite the lack of age information encoded in the topological layout, the oldest known minerals occur at the bottom of the locality "U" and radiate up and outward as the minerals become younger, with the youngest minerals skirting around the outside of the locality "U". Observing trendlines in any network system can lead to predicting missing values, but age, in particular, offers the opportunity to pin other parameters, such as chemistry, structural complexity, bioavailability, etc., to a timeline and therefore relate these parameters to geologic, biologic, or planetary events throughout deep time.

Advanced Mineral Network Visualizations

Network renderings are projections from multidimensional space into two dimensions and, inherently, some information is lost. Therefore, it is important to explore variations in visualization techniques that allow the user to maximize the accuracy and amount of information rendered. With this in mind, we are developing 3D networks and also exploring virtual reality (VR) techniques for visualization and network manipulation.
VR offers two primary benefits for visual analytics: (1) the ability to employ true 3D layouts that are not projected to 2D displays and offer additional insight, especially for very dense networks, and (2) direct natural interaction, and observation of a network's response to such, creates an additional dimension for analysis not captured in static or non-interactive visualizations. A video demonstration of an early VR visualization prototype of the bipartite carbon mineral-locality network in Figure 5 can be viewed at https://www.youtube.com/watch?v=5GDnpqpOokU. The locality "U-shape" observed in the 2D version becomes a "vase" shape in 3D, with the most common, oldest carbon minerals at the base of the locality vase and the youngest, rarest carbon minerals radiating out of the top of the locality vase and down the sides. This and other networks can also be explored in an immersive fashion with VR.

ORGANIC CARBON MINERALOGY IN EARLY EARTH ENVIRONMENTS AND PLANETARY SYSTEMS

Currently, there are more than 50 organic mineral species approved by the IMA (Skinner, 2005; Perry et al., 2007; Echigo and Kimata, 2010; Hazen et al., 2013a; Piro and Baran, 2018), most of which form through alteration of biological materials (Oftedal, 1922; Rost, 1942; Nasdala and Pekov, 1993; Perry et al., 2007; Chesnokov et al., 2008; Witzke et al., 2015; Pekov et al., 2016; Hummer et al., 2017). Recent discoveries and new studies of organic minerals (Bojar et al., 2017; Hummer et al., 2017; Mills et al., 2017) and of minerals with metal-organic framework structures that contain metal centers bonded via molecular linkers into porous assemblies of different dimensionalities (Huskić et al., 2016) led to the formulation of novel geomimetic approaches in the design and synthesis of metal-organic frameworks (Huskić and Friščić, 2018, 2019; Li et al., 2019).
Most organic minerals observed on Earth today are oxalates and carboxylates of low nutrient value to microbes (Benner et al., 2010) and are therefore able to persist on a planet teeming with life. The presence of life limits the long-term survival of other organic crystals on modern Earth, but such crystals, including co-crystals, could have existed on early Earth and may currently exist on other planetary bodies (Hazen, 2018; Maynard-Casely et al., 2018; Morrison et al., 2018). Organic molecules can be created by abiotic processes (Glasby, 2006; Fu et al., 2007; Kolesnikov et al., 2009; McCollom, 2013; Sephton and Hazen, 2013; Huang et al., 2017). They have been shown to exist in many planetary settings, including meteorites (Cooper et al., 2001; Sephton, 2002; Pizzarello et al., 2006; Burton et al., 2012; Sephton and Hazen, 2013; Kebukawa and Cody, 2015; Cooper and Rios, 2016; Koga and Naraoka, 2017) and comets (Kimura and Kitadai, 2015), and have been detected or are hypothesized to exist on many other planetary bodies in our solar system, including Mars, Titan, Enceladus, Callisto, and Ganymede (McCord et al., 1997; Formisano et al., 2004; Cable et al., 2012, 2018; Kimura and Kitadai, 2015; Webster et al., 2015, 2018; Zolensky et al., 2015; Hand, 2018; Maynard-Casely et al., 2018). On bodies with low temperatures there is also the possibility of clathrates containing and protecting organic molecules (Kvenvolden, 1995; Buffett, 2000; Shin et al., 2012; Hazen et al., 2013c; Maynard-Casely et al., 2018). Therefore, the earliest, prebiotic minerals on Earth's surface, many of which may currently be present on other planetary bodies, were likely organic crystalline compounds, such as amino acids, nucleobases, hydrocarbons, co-crystals, clathrates, and other species that have since been consumed by cellular life here on Earth (Hazen, 2018; Morrison et al., 2018).

FIGURE 5 | Bipartite carbon mineral-locality network. Force directed, bipartite, carbon mineral-locality network rendering. Colored nodes represent carbon mineral species, sized according to their frequency of occurrence and colored according to age of first occurrence. Black nodes represent carbon mineral localities on Earth and are sized according to mineral diversity (i.e., the number of mineral species found at a locality). Edges represent the occurrence of a mineral species at a locality.

Network Structure Quantification

Many trends associated with geologic or planetary processes have been recognized in the topologies of mineral networks, and a multitude of unrecognized trends also exist within mineral network topologies and/or data objects. Therefore, it is imperative to develop statistical methods for quantifying mineral network structures and relating these structures to their underlying geologic, biologic, or planetary drivers (Hystad et al., 2019b). Such methods will allow for the systematic study of network features, such as degree distribution, distribution of shared partners, centrality, clustering, connected subgraphs, and cliques, and will employ an exponential random graph model (ERGM) (Frank and Strauss, 1986; Snijders, 2002; Hunter and Handcock, 2006; Snijders et al., 2006; Hunter, 2007; Pattison et al., 2007; Lusher et al., 2012). The models will determine whether or not the substructures within a network occur more often than would be expected by chance. They will also determine which attributes are most significant to mineral co-occurrence, or any other relationship of interest, including, for example, whether or not minerals of the same paragenetic mode tend to be found at the same location or whether there is a more influential parameter. The ERGM will be expanded to include multilevel networks (Wang et al., 2013), such as one of mineral species, their localities, and their chemical compositions.
The multilevel approach will provide a means to model the complex dependence structures and interactions amongst the many network parameters. Additionally, we will employ a latent network model, which models unobserved factors that underlie the expression of network structures by incorporating latent variables (Kolaczyk, 2009b; Kolaczyk and Csárdi, 2014).

Natural Kind Clustering

The physical and chemical attributes of minerals are the direct product of, and as a result encode, their formational conditions and any subsequent weathering and alteration. Therefore, multivariate correlation of these attributes will allow for association of minerals with their paragenetic modes, resulting in a number of distinct "natural kinds" within a mineral species (Hazen, 2019). For example, diamond may have many "natural kinds," including stellar vapor-deposited diamonds (Hazen et al., 2008; Ott, 2009; Hazen and Morrison, 2019), Type I (Davies, 1984; Shirey et al., 2013; Sverjensky and Huang, 2015), Type II (Smith et al., 2016), and carbonado (Heaney et al., 2005; Garai et al., 2006). Cluster analysis and classification algorithms will allow characterization and designation of the various natural kinds of each mineral species and thereby relate the wealth of information contained within mineral samples to their geologic, biologic, and/or planetary origins. Designation of the natural kinds of minerals within the earliest environments of our universe is given in Hazen and Morrison (2019) and Morrison and Hazen (2020), and preliminary work is underway to classify the natural kinds of many mineral species, with a particular focus on carbon-bearing phases, including diamond, calcite, and aragonite (Boujibar et al., 2019; Zhang et al., 2019).
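As a toy illustration of the clustering step, a minimal k-means pass over invented two-feature attribute vectors (stand-ins for, say, nitrogen content and a trace-element score in diamond; both values made up) separates candidate "natural kinds." Real analyses would use many more attributes and more robust algorithms:

```python
# Minimal k-means on hypothetical mineral attribute vectors.
def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # low-N cluster (Type II-like)
          (0.8, 0.9), (0.9, 0.85), (0.85, 0.95)]  # high-N cluster (Type I-like)
centers, groups = kmeans(points, centers=[(0.0, 0.0), (1.0, 1.0)])
print(len(groups[0]), len(groups[1]))  # 3 3
```

Each resulting group is a candidate "natural kind" whose members share attribute space, and whose shared formational conditions can then be investigated.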
Affinity Analysis

The mineral co-occurrence information stored in the MED and Mindat.org provides the means to make predictions on the most likely locations to find certain mineral species, geologic settings, deposits, and/or planetary environments, as well as a probabilistic list of minerals likely to occur at any given locality (Prabhu et al., 2019b). Affinity analysis is a machine learning method that discovers relationships between various entities in a dataset. This method analyzes co-occurrence data and identifies strong rules based on associations between entities. It was first introduced by Agrawal and Srikant (1994), who presented two algorithms for creating association rules (the Apriori and AprioriTid algorithms). Apriori uses a bottom-up approach in which subsets of entities that frequently co-occur are generated as candidates for testing against the data. The number of occurrences of the candidates is then compiled, and patterns observed in the occurrence of these candidates are used to generate rules. For example, consider a small carbon mineral dataset of locality assemblages. Based on the occurrence of candidates, we can create rules such as the following:

• 75% of the sets with malachite also contain calcite.
• 75% of the sets with malachite also contain azurite.
• 50% of the sets with malachite and calcite also contain azurite.
• 25% of the sets with malachite and calcite also contain dolomite.
• 25% of the sets with malachite and azurite also contain dolomite.

Such association rules can be used to predict the probability of occurrence of certain minerals or mineral assemblages, given the currently known mineralogy of a locality. Therefore, this method allows for prediction of the most probable locations on Earth or other planetary bodies in which to find mineral species or mineral assemblages of interest, as well as certain geologic settings, deposits, or environments.
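Rules of the kind listed above reduce to support and confidence counts over locality assemblages. A minimal sketch with an invented dataset (not the dataset from the text, which is not reproduced here):

```python
# Invented locality assemblages; values are illustrative only.
assemblages = [
    {"malachite", "calcite", "azurite"},
    {"malachite", "calcite", "azurite", "dolomite"},
    {"malachite", "calcite"},
    {"malachite", "azurite"},
    {"calcite", "dolomite"},
]

def confidence(antecedent, consequent, data):
    """Apriori rule strength: P(consequent present | antecedent present)."""
    with_ante = [s for s in data if antecedent <= s]  # sets containing the antecedent
    if not with_ante:
        return 0.0
    return sum(1 for s in with_ante if consequent <= s) / len(with_ante)

# A "75% of the sets with malachite also contain calcite"-style rule:
print(confidence({"malachite"}, {"calcite"}, assemblages))  # 0.75
```

Applied to the full Mindat.org occurrence table, high-confidence rules translate directly into ranked predictions of unreported minerals at a locality, or of unreported localities for a mineral.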
Likewise, this method can assess the probability of finding any mineral species at a locality in question. In a preliminary case study on Mindat.org mineral occurrence data, pair-wise correlations (i.e., candidates of size 2) were used to predict a likely locality of the mineral species wulfenite. The model predicted the Surprise Mine, Cookes Peak District, Luna County, NM, United States as a very likely new location of wulfenite (locality 3 ). Erin Delventhal, a member of the Mindat.org management team, validated this prediction by going to Cookes Peak and positively identifying an occurrence of wulfenite (image of collected sample 4 ). These preliminary results highlight the promise of discovery with affinity analysis in mineral systems.

GPlates Plate Tectonic Reconstructions

GPlates is an open-source and cross-platform plate reconstruction software package that enables users to incorporate any vector or raster data into digital community plate motion models (Merdith et al., 2017; Müller et al., 2018; Young et al., 2019). Incorporation of mineralogical data into plate tectonic reconstructions (Figure 6) will illuminate tectonic drivers of and feedbacks on mineralization through deep time, such as identifying tectonic settings that preferentially generate or focus particular mineral species. We will begin to answer questions related to the subduction conditions (i.e., depth of mantle wedge interaction and estimation of slab angle, rate of subduction, devolatilization of the subducting slab, etc.) of subduction-related mineralization, characterize mineralization associated with mantle plume and hydrothermal settings and collisional regimes, and identify mineralization clearly not controlled by tectonic influences. A video of a preliminary reconstruction model of carbon mineralization through deep time (from the modern day to 1.0 Ga) can be found at https://4d.carnegiescience.edu/explore-our-science.
Quantifying Preservational Bias

Preservational and sampling biases are inherent to geologic materials; their magnitude is not uniform through time or across a system and, therefore, can be very difficult to quantify. Recent data-driven studies of mineralization associated with the Rodinian assembly (Liu et al., 2017b, 2018b) have examined the differences in the mineralogy and geochemistry of igneous rocks associated with the assembly of the Rodinian supercontinent, as igneous rocks of Rodinian age tend to have different geochemical signatures than those from other supercontinents. The question remains: how much of the trend is related to conditions and processes during assembly, and how much is related to preservational and sampling bias? This is evident in Figure 6, where major increases in carbon mineralization are associated with the younger mega-continent of Gondwana and the supercontinent of Pangea, while the signal related to Nuna and Rodinia assembly is more subdued in the cumulative frequency plot. This question must also be asked of many other formational environments, including those relevant to carbon mineralization (e.g., carbonate platforms, carbonatites). Ongoing and future studies will attempt to quantify preservational bias in the mineralogical record by examining factors that contribute to preservation, such as mineral characteristics (e.g., solubility, hardness) and common tectonic settings of mineral formation. It is also important to consider human factors that govern sampling, including the economic significance of the material, its physical characteristics (e.g., color, crystal habit, size, luster), and its scientific importance, which may result in sampling bias within datasets. These data will be used to develop statistical models for prediction of the amount of erosional loss through deep time.
Microbial Populations and Mineral Systems

An underlying driving principle of studying Earth's mineralogy through deep time is to gain insight into the co-evolution of the geosphere and biosphere. Mineral evolution studies characterize Earth's mineralogy during the time of life's emergence and throughout its evolution, but how do we garner an understanding of the direct influences and feedback systems between Earth materials (e.g., the "geochemical environment") and microbial populations? Given the dearth of ancient microbial samples, we can examine modern-day equivalents, particularly in geochemical environments most likely to be analogous to ancient environments (e.g., hydrothermal vents, hot springs). Therefore, a study is underway to employ advanced analytics and visualization, including network analysis, to characterize the complex, multidimensional, multivariate relationships between the metagenomes of extant microbial populations and their geochemical environments (Buongiorno et al., 2019a,b, 2020; Giovannelli et al., 2019). Figures 7A,B illustrate a preliminary look at bipartite networks of sampling site locations and their metagenomes (A) and mineralogy (B). A multilevel network approach (see "Network Structure Quantification" section above) and transfer learning techniques will be used to relate location, metagenomic data, and mineralogy (Figures 7A,B), as well as aqueous geochemistry, temperature, pressure, pH, salinity, and more, and to generate models quantifying the complex relationships therein. These studies are examining trends in metagenomic and geochemical parameters across a single arc system (Barry et al., 2019a,b), across multiple systems such as volcanic arcs, mid-oceanic ridges, and hot spots, and across disparate systems around our planet, as depicted in Figures 7A,B (e.g., including settings like acid-mine drainage, permafrost, and hot springs).
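The bipartite site networks of Figures 7A,B boil down to an incidence structure between sampling sites and the features observed there (mineral species or enzyme classes). A minimal sketch with invented observations:

```python
# Invented site -> observed minerals (the same shape works for enzyme classes).
site_minerals = {
    "hot_spring_1": {"calcite", "opal", "sulfur"},
    "vent_2": {"pyrite", "anhydrite"},
    "acid_mine_3": {"jarosite", "pyrite", "gypsum", "melanterite"},
}

# Site node size in the rendering scales with its diversity.
diversity = {site: len(minerals) for site, minerals in site_minerals.items()}

# Bipartite edge list (site, mineral): one edge per occurrence.
edges = [(site, m) for site, minerals in site_minerals.items()
         for m in sorted(minerals)]

print(diversity["acid_mine_3"], len(edges))
```

Layering a second incidence structure (site to enzyme class) over the same site nodes gives the multilevel view that the text proposes for relating mineralogy to metagenomic function.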
Targeting closely related systems, such as a single volcanic arc or all hotspot-related hot spring systems, allows tight correlation of changes in geochemical conditions with changes in microbial communities, because there is less variance in the environmental parameters, whereas a more global comparison allows for examination of all possible environmental and microbial variables. Preliminary results show distinct, complex trends in geochemical parameters related to changes in the protein functions of microbial populations.

FIGURE 6 | GPlates plate tectonic reconstruction snapshots with carbon mineral occurrences, and a cumulative frequency plot highlighting that some increases in carbon mineral occurrences are contemporaneous with changes in the supercontinent cycle. Full video (modern day to 1.0 Ga) available at https://4d.carnegiescience.edu/explore-our-science.

FIGURE 7 | (A,B) Bipartite microbial population-locality and mineral-locality networks. Force-directed, bipartite networks of a preliminary analysis of interaction between metagenomes and mineralogy of the same sampling sites. (A) Metalloprotein oxidoreductases (colored nodes; enzyme commission EC1 class) and the sites where they were found (black nodes). Enzyme nodes are sized according to their counts and colored by their subclass. (B) Bipartite network of the mineral diversity at the same sites. Mineral nodes in gray, sized according to their mineral diversity; site nodes in black.

DISCUSSION

Motivated by understanding Earth's mineral diversity and distribution through deep time, the bioavailability of redox-sensitive elements during the emergence and evolution of life, biosignatures at mineralogical and planetary scales, and the underlying geologic and biologic drivers of mineralization, we have made many discoveries in carbon science, including: (1) Earth's mineralogy is a function of the physical, chemical, and biological processes that are different at each stage of planetary evolution. (2) Earth's mineral diversity and spatial distribution follow an LNRE trend, a trend that is visually represented in the topology of mineral-locality bipartite network renderings and is likely a planetary-scale biosignature. (3) Prediction of as-yet undiscovered mineral species, which spurred the Carbon Mineral Challenge, an initiative that has reported 30 new carbon mineral species in less than 3 years. (4) Recognition of embedded trend lines in network topologies, such as those of chemical composition, crystal structural complexity, time, and paragenetic mode. In addition, this work has developed new tools for visualization of mineral systems, including mineral networks, as well as 3D and VR platforms thereof. Furthermore, we are exploring and are on the brink of discoveries related to (1) quantifying mineral network structures and their underlying geologic, biologic, and planetary drivers, (2) predicting mineral paragenetic mode on Earth and other planetary bodies through natural kind clustering, (3) predicting new locations of mineral occurrence and missing minerals at specified locations on Earth's surface via affinity analysis, (4) investigating the tectonic drivers of mineralization through deep time through integration with paleotectonic reconstructions, (5) understanding the complex feedback systems controlling the relationships between mineralogy and the geochemical environment and microbial populations and their enzymatic functions, and (6) quantifying preservational and sampling bias in the mineralogical record. These recent discoveries and new research directions show great promise for further unraveling the complexities surrounding carbon mineral formation, the deep carbon cycle, and life's co-evolution with Earth materials and processes.

AUTHOR CONTRIBUTIONS

The individuals listed with the following projects or databases provided discussion, performed analyses, and/or collected/curated data.
JG, RH, RD, and SM: the International Mineralogical Association list of mineral species (RRUFF.info/IMA). CL, DH, JG, JR, RH, RD, SR, and SM: the Mineral Evolution Database (MED). AE, AP, JR, RH, RD, SM, and SK: the Mineral Properties Database (MPD). JR: Mindat.org. AE, AP, JG, JR, PF, RH, RD, SM, and SK: Global
Small bowel bleeding: a comprehensive review The small intestine is an uncommon site of gastro-intestinal (GI) bleeding; however, it is the commonest cause of obscure GI bleeding. It may require multiple blood transfusions, diagnostic procedures and repeated hospitalizations. Angiodysplasia is the commonest cause of obscure GI bleeding, particularly in the elderly. Inflammatory lesions and tumours are the usual causes of small intestinal bleeding in younger patients. Capsule endoscopy and deep enteroscopy have improved our ability to investigate small bowel bleeds. Deep enteroscopy also has the added advantage of therapeutic potential. Computed tomography is helpful in identifying extra-intestinal lesions. In cases of difficult diagnosis, surgery and intra-operative enteroscopy can help with diagnosis and management. The treatment is dependent upon the aetiology of the bleed. An overt bleed requires aggressive resuscitation and immediate localisation of the lesion for institution of appropriate therapy. Small bowel bleeding can be managed by conservative, radiological, pharmacological, endoscopic and surgical methods, depending upon indications, expertise and availability. Some patients, especially those with multiple vascular lesions, can re-bleed even after appropriate treatment and pose a difficult challenge to the treating physician. INTRODUCTION Gastro-intestinal (GI) bleeding is a common and perplexing problem encountered by gastroenterologists. The small intestine is the least common site of GI bleeding but is the commonest cause of obscure GI bleeds. It is estimated that upper gastro-intestinal bleeding (UGIB) (from the oesophagus to the duodenum), lower gastro-intestinal bleeding (LGIB) (from the colon and anorectum) and obscure bleeding account, respectively, for 50%, 40% and 10% of total GI bleeding [1].
The small bowel is called 'the dark continent of the GI tract' because of its inaccessibility to endoscopists, due to its intra-peritoneal location, excess mobility and long length. Approximately 5% of GI bleeding occurs from the small bowel, defined as the region between the ligament of Treitz and the ileocecal valve [2]. Traditionally, various reports have included small bowel bleeding in LGIB (distal to the ligament of Treitz) or as a cause of obscure gastro-intestinal bleeding (OGIB). Recent advances have led to reclassification of GI bleeding into three categories: upper-, mid- and lower GI bleeding. If the source of GI bleeding is between the ampulla of Vater and the terminal ileum, it is designated as mid-GI bleeding [3,4]. Because of an inability to visualize the small bowel properly, patients with a small bowel GI bleed usually end up undergoing multiple diagnostic investigations, requiring multiple hospitalisations and transfusions; therefore, it is necessary to identify the cause and site of haemorrhage accurately, so as to institute appropriate, effective therapy. In the last decade, the availability of advanced diagnostic innovations like capsule endoscopy (CE), double-balloon enteroscopy (DBE) and computed tomography enterography has led to a better understanding of the aetiological profile of small bowel bleeding, and there has been a paradigm shift in the management of small bowel bleeding, with the majority of cases now being treated non-surgically. In this review, we will discuss the aetiology, current diagnostic approach and the therapeutic options available for managing patients with a small bowel bleed. AETIOLOGY A variety of lesions may result in small bowel bleeding, with the aetiology of bleeding being different in various age groups (Table 1).
The commonest lesions responsible for small bowel GI bleeding are vascular, with other causes being tumours, inflammatory lesions and medications, as well as some rare causes like haemobilia, haemosuccus pancreaticus and aorto-enteric fistula. Vascular lesions and small bowel lesions induced by non-steroidal anti-inflammatory drugs (NSAID) are the common causes of small bowel GI bleeding in the elderly, whereas tumours, Meckel's diverticulum, Dieulafoy's lesion and Crohn's disease are the common causes in patients under 40 years of age [5,6]. Zhang et al. studied 385 OGIB patients and found that, in elderly patients (>65 years), vascular anomalies (54.35%), small intestinal ulcer (13.04%) and small intestinal tumours (11.96%) were the common causes of small intestinal bleeding; in middle age (41-64 years), vascular anomalies (34.82%), small intestinal tumours (31.25%) and non-specific enteritis (9.82%) were the major causes; and in young adults (<40 years), the leading causes were Crohn's disease (34.55%), small intestinal tumours (23.64%) and non-specific enteritis (10.91%) [6]. Angiodysplasias (angioectasias or vascular ectasias) are abnormally dilated, tortuous, thin-walled vessels, involving small capillaries, veins and arteries (Figures 1a, 1b and 1c) [7,8]. They are visualized within the mucosal and submucosal layers of the gut, are lined by endothelium with little or no smooth muscle, and lack inflammatory changes or fibrosis [8,9]. They are the most common cause of small bowel bleeds. In a systematic review by Liao et al. that included 227 studies and 22 840 small bowel capsule endoscopies, OGIB, at 66%, was the most common indication and angiodysplasia was the most common underlying lesion (50%) [10]. Meyer et al. reviewed 218 cases of arterio-venous malformations (AVM) and found that the cecum or right colon was the most common location (78%), whereas the jejunum (10.5%), ileum (8.5%) and duodenum (2.3%) were other sites for AVM [11].
Angiodysplasia is associated with various clinical conditions and syndromes. Bleeding from angiodysplasia in patients with aortic stenosis (AS), termed Heyde's syndrome, is a well-known clinical syndrome [12-16]. It has been shown that high shear stress in aortic stenosis causes shear-dependent cleavage of high-molecular-weight multimers of von Willebrand's factor (vWF), leading to acquired vWF deficiency [17]: vWF is essential for the adhesion and aggregation of platelets to the sub-endothelium of damaged blood vessels. Aortic valve replacement has ameliorated the acquired vWF abnormality, suggesting an association between them [17]. An American Gastroenterology Association technical review in 2007 concluded that, even if there is an association between aortic stenosis and angiodysplasia, it is weak and often exaggerated [5]. It is proposed that patients with AS have previously unrecognised latent intestinal angiodysplasia that bleeds as a result of this acquired haematological defect. Chronic renal failure (CRF) is another condition that is associated with an increased frequency of GI angiodysplasia. Karagiannis et al. studied 17 CRF patients and 51 patients with normal renal function who had presented with OGIB; 47% of patients with CRF had small bowel angiodysplasia, as compared with 17.6% of those with normal renal function [18]. Another study of patients who were on haemodialysis made similar observations, and these lesions were found to be more common in the ileum [19]. Uraemic platelet dysfunction is one of the supposed mechanisms for the increased risk of bleeding in patients with CRF [20]. In a recent study, hypertension, ischaemic heart disease, arrhythmias, valvular heart disease and congestive heart failure were associated with angiodysplasia [21]. Telangiectasias usually have associated cutaneous and mucous membrane involvement, in contrast to angiodysplasias, where only the GI tract mucosa is involved.
Telangiectasias lack capillaries and consist of direct connections between arteries and veins, with excessive layers of smooth muscle without elastic fibres [22]. Hereditary haemorrhagic telangiectasia (HHT) is a condition commonly associated with small intestinal telangiectasia; it usually presents with epistaxis at a younger age, and GI bleeding is a delayed manifestation that usually does not occur until the fifth or sixth decade of life [23]. Telangiectasias occur throughout the GI tract but are more common in the stomach and duodenum, and these patients usually present with iron-deficiency anaemia (IDA), with recurrent GI bleeding occurring in 15-20% [24,25]. Turner's syndrome and scleroderma are other clinical conditions that are associated with GI telangiectasia [26,27]. Dieulafoy's lesion is a rare cause of GI bleeding that can sometimes be massive and life-threatening [28]. It is most commonly located in the stomach, and the small bowel is an uncommon site [28]. In a study by an Austrian group of 284 patients with mid-GI bleeding, Dieulafoy's lesion was found in 10 patients (3.5%) (in the proximal jejunum in nine patients and the ileum in one patient) [29]. Norton et al. studied 4804 episodes of acute GI bleeding, in which they identified 90 Dieulafoy's lesions in 89 patients, but only 2% of these lesions were located in the jejunum [30]. In another review of 249 cases of Dieulafoy's lesion, only 26 cases were identified in the small bowel (beyond the ligament of Treitz) [31]. Small bowel varices are large, portosystemic venous collaterals occurring in the small intestine; they are most commonly associated with portal hypertension or abdominal surgery and are uncommon causes of GI bleeding [32]. In a review of 169 patients with bleeding ectopic varices, 17% of patients had varices in the duodenum, 17% in the jejunum or ileum, and 26% of patients bled from peristomal varices [32].
In a survey by the Japan Society of Portal Hypertension, which included 173 patients with ectopic varices, duodenal, jejunal, ileal and peristomal varices were present in 32.9%, 4%, 1.2% and 5.8% of patients, respectively [33]. A study of 37 patients with cirrhosis and portal hypertension who underwent CE reported that small bowel varices were seen in 8.1% of patients [34]. In a report from our centre, of 41 patients with portal hypertension who underwent ileocolonoscopy, ileal varices were observed in 21%; also, one-third of these patients had ileal mucosal changes suggestive of portal hypertensive ileopathy [35]. Small bowel tumours are uncommon causes of GI bleeding, with primary small bowel tumours comprising about 5% of all primary GI tract neoplasms [36]. Small bowel tumours have been reported to be the second most common cause of small bowel bleeding, accounting for 5-10% of cases [37]. In a series of 49 patients, Ciresi et al. reported that benign tumours more commonly presented with acute GI haemorrhage (29% vs 6%) and were more often asymptomatic (47% vs 6%), as compared with malignant small bowel tumours [38]. Adenocarcinoma is the most common primary malignancy of the small bowel, accounting for 35-50% of small bowel tumours, whereas carcinoid tumours account for 20-40%, lymphomas 14%, and sarcomas 11-13% [39,40]. Adenocarcinomas are more common in the duodenum and proximal jejunum, whereas lymphomas and carcinoid tumours are most frequently located in the distal small bowel [40-43]; the sarcomas are evenly distributed throughout the small bowel. Data from the Connecticut Tumor Registry showed that the most common location of small bowel tumours was the ileum (29.7%), followed by the duodenum (25.4%) and the jejunum (15.3%) [44]. Gastro-intestinal stromal tumours (GIST) probably originate from the interstitial cells of Cajal, with the stomach being the most common location (50-60%), followed by the small intestine (30-35%) [45].
GI bleeding is usually due to compression, ischaemia or infiltration of the overlying mucosa by these submucosal tumours. In a series of 47 patients with GIST, acute abdomen and small bowel bleeding were the common presenting symptoms [46]. Another study, by Vij et al., reported GI bleeding to be the most common clinical presentation of GIST (69.6%), and the jejunum (17.4%) was the most common site, followed by the ileum (6.6%) and duodenum (3.3%) [47]. Other neoplasms that can be seen in the small bowel are leiomyoma, enteropathy-associated T-cell lymphoma (EATL), Kaposi sarcoma, polyposis syndromes and metastasis to the small bowel [48-52]. Small bowel ulcers are another important cause of GI bleeding. Although most studies report angiodysplasia as the commonest cause of OGIB, one study involving 385 OGIB patients from India reported small bowel ulcers or erosions (156 patients) as the most common cause of OGIB [53]. The authors were not able to characterize all the ulcers, but Crohn's disease (Figures 2 and 3), intestinal tuberculosis (Figure 4) and NSAID-induced small bowel ulcers (Figure 5) were responsible for OGIB in 42, 12, and 12 patients, respectively [53]. The prevalence of small bowel ulcers increases with age, with a reported frequency of 13.04% in patients over 65, as compared with 7.27% in patients under 40 years [6]. Apart from Crohn's disease, tuberculosis and NSAID enteropathy, other causes of small bowel ulcers could be tumours, medications, non-specific ulcers, idiopathic chronic ulcerative enteritis and celiac disease [54-56]. Various infections of the small bowel, including tuberculosis, enteric fever and parasitic infections like hookworm (Figure 6), can also cause small bowel bleeding. In a study involving 40 OGIB patients, small bowel tuberculosis was responsible for bleeding in 10% [57].
GI bleeding can also occur in enteric fever, and one study on an outbreak of 3010 cases of enteric fever reported melena in 38% of the patients, with the ileocecal area being the most common site of involvement (72%); the ileum was involved in only 3% of patients [58]. Meckel's diverticulum is a common congenital abnormality of the small bowel, a result of incomplete closure of the vitelline duct, and affects 2-3% of the population [59]. The bleeding usually results from ulceration of the ectopic gastric mucosa within the diverticulum. In the Mayo Clinic experience of 1476 patients with Meckel's diverticulum, seen from 1950 to 2002, only 16% of patients were symptomatic, with GI bleeding being the most common presentation in adults (38%) [60]. Its diagnosis is difficult, and the technetium-99m (99mTc) pertechnetate scan (Meckel's scan) has a sensitivity and specificity of 85% and 95%, respectively [60]; however, in adults the sensitivity of Meckel's scan falls because of the presence of less gastric mucosa in the diverticulum, as compared with children (63% vs 78%) [60-62]. False-negative results can also occur because of inadequate gastric mucosal cells, inflammatory changes causing oedema or necrosis, presence of outlet obstruction of the diverticulum or low haemoglobin levels [63,64]. In false-negative cases, mesenteric arteriography or DBE can help in achieving a correct diagnosis. Zheng and co-workers reported 28 children with OGIB who had a negative Meckel's scan; in 10 patients, Meckel's diverticulum was diagnosed by DBE [63]. CE is another diagnostic modality, but there is a risk of capsule retention [65,66]. Jejuno-ileal diverticula are uncommon causes of small bowel bleeding, with a reported frequency of 1.1-2.3% [67,68]; they usually occur at the mesenteric border, are usually multiple, and are more common in the jejunum.
The majority of these diverticula are asymptomatic, with GI bleeding occurring in only 3.4-8.1% of patients with small bowel diverticula; but whenever the bleeding occurs, it is usually massive and recurrent [69-71]. Aortoenteric fistula is a rare, life-threatening condition and is almost always seen secondary to reconstructive aortic aneurysmal surgery. It typically involves the third portion of the duodenum and presents with a herald bleed, followed by massive life-threatening bleeding [72,73]. Haemobilia is a rare cause of OGIB and is due to abnormal communication between the vessels and the biliary system. It is difficult to diagnose; however, it should be suspected in any patient with a prior history of blunt abdominal trauma or medical procedures [74]. It usually presents as melena (90%), haematemesis (60%), biliary colic (70%) or jaundice (60%) [75]. The bleeding can occur periodically, and it can also manifest itself as massive GI haemorrhage. In recent studies, the most common cause of haemobilia is iatrogenic trauma (65%), whereas earlier studies had reported accidental trauma (38.6%) as the most common cause of haemobilia [76]. Side-viewing endoscopy can directly visualize clot extrusion or blood oozing from the papilla of Vater. Cross-sectional imaging can also help in diagnosis by showing the presence of blood in the gall bladder and the biliary tree, and can also recognise various causes of haemobilia. Endoscopic retrograde cholangiopancreatography (ERCP) is rarely helpful in discerning the source of haemobilia but may be helpful in clearing the blood clots and relieving the biliary obstruction. Selective visceral arteriography of the celiac axis or superior mesenteric artery is the most definitive investigation and also has therapeutic potential [76]. Haemosuccus pancreaticus is another unusual cause of GI bleeding, from a peripancreatic blood vessel into the pancreatic duct.
It usually occurs in the setting of chronic or acute pancreatitis [77]. Other causes are various tumours, vascular lesions, congenital anomalies, iatrogenic injury or trauma [77]. Patients typically present with intermittent epigastric pain, GI bleeding (melena, haematemesis, haematochezia) and hyperamylasemia. The bleeding may be intermittent or sometimes massive. The diagnostic strategy is similar to that for haemobilia. Selective arteriography of the celiac trunk and superior mesenteric artery is the most sensitive diagnostic tool, with sensitivity of up to 96% [78]. There are various other causes of small bowel bleeding, such as radiation enteritis, mesenteric ischaemia and endometriosis [79]. Additionally, in tropical countries, intestinal infestation by worms can be an important cause. One study from India, in which 21 cases of obscure GI bleeding were evaluated by push enteroscopy, found worms as the cause in 28.5% of patients [80]; therefore, a proper stool examination should be done in patients from tropical countries before undertaking further invasive and expensive investigations. MANAGEMENT OF SMALL BOWEL BLEED Assessment The initial assessment of the patient with small bowel bleeding must include a good clinical history and physical examination. A small bowel bleed may present as occult (IDA) or overt (melena or haematochezia) bleeding. It may be persistent or recurrent and can be massive, leading to shock. The history should include details of medications (NSAID, aspirin and anticoagulants), radiation therapy, any coagulation disorder or cirrhosis, trauma, prior surgery, and recent endoscopic intervention. A family history of bleeding, recurrent epistaxis and cutaneous telangiectasia may suggest HHT. Pigmented lip lesions and a family history of polyposis may point towards a diagnosis of Peutz-Jeghers syndrome, and the presence of spider angiomata and caput medusae will suggest portal hypertension.
A history of chronic pancreatitis should be sought, which may be a clue to haemosuccus pancreaticus. The classical triad (the Sandblom triad) of haematemesis, upper abdominal pain and jaundice may point toward haemobilia. Painless bleeding may suggest vascular lesions, whereas painful bleeding may be due to small bowel tumours or NSAID-related GI injury. As stated earlier, a history of passage of worms must be carefully looked for, especially in tropical regions. Diagnosis Small bowel bleeding localization used to be a tedious and tough job for gastroenterologists but, due to recent advances in imaging techniques, there has been a paradigm shift in the evaluation of patients with small bowel bleeding. Newer techniques such as capsule endoscopy, double-balloon enteroscopy, single-balloon enteroscopy (SBE), spiral enteroscopy (SE) and computed tomography enterography play a key role in the diagnosis of small bowel bleeding. Small bowel radiography used to be the main diagnostic modality for evaluating patients with small bowel bleeding but, with the advent of deep enteroscopy and newer cross-sectional imaging modalities, its use is declining. The diagnostic yield of small bowel radiography has been reported to be 5-10% in patients with suspected small bowel bleeding [81,82]. In a meta-analysis, the yields of small bowel barium radiography were 8% for any findings and 6% for clinically significant findings; in contrast, the yields of CE were 67% and 42%, respectively [83]. Small bowel radiography is unlikely to be helpful in diagnosing vascular lesions, which are the most common cause of small bowel bleeds, but may help in localizing mucosal lesions in inflammatory bowel disease, tuberculosis, ulcers or small bowel tumours. The development of CT enterography (CTE) has led to improved imaging of the small bowel and its surrounding structures but, for good visualisation and better delineation of mural details, it is essential to have adequate bowel luminal distension with neutral oral contrast.
It can also help in localisation of active bleeding, as the presence of active GI bleeding would be seen as a focal area of hyperdense attenuation in the bowel lumen on a plain scan, or as a focal area of contrast enhancement or extravasation into the lumen on a contrast-enhanced study [84]. Lee and co-workers evaluated the diagnostic performance of CTE in 65 patients with OGIB [85]. The sensitivity, specificity, positive predictive value and negative predictive value of CT enterography in diagnosing the underlying cause were 55.2%, 100%, 100% and 71.1%, respectively, and a history of massive bleeding was associated with a higher diagnostic yield of CTE. When CTE is compared with CE, the results are contradictory. One study reported a low detection rate for CTE in comparison with the diagnostic yield of CE (30.08% vs 57.72%) [86]; however, another study reported a higher detection rate for CTE when compared with CE (88% vs 38%, respectively) [87]. The limitation of CTE is that it cannot diagnose flat lesions such as ulcers, superficial erosions, and vascular lesions (angiodysplasias or AVM) [88]; but CTE better detects small bowel tumours, which can sometimes be missed by CE, especially those tumours with a predominantly exophytic component [89]. Multi-detector CT angiography does not require bowel loop distension and is performed using an intravenous contrast agent. It identifies the site of active bleeding as a focal area of hyperattenuation or contrast extravasation in the bowel lumen and has a higher sensitivity for detecting active GI bleeding than for OGIB [90]. Helical CT angiography has been shown to be more sensitive than mesenteric angiography in detecting active haemorrhage, with bleeding rates as low as 0.3 mL/min being detected in animal models. This is better than the detection threshold of 0.5 mL/min of mesenteric angiography and is close to the detection threshold of 0.2 mL/min of RBC scintigraphy [91].
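The four accuracy measures quoted for CTE all derive from a single 2x2 contingency table; a minimal sketch (the cell counts below are illustrative, chosen only to approximately reproduce the reported percentages, and are not taken from Lee et al.'s paper):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV (as percentages) from a 2x2 table."""
    sensitivity = 100.0 * tp / (tp + fn)  # detected / all who have the lesion
    specificity = 100.0 * tn / (tn + fp)  # ruled out / all who do not
    ppv = 100.0 * tp / (tp + fp)          # positive predictive value
    npv = 100.0 * tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative cell counts: 16 true positives, 0 false positives,
# 13 false negatives, 32 true negatives.
sens, spec, ppv, npv = diagnostic_metrics(tp=16, fp=0, fn=13, tn=32)
# sens ~ 55.2, spec = 100.0, ppv = 100.0, npv ~ 71.1
```

Note that with zero false positives, specificity and PPV are both 100% by construction; NPV stays below 100% because of the missed (false-negative) cases, which is exactly the pattern reported for CTE.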
In a meta-analysis, the pooled sensitivity and specificity of CT angiography in acute GI bleeding were reported to be 89% and 85%, respectively [92]. However, the inability to perform therapeutic procedures is a major limitation of CT angiography. Catheter angiography or digital subtraction angiography (DSA) is performed by intra-arterial injection of contrast following selective and super-selective cannulation of visceral arteries. A higher diagnostic yield of 61-72% has been reported in patients with active bleeding, in contrast to a low diagnostic yield of <20% in patients with inactive bleeding [93]. The advantage of this technique is the ability to perform embolization of bleeding vessels with various tools, such as absorbable gelatine pledgets, polyvinyl alcohol, microspheres, cyanoacrylates or microcoils, used alone or in combination. The success rate approaches 100% if the active bleeding site is identified, with a complication rate of less than 5% [94]. To increase the diagnostic yield, provocative angiography has also been described, with administration of heparin, thrombolytics and vasodilators that provoke bleeding and thus increase the yield of angiography [95]; however, with the availability of better imaging techniques, such a risky approach is usually not recommended. Radionuclide imaging is performed with 99mTc-labelled red blood cells and detects active bleeding. Any extravasation of radionuclide from the vascular space can be recognised as an area of concentration and slower clearing of the activity against the background [94]. Bleeding rates as low as 0.1 mL/min can be identified, and the accuracy of scintigraphy in localizing bleeding varies from 40 to 100%. However, inaccurate localization of the bleeding site is observed in 25% of patients, and this is one of the most important limitations of scintigraphy [96,97].
The advantages of scintigraphy include its ease of performance, lack of need for patient preparation, good tolerability and high sensitivity at low bleeding rates. Its role in Meckel's diverticulum has already been highlighted in the previous section. Push enteroscopy (PE) can also help in diagnosing causes of small bowel bleeding, especially if located in the proximal small bowel. Although non-therapeutic Sonde enteroscopy can visualize the whole intestine, it takes many hours to complete and is now not used by most centres. PE allows both diagnostic and therapeutic applications and, with its help, the distal duodenum and a variable length of proximal jejunum can be visualized. In a study of 21 patients with OGIB, PE detected lesions causing GI bleeding in 42.8% of patients [80]. Another study of PE in 63 patients with OGIB found its diagnostic yield to be 41% in recurrent obscure-overt bleeding, 33% in persistent obscure-overt bleeding and 26% in obscure-occult bleeding [98]. Capsule endoscopy (CE) enables complete small bowel visualization non-invasively (Figures 1-6), but currently this investigation has no therapeutic potential. In a study of 685 patients with acute overt GI bleeding (melena, haematochezia or haematemesis), no aetiology could be found in 37 patients after upper and lower endoscopic evaluation, and these patients were subjected to capsule endoscopic examination. The diagnostic yield of CE was found to be 91.9%, and it changed the management plan in 21 patients [99]. Carey et al. evaluated 260 patients with OGIB and found the diagnostic yield of CE to be 53%; this was higher in patients with obscure-overt (60%) GI bleeding as compared with patients with obscure-occult GI bleeding (46%) [100]. There were significant reductions in hospitalizations, additional tests/procedures, and units of blood transfused after medical interventions in the group of patients who underwent CE [95].
In one study, CE changed the treatment strategy of physicians in 31% of patients, whereas in another study it had a major impact on patient management in one-third of the patients [101,102]. In a systematic review, the pooled diagnostic yield of CE in studies that focused solely on patients with iron-deficiency anaemia (IDA) was 66.6%, and CE detected more vascular (31% vs 22.6%), inflammatory (17.8% vs 11.3%), and mass/tumour (7.95% vs 2.25%) lesions in patients who had IDA as compared with those who did not [103]. The diagnostic yield of CE has been reported to be higher than that of barium radiography (42% vs 6%, respectively; p < 0.00001), CT angiography (CTA) (72% vs 24%, p = 0.005) and standard mesenteric angiography (72% vs 56%, p > 0.05) [83]. CE also detects lesions in patients with negative findings on CTA (63%) and standard mesenteric angiography (55%) [104]. In a prospective randomized study of patients with acute overt OGIB (n = 60) comparing immediate CE with angiography, the diagnostic yield of CE was found to be significantly higher (53.3% vs 20%, p = 0.016). However, immediate CE did not have any impact on long-term outcomes, including further transfusions, hospitalization for rebleeding, and mortality [105]. Milano et al. prospectively compared CE with CT enteroclysis in 45 endoscopy-negative IDA patients and reported that CE was superior to CT enteroclysis (diagnostic yield of 77.8% vs 22.2%; p < 0.001); CE was found to be better for detecting flat lesions [106]. A meta-analysis of 14 studies comparing the yield of CE with push enteroscopy for evaluation of OGIB reported a higher diagnostic yield for CE (63% vs 28%, p < 0.00001) [83]. CE had a higher yield for detection of vascular and inflammatory lesions but had no benefit in detection of tumours [83]. The performance of CE has also been compared with deep enteroscopy techniques.
A recently published meta-analysis reported no difference in pooled diagnostic yield of CE vs DBE (62% vs 56%, p = 0.16); however, the diagnostic yield of DBE significantly increased to 75% if it was performed after a positive CE, whereas it was only 27.5% if the previously performed CE was negative [107]. One study compared CE with intra-operative enteroscopy (IOE) in 47 patients: CE identified lesions in 100% of the patients with ongoing overt bleeding, whereas the diagnostic yield was 67% in patients with previous overt bleeding and 67% in patients with obscure-occult bleeding [108]. This signifies that the diagnostic yield of CE depends upon the pattern of bleeding and is highest in patients with obscure-overt bleeding. Evaluation of patients with OGIB and negative CE is a challenge, and some studies have suggested a role for repeat CE in these patients. Jones et al. studied repeat CE examination in 24 patients, with the most common indications for a repeat study being recurrent gastro-intestinal bleeding and limited visualization on the initial study. On repeat CE, additional lesions were detected in 75% of patients, and this led to a change in management in 62.5% of patients [109]. In another study, 82 of 676 patients underwent repeat CE examination; positive findings were detected in 55% of the patients, and this led to a change in management in 39% of patients. The main indications for repeat CE examination in this study were recurrent GI bleeding, iron-deficiency anaemia and a previous incomplete study [110]. Viazis et al. studied 293 patients with OGIB, of whom 76 patients were subjected to repeat CE examination due to a non-diagnostic first test. Factors that significantly predicted the diagnosis were a change of the bleeding presentation from occult to overt, and a drop in haemoglobin of 4 g/dL or more [111].
One study reported that back-to-back CE within 24 hours increases the diagnostic yield in patients with OGIB, with overall mean lesion-detection rates of the first and second CEs of 42.2% and 64.6%, respectively [112]. Factors that increase the diagnostic yield of CE include severe bleeding, increasing age, longer small bowel transit time, and performance within 48 hours of bleeding [53,113-115]. Patients with an initial negative CE have a variable rebleeding rate of 5.6% to 45.1% across studies, and therefore these patients require close observation and regular follow-up [116,117]. Deep enteroscopy allows complete examination of the small bowel using double balloon enteroscopy (DBE), single balloon enteroscopy (SBE), and spiral enteroscopy (SE). All these endoscopes have both diagnostic and therapeutic potential and require an overtube for advancement of the scope. In a systematic review by Xin et al., which included 12,823 DBE procedures, mid-gastrointestinal bleeding (MGIB) was the most common indication for DBE (62.5%), and the pooled detection rate was 68% (62.9-72.8%). The lesions detected were vascular (40.4%), inflammatory (29.9%), neoplastic (22.2%), diverticulum (4.9%), and others (2.7%) [118]. The pooled minor and major complication rates were 9.1% and 0.72%, respectively, the major complications being perforation, acute pancreatitis, bleeding, and aspiration pneumonia. The complication rate is higher in therapeutic DBE (4.3%) than in diagnostic DBE (0.8%) [119]. SBE has a single balloon at the tip of the overtube and has a similar diagnostic and therapeutic yield [120]. Complete visualization is possible in 11% of patients, in contrast to 18% with DBE [121]; however, in another study complete examination of the small bowel was not possible in any patient with SBE [122]. SBE appears to be safe, with a low complication rate comparable to that of DBE.
Spiral enteroscopy uses a spiral overtube 118 cm in length with a 5.5 mm raised helix extending 21 cm at its distal end; it can be fitted on any deep enteroscope or a paediatric colonoscope. Compared with DBE, SE reduces examination time but achieves a lesser depth of insertion; another large study, however, showed a greater depth of insertion with SE, with similar diagnostic and therapeutic yield [123,124]. In a prospective, randomized, single-centre trial of 26 patients, DBE performed better with regard to depth of insertion and the rate of complete enteroscopies achieved, but required more time than SE [125]. When SE was compared with SBE, SE achieved a greater depth of insertion but had a similar diagnostic yield and procedural time [126]. Hyperamylasemia was common after SE (20%), but no pancreatitis was reported in a cohort of 32 patients [127]. In summary, most parameters are comparable among the three deep enteroscopy techniques; however, procedural time is shorter with SE than with SBE and DBE. The complete enteroscopy rate is higher with DBE than with SBE or SE, but the clinical impact of complete enteroscopy needs further evaluation (Table 2). Intra-operative enteroscopy is a last resort and the gold standard for evaluation of OGIB. With the availability of CE and deep enteroscopy, it is infrequently used in current practice. Indications for IOE are small bowel lesions that have not been localised with other techniques or cannot be treated by endoscopic therapy or angiographic embolization, or when the patient's condition does not allow non-invasive diagnostic evaluation [128]. IOE can be accessed either by open laparotomy or by a laparoscopic-assisted technique [129]. The different approaches for IOE are transoral, transanal, through the enterotomy site, or combined [128].
The endoscopes preferred for this purpose are the gastroscope or paediatric colonoscope, and they should be sterile to reduce the risk of infection. The surgeon assists by telescoping the small bowel over the scope. Because of its infrequent indications and invasiveness, caution should be exercised before taking a patient for IOE [128]. In a study comparing four forms of small bowel endoscopy (IOE, CE, PE, and DBE), the diagnostic yields were 88%, 34.6%, 34.5%, and 43%, respectively [130]. This study emphasises the value of CE as a first-line investigation, as it has a diagnostic yield comparable to PE and DBE while being less invasive and better tolerated.

Management

Small bowel bleeding can be managed by conservative, radiological, pharmacologic, endoscopic, and surgical methods, depending upon the indications, expertise, and availability. A patient with acute overt ongoing bleeding needs resuscitation and localization of bleeding by scintigraphy, angiography, or deep enteroscopy, followed by therapeutic procedures. For occult or intermittent overt bleeding, localization should be done by endoscopic or radiological methods, followed by definitive therapy and iron supplementation. The therapeutic endoscopic modalities available are argon plasma coagulation (APC), electrocoagulation, injection sclerotherapy, laser photocoagulation, haemoclip placement, and endoscopic band ligation. Endoscopic therapy of vascular lesions can be decided on the basis of the Yano-Yamamoto classification, which categorizes vascular lesions into six categories [131].
Type 1a: punctulate erythema (<1 mm), with or without oozing
Type 1b: patchy erythema (a few mm), with or without oozing
Type 2a: punctulate lesions (<1 mm), with pulsatile bleeding
Type 2b: pulsatile red protrusion, without surrounding venous dilatation
Type 3: pulsatile red protrusion, with surrounding venous dilatation
Type 4: other lesions not classified into any of the above categories

Types 1a and 1b are considered angioectasias and can be treated by cauterization. Types 2a and 2b are Dieulafoy's lesions, managed with haemoclip placement or surgery. Type 3 represents an arteriovenous malformation, requiring haemoclip placement, banding, a sclerosant, or surgery. Vascular lesions that are diffuse, or that occur in patients unfit for invasive therapies, can be treated with various pharmacologic agents. Hormonal therapy (oestrogen and progesterone) was found to be beneficial in several studies; however, a single randomized trial and a retrospective case-control study showed no benefit, and current evidence therefore does not support the role of hormonal therapy [132-136]. Thalidomide, a VEGF inhibitor that suppresses angiogenesis, has been used for recurrent, refractory, or chronic gastrointestinal blood loss due to angiodysplasia [137]. Studies have shown a decreased requirement for blood transfusion and an increase in haemoglobin after treatment with thalidomide [138,139]. A single open-label randomized controlled trial compared the efficacy of 100 mg thalidomide (n = 28) with 400 mg iron (n = 27) daily for 4 months, with follow-up of at least 1 year [140]. An effective response was defined as a decrease in bleeding episodes by 50% in the first year of the follow-up period; the effective response rate was significantly higher in the thalidomide group than in the iron group (71% vs 3.7%; p < 0.001).
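The Yano-Yamamoto categories and the management options summarized above map naturally onto a small lookup table. The sketch below is illustrative only: the descriptions and management entries paraphrase the text, the key names are our own, and treatment decisions of course belong to the treating clinician, not a table.

```python
# Yano-Yamamoto classification of small-bowel vascular lesions,
# paraphrased from the text as (description, suggested management).

YANO_YAMAMOTO = {
    "1a": ("punctulate erythema (<1 mm), with/without oozing",
           "angioectasia: endoscopic cauterization"),
    "1b": ("patchy erythema (a few mm), with/without oozing",
           "angioectasia: endoscopic cauterization"),
    "2a": ("punctulate lesion (<1 mm) with pulsatile bleeding",
           "Dieulafoy's lesion: haemoclip placement or surgery"),
    "2b": ("pulsatile red protrusion, no surrounding venous dilatation",
           "Dieulafoy's lesion: haemoclip placement or surgery"),
    "3":  ("pulsatile red protrusion with surrounding venous dilatation",
           "arteriovenous malformation: haemoclip, banding, sclerosant, or surgery"),
    "4":  ("lesions not classified above",
           "individualized management"),
}

description, management = YANO_YAMAMOTO["2a"]
print(management)  # Dieulafoy's lesion: haemoclip placement or surgery
```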
There were also significant decreases in blood transfusion, overall hospitalization, hospitalization for bleeding episodes, and VEGF levels. However, adverse events were more frequent in the thalidomide group than in the iron group (71.4% vs 33.3%); the common adverse effects were fatigue, constipation, dizziness, abdominal distension, and peripheral oedema. Octreotide has also shown benefit in various case series, but there is no published randomized controlled trial. Its postulated mechanism of action is multipronged, including improved platelet aggregation, decreased splanchnic blood flow, increased vascular resistance, and inhibition of angiogenesis [141]. A meta-analysis by Brown et al., which included 62 patients, showed that octreotide decreases the need for blood transfusion and can be used in patients with refractory bleeding, inaccessible lesions, or high risk for other interventions [142]. Long-acting intramuscular octreotide has also been tried and found beneficial in treating angiodysplasias [143,144]. Dieulafoy's lesions can be treated with endoscopic or interventional radiological techniques, depending on availability and expertise [31]. The endoscopic techniques used are argon plasma coagulation, haemoclip application, injection therapy, and combinations of these [29]. Surgery is required for recurrent bleeding or failed endoscopic treatment [29,31]. Ectopic varices within the reach of endoscopy can be treated with endoscopic haemostatic methods; those beyond the reach of endoscopy require embolization via interventional radiological techniques or surgery [32]. The role of vasoactive drugs in acute bleeding episodes, and of beta blockers for primary and secondary prophylaxis, is not clear. Small intestinal ulcers can be treated with endoscopic techniques, or with surgery in case of recurrent bleeding; ulcers due to a specific aetiology require treatment directed at that aetiology.
NSAIDs should be stopped in bleeding due to NSAID-induced ulcers. Haemobilia due to percutaneous transhepatic cholangiography or liver biopsy can be managed conservatively [145]. It can also be managed by transcatheter arterial embolization (TAE), which achieves haemostasis in 75 to 100% of cases [76]. Percutaneous thrombin injection should be considered where angiography has failed or is contraindicated [146]. The indications for surgery are failed TAE or an underlying cause of haemobilia that itself requires surgery [76]. The therapeutic options for haemosuccus pancreaticus are TAE and surgery. The success rate of TAE is approximately 80 to 100%; recurrent bleeding may occur in about 17-37% of patients and can be managed with repeat TAE or surgery [147]. Surgery is indicated when TAE fails or when there is another indication for surgery, such as pseudocyst drainage [148]. The success rate of surgery ranges from 70 to 85%, with mortality rates of 20-25% and rebleeding rates of 0-5% [149]; it is important to treat the underlying pseudocyst to prevent recurrence of bleeding [150]. As previously mentioned, the management of small bowel bleeding is guided by the presentation, and it is important to resuscitate and identify the source quickly in patients with overt bleeding. However, CE is often the first modality used after negative upper and lower endoscopy, irrespective of the presentation (Figure 7) [151]. Although the role of surgery has declined with technological advances, it still has a role in some patients with small bowel bleeding. Surgery is usually required in cases of life-threatening bleeding, failure of other haemostatic techniques, haemodynamic instability and clinical deterioration, recurrent bleeding, or lesions beyond the reach of the endoscope. Surgical resection or excision is the treatment of choice for small bowel tumours.
Recurrent diverticular bleeding and aortoenteric fistulae require surgical intervention.

CONCLUSIONS

Small intestinal bleeding is a rare cause of gastrointestinal bleeding, and vascular lesions are the most commonly implicated lesions. Capsule endoscopy and deep enteroscopy have made it easier to diagnose the causes of small intestinal bleeding, while computed tomography is more useful for detecting mural and extra-intestinal lesions. There has been a paradigm shift in managing vascular lesions since the advent of double balloon enteroscopy. Pharmacological treatment can be helpful for recurrent, refractory, or inaccessible angiodysplastic lesions and in patients at high risk for other interventions. The role of surgery is declining; however, it remains the last resort after failed endoscopic treatment and in recurrent bleeding.
Protective effects of docosahexaenoic acid against non-alcoholic hepatic steatosis through activation of the JAK2/STAT3 signaling pathway

Non-alcoholic fatty liver disease is the most common cause of hepatic dysfunction. In the present study, human normal hepatocyte L02 cells were treated with 50% fetal bovine serum to induce hepatic steatosis in vitro, and the cells were then treated with docosahexaenoic acid to investigate its protective effect against non-alcoholic fatty liver disease. Our results showed that 50% fetal bovine serum significantly induced intracellular lipid accumulation and hepatocyte fatty degeneration within 48 h. The expression of adipogenesis-related genes such as PPARγ, C/EBPα, and SREBP-1 was significantly up-regulated, and the cellular contents of total lipid, total cholesterol, and triglycerides were significantly increased after 50% fetal bovine serum treatment. Interestingly, docosahexaenoic acid treatment inhibited FBS-induced intracellular lipid accumulation in L02 cells and the expression of lipogenic genes. Moreover, docosahexaenoic acid treatment reduced the hepatic steatosis-induced oxidative stress and endoplasmic reticulum stress responses, as shown by changes in antioxidant enzyme activities and in GRP78 and CHOP expression. In addition, docosahexaenoic acid activated the JAK2/STAT3 signaling pathway in fatty liver L02 cells; inhibition of the JAK2/STAT3 signaling pathway by WP1066 abolished the beneficial effects of docosahexaenoic acid on hepatic steatosis, accompanied by increased expression of lipogenic genes and an endoplasmic reticulum stress response. Taken together, the present study showed that docosahexaenoic acid can alleviate non-alcoholic hepatic steatosis by activating the JAK2/STAT3 signaling pathway.
Introduction

With the development of the economy and improvements in quality of life, the prevalence of various "diseases of affluence" such as obesity and diabetes has increased rapidly, and non-alcoholic fatty liver disease (NAFLD) has become the most common cause of chronic liver disease. It is estimated that 24-42% of the population in Western countries and 5-42% in Asian countries are affected by NAFLD (Targher et al., 2016; Zheng et al., 2016). NAFLD is a common liver disease closely linked with the features of the metabolic syndrome; its main characteristic is the accumulation of fat in liver cells in the absence of excessive alcohol intake (Merola et al., 2015; Smith and Adams, 2011). The accumulated lipid droplets render the liver more sensitive to inflammatory cytokines, oxidative stress, endoplasmic reticulum (ER) stress, or mitochondrial dysfunction, which further induce the development of non-alcoholic steatohepatitis (NASH) (Lim et al., 2010; Luo et al., 2020). However, there is no validated drug therapy at present. Lifestyle interventions designed to reduce body weight, such as diet and exercise, remain the first-line treatment, but several studies have demonstrated that lifestyle interventions improve only the metabolic parameters and simple steatosis, not the histological features of NAFLD (Nobili et al., 2008; Vilar-Gomez et al., 2015; Zohrer et al., 2017). The endoplasmic reticulum (ER) is an important organelle responsible for the proper folding and post-translational modification of proteins, lipid synthesis, and calcium storage. ER stress, caused by the accumulation of unfolded protein and calcium depletion, triggers the unfolded protein response (UPR) and contributes to the occurrence of disease. Hepatocytes contain a large amount of ER to synthesize plasma proteins, secrete low-density lipoprotein, and metabolize xenobiotics.
Therefore, excessive ER stress usually causes ER dysfunction and liver diseases such as NAFLD or liver fibrosis (Rutkowski and Kaufman, 2004). GRP78, ATF4, and SREBP-1c, three markers of the UPR that are highly expressed under ER stress, may be involved in the formation and development of NAFLD (Lewis and Mohanty, 2010; Zhang et al., 2011; Yamamoto et al., 2010). Docosahexaenoic acid (DHA) is the major polyunsaturated fatty acid (PUFA) found in marine fish oil and is an essential fatty acid in mammals. DHA is a main component of cell membrane phospholipids and cannot be synthesized in the body (Horrocks and Yeo, 1999). Previous studies have found that PUFAs regulate lipid metabolism in many animal species (Khan et al., 2002; Peyron-Caso et al., 2003). When NAFLD patients were supplemented with PUFA for 6 months, alanine aminotransferase and hepatic triglyceride levels decreased significantly, and the symptoms of NAFLD were relieved (Spadaro et al., 2008). In addition, DHA improved insulin sensitivity by regulating lipid-related gene expression and ameliorated hepatic triglyceride accumulation in NAFLD mice (Fedor et al., 2012; Sun et al., 2011). Thus, supplementation with DHA has potential therapeutic effects on lipogenesis, fatty acid oxidation, and hepatic lipid metabolism (De Castro et al., 2015; Zhang et al., 2013); it can also attenuate fatty liver-induced damage by inhibiting the endoplasmic reticulum stress response (Zheng et al., 2016). However, the possible protective mechanisms of DHA in NAFLD are still unclear. The Janus kinase 2/signal transducer and activator of transcription 3 (JAK2/STAT3) signaling pathway is involved in many physiological and pathological processes, such as inflammation and apoptosis.
Activation of the JAK2/STAT3 signaling pathway induces the expression of HMGB1 and an inflammatory reaction, which is associated with NAFLD; rapamycin can inhibit the JAK2/STAT3 signaling pathway to reduce HMGB1 expression, thereby attenuating liver injury (Zeng et al., 2014). In the present study, we investigated whether DHA has a protective effect against non-alcoholic hepatic steatosis and whether this beneficial effect is regulated by the JAK2/STAT3 signaling pathway. Our results demonstrated that supplementation with DHA attenuates lipid droplet-induced oxidative stress and the ER stress response in L02 cells through activation of the JAK2/STAT3 signaling pathway.

Cell culture and DHA treatment

L02 cells were cultured in DMEM supplemented with 10% fetal bovine serum (FBS) and antibiotics (100 IU/mL penicillin, 100 µg/mL streptomycin) in 5% CO2 at 37°C. After the culture reached 80% confluence, cells were seeded in multi-well plates or flasks for 24 h. The cells were then divided into three groups: control, FBS-treated (24, 48, and 72 h), and DHA-treated (25 µM). According to previous studies, 50% FBS is able to induce non-alcoholic hepatic steatosis in a cell model (Cui et al., 2017; Wu et al., 2010); therefore, the control group was cultured in DMEM, and the FBS-treated groups were cultured in 50% FBS medium for 24, 48, or 72 h to induce steatosis. For the DHA-treated group, the cells were first cultured in 50% FBS medium for 48 h and then cultured in DMEM with DHA (25 µM) for 24 h.

Oil Red O staining

The accumulation of lipid droplets in L02 cells was visualized and analyzed by Oil Red O staining (Fu et al., 2016). In brief, the cells were grown in 6-well plates. After treatment, the cells were washed twice with PBS, fixed with 2 mL of 4% (v/v) formaldehyde for 30 min, and then stained with 0.5 mL of freshly prepared Oil Red O solution for 30 min at 37°C.
After that, the cells were treated with 70% alcohol and observed under a light microscope.

Determination of total cholesterol, triglycerides, aspartate aminotransferase, and lactate dehydrogenase

The cells were collected and sonicated on ice, and the cell lysates were clarified by centrifugation. Total cholesterol (TC), triglyceride (TG), and the enzymatic activities of aspartate aminotransferase (AST) and lactate dehydrogenase (LDH) were then measured with the corresponding assay kits according to the manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute, China).

Measurement of anti-oxidative enzyme activities

Malondialdehyde (MDA), glutathione peroxidase (GSH-Px), and total superoxide dismutase (SOD) activities were measured with MDA, GSH-Px, and SOD assay kits, respectively (Nanjing Jiancheng Bioengineering Institute, China). The cells were collected and placed in a centrifuge tube for the following assays. For the MDA assay: after adding 500 µL lysis buffer to the cells for 2 min, the cell extract (100 µL) was mixed with working buffer and incubated at 95°C for 40 min. The mixture was then centrifuged at 4000 rpm for 10 min, 250 µL of supernatant was added to a 96-well plate, and the absorbance was measured at 530 nm; MDA content was calculated from the absorbance value. For the GSH-Px assay: after adding lysis buffer, the cell extract was centrifuged at 3000 rpm for 10 min, the supernatant was used for the GSH-Px measurement, and the absorbance was read at 412 nm. For the SOD assay: the supernatant of the cell extract was added to a 96-well plate and incubated at 37°C for 20 min, and the absorbance at 450 nm was then determined with a microplate reader.

Total RNA extraction and qRT-PCR analysis

Total RNA was extracted using Trizol reagent, and reverse transcription was performed using PrimeScript RT Master Mix. mRNA expression was quantified using qRT-PCR.
The expression levels of all target genes were normalized to those of the endogenous reference gene β-actin using the comparative Ct (2^(−ΔΔCt)) method, where ΔΔCt = ΔCt_target − ΔCt_β-actin. Primer sequences are listed in Tab. 1.

Statistical analysis

All experimental data are expressed as means ± SEM. Statistical comparisons were made by analysis of variance (ANOVA), followed by Tukey's multiple comparisons test. Statistical significance is shown as *p < 0.05, **p < 0.01.

Results

Establishment of the hepatic steatosis model

Human normal hepatocyte L02 cells were treated with 50% FBS to induce hepatic steatosis, and the cells were then stained with Oil Red O to detect the formation of lipid droplets. As shown in Fig. 1, after 50% FBS treatment many small lipid droplets formed on the luminal side of the L02 cells (Figs. 1A-1D). The lipid droplets were then extracted with isopropyl alcohol, and the absorbance of the extracted solution was measured with a microplate reader at 490 nm. The results showed that intracellular fat content was significantly increased after 50% FBS treatment for 48 and 72 h (Fig. 1E). To further examine the effect of 50% FBS on liver cell steatosis, the intracellular lipogenesis-related indexes TG and TC and the hepatic function-related indexes AST and LDH were examined. The contents of TG and TC were significantly increased after 50% FBS treatment compared with the control group (p < 0.01) (Figs. 1F-1G). At the same time, 50% FBS treatment also increased the activity of AST and LDH (Figs. 1H-1I), suggesting that metabolic function is impaired in 50% FBS-treated L02 cells. Together, the accumulation of lipid droplets and the defect in cell metabolism indicated that the hepatic steatosis model was successfully established after treatment with a high concentration of FBS.
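The comparative Ct normalization described in the Methods (2^(−ΔΔCt), with ΔΔCt = ΔCt_target − ΔCt_β-actin) can be sketched in a few lines. The Ct values below are hypothetical and are only meant to show the arithmetic, not to reproduce any measurement from the study.

```python
# Sketch of the comparative Ct (2^-ddCt) method: each target-gene Ct is
# normalized to the reference gene (beta-actin), then to the control group.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change = 2^-ddCt, where dCt = Ct_target - Ct_reference."""
    d_ct_sample = ct_target - ct_ref            # dCt in the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl # dCt in the control sample
    dd_ct = d_ct_sample - d_ct_control          # ddCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values for a target gene vs beta-actin,
# FBS-treated cells vs control cells:
fold = relative_expression(ct_target=24.0, ct_ref=16.0,
                           ct_target_ctrl=26.0, ct_ref_ctrl=16.0)
print(fold)  # 4.0, i.e. ~4-fold up-regulation relative to control
```

A lower Ct means earlier amplification and hence more transcript, which is why a sample ΔCt smaller than the control ΔCt yields a fold change above 1.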
Effect of DHA on lipid accumulation in FBS-induced hepatic steatosis

To detect the effect of DHA on non-alcoholic hepatic steatosis, we first examined its effect on lipid droplet formation in L02 cells. Lipid accumulation was significantly inhibited in DHA-treated cells (Figs. 2A-2C); meanwhile, the absorbance of the extracted solution was also significantly decreased after DHA treatment compared with 50% FBS treatment alone (Fig. 2D). These results clearly demonstrate that DHA diminishes the accumulation of lipid droplets, accompanied by decreases in TC and TG contents (Figs. 2E, 2F). To further examine the effect of DHA on hepatic function, the activities of AST and LDH were measured after DHA treatment. As shown in Figs. 2G, 2H, 50% FBS treatment significantly increased the activity of AST and LDH compared with the control group, whereas DHA treatment significantly decreased both activities (p < 0.05). To further investigate the mechanism of DHA in lipid droplet formation in NAFLD, the lipogenic genes PPARγ, C/EBPα, and SREBP-1 were examined by qRT-PCR and Western blot. The mRNA and protein levels of PPARγ, C/EBPα, and SREBP-1 were significantly up-regulated in 50% FBS-treated L02 cells, whereas DHA treatment down-regulated their expression.

Effects of DHA on oxidative stress and the ER stress response in FBS-induced hepatic steatosis

To further investigate the effect of DHA on hepatic steatosis, the levels of oxidative stress were analyzed. The cells were incubated with 50% FBS and then cultured with DHA (25 µM) medium for 24 h, after which the activities of oxidative stress-related enzymes were measured. The contents of GSH-Px and SOD were significantly reduced in 50% FBS-treated L02 cells, whereas after DHA treatment the contents of SOD and GSH-Px were significantly increased (Figs. 3A, 3B). In contrast, DHA relieved the increase in MDA in FBS-induced hepatic steatosis (Fig.
3C). These results suggest that DHA can prevent the accumulation of oxidative stress products in fatty liver cells. Glucose-regulated protein 78 (GRP78) and C/EBP homologous protein (CHOP) are protein markers of the ER stress-induced unfolded protein response. We therefore examined the effect of DHA on the hepatic steatosis-induced ER stress response. As shown in Figs. 3D, 3E, the protein levels of GRP78 and CHOP were significantly increased in 50% FBS-treated L02 cells; however, DHA treatment significantly alleviated FBS-induced ER stress by down-regulating GRP78 and CHOP, indicating that DHA attenuates the hepatic steatosis-induced ER stress response.

DHA attenuates hepatic steatosis by mediating the JAK2/STAT3 signaling pathway

To further investigate the protective mechanism of DHA against hepatic steatosis-induced oxidative stress and ER stress, we examined whether the JAK2/STAT3 signaling pathway is involved in this process by investigating the expression and phosphorylation of JAK2/STAT3. The phosphorylation of JAK2/STAT3 was significantly decreased in 50% FBS-treated L02 cells compared with the control group, whereas DHA treatment restored the phosphorylation levels of JAK2 and STAT3 (Figs. 4A-4C). These results imply that DHA could attenuate hepatic steatosis-induced injury through activation of the JAK2/STAT3 signaling pathway.

Inhibition of the JAK2/STAT3 signaling pathway reverses the beneficial effect of DHA on hepatic steatosis

WP1066 is considered a specific inhibitor of the JAK2/STAT3 signaling pathway. As shown in Figs. 4A-4C, WP1066 treatment significantly inhibited the phosphorylation of JAK2 and STAT3 and also blocked the effect of DHA on their phosphorylation, further demonstrating that DHA attenuates FBS-induced hepatocyte steatosis via the JAK2/STAT3 signaling pathway.
To further confirm that the protective role of DHA against hepatic steatosis-induced injury is associated with the JAK2/STAT3 signaling pathway, we treated the cells with WP1066 and determined the effect of DHA on the expression of lipogenic genes and the ER stress response. WP1066 treatment reversed the effect of DHA on PPARγ, C/EBPα, and SREBP-1 expression in FBS-induced hepatic steatosis (Figs. 5A-5E); meanwhile, WP1066 treatment also blocked the restorative effect of DHA on the FBS-induced ER stress response (Figs. 5F-5G).

Discussion

In the present work, we treated L02 cells with 50% FBS to induce hepatic steatosis, mimicking the features of NAFLD in vitro, in order to study the protective effect of DHA on non-alcoholic hepatic steatosis. The present study provides mechanistic insights into how DHA alleviates the injury of hepatic steatosis in L02 cells: DHA reduces the accumulation of lipid droplets and also reduces the hepatic steatosis-induced oxidative stress and ER stress responses through activation of the JAK2/STAT3 signaling pathway. The liver is an important organ for maintaining the metabolism of the human body; it plays an important role in the metabolism of various nutrients and drugs and is the central hub of lipid metabolism (Hallsworth et al., 2013). Fatty liver has become an important liver disease in the Asia-Pacific region, and NAFLD in particular has become the major form of chronic liver disease, characterized by steatosis, inflammation, and varying degrees of fibrosis in liver tissue (Ahmed et al., 2010; Fassio et al., 2004).

[Figure legend: normalized band intensities were assayed with ImageJ software; results are presented as the mean ± SE from three independent determinations; *p < 0.05, **p < 0.01.]

It is well known that excessive accumulation of TC and TG in hepatocytes is a main factor in NAFLD (Wang et al., 2012).
Meanwhile, AST and LDH are often used as sensitive and fairly specific biomarkers to evaluate drug-induced hepatocellular injury in preclinical and clinical studies (Schurr and Payne, 2007). Previous studies have shown that supplementation with DHA can inhibit lipogenesis and has a beneficial effect on hepatic lipid metabolism (De Castro et al., 2015; Devarshi et al., 2013). The present study showed that DHA significantly reduces hepatic lipid accumulation by inhibiting the accumulation of TC and TG and by reducing the activities of AST and LDH. In addition, supplementation with DHA inhibited the expression of lipogenic genes (PPARγ, C/EBPα, SREBP-1), suggesting that DHA has an anti-hepatic-steatosis function in our NAFLD model. The excessive accumulation of lipid droplets in the liver is associated with the dysfunction of organelles such as mitochondria and the ER. Overloaded lipid increases the synthesis of acetyl-CoA and disturbs the tricarboxylic acid (TCA) cycle during mitochondrial respiration, which increases the formation of reactive oxygen species (ROS). Defects in mitochondrial morphology, the electron transport chain, and ATP generation have been shown in NAFLD, accompanied by high levels of ROS and inflammation (Sunny et al., 2017; Wang et al., 2020). Therefore, oxidative stress is considered one of the major pathogenic mechanisms in the progression of NAFLD (Madan et al., 2006). The levels of intracellular GSH-Px, SOD, and MDA are important factors for evaluating antioxidant capacity. Our results showed that the anti-oxidative markers SOD and GSH-Px were significantly increased after DHA treatment, whereas the MDA content was decreased, suggesting that DHA has an anti-oxidative function that relieves hepatic steatosis-induced oxidative stress.
Under environmental or physiological conditions in which the unfolded protein response (UPR) is not sufficient to maintain normal hepatocellular function, ER stress is induced and ER-dependent lipid homeostasis is disrupted, thereby stimulating the development of steatosis (Bozaykut et al., 2016). Previous studies have shown that the UPR is activated in NAFLD patients, accompanied by increased expression of CHOP and GRP78 (Zhou et al., 2017). The present study showed that supplementation with DHA significantly alleviated the hepatic steatosis-induced ER stress response by down-regulating GRP78 and CHOP. These results indicate that DHA could directly or indirectly target GRP78 and CHOP to inhibit hepatic lipid accumulation and cell inflammation, thereby ameliorating the development of NAFLD. The JAK-STAT pathway has been found in recent years to be involved in cell proliferation, differentiation, apoptosis, immune regulation, and many other important biological processes (Zhao et al., 2013). In a high-energy-diet-induced fatty liver rat model, kefir peptides or puerarin effectively attenuated the symptoms of NAFLD by enhancing the phosphorylation of JAK2 and STAT3, suggesting that the JAK-STAT signaling pathway plays critical roles in NAFLD. DHA, a natural ligand of PPARγ (Neschen et al., 2007), may activate the JAK2/STAT3 signaling pathway by inhibiting PPARγ, thereby alleviating non-alcoholic fatty liver disease. In the present study, we found that the phosphorylation of JAK2 and STAT3 increased after DHA treatment; meanwhile, WP1066 significantly inhibited the DHA-induced phosphorylation of JAK2 and STAT3, further suggesting that DHA attenuates FBS-induced hepatocyte steatosis by mediating the JAK2/STAT3 signaling pathway. We next sought to demonstrate that the effect of DHA on lipogenic gene expression and endoplasmic reticulum stress acts via the JAK2/STAT3 pathway.
To this end, we treated the cells with WP1066, and the results showed that WP1066 treatment could offset the DHA-induced decrease in the expression of PPARγ, C/EBP, and SREBP-1 in FBS-induced hepatic steatosis. Meanwhile, WP1066 treatment also prevented the DHA-mediated recovery from the FBS-induced stress response. These results indicate that DHA attenuated hepatic steatosis-induced oxidative stress and ER stress through activating the JAK2/STAT3 signaling pathway in NAFLD.

Conclusions

Our study demonstrated that a high concentration of FBS can cause hepatic steatosis, which can be used to build an in vitro NAFLD model. Meanwhile, DHA could attenuate the lipid droplet-induced oxidative stress and ER stress response in L02 cells through activating the JAK2/STAT3 signaling pathway (Fig. 6), which indicates that the JAK2/STAT3 signaling pathway might be an important molecular target of DHA for alleviating NAFLD.

Ethics Approval: Not applicable. Availability of Data and Material: All data generated or analyzed during this study are included in this published article. Author Contribution: YW and YPD conceptualized the study and participated in its design and research. KLC carried out the molecular studies and sample collection. HXL and YQ analyzed data and drafted the manuscript. All authors read and approved the final manuscript for publication. Conflicts of Interest: The authors declare that there is no conflict of interest.
Nano- and Submicron-Sized TiB2 Particles in Al–TiB2 Composite Produced in Semi-Industrial Self-Propagating High-Temperature Synthesis Conditions: This paper investigates the structure and phase composition of Al–TiB2 metal matrix composites prepared from the Al–Ti–B system powder using self-propagating high-temperature synthesis (SHS) in semi-industrial conditions (the amount of the initial powder mixture was 1000 g). The samples produced in semi-industrial conditions do not differ from the laboratory samples and consist of the aluminum matrix and TiB2 ceramic particles. The temperature rise leads to growth in the average size of TiB2 particles from 0.4 to 0.6 µm as compared to the laboratory samples. The SHS-produced composites are milled to an average particle size of 42.3 µm. The powder particles are fragmented, and their structure is inherited from the SHS-produced Al–TiB2 metal matrix composite. The obtained powder can be used as the main raw material and an additive in selective laser sintering, vacuum sintering, and hot pressing of products.
It is worth noting that these products can find application in the automotive industry, for example as brake pads.

Introduction

Transportation of people and cargo by air, land, and water transport must meet modern economic and environmental requirements [1]. One of these requirements is a lower weight of the structural elements of transport, which improves transportation performance, saves fuel, and reduces harmful gas emissions [2,3]. For example, an aircraft weight reduction of 20% decreases CO2 emissions by 12-16%, which undoubtedly has a positive effect on the environment and humans. The reduction of the weight of transport structures can be achieved not only by design improvements, but also by the development and implementation of novel materials. The structure and phase composition of novel materials can significantly improve their physical and mechanical properties (relative to present materials) and provide stable operation at higher temperatures. The creation of metal matrix composites, comprising a metal/intermetallic matrix and ceramic particles, is currently one of the key trends in the development of novel materials [4-6]. Dispersion hardening of metal matrix materials by ceramic particles enhances their physical and mechanical properties, namely hardness, wear resistance, and strength, both at room and higher temperatures [7,8]. On the other hand, nano- and submicron-sized ceramic particles also act as nucleation centers of crystalline grains in metal alloys, which reduces their grain size and improves their physical and mechanical properties [9-12]. It should be noted that ordinary alloys cannot reach the physical and mechanical properties of metal matrix composites, owing to the freer growth of their crystalline grains.
It is thanks to the unique combination of physical and mechanical properties that metal matrix composites have found applications in the automotive industry: brake pads, drums, rail discs, etc. At the same time, intensive growth in the production of passenger cars is expected (for example, in the Asia-Pacific region, automobile production increased by 11%). Based on these data, it is assumed that the production of parts for passenger cars will be one of the main drivers of growth in the market for composite metal matrix materials. According to work [13], in 2020 the global market for metal matrix composites and materials based on them was estimated at USD 360 billion. It is projected to grow at a CAGR of 6.4% from 2020 to 2027. There are currently many ways to obtain metal matrix composites, for example, the addition of ceramic particles to the melt, hot pressing, and spark plasma sintering [14-16]. A special focus is the self-propagating high-temperature synthesis (SHS) technique, based on exothermic reactions between mixture components. SHS is characterized by highly intensive interactions between the initial components, which are accompanied by large amounts of generated heat [17]. This heat eliminates the need for external energy sources during synthesis. Moreover, the SHS technique ensures control over the structure and phase composition of the synthesis products by changing the initial mixture composition and reaction conditions. For metal matrix composites, the SHS process provides in situ structure formation, i.e., during the reaction between the initial mixture components. In this case, the heat of the reaction forming the ceramic particles is spent on melting the metal component. At high (>40 wt.%) content, ceramic particles are separated by the obtained alloy, which forms the matrix material. This largely eliminates particle agglomeration and allows the production of composite materials with a homogeneous structure [18].
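The market projection quoted from [13] is plain compound-growth arithmetic. As a quick sketch (not part of the paper; the function name and the 7-year horizon are our own reading of "from 2020 to 2027"), the implied 2027 figure can be computed as:

```python
def project_market(base, cagr, years):
    """Compound annual growth: value after `years` periods at rate `cagr`."""
    return base * (1 + cagr) ** years

# USD 360 billion in 2020 growing at a 6.4% CAGR over 2020-2027 (7 years)
projected_2027 = project_market(360.0, 0.064, 2027 - 2020)
print(round(projected_2027, 1))  # ≈ 555.8 (USD billion)
```

So the 6.4% CAGR quoted in [13] implies a market of roughly USD 556 billion by 2027, consistent with the growth narrative in the text.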
Using SHS, the research team headed by Prof. Promakhov obtained a (Ni-Ti)-TiB2 composite from the powder system 63.5 wt.% NiB + 36.5 wt.% Ti [19]. The composite structure consisted of the Ni-Ti intermetallic matrix with distributed particles of titanium diboride (TiB2). It was found that the addition of 5 wt.% of SHS-produced NiTi-TiB2 particles to Inconel 625 powder enhanced the hardness and ultimate tensile strength of SLS materials by 40 and 20%, respectively, as compared to pure Inconel 625 powder [20]. These results demonstrated the high efficiency of using SHS-produced composites as additives in selective laser sintering.

According to the literature review, most of the publications in the field of composites do not go beyond laboratory experiments. These are experiments with 20 to 50 g samples, in which synthesis conditions differ considerably from production process conditions and do not consider process and ecological parameters. As mentioned above, the main research tasks include the development of composite materials and their industrial implementation. The SHS technique is, therefore, the focus of attention; semi-industrial conditions may significantly affect the structure and phase composition and, consequently, the physical and mechanical properties of the final product.

In work [18], SHS was used to obtain the Al-TiB2 metal matrix composite from the Al-Ti-B system powder in laboratory conditions. The composite structure consisted of the aluminum matrix with uniformly distributed TiB2 particles of submicron and nanometer size. Note that it is nano- and submicron-sized ceramic particles that provide improvements in physical and mechanical properties that are unavailable to conventional aluminum and many other alloys. Thus, the obvious question that arises is whether it is possible to obtain such results when increasing the weight of the initial powder mix up to 1000 g (1 kg) and approaching semi-industrial SHS conditions.
The aim of this work is to investigate the structure and phase composition of the Al-TiB2 metal matrix composite produced by SHS from the Al-Ti-B system powder in semi-industrial conditions.

Materials and Methods

Aluminum (Al), titanium (Ti), and boron (B) powders were used to prepare 1000 g (1 kg) of the Al-Ti-B system powder. Table 1 shows the manufacturers, average particle size, and purity of these powders. The initial powder components were mixed in the proportions of 60 wt.% Al, 27.6 wt.% Ti, and 12.4 wt.% B. The obtained mixture was mechanically blended in a ball mill, as illustrated in Figure 1a. To understand how the duration of mechanical blending affects the particle structure of the initial powder mixture weighing 1000 g, two blending durations were used: 15 min (as presented in [18] for a sample weighing 20 g) and 45 min. The steel drum was vacuumed and filled with argon to prevent oxidation of the contents. Based on the data obtained in work [21], it was supposed that after mechanical blending, the initial powder mix consisted mostly of composite particles with Al, Ti, and B inclusions (Figure 1b). Next, this powder mix was poured into a graphite crucible without preliminary compaction. A flammable layer comprising 80 wt.% Ti and 20 wt.% B was poured onto the powder mix. The flammable layer provided uniform heating of the upper layer of the main mix and initiated the reaction between the components. The graphite crucible was placed in the reactor (Figure 1c).
Figure 2 presents the semi-industrial conditions of the SHS process. The reactor was evacuated by a pump and filled with argon to a pressure of 5 MPa. The synthesis reaction was initiated by localized heating of the flammable layer using a molybdenum hot filament. BP20/5 thermocouples were introduced into the initial powder mix to measure the synthesis temperature [22]. After synthesis, 1 kg of the obtained product was ground in a ball mill. A porcelain container and Al2O3 balls of diameters 10 and 20 mm, respectively, were used for the grinding. The ground synthesis product was sieved through a 200 mesh sieve.
The X-ray diffraction (XRD) analysis of the phase composition was conducted on a Shimadzu XRD-6000 diffractometer (Shimadzu Corporation, Kyoto, Japan) using CuKα radiation. The phase composition was identified using the Powder Diffraction File (PDF-4). Rietveld refinement was used for phase quantification and lattice parameters [23,24]. Energy dispersive X-ray spectroscopy on a scanning electron microscope (SEM/EDX) with a focused electron beam from Tescan (Tescan, s.r.o., Brno, Czech Republic) was used to study the sample microstructure. The particle size was determined in SEM images using the secant method. An ANALYSETTE 22 MicroTec plus analyzer (Fritsch, Idar-Oberstein, Germany) was used to measure the particle size of the ground synthesis product.
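The secant (linear-intercept) method mentioned above amounts to averaging the chord lengths that test lines cut across particle boundaries in the SEM images. The following is a minimal illustrative sketch; the chord values and the function are hypothetical, not data from the study:

```python
def mean_intercept_size(intercepts_um):
    """Average particle size as the mean chord (secant) length, in µm."""
    return sum(intercepts_um) / len(intercepts_um)

# Hypothetical chord lengths measured along test lines in one SEM image (µm)
chords = [0.45, 0.62, 0.58, 0.71, 0.49, 0.80, 0.55]
print(round(mean_intercept_size(chords), 2))  # → 0.6
```

In practice, many test lines over several micrographs are averaged; the more intercepts measured, the closer the estimate approaches the true mean particle size.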
Influence of the Powder Mix Weight and Synthesis Conditions on Its Temperature and Propagation

Figure 3 contains SEM images of the initial Al-Ti-B powder structure after mechanical blending for 15 and 45 min. After 15 min, the structure consists of deformed particles with Al, Ti, and B inclusions (see Figure 3a, regions 1, 3, 4, 6, 9). There are also rounded Al and Ti inclusions without boron (regions 2, 5, 7, 8). After 45 min, the structure includes planes and irregular particles (Figure 3b). According to the elemental analysis, they comprise Al and B inclusions (Figure 3b, regions 1-9). These results are consistent with those obtained in [18], where a similar particle structure is obtained after 15 min of mechanical blending of 20 g of Al-Ti-B powder. Thus, a larger weight of the initial powder mix requires longer mechanical blending.

In Figure 4, we present thermal curves for the synthesis of 1000 g (1 kg) of the initial powder. Peak 1 on these curves describes the exothermic reaction between the initial powder components, which is accompanied by a large amount of generated heat. This peak corresponds to synthesis at 1850 °C.

A comparison of the results obtained here and in [18] shows that the SHS temperature grows by 200 °C with increasing weight of the Al-Ti-B system powder from 20 to 1000 g. This temperature rise is determined mostly by the larger diameter of the initial powder sample and, consequently, the larger reacting surface of its components. This results in a larger amount of generated heat and temperature growth. The same was observed by Borovinskaya et al. [25], who detected the relation between the temperature and the sample dimension in several systems (for example, in Ti-B, Ti-2B, Ti-C, etc.). It should be noted that the initial powder mix was poured into the graphite crucible without preliminary compaction (bulk density).
At the same time, the density of powder samples obtained by cold uniaxial compaction in [18] from 20 g of the Al-Ti-B powder was higher than the bulk density. Yeh and Chen [26] reported that the SHS temperature grew with increasing density of the initial sample. However, in our experiment, the SHS temperature in the Al-Ti-B system powder with the bulk density and 1000 g weight was higher than that of the powder with 20 g weight. As mentioned above, the increase in the weight and volume of the initial powder mix led to growth in the heat generation during the synthesis process due to the larger reacting surface. It was assumed that this heat compensated for the lower contact between the powder components associated with the reduced density of the initial powder mix, and that this accompanied the higher temperature than in laboratory conditions. This allowed us to conduct stable and complete synthesis in conditions approaching semi-industrial ones, i.e., without preliminary compaction of the initial powder mix.

In Figure 4a, the temperature slightly lowers after peak 1 and then grows again (peak 2). This is conditioned by the intense heat absorption in the adjacent regions, where the remaining amount of heat results in temperature growth. This phenomenon was detected in [27]: it is reported that endothermic processes are stipulated by the melting and dissolution of the initial mixture components. Afterwards, the temperature drops, and the synthesis product is cooled (region 3). The heat absorption by adjacent regions can cause wavefront destabilization and spin wave propagation. It is noteworthy that the synthesis process conducted in the graphite crucible does not allow us to observe the wavefront propagation. At the same time, the surface of the synthesis product in Figure 4b,c has lines (4) separated by pores. Their formation can probably be attributed to the localization of reaction centers and their motion along a helical path. A temperature gradient appears at the boundaries of the reaction centers and the unreacted region, which results in pore formation. These lines are typical for spin wave propagation. Based on these data and the results obtained in [27], we find that the Al content of 60 wt.% in the initial Al-Ti-B system leads to an intense absorption of the large amount of heat necessary for its melting. This facilitates the wavefront destabilization and spin wave formation (Figure 4d, region 5). In work [18], the Al-Ti-B system powder consisting of 60 wt.% Al and weighing 20 g also demonstrated spin wave propagation during the synthesis process. In conclusion, the wavefront propagation in the Al-Ti-B system powder does not change with the weight of the initial powder mix increasing from 20 to 1000 g.

Influence of the Powder Mix Weight and Synthesis Conditions on Structure and Phase Composition

The SEM images in Figure 5 demonstrate the structure of the SHS product preliminarily treated in a 5 vol.% HCl solution. The structure consists of irregular rounded particles (region 1) distributed in the matrix (region 2). The particle size ranges between 0.17 and 4 µm, while the average size is 0.61 µm. The particle size distribution is dominated by 0.5 µm particles.
The XRD pattern and EDX mapping of the surface of the SHS products obtained from the Al-Ti-B powder are presented in Figure 6. These products contain TiB2 and Al phases. The crystal lattice parameters of these phases do not qualitatively differ from each other and are comparable with those of the reference Al and TiB2 phases [28,29]. There are also traces of the Al3Ti intermetallic phase, while EDX mapping of the SHS product structure (Figure 6b-e) shows Ti and B elements near the detected particles. At the same time, the matrix consists of Al, with Ti elements in some areas.
When comparing these data with the XRD patterns, we found that the SHS products, resulting from exothermic reactions in the Al-Ti-B system powder, consisted of TiB2 particles distributed in a matrix based on Al with Al3Ti inclusions. When comparing these results with those achieved in [18] for 20 g of the initial powder mix, we observed a slight growth (from 0.4 to 0.6 µm) in the average size of TiB2 particles for 1000 g of the initial powder. This change was attributed to the temperature rise, which led to the growth of TiB2 crystalline particles and, thus, of the average particle size [30]. It is worth noting that, in work [18], the synthesis of the Al-Ti-B system with 20 g weight did not result in the formation of Al-Ti intermetallic compounds. Based on the data presented herein, we suggest that the formation of intermetallic compounds was associated with a deviation from stoichiometry during the preparation of the mixture with a larger particle size. As a result, it was established that increasing the mass of the sample from 20 to 1000 g does not lead to a significant change in the phase composition and structure of the synthesis products. Therefore, it can be assumed that increasing the volume of the mixture will neither complicate the technological process nor lead to additional economic costs.

As mentioned earlier, metal matrix composites can be used in selective laser sintering as the main powder material or an additive. In this regard, after the synthesis of 1000 g of the ceramic sample, it is necessary to comminute it. Figure 7 presents the particle size distribution of the Al-TiB2 metal matrix composite, SEM images of its particles, and an EDX analysis of the surface structure of these particles. The particle size after milling ranges from 0.5 to 95 µm, and the average size is 42.3 µm. The powder particles are fragmented, and their surface consists of TiB2 inclusions. The obtained powder can be used as the main raw material or an additive in SLS, vacuum sintering, and hot pressing. In addition, the plasma-spheroidizing method can be applied to this powder to improve its flow through a nozzle during fabrication [31].
Conclusions

This work investigated the structure and phase composition of the Al-TiB2 metal matrix composite produced from the Al-Ti-B system powder in semi-industrial SHS conditions. In total, 1000 g of the Al-Ti-B system powder (60 wt.% Al) was mechanically blended in a ball mill. The obtained mixture was placed in a graphite crucible without preliminary compaction. The synthesis process was performed in a constant pressure reactor in an argon medium. Summing up the results, it can be concluded that:
- The synthesis temperature in 1000 g of the Al-Ti-B system powder was 200 °C higher than that in the 20 g samples synthesized in laboratory conditions;
- The final product did not differ from that obtained in laboratory conditions and consisted of the Al matrix and TiB2 ceramic particles. There were, however, Al3Ti intermetallic particles, probably due to the semi-industrial conditions of the SHS process;
- The growth in the SHS temperature led to an increase in the average size of TiB2 particles from 0.4 to 0.6 µm, as compared to that of the laboratory samples;
- The SHS-produced composite was comminuted to 42.3 µm particles, which were fragmented and had the structure inherited from the Al-TiB2 composite;
- The obtained powder can be used as the main raw material or an additive in SLS, vacuum sintering, and hot pressing.
Figure 3. SEM images of 1000 g of initial Al-Ti-B system powder and elemental composition after mechanical blending: (a) 15 min, (b) 45 min.
Figure 4. Thermal curves for 1000 g of Al-Ti-B system components (a), synthesis product surface (b,c), layerwise wave propagation in the initial powder mix (d).
Figure 5. SEM images of the structure of the SHS product with the weight of 1000 g (a-c) and particle size distribution in the matrix (d).
Figure 6. XRD pattern with phase composition (a) and EDX mapping (b-e) of the SHS product obtained from 1000 g of the Al-Ti-B system powder.
Figure 7. Particle size distribution (a) and SEM images (b,c) of particles in the Al-TiB2 metal matrix composite, EDX of the surface structure of the particles of the crushed composite (d).
Table 1. Manufacturers, average particle size, and purity of powders.
Awareness of Iranian Medical Sciences Students Towards Basic Life Support; a Cross-Sectional Study

Introduction: Augmentation of the number of trained basic life support (BLS) providers can remarkably reduce the number of cardiac arrest victims. The aim of this study was to evaluate the level of BLS awareness among students of medical sciences in Iran. Methods: This multicenter cross-sectional study was performed on medical students at the 4 major medical schools in Tehran, the capital of Iran, between Jan 2018 and Feb 2019, using a convenience sampling method. The level of medical sciences students' awareness of BLS was measured using an international questionnaire. Results: Finally, 1210 students with a mean age of 21.2 ± 2.3 years completed the survey (79% female). 133 (10.9%) students had CPR experience and none had received any formal training. None of the responders could answer all questions correctly. The mean awareness score of participants was 11.93 ± 2.87 (range: 10.13-17.25). The awareness score was high in 49 (4.04%) participants, moderate in 218 (18.01%), and low in 943 (77.93%) of studied cases. Conclusion: Based on the findings of this study, more than 70% of the studied medical sciences students obtained a low score on BLS awareness.

Introduction

Cardiac arrest is a fatal condition responsible for a large number of deaths in the modern world, and it has remained common worldwide (1-3). Deaths caused by cardiac arrest can be prevented via simple maneuvers and skills most of the time (5). Cardio-pulmonary resuscitation (CPR) is a life-saving and valuable technique that was invented back in 1960 (6); it is indeed a simple procedure that permits almost anyone to sustain life, and it decreases mortality by up to 50% in the golden minutes after cardiac and respiratory arrest (7,8).
Based on the place in which a cardiac arrest takes place, it is divided into two categories: out-of-hospital cardiac arrest (OHCA) and in-hospital cardiac arrest (IHCA). OHCA occurs in approximately 19-104 per 100,000 persons each year (0.019-0.104%), and 10% of the victims are said to be saved at the hospital (9). Statistics indicate that 350,000 people in Europe die annually because of OHCA (9). In the USA, OHCA is responsible for 760,000 deaths per year (10). Augmentation of the number of trained basic life support (BLS) providers can remarkably reduce the number of cardiac arrest victims (16). Therefore, many countries worldwide have integrated these topics into the curricula of their educational centers or even workplaces, believing that any individual in society should have sufficient knowledge and awareness to provide BLS when needed (18). With this in mind, the expectation from physicians and paramedical staff is naturally higher, as their careers require this knowledge. This study aimed to evaluate the level of BLS awareness among Iranian medical sciences students studying in four major Iranian medical schools.

Study design and participants

This multicenter cross-sectional study was performed on medical students studying at the 4 major medical schools (Tehran Medical Sciences of Islamic Azad University, Tehran University of Medical Sciences, Shahid Beheshti University of Medical Sciences, and Iran University of Medical Sciences) in Tehran, the capital of Iran, between Jan 2018 and Feb 2019, using a convenience sampling method. The level of medical sciences students' awareness of BLS was measured using an international questionnaire. The study protocol was approved by the Ethics Committee of Iran University of Medical Sciences (IR.IUMS.REC.1399.1291).

Participants

Being a student in one of the fields of medical sciences (medicine, nursing, and midwifery) was the inclusion criterion. There was no sex or age limitation in this study.
Incomplete questionnaires were excluded (29 questionnaires).

Data gathering

Demographic data (age, sex, educational status), CPR experience, and attendance of BLS courses were collected using a checklist. In addition, an international questionnaire that measures the level of awareness about BLS was used for evaluating the awareness level of participants. We used the Persian version of this international questionnaire, which was designed by Özbilgin Ş. et al. based on the latest version of the AHA guideline. The questionnaire has 20 multiple-choice questions, each worth 1 point (19). Scores are categorized as 16-20 (high), 11-15 (moderate), and 0-10 (low). The Persian version of this questionnaire was validated by Ziabari et al. in 2019 (6).

Statistical analysis

The required sample size of 1210 participants was calculated using Raosoft software. For statistical analysis, SPSS software version 22 (SPSS Inc., Chicago, IL, United States) was used. The findings were presented as mean ± standard deviation or frequency and percentage.

Results

Finally, 1210 students with a mean age of 21.2 ± 2.3 years completed the survey (79% female). Baseline characteristics of the studied participants are presented in Table 1. Among the participants, 133 (10.9%) had CPR experience and none had received any formal training. Table 2 shows the results of students' responses to the 20 questions regarding BLS awareness. None of the responders could answer all questions correctly. The mean awareness score of participants was 11.93 ± 2.87 (range: 10.13-17.25). The awareness score was high in 49 (4.04%) participants, moderate in 218 (18.01%), and low in 943 (77.93%) of studied cases.

Discussion

Based on the findings of this study, more than 70% of the studied medical sciences students obtained a low score on BLS awareness.
Nowadays, with the growth of cardiopulmonary diseases, the rate of cardiac arrest has been remarkably increasing (3-6). It is expected that a majority of community members be efficiently aware of BLS (especially CPR) (7). CPR is one of the essential skills that members of a society have to learn, because it is a life-saving skill and can reduce the number of OHCA victims (8). Based on the reports, performing proper basic life support can reduce the mortality rate, especially in OHCA (3-5, 7, 9). The findings of this cross-sectional study showed low awareness of BLS among medical sciences students. About 90% of cases had no experience of CPR, and more than 95% of participants had no idea about safety and did not know what to do when someone lies unresponsive on the street. Many people are also reluctant to give mouth-to-mouth ventilation.

Limitations

Our study has some limitations. One of them is that it was conducted only in four medical universities in Tehran and did not involve the whole country. This study only evaluates awareness; knowledge and practice regarding BLS were not evaluated. The number of students who completed the survey is not enough to generalize the results to other medical schools in Iran.

Conclusion

Based on the findings of this study, more than 70% of the studied medical sciences students obtained a low score on BLS awareness.

Author contribution

All authors made substantial contributions, revised the manuscript, and approved the final version for publication.

Funding sources

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Conflict of interest

None to declare.
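The scoring scheme described in the Methods (20 one-point items, with totals binned as 0-10 low, 11-15 moderate, 16-20 high) can be sketched in a few lines. The function names and the sample answer strings below are illustrative, not taken from the study instrument:

```python
def bls_score(answers, answer_key):
    """Score the 20-item questionnaire: 1 point per correct answer."""
    return sum(given == correct for given, correct in zip(answers, answer_key))

def awareness_level(score):
    """Bin a 0-20 total into the study's awareness categories."""
    if score >= 16:
        return "high"
    if score >= 11:
        return "moderate"
    return "low"
```

For example, a participant scoring 12 of 20 would fall into the "moderate" band, matching the mean score of 11.93 reported above.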
Male-biased litter sex ratio in the southernmost Iberian population of edible dormouse: a strategy against isolation?

Litter sex ratio is a key component of parental fitness due to its impact on lifetime reproductive success. Multiple causes may lie at the origin of sex ratio variation among species and populations, such as maternal condition, local resource competition, presence of helpers, habitat quality or inbreeding levels. Whereas variation in sex allocation between species is relatively well understood, it is still unclear how and why litter sex allocation differs within species. Here, we present an analysis of litter sex ratio variation in two populations of edible dormice (Glis glis) over nine years of study. The populations are situated in the Montnegre and Montseny massifs in Catalonia (NE Iberian Peninsula). The Montnegre population is nowadays an isolated population located at the southernmost range edge of the species in the Iberian Peninsula. Litter sex ratio was male-biased in Montnegre but balanced in Montseny, whereas both populations showed a balanced adult sex ratio. We suggest that this differential sex allocation investment in Montnegre may be a strategy to overcome isolation effects in this massif, as males are the dispersing sex in this and other rodent species.

Introduction

Fisher's principle argues that natural selection should produce balanced sex ratios if the cost of production of both sexes is the same (Fisher, 1930). This principle has been corroborated in many species, but a large number of studies have also found evidence of a biased sex ratio at birth (Clutton-Brock, 1986; Clutton-Brock & Iason, 1986; Cockburn et al., 2002; Komdeur, 2012). There is now substantial empirical and theoretical evidence that multiple causes may lie at the origin of this bias, especially in taxa with complex life histories and social systems such as birds or mammals (Cockburn et al., 2002).
First, females in better condition (i.e., status, territory quality or body characteristics) should produce a higher proportion of males (the Trivers and Willard hypothesis; Trivers & Willard, 1973). The underlying explanation is that by producing sons they may achieve a greater fitness return for an equal investment (Trivers & Willard, 1973). Second, in species with sex-biased competition, sex allocation should be biased towards the dispersing sex. Under intensified competition, the dispersing sex should be overproduced to promote a reduction in competition by increasing the number of potential dispersers in the population (local resource competition hypothesis; Clark, 1978; Silk, 1983). Third, when offspring of one sex cooperate with each other or with their parents, the helping sex should be overproduced (local resource enhancement hypothesis; Emlen et al., 1986; Komdeur et al., 1997). Fourth, given that the environment varies spatially, reproductive performance should also vary according to the quality of the reproductive habitat (Julliard, 2000). Thus, if dispersal behavior is biased, it would be adaptive to overproduce the dispersing sex in low-quality habitats, since this sex is more likely to disperse to another habitat of better quality. On the contrary, in high-quality habitats it would be adaptive to overproduce the philopatric sex. Finally, in inbred populations an overproduction of the dispersing sex is expected to increase the fitness return for females. Indeed, given that a negative relationship between inbreeding and fitness is often observed (see Kempenaers, 2007 for a review) and relatives tend to be clustered around the natal site (Greenwood, 1980), the dispersing sex would achieve greater fitness by mating with unrelated individuals. Despite decades of interest, sex allocation studies still give unexpected results, especially in higher vertebrates (West et al., 2002).
Furthermore, most studies of variation in sex allocation have been based on among-species comparisons, despite the fact that the proposed mechanisms should also apply within species. In fact, recent work has shown within-species variation in sex allocation (Stauss et al., 2005; Michler et al., 2012). The aim of this study was to investigate litter sex ratio variation in the two southernmost wild populations of edible dormice (Glis glis) of the Iberian Peninsula, situated in Catalonia (the Montseny and Montnegre massifs). Although the two studied massifs are only separated by 10 km, the population of edible dormice of Montnegre (located further south) is virtually isolated from the nearest population (Montseny). Additionally, the Montnegre population has suffered a recent retreat of its deciduous forest, experiences drier conditions than the Montseny population and, therefore, has an uncertain future in terms of survival (Ribas et al., 2009). Because there are important differences in habitat quality between these two sites, we predicted that sex allocation and, as a consequence, litter sex ratio differ between the Montseny and Montnegre edible dormouse populations.

Studied species

The edible dormouse is an arboreal, nocturnal and hibernating small mammal with a general distribution across Europe (Amori et al., 2008). From mid-June to mid-August, they mate (Bieber & Ruf, 2004; Özkan, 2006) and give birth to one litter per year in Southern Europe (occasionally two; Santini, 1978; Pilastro, 1992). On the contrary, in central and northern Europe edible dormice are characterized by low or no reproduction in years with low food availability (i.e., low beech or oak crops; Pilastro et al., 2003). A litter is composed of 1 to 11 pups, with an average of 4.75 to 6.80 pups depending on the geographical location (Kryštufek, 2010). Pups are born hairless, develop their fur at 16 days, open their eyes after 3 weeks and leave the nest at 30 days (Kryštufek, 2010).
Study sites and sample collection

The data used for this study were obtained from a capture-mark-recapture study monitoring the two southernmost populations of the edible dormouse on the Iberian Peninsula: Montseny and Montnegre. The sampled area in Montseny is a 5-ha deciduous forest (Quercus petraea, Fagus sylvatica, Corylus avellana and Acer opalus) mixed with Q. ilex and Ilex aquifolium, surrounded by beech-dominated deciduous forest. It is situated in the center of the Montseny Biosphere Reserve (range 1078-1143 m a.s.l., 41°47′59″ N, 2°25′14″ E), with a mean temperature of 9.5 °C and a precipitation of 975 mm per year (fig. 1). The sampled area in Montnegre is a 5-ha deciduous forest (Q. canariensis, Q. petraea, C. avellana, Castanea sativa and Prunus avium) mixed with Q. ilex and I. aquifolium, surrounded by Mediterranean forest. It is situated on the northern slopes at the top of the Montnegre massif (range 700-764 m a.s.l., 41°39′37″ N, 2°34′44″ E), with a mean precipitation of 840 mm per year (fig. 1). The southernmost population, Montnegre, is virtually isolated from the nearest population (Montseny). Indeed, despite the short distance (10 km) between populations, they are separated by open unsuitable habitat and a freeway that is likely to strongly hinder the dispersal of animals from one population to the other. For data collection, nest boxes (30 cm × 15 cm × 15 cm, with a 5 cm entry hole) were attached to trees at a height of approximately 3 m aboveground (Freixas et al., 2011). Nest boxes are frequently used by dormice during the active period. Initial sampling was designed to obtain data during the reproductive period; 2012-2015 sampling was designed to increase data quantity and quality by increasing monitoring effort in order to encompass the overall active period of the species (table 1). Nest boxes were inspected during the day, when dormice can be found sleeping inside the boxes, and each inspection lasted a maximum of 15 min per individual.
All captured dormice were identified by a unique number, sexed, and aged. Pups were aged according to the color of their fur (pink pups; grey pups with closed eyes; grey pups with open eyes); juveniles were aged according to their body size and tibia length (30 days of life); yearlings (after their first hibernation, already sexually mature) and adults (after their second hibernation) (Schlund, 1997; National Dormouse Monitoring Programme, 2015). Juveniles, yearlings, and adults were marked using a transponder (AVID Musicc, 8 × 2.1 mm) injected under the skin of the neck. The implantation of the transponder has no obvious adverse effects. Also, a numbered metal ear-tag (National Band and Tag Co., USA) was placed on the ear. We measured litter size as the number of pups less than fifteen days old (i.e., pink pups or grey pups with closed eyes) because there is a low rate of mortality at this stage in both studied populations (personal observations, unpublished). The analysis of sex ratio was performed only on litters with at least two pups (only one litter had a single pup) for which the sex of all pups was known. The number of captured mature individuals (yearlings and adults) each year was used to calculate the sex ratio of mature individuals.

Figure caption fragment: 5 × 4 nest boxes placed in a grid and separated by 30 m, occupying just over 1 ha (plots of the same population are separated by a maximum distance of 675 m).

Statistical analysis

A Generalized Linear Mixed Model (GLMM) was used to investigate whether the sex ratio differed between populations (Montseny and Montnegre). The GLMM was fitted using the glmer function of the R package lme4 (Bates et al., 2011), with the proportion of males per litter as the response variable, a logit link, and a binomial error distribution. The population and the year of sampling were included as fixed factors.
To control for females having reproduced several times during their lives, maternal identity was included as a random effect. To investigate the mature individual sex ratio in the studied populations, two-tailed Wilcoxon paired tests were used to compare the yearly sex ratio of mature individuals within each population. All statistical analyses were conducted using R software version 3.3.0 (R Development Core Team, 2016).

Results

Litter sex composition was determined for 74 complete litters (404 pups from 60 different mothers) (see table 2 for details on litter sex composition per year). In Montseny, sex composition was determined for 48 complete litters (250 pups from 38 different mothers, with a mean (± SD) litter size of 5.21 ± 1.62) and in Montnegre, for 26 litters (154 pups from 22 different mothers, with a mean (± SD) litter size of 5.92 ± 1.65). More than half of the marked juveniles were not recaptured (i.e., either dispersed or dead) after their first hibernation (Montseny: 91% of males and 87% of females; Montnegre: 72% of males and 54% of females). In Montseny, the litter sex ratio (proportion of males in a litter) was 0.52 and did not significantly differ from 0.50 (95% CI = 0.46-0.58; fig. 2). On the contrary, the litter sex ratio in Montnegre was 0.61 and significantly departed from 0.50 (95% CI = 0.55-0.67), showing a male-biased litter sex ratio (fig. 2). In Montseny, the sex ratio of mature individuals was found to be balanced (Wilcoxon paired test: V = 0, P = 0.18, N = 112). Surprisingly, the male-biased litter sex ratio found in Montnegre was no longer present in mature individuals, whose sex ratio was found to be balanced (Wilcoxon paired test: V = 3, P = 0.58, N = 34).

Discussion

Litter sex ratio was found to be male-biased in one isolated southernmost population of edible dormice, but not in another nearby population.
The bias towards males reported in Montnegre is consistent with observations from a German population (Koppmann-Rumpf et al., 2015). No sex ratio bias was found for mature individuals in either population, as was also the case in the German study (Koppmann-Rumpf et al., 2015). Litter sex ratio variation at Montseny (balanced) and Montnegre (male-biased) may be due to the fact that different selection pressures may be operating in these close-by populations. We hypothesize that a lack of mature males in the population with an unequal sex ratio would be responsible for the overproduction of young males, compensating for losses at the mature age. Indeed, the population of edible dormice of Montnegre is virtually isolated and is composed of few mature individuals. Isolated populations experience particular environmental, demographic and genetic contexts that may favor sex allocation strategies different from those in nearby non-isolated populations. First, overproducing the dispersing sex is expected to generate higher benefits in terms of fitness if dispersers become established in a better habitat than the one in which they were born, because in a favorable habitat reproductive performance should be higher (Julliard, 2000). The Montnegre population may be considered to thrive in a low-quality habitat (an isolated and small population). In these conditions, breeding females may enhance their fitness by producing a higher number of individuals of the dispersing sex. However, since Montnegre is surrounded by Mediterranean forests, less suitable for this species, we expect that dispersing individuals will have lower chances of reaching suitable territories. Thus, although plausible, an overproduction of males may not be effective in Montnegre given the limited suitable habitat and longer dispersal distance (i.e., individuals moving from Montnegre to Montseny should travel a minimal distance of 10 km of unsuitable habitat).
Second, small and isolated populations may experience reduced genetic diversity and increasing levels of inbreeding, leading to inbreeding depression (Wright, 1931; Nei et al., 1975). Increasing dispersal may be effective in reducing inbreeding because dispersers are more likely to mate with unrelated individuals (Motro, 1991; Gandon, 1999; Perrin & Mazalov, 1999). As Montnegre is a small isolated population, we expect high inbreeding levels. Thus, given that males are the main dispersing sex in the edible dormouse (Bieber, 1995; Ściński & Borowski, 2008), the overproduction of males found in Montnegre could be a mechanism to increase the number of dispersers and ultimately to increase the fitness return for females. Contrary to the Montnegre population, the edible dormouse population of Montseny has suitable habitat connecting it with the northern populations of the Iberian Peninsula (Torre et al., 2010). Thus, we expect low inbreeding levels and a high-quality habitat in this population. Contrary to Montnegre, dispersal may not be a driver of litter sex ratio in Montseny, which may explain the balanced litter sex ratio in this population. Although the inability to quantify inbreeding levels as well as the dispersal behavior of edible dormice is a limitation of this data set, it may be solved in the near future by sequencing a large number of single-nucleotide polymorphisms and conducting GPS surveys, respectively. There is an important difference between the litter sex ratio (biased) and the mature individuals' sex ratio (balanced) in Montnegre. One reason that may explain this difference may be a sex-biased mortality rate, as has already been found in birds and mammals (Promislow, 1992; Liker & Székely, 2005), or sex-biased mortality due to sex-biased dispersal (Lucas et al., 1994). We suggest that the biased litter sex ratio (but unbiased mature sex ratio) in Montnegre may be a strategy to compensate for biased dispersal with limited immigration and/or high male mortality.
Accordingly, Koppmann-Rumpf et al. (2015) proposed that juvenile sex ratio deviations were compensated by higher mortality rates of juvenile males in a German edible dormouse population. Caution is required since no information is available regarding mortality rates in our study populations, although data collection is ongoing. In fact, our data cannot distinguish between mortality and dispersal, since individuals that are no longer detected (i.e., recaptured) could be actually dead or could have emigrated. In conclusion, previous and present investigations of litter sex ratios in edible dormice populations, carried out at different locations across the species' range, showed different results. These differences could reflect variation in selective pressures acting on sex ratios. In Montnegre, poor habitat quality, small population size and isolation may lead females to produce a higher number of males per litter in order to increase dispersal. Alternatively, maternal condition, local resource competition or communal breeding (i.e., the Trivers and Willard hypothesis, 1973; the local resource competition hypothesis: Clark, 1978; Silk, 1983; and the helper repayment hypothesis: Emlen et al., 1986; Komdeur et al., 1997) could also explain litter sex ratio variation in edible dormice. Because testing such hypotheses requires additional data, preferably also comparing more than two populations, future analyses linking mother condition, seed production, dispersal or survival patterns to litter sex ratios could shed light on the relative costs or benefits of producing unbiased (Montseny) versus biased (Montnegre) litters in edible dormice. Finally, further studies of the population dynamics of this species may provide some tools for conservation purposes in the southernmost populations of the Iberian Peninsula, threatened by oak forest decline due to climate change and land use (Ninyerola et al., 2007).
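As a rough illustration of the sex-ratio test reported in the Results, a normal-approximation (Wald) confidence interval can be computed from the counts. Note that the paper's intervals come from the GLMM, not from this formula, and the male count of 94 is inferred here from the reported proportion (0.61 of 154 pups), so this is only a sketch:

```python
import math

def wald_ci(successes, total, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - z * se, p + z * se)

# Montnegre: 154 sexed pups; the reported male proportion of 0.61 implies
# roughly 94 males (an inferred count, not taken from the paper's tables).
p, (lo, hi) = wald_ci(94, 154)
# An interval that excludes 0.5 is consistent with a male-biased litter sex ratio.
```

The resulting interval (roughly 0.53-0.69) excludes 0.5, in qualitative agreement with the GLMM-based interval of 0.55-0.67 reported for Montnegre.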
THαβ Immunological Pathway as Protective Immune Response against Prion Diseases: An Insight for Prion Infection Therapy

Prion diseases, including Creutzfeldt-Jakob disease, are mediated by transmissible proteinaceous pathogens. Pathological changes indicative of neurodegeneration have been observed in the brains of affected patients. Simultaneously, microglial activation, along with the upregulation of pro-inflammatory cytokines, including IL-1 or TNF-α, has also been observed in the brain tissue of these patients. Consequently, pro-inflammatory cytokines are thought to be involved in the pathogenesis of these diseases. Accelerated prion infections have been seen in interleukin-10 knockout mice, and type 1 interferons have been found to be protective against these diseases. Since interleukin-10 and type 1 interferons are key mediators of the antiviral THαβ immunological pathway, protective host immunity against prion diseases may be regulated via THαβ immunity. Currently, no effective treatment strategies exist for prion disease; however, drugs that target the regulation of IL-10, IFN-α, or IFN-β, and consequently modulate the THαβ immunological pathway, may prove to be effective therapeutic options.

Introduction

Prion diseases, a debilitating example of which is Creutzfeldt-Jakob disease (CJD), are caused by transmissible proteinaceous pathogens. Patients with prion disease show degenerative changes in the brain and nervous tissues that are progressive and eventually fatal. Currently, no effective medications exist for the treatment of these detrimental diseases, and the pathogenesis and immunological responses associated with prion diseases remain unclear. This review discusses the host immunological pathways that attempt to limit prion diseases. Prions are essentially defined by the protein-only hypothesis, which states that these pathogens comprise only proteins and lack any genetically inherited nucleic acid material.
Their discovery led to the abandonment of the scientific dogma that only DNA- or RNA-containing organisms could transmit infectious diseases. This was further established by the isolation of purely proteinaceous content, lacking either DNA or RNA, from scrapie-infected mouse and hamster brains. Additionally, this proteinaceous content was found to be transmissible and infectious. Prions consist of the scrapie prion protein (PrP sc), which is a conformer of the cellular prion protein (PrP c). PrP sc aggregates recruit PrP c, which results in template-mediated misfolding that causes the conformational change of PrP c to PrP sc.

The clinical features of CJD include rapid, progressive dementia accompanied by ataxia, myoclonus, visual abnormalities, and other manifestations of nervous system dysfunction, including typical periodic sharp wave complexes on electroencephalography (EEG). Neuropathological observations include abnormal prion protein aggregation, spongiform changes, neuronal loss, and gliosis. Despite these common characteristics, the disease has been known to display great phenotypic variability ever since it was first described [5]. Based on the combination of the methionine (M)/valine (V) polymorphism at codon 129 and the PrP sc conformation type (type 1 or type 2), sporadic CJD can be classified into subtypes, including VV1, VV2, MM1, MM2, MV1, and MV2. The VV2 subtype of this disease involves the subcortical structures, including the nuclei of the brain stem, and presents as early ataxia and late dementia. The manifestations of the MV2 subtype, which has clear cerebellar involvement, are similar to those of VV2, with early ataxia that gradually progresses to dementia over time. The MM2 subtype is characterized by well-defined spongy changes in the thalamus and lower olives, and manifests as sleeplessness, agitated behavior, ataxia, and cognitive alterations. Dementia is a manifestation of both the MM2 and VV1 subtypes.
The MM2 subtype is associated with pathological changes in all cortical layers, while the VV1 subtype is characterized by abnormalities of the cortical area and striatum. Although these categorizations are useful, they do not fully represent the broad spectrum of these illnesses, and as many as 35% of patients present with a mixed phenotype [6]. The most common symptom is cognitive dysfunction, followed by cerebellar, constitutional, and behavioral changes in approximately 25% of cases [7]. About one-third of patients with sporadic Creutzfeldt-Jakob disease show prodromal signs, including asthenia, headache, malaise, vertigo, changes in sleep or eating patterns, and weight loss [7,8]. Approximately one-fifth of patients initially present with behavioral changes, which develop later in about 50% of patients during the course of the disease. Higher cortical dysfunctions, including aphasia, apraxia, negligence, and acalculia, among others, are early disease indicators in about 5% of patients [7]. Vision or oculomotor impairment occurs early in approximately 10% of cases, and develops during the course of the disease in approximately 35% of cases. Additionally, about 7% of patients with sporadic Creutzfeldt-Jakob disease present with sensory symptoms [7]. Creutzfeldt-Jakob disease mimics several other neurological or psychiatric diseases, which often results in incorrect diagnoses [9].

Sporadic Fatal Insomnia

Sporadic fatal insomnia (FI) is a rapidly progressive neurodegenerative disease characterized by progressive insomnia that is followed by dysautonomia, stupor, and death [10]. The clinical manifestations include sleep abnormalities, psychiatric disorders, gait problems, and mobility disturbances. Pathological findings are observed in the thalamus and lower olives [11], with elevation of type 2 PrP sc commonly identified in patients with MM homozygosity [10,12].
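The sporadic CJD subtype labels described above are simply the codon-129 genotype combined with the PrP sc type. A minimal lookup of the features summarized in the text might look like the following; the data structure and function names are hypothetical, and the feature strings paraphrase the descriptions above rather than quoting a clinical reference:

```python
# Hypothetical lookup table; feature strings paraphrase the text above.
SUBTYPE_FEATURES = {
    "VV2": "subcortical/brain-stem involvement; early ataxia, late dementia",
    "MV2": "cerebellar involvement; early ataxia gradually progressing to dementia",
    "MM2": "spongy changes in thalamus and lower olives; insomnia, agitation, ataxia, cognitive alterations",
    "VV1": "abnormalities of the cortical area and striatum; dementia",
}

def sporadic_cjd_subtype(codon129, prpsc_type):
    """Compose a subtype label from the codon-129 genotype and PrPsc type,
    e.g. ('MM', 1) -> 'MM1'."""
    if codon129 not in {"MM", "MV", "VV"} or prpsc_type not in {1, 2}:
        raise ValueError("unknown codon-129 genotype or PrPsc type")
    return f"{codon129}{prpsc_type}"
```

This merely encodes the classification scheme; as the text notes, up to 35% of patients present with a mixed phenotype that such a lookup cannot capture.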
Variably Protease-Sensitive Prionopathy
Patients with MM homozygosity demonstrate significant Parkinsonism and myoclonus, with no psychiatric or cognitive involvement. In contrast, patients with the MV and VV genotypes have significantly higher levels of psychiatric dysfunction and dementia than those without Parkinsonism and myoclonus. Approximately half of all patients across the three polymorphisms have been observed to have ataxia. CSF 14-3-3 protein, EEG, and MRI examinations are generally not useful for diagnosis [13]. Spongiform changes and gliosis are observed diffusely in the cerebral cortex, basal ganglia, thalamus, and cerebellum [14,15].
Familial Creutzfeldt-Jakob Disease
Most patients with genetic prion disorders have no known family history. Familial Creutzfeldt-Jakob disease generally presents as rapidly progressive dementia and ataxia accompanied by motor abnormalities. Disease onset is observed between 30 and 60 years of age. Many cases of familial Creutzfeldt-Jakob disease are caused by the E200K variant and are typically characterized by rapidly progressive dementia, myoclonus, and ataxia. MRI typically reveals symmetric striatal T2w/DWI hyperintensities, usually with reduced cortical ribboning [16]. EEG patterns may vary among the variants responsible for familial Creutzfeldt-Jakob disease; however, a late periodic sharp wave complex is typical. CSF markers, including 14-3-3 protein, NSE, and t-tau, may be elevated, albeit at a lower frequency than that observed in sporadic Creutzfeldt-Jakob disease. A previous study reported that real-time quaking-induced conversion (RT-QuIC) from CSF has a higher sensitivity than 14-3-3 protein or t-tau for the diagnosis of familial Creutzfeldt-Jakob disease [17].
Gerstmann-Sträussler-Scheinker Syndrome
This disorder has near-total penetrance and is characterized by tremors, cerebellar ataxia, speech and swallowing dysfunction, pyramidal signs, Parkinsonism, sensory dysesthesia, and cognitive symptoms.
Disease onset may occur any time between 20 and 80 years of age, and the duration can vary from a few months to more than 10 years. The spectrum of onset, duration, and clinical manifestations may be narrower for specific variants. Codon 129 polymorphisms may also contribute to disease manifestation, with individuals homozygous for the MM genotype manifesting earlier onset of the disorder than those with an MV genotype at the same locus; this is also the case for the Pro102Leu mutation. In contrast, carriers of apolipoprotein E variants present with late onset of symptoms [18,19]. Gerstmann-Sträussler-Scheinker syndrome is a gradually progressive ataxic or motoric (e.g., Parkinsonian) disease with late-onset dementia. Approximately 10 PRNP variants are known to be associated with this syndrome, including P102L, P105L, P105T, A117V, Q145X, F198S, Q217R, and several OPRI [5]. The median age of onset is often 50-60 years, with a range of 20 to 70 years, although large variability is commonly seen, even within families.
Familial Fatal Insomnia
Patients with this disorder initially report hypersomnia due to mood and psychiatric changes, which are related to abnormal nocturnal sleep patterns in the early stages of the disorder [20,21]. Onset usually occurs at the end of the fourth decade, and subjects typically experience a severely progressive inability to sleep for a couple of months, followed by dysautonomia, including hyperhidrosis, tachycardia, and hyperpyrexia. Cognitive and motor manifestations usually appear later in the disease. In more advanced cases, polysomnography demonstrates a reduction in typical sleep transients, total sleep time, realization of dreams, and disorganization of sleep cycles [22], finally leading to protracted periods of stupor.
Autonomic dysfunction with hypertension, fever, and palpitations on movement, as well as gait disorders, have also been observed [23], along with an increase in total metabolic demand with cachexia. Although MRI results are non-specific and show diffuse atrophy, positron emission tomography (PET) imaging reveals significant hypo-metabolism in the thalamus and moderate hypo-metabolism in the corpus callosum [24]. Neuronal loss and gliosis are prominently seen in the anterior ventral and mediodorsal thalamic nuclei, as well as in the inferior olives. CSF 14-3-3 protein has low sensitivity for the diagnosis of this syndrome, and spongiform changes are known to occur very late during disease progression [25]. Even though a genetic test is essential for the definitive diagnosis of this disease, various schemes have been proposed to aid disease confirmation. As suggested by Krasnianski [26], these algorithms focus on clinical manifestations and polysomnography results. However, the diagnosis can be complicated by the absence of a family history, the low sensitivity of available confirmatory tests, and atypical clinical signs. In brief, patients with this disease do not meet the classical criteria for Creutzfeldt-Jakob disease (CJD), and consequently prion disease is often not suspected.
Other PRNP Mutations
Truncating variants lead to diseases with very unusual clinicopathological presentations: dementia progresses over time and is often similar to Alzheimer's disease, frontotemporal dementia, and other neurodegenerative diseases with amyloid prion angiopathy and tauopathy [27].
Kuru
This spongiform encephalopathy has a typical duration of one year and is characterized by progressive ataxia, dysarthria, dysphagia, tremors, and motor dysfunction. Cognitive symptoms and dementia are less prominent than in other prion diseases.
This disease was caused by ritualistic endocannibalism; women and children were more likely to be affected, since they were also more likely to consume brain tissue [28].
Iatrogenic Creutzfeldt-Jakob Disease (iaCJD)
A few patients have contracted Creutzfeldt-Jakob disease (CJD) following transfusion with contaminated blood [29]. Contamination can also be mediated by dura mater grafts and intracranial surgical devices. The clinical presentation of this disease is similar to that of sCJD, with typical symptoms including ataxia, rapidly progressing dementia, and myoclonus. Clinical manifestations related to growth hormone infusion tend to affect the cerebellum, with significant ataxia, and cognitive dysfunction developing later in the course of the disease [30]. iaCJD is more likely to occur in youth [31], and, as observed for other prion disorders, codon 129 polymorphisms seem to affect susceptibility to, and incubation time of, the disease [32]. The clinical phenotype and MRI results for iatrogenic Creutzfeldt-Jakob disease associated with dura mater grafts overlap with those of sporadic Creutzfeldt-Jakob disease [33].
Variant Creutzfeldt-Jakob Disease (vCJD)
Manifestations of variant Creutzfeldt-Jakob disease often begin with a psychiatric prodrome, at least six months before the onset of neurologic symptoms, which include dysesthesia, cognitive dysfunction, cerebellar dysfunction, dystonia, myoclonus, and chorea. The median age of onset is 27 years (range, 10-70 years), earlier than that of sporadic Creutzfeldt-Jakob disease, while the median disease duration of vCJD is typically 15 months [34]. Many patients with this disease are homozygous for methionine at codon 129 of PRNP, which indicates a possible role of the codon 129 genotype in susceptibility [34]. However, MV heterozygosity at codon 129 has also been seen in patients with variant Creutzfeldt-Jakob disease.
Unlike in other prion diseases, PrPSc in this disease is found not only in the central nervous system (CNS) but also in the lymphatic system, possibly due to acquisition via oral or blood routes [27,34]. vCJD initially manifests as psychiatric symptoms that progress to ataxia, as well as movement and cognitive dysfunction, within a year [34]. EEG findings and CSF 14-3-3 lack sufficient sensitivity to confirm the diagnosis [35], and CSF RT-QuIC is often negative. MRI signal intensity in the pulvinar area of the thalamus is the most sensitive indicator (pulvinar sign) of infection, and is seen in as many as 90% of cases [36]. Although a definite diagnosis requires brain biopsy, abnormal prion proteins can be detected in lymphatic tissue, rendering tonsillar biopsy the preferred choice of proof of infection [37].
Prions enter the digestive tract following their consumption. However, as prions are resistant to the acidic gastric milieu, this stage provides only minimal protection against prion infection. Previous studies have demonstrated the ability of prions to pass through the stomach and enter the intestine, where they accumulate in Peyer's patches [43]. Notably, the number of Peyer's patches is positively related to prion infectivity, and, thus, these structures play a critical role in the pathogenesis of prion disease. M cells lie scattered among typical enterocytes in the intestine and facilitate antigen uptake from the intestinal lumen to mediate immunosurveillance. However, certain pathogens, including prions, hijack these cells to cause infections. Previous studies have demonstrated efficient transcytosis of prion pathogens via M cells. Further, an oral challenge revealed that prion proteins enter M cells in Peyer's patches to infect hosts, and that the depletion of M cells in an animal model reduced the rate of infection by prion pathogens [43,44].
Following passage through the follicle-associated epithelium of the Peyer's patches, prions spread via a possible cell-mediated mechanism. Macrophages that engulf prion proteins may play minor roles in their transmission and spread [45][46][47][48]. More importantly, dendritic cells from gut-associated lymphoid tissue, such as Peyer's patches, transcytose these pathogens for antigen presentation to lymphocytes. Prions, in turn, exploit these mechanisms for intercellular transmission. Follicular dendritic cells, also a type of antigen-presenting cell, play a critical role in prion transmission [49]. Previous findings have demonstrated that prion proteins can accumulate in follicular dendritic cells [50][51][52][53][54][55][56][57][58], and mice with depleted follicular dendritic cells experience fewer intracerebral prion infections. These cells function as primary antigen-presenting cells that stimulate follicular helper T cells to produce interleukin-21 for B-cell antibody class switching in response to foreign antigens. Follicular dendritic cells usually express PrPC and are consequently primary targets for prions, which hijack them to aid their own transmission. Additionally, mice treated with lymphotoxin-β receptor antibodies, which deplete follicular dendritic cells, avoid splenic prion accumulation and experience slower prion neuro-invasion [59]. A different study, in which mice were treated with an inhibitor of the tumor necrosis factor receptor, reported similar observations on the prevention of prion infection. Additionally, mice lacking lymphotoxin-α and lymphotoxin-β, which are crucial for follicular dendritic cell functioning, experience fewer prion intraperitoneal infections [59][60][61]. These findings support the notion that follicular dendritic cells are vital for prion infectivity. Further, they play critical roles in the peripheral retention of prion pathogens within lymphoid tissues and in the replication of lymphotropic prion strains.
Chronic lymphocytic inflammation with follicular dendritic cell-dominant lymphoid follicles within affected organs enables ectopic prion protein replication, further supporting a key role for these cells in prion pathogenesis. After accumulation and replication in secondary lymphoid organs, such as lymphoid follicles containing follicular dendritic cells, prions disperse to the central nervous system. Animal models have demonstrated that this dispersal occurs through the autonomic nervous system. Previous studies have revealed that sympathectomy prevents or delays prion pathogenesis [62][63][64][65]; in contrast, sympathetic hyperinnervation in the secondary lymphoid organs of transgenic mice facilitates prion pathogenesis and nervous system invasion. Prion proteins are, therefore, believed to be transmitted via the sympathetic nerves to the spinal cord and brain. On reaching the brain, prions progressively aggregate in the CNS, causing fatal spongiform encephalopathy with synaptic and neuronal losses and neuroinflammation. Prion-mediated neuroinflammation may vary from aggressive to occasionally minimal. This process typically involves the activation of astrocytes and microglia, which is a prominent feature in patients with prion diseases [66][67][68][69][70][71]. Microglia, a subtype of macrophage located in brain tissue, function to clear apoptotic neurons subsequent to prion accumulation and infection; however, they typically fail to efficiently degrade the prion pathogens themselves [69]. Prion proteins also enable the transformation of macrophages from the M1 type to the M2 type [72]. Further, cytokines released by microglia augment the pathogenesis of prion infections.
Prion infections trigger NF-κB activation and the secretion of pro-inflammatory cytokines, including interleukin-1α, interleukin-1β, TNFα, and interleukin-6 [40,45], as has been observed in patients with prion diseases and in experimental mouse models. Additionally, the regulatory cytokine TGFβ is induced in mice after prion infection and, in conjunction with interleukin-6, plays a key role in the TH17 immunological pathway. This indicates the possible induction of TH17 immunity subsequent to prion infection. Mice with depleted interleukin-1 receptors have also been observed to have significantly prolonged incubation periods for prion infection. The whole infectious process of prion infection is shown in Figure 1.
Protective Immunity against Prion Diseases
The THαβ immune response appears to be the protective host immunological pathway that targets prion diseases. Previous studies have found that type 1 interferons and interleukin-10 are protective against prion infections [73,74], and play crucial roles in the antiviral THαβ immunological pathway. Interleukin-10 knockout mice have been shown to have shortened incubation periods for prion infection [75], and type 1 IFN administration protects animals from the same infection [73]. The key players in THαβ immunity include NK cells, CD8 T cells, IL-10-producing CD4 T cells, and IgG1 B cells [76]. The effector mechanisms of the THαβ immunological pathway are antibody-dependent cellular cytotoxicity (ADCC) executed by NK cells and MHC I-TCR-mediated cell cytotoxicity implemented via CD8 T cells [77].
Consequent to these processes, all intracellular protein and nucleic acid content is degraded via cellular apoptosis, halting viral infectivity. Several lines of evidence support the role of THαβ immunity in protection against prion diseases. Interferon regulatory factor 3 (IRF3) knockout mice show accelerated pathogenesis of prion infections [78]. IRF3, a MyD88-independent Toll-like signaling pathway mediator, functions downstream of Toll-like receptor 3 (TLR3). Additionally, repeated TLR9 stimulation initiates protective immunity against prion infections [79][80][81]. TLR3 and TLR9 function as sensors of viral infection and are responsible for initiating the THαβ immunological pathway, thereby potentially providing protective immunity against prion diseases. Furthermore, interleukin-10 knockout mice are more susceptible to prion diseases after intraperitoneal or intracerebral inoculation with prion pathogens. Since interleukin-10 is the key mediator of THαβ immunity, this pathway is thought to be protective against prion infections. Type 1 IFN treatment has also been demonstrated to protect mice against prion infections; these molecules are the first host cytokines produced against viral infections. Moreover, CXCR3 knockout mice have been shown to accumulate prion pathogens, with prolonged incubation periods, in a prion infection challenge. The levels of CXCL9 and CXCL10, the ligands of CXCR3, as well as the CX3CR1-CX3CL1 axis, are known to change during prion infections [82]. CXCR3 is a key chemokine receptor in THαβ immunity, and the CX3CR1-CX3CL1 axis plays a critical role in antiviral immune responses. Host antiviral immune effects include the extermination of infected cells via the action of CD8+ T cells or NK cells [83].
Although intracellular bacterial and protozoan pathogens are killed by macrophages as part of TH1 immunity, intracellular prions digested by macrophages are not completely destroyed. Macrophages typically destroy intracellular pathogens via the action of lysozymes or the generation of free radicals subsequent to iNOS activation. This effect is exerted via the degradation of the bacterial cell wall by lysozymes and lipid peroxidation of cellular membranes by free radicals. Proteins, however, are not highly susceptible to attack by free radicals, and prions in fact activate macrophages, such as microglia, to cause immune pathogenesis. Induction of antiviral THαβ immunity, which results in apoptosis of prion-infected cells accompanied by DNA fragmentation and protein degradation via the action of caspases, is critical in the defense against prion infections. Thus, apoptosis triggered by CTL or NK cells is the only successful immune response that degrades prion pathogens by utilizing the protein degradation machinery, consequently preventing further infection and transmission. Additionally, TH17 immunity plays a minor role in prion infections. TH17 immunity is known to use pro-inflammatory cytokines, including TNFα and IL-1, to activate neutrophils that, in turn, digest extracellular bacteria or fungi. Prions, not being extracellular pathogens, are not destroyed by neutrophils. However, they trigger TH17 immunity to mislead the host immune response and consequently prevent prion clearance by hindering the appropriate functioning of THαβ immunity. Collectively, the above-mentioned findings suggest that host antiviral immunity is largely protective against prion pathogens. The protective immunity against prion infection is shown in Figure 2.
Conclusions
Although prion infections trigger pro-inflammatory cytokines that facilitate TH17 immunity, antiviral THαβ immunity provides protection against these pathogens. Currently, there are no effective medications for the treatment of prion infections. However, key mediators of THαβ immunity, including type 1 interferons, interleukin-10, and TLR3/TLR9 stimulators, can be exploited to initiate the host immune response against prion infections. This will contribute significantly to the development of strategies for the management of prion diseases.
Genetic Markers of Graves' Disease: A Historical View and Update
ABSTRACT
Two decades of intensive but quite chaotic and decentralized population studies on susceptibility to Graves' disease (GD) produced a bulk of inconsistent data and yielded proven association only for the HLA class II region, which exerts a major effect in the genetics of GD. Low-resolution microsatellite-based human genome-wide scans revealed several regions of linkage harboring putative susceptibility variants. Further, high-throughput genotyping of large population cohorts with high-density panels of single nucleotide polymorphisms (SNPs), together with advanced tools for the analysis of extended blocks of linkage disequilibrium within a candidate gene (SNP tagging, etc.), revealed the presence of several susceptibility genes in the regions of linkage on chromosomes 2q (CTLA-4), 8q (Tg), 14q (TSHR), 20q (CD40), 5q (SCGB3A2/UGRP1) and, probably, Xp (FOXP3). The list of GD-predisposing loci was then extended with three more genes (PTPN22, IL2RA/CD25, and FCRL3). In the near future, implementation of even more robust technology, such as whole-genome sequencing, is expected to catch any disease-associated genetic variation in a patient's individual DNA. In this review, the historical development of our knowledge of genetic factors predisposing to GD is considered, with special emphasis on the functional significance of observed associations and discussion of possible mechanisms of their contribution to GD pathogenesis.
ABBREVIATIONS: C/EBPα, CCAAT/enhancer-binding protein alpha; CTLA-4, cytotoxic T-lymphocyte-associated protein 4; DZ, dizygotic; FCRL3, Fc receptor-like protein 3; flCTLA-4, full-length CTLA-4; FOXP3, forkhead box P3; GD, Graves' disease; GWAS, genome-wide association study; HLA, Human Leukocyte Antigens; IFIH1, interferon-induced helicase C domain-containing protein 1; IL2, interleukin-2; IL2RA, IL-2 receptor, alpha subunit; JIA, juvenile idiopathic arthritis; LD, linkage disequilibrium; LYP, lymphoid phosphatase; MAF, minor allele frequency; MARCO, macrophage scavenger receptor with collagenous structure; MS, multiple sclerosis; MZ, monozygotic; NFκB, nuclear factor kappa B; OR, odds ratio; PTPN22, protein tyrosine phosphatase, non-receptor type 22 (lymphoid); RA, rheumatoid arthritis; SCGB3A2, secretoglobin 3A2; sCTLA-4, soluble CTLA-4; sIL-2RA, soluble IL2RA; SLE, systemic lupus erythematosus; SNP, single nucleotide polymorphism; T1D, type 1 diabetes mellitus; TBII, TSHR-binding inhibitory antibodies; TCR, T-cell receptor; Tg, thyroglobulin; TSAb, TSHR-binding stimulating antibodies; TSH, thyroid-stimulating hormone; TSHR, thyroid-stimulating hormone receptor; Tregs, regulatory T cells.
INTRODUCTION
Graves' disease (GD) belongs to autoimmune thyroid disease (AITD) and is characterized by self-antibody-mediated stimulation of the thyroid-stimulating hormone (TSH, thyrotropin) receptor (TSHR), which causes hyperfunction of the thyroid gland. Thyroid activation leads to follicular hypertrophy and hyperplasia, causing thyroid enlargement and increased thyroid hormone production. GD diagnosis requires identification of suppressed TSH levels and elevated levels of free thyroid hormone [i.e., thyroxine (T4) and/or triiodothyronine (T3)]. GD affects ~0.5-2% of Western populations and accounts for the majority of cases of hyperthyroidism. GD exhibits a clear sex-related bias in its frequency, occurring 10-fold more often in females than in males.
The familial clustering of autoimmune thyroid disease (AITD) has been known since the middle of the last century, with ~50% of patients reporting a family history of disease (Bartels et al., 1941). Furthermore, a whole variety of thyroid abnormalities have been reported in relatives of patients with thyroid disease, with thyroid autoantibodies, for example, being present in over 50% of children of patients with GD (Desai and Karandikar, 1999). Perhaps the most convincing evidence for a genetic predisposition to a disease is provided by twin studies. While in monogenic diseases there is full concordance among monozygotic (MZ) twins, in disorders with complex inheritance the concordance is incomplete, but still higher than in dizygotic (DZ) twins. Twin data have confirmed, with remarkable clarity, the presence of a substantial inherited susceptibility to GD. Several large twin studies have reported a higher concordance rate of AITD in MZ twins compared to DZ twins (Tomer and Davies, 2003). Concordance rates for GD were 35% in MZ twins and 3% in DZ twins. Model-fitting analysis of these data showed that 79% of the predisposition to the development of GD is attributable to genetic factors, whereas individual-specific environmental factors not shared by the twins could explain the remaining 21% (Brix et al., 1998; 2001). The sibling risk ratio, that is, the ratio of the prevalence of the disease in siblings of affected individuals to its prevalence in the general population, serves as a good estimate of disease heritability, with a ratio of >5 considered significant. For AITD, the sibling risk ratio calculated for US Caucasians exceeds 16.0, suggesting a strong genetic influence on the pathogenesis of this disease (Jacobson et al., 2008). Results from twin studies are informative and helpful, but they should be analyzed with caution, bearing in mind the bias they often carry.
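The sibling risk ratio quoted above is simple arithmetic; a minimal sketch (the prevalence figures are hypothetical, chosen only to reproduce a ratio of the magnitude reported for AITD, and are not taken from the studies cited):

```python
def sibling_risk_ratio(sibling_prevalence: float, population_prevalence: float) -> float:
    """Lambda-s: disease prevalence among siblings of affected individuals
    divided by the prevalence in the general population."""
    return sibling_prevalence / population_prevalence

# Hypothetical prevalences: disease in 8% of siblings of cases
# versus 0.5% of the general population.
lam = sibling_risk_ratio(0.08, 0.005)
print(round(lam, 1))  # 16.0 -- a ratio > 5 is conventionally taken as significant
```
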
In many twin studies, it is likely that at least two types of bias operate in the selection of twin pairs for inclusion in the sample from all possible twins in the population who meet the criteria for the study. One such bias is concordance-dependent ascertainment, where the probability of twin pairs being included in a study of a particular trait depends on whether they are concordant or discordant for that trait. Such a bias can occur in a number of ways, even when a voluntary recruitment procedure is adopted. Another possible bias is non-independent ascertainment, where the ascertainment probability depends on the combination of within-pair similarity and the type of relative (e.g., MZ or DZ twins); for example, concordant MZ twins may be more likely to be included in a particular study than concordant DZ twins. Familial clustering of GD and twin studies showed that this disease does not arise from a single gene defect and does not follow a simple pattern of Mendelian inheritance (Farid et al., 1981). To date, it is known that genetic susceptibility to GD is accounted for by multiple genes, most of which exhibit a rather modest effect, with odds ratios (OR) not exceeding 1.5 (Tomer, 2010). In this review, we consider the major findings in the genetics of GD from an evolutionary-historical point of view, focusing on the advances achieved with the help of four major strategies in genetic analysis: the candidate gene approach, whole-genome linkage screening, genome-wide association studies (GWAS), and whole-genome sequencing.
Candidate Gene Studies
Functional candidate genes may be selected from the bulk of human genes on the basis of their functional significance.
For example, given the major role of autoimmune mechanisms mediated by self-reactive T-cells in the pathogenesis of GD, a variety of immune-related genes, such as Human Leukocyte Antigens (HLA), cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), and many others, could be considered candidates for GD susceptibility. Since GD is characterized by the presence of several major self-antigens, such as TSHR, thyroid peroxidase, and thyroglobulin (Tg), their genes could also be chosen as attractive candidates for thyroid autoimmunity. The analysis of a limited number of DNA variants, often single nucleotide polymorphisms (SNPs), within a gene of interest in relatively small numbers of cases and controls has been commonplace, with journals reporting a long line of positive and negative results. Due to the significant inconsistency of the results produced and the underpowered character of most studies, association analyses resulted in the identification of only four susceptibility genes: HLA, CTLA-4, TSHR, and PTPN22 (encoding protein tyrosine phosphatase, non-receptor type 22, also known as LYP, lymphoid phosphatase). The role of these genes in the etiology of GD will be considered below. In early association studies, one or several polymorphisms within a gene of interest were typically analyzed. Since a human gene usually contains dozens or even hundreds of SNPs, some genes have been erroneously considered as lacking association with GD on the basis of the analysis of only a few SNPs. In the human genome, regions of extended linkage disequilibrium (LD) have been considered problematic for the precise identification of an etiological variant. More recently, however, by exploiting this strong LD, a single SNP can be chosen that gives a good representation of the associated LD block, allowing more comprehensive coverage of the gene region of interest.
This allows a large number of genotypes to be estimated by typing only a few SNPs that catch, or tag, a block of LD (Johnson et al., 2001). The tagging-SNP approach makes the analysis of a gene more comprehensive and cost-effective, since it offers the possibility of finding an etiological variant within an LD block without genotyping every SNP in a chromosomal region.
Whole-Genome Linkage Screening
Linkage analysis is based on the study of affected families or pedigrees, allowing evaluation of the co-segregation of a genetic variant with disease. If a tested marker is close to an etiological variant, the frequency of recombination between them may be significantly reduced, causing preferential inheritance of the marker alleles among affected individuals, even though the marker itself is not involved in the disease pathogenesis. The measure of the likelihood of linkage between a disease and a genetic marker is the logarithm of odds (LOD) score (Ott, 1999), the base-10 logarithm of the likelihood ratio in favor of linkage. According to widely accepted guidelines for complex diseases, an LOD score of >1.9 is suggestive of linkage, while an LOD score of >3.3 indicates significant linkage in studies using the parametric approach. Linkage is confirmed if the evidence for linkage is replicated in two separate data sets (Lander and Kruglyak, 1995). In a typical genome-wide approach, a set of ~300-400 microsatellite markers is sufficient to cover the whole genome. Compared to microsatellites, SNPs are less polymorphic, since they are typically biallelic markers. However, SNPs are very abundant: on average there is a SNP every 300 bp. Therefore, to screen the entire human genome for linkage with a disease, more SNPs are required; usually, at least 10,000 SNPs located across the genome are needed to provide reasonable resolution for finding a disease-associated variant. Linkage analysis has shown proven robustness in the analysis of Mendelian traits caused by genetic alterations.
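For the simplest fully informative, phase-known case, the LOD score defined above reduces to a closed form in the counts of recombinant and non-recombinant meioses. The sketch below uses hypothetical meiosis counts and is only an illustration of the likelihood-ratio idea; real parametric linkage software also handles unknown phase, penetrance, and pedigree structure:

```python
import math

def lod_score(recombinants: int, nonrecombinants: int, theta: float) -> float:
    """Base-10 log of the likelihood ratio L(theta) / L(theta = 0.5)
    for phase-known, fully informative meioses."""
    if not 0.0 < theta < 0.5:
        raise ValueError("theta must lie in (0, 0.5)")
    log10_l_theta = (recombinants * math.log10(theta)
                     + nonrecombinants * math.log10(1.0 - theta))
    n = recombinants + nonrecombinants
    log10_l_null = n * math.log10(0.5)  # free recombination, theta = 0.5
    return log10_l_theta - log10_l_null

def max_lod(recombinants: int, nonrecombinants: int) -> float:
    """Maximise the LOD over a grid of recombination fractions theta."""
    return max(lod_score(recombinants, nonrecombinants, t / 1000.0)
               for t in range(1, 500))

# Hypothetical pedigree data: 1 recombinant among 12 informative meioses.
print(round(max_lod(1, 11), 2))  # about 2.12 -- "suggestive" but below the 3.3 threshold
```

With half of the meioses recombinant, the maximised LOD stays at or below zero, mirroring the "no linkage" null of free recombination.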
However, the suitability of this approach for dissecting complex disorders such as GD is limited by the requirement for multiplex families and its low power to detect susceptibility loci with weak genetic effects. Another limitation of linkage analysis is its low resolution, which usually makes it impossible to distinguish effects of loci within a distance of 2-3 Mb. Since association analysis has a much more profound sensitivity to detect genetic association for a set of polymorphisms located within a limited chromosomal region, this technique is applicable for further fine mapping of etiological variant(s) within a region of linkage. Such an approach, called 'positional cloning', allows narrowing the chromosomal region of the location of a putative causal variant down to the identification of a true etiological disease marker (Kennedy, 2003). In microsatellite-based whole-genome linkage studies, several loci have been identified as linked with GD (Tomer et al., 1999; Sakai et al., 2001). However, only a few regions of linkage discovered in early genome-wide screens were replicated in the latest whole-genome analysis, which involved 1,119 AITD families (Taylor et al., 2006). Some of these loci have since been fine mapped and the genes identified. The AITD susceptibility gene on 2q is the CTLA-4 gene (also identified by the candidate gene approach), the susceptibility gene on 8q is Tg, on 14q the TSHR (also identified by the candidate gene approach), and on 20q the CD40 gene.

Genome-Wide Association Studies

The completion of the HapMap project has made whole-genome scanning by association studies feasible. Besides genotyping over 1.0 million SNPs spanning the whole human genome, HapMap revealed the complex architecture of the human genome, organized into discrete LD blocks with a limited recombination rate between markers located within each LD block due to the tight pair-wise intermarker LD (Altshuler et al., 2005).
This enabled the utilization of tag-SNPs (each SNP representing an entire LD block) to test the entire human genome for association with disease. Moreover, microarray-based genotyping technology using high-density genome-wide SNP platforms enabled the typing of up to 1,000,000 or even more SNPs in a single experiment (Distefano and Taverna, 2011). Despite their unquestionable value and extraordinarily high throughput capacity, GWAS have limitations such as a potential for false positive results, which necessitates very large sample sizes, genotyping errors, and insensitivity to structural variants (Pearson and Manolio, 2008). Current GWAS usually take into consideration common SNPs, with a minor allele frequency (MAF) of at least 5%. However, there is a growing body of evidence showing that disease risk may be significantly influenced by rare (MAF<5%) or very rare (MAF<1%) genetic variants. For example, Nejentsev et al. (2009) found four rare (MAF=0.5-2%) functionally relevant variants of interferon-induced helicase C domain-containing protein 1 (IFIH1), which contributed to the risk of type 1 diabetes (T1D) more significantly than common nonsynonymous SNPs within this gene. The genetically powered identification of association of such rare polymorphisms with a complex disease through GWAS requires enormously extended population sample sizes of up to 100,000 cases, whose recruitment and genotyping would be too laborious and expensive. Three GWAS for AITD susceptibility have been performed. The first involved over 500,000 SNPs typed in seven common diseases, each with 2,000 samples and a common control cohort of 3,000 samples (WCCT, 2007a). The second involved four disease states including AITD, in which 14,500 non-synonymous SNPs (i.e. SNPs causing an amino acid substitution) have been typed in 900 AITD patients and 1,466 control subjects.
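The MAF thresholds quoted above (common: at least 5%; rare: below 5%; very rare: below 1%) reduce to simple allele counting in a diploid sample. A minimal sketch, with a hypothetical function name and made-up genotype counts chosen only for illustration:

```python
def minor_allele_frequency(hom_major: int, het: int, hom_minor: int) -> float:
    """MAF for a biallelic SNP: minor-allele count over total allele count.

    Each diploid subject contributes two alleles; heterozygotes carry one
    minor allele, minor-allele homozygotes carry two.
    """
    n_subjects = hom_major + het + hom_minor
    minor_alleles = 2 * hom_minor + het
    return minor_alleles / (2 * n_subjects)

# Illustrative counts: 900 major homozygotes, 95 heterozygotes,
# 5 minor homozygotes among 1,000 genotyped subjects.
maf = minor_allele_frequency(900, 95, 5)
print(f"MAF = {maf:.4f}")  # (2*5 + 95) / 2000 = 0.0525, i.e. a "common" SNP
```
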
The study confirmed the TSHR gene as a susceptibility gene for GD and identified FCRL3 and several other putative susceptibility genes for GD (WCCT, 2007b). The third GWAS, recently performed in a Chinese cohort (over 1,500 GD subjects and over 1,500 controls), replicated four previously reported loci (HLA, TSHR, CTLA-4, and FCRL3) and discovered two more susceptibility loci located at 6q27 (the RNASET2-FGFR1-CCR6 gene region) and 4p14 (SNP rs6832151) (The China Consortium for the Genetics of AITD et al., 2011).

Whole-Genome Sequencing

While at present most interesting novel data on the genetics of autoimmune diseases are coming from carefully designed GWAS, in the near future this technology may be replaced by even more powerful approaches based on next-generation DNA sequencing. Progress in this field makes it possible to sequence genomes or large parts thereof (such as an exome, i.e., all exons from a genome) at unprecedented speed. While the price is still high, it is expected that sequencing the entire human genome will cost around $1,000 per sample within the next few years, making whole-genome sequencing a feasible approach to identify complex disease genes. The 1000 Genomes Project, currently under way, is focused on low-coverage whole-genome sequencing of 179 individuals from four populations, high-coverage sequencing of two mother-father-child trios, and exon-targeted sequencing of 697 individuals from seven populations (1000 Genomes Project Consortium, 2010). This project provides a wealth of information on rare polymorphic variants, copy number variations, genome-wide and local haplotype organization, and structural variants, most of which were previously undescribed. On average, each person is found to carry approximately 250 to 300 loss-of-function variants in annotated genes and 50 to 100 variants previously implicated in inherited disorders (1000 Genomes Project Consortium, 2010).
Recently, whole-genome sequencing has been successfully utilized in two patients, one with Charcot-Marie-Tooth disease (Lupski et al., 2010) and the other with acute myeloid leukemia (Mardis et al., 2009). Finally, Thompson et al. (in press) reported a very promising, cost-effective single-step strategy by which any gene can be captured and sequenced directly from human genomic DNA without amplification or cloning and using no proteins or enzymes prior to sequencing. The main challenge of whole-genome sequencing is developing robust methods for analyzing the sequence data and sorting out normal variation between individuals from the variants that are responsible for disease susceptibility. Once such computational tools are generated, this strategy will become a truly personalized approach to the treatment of complex diseases such as AITD.

GD SUSCEPTIBILITY GENES

To date, proven records of association with GD have been produced for several immune-related genes such as HLA, CTLA-4, CD40, PTPN22, SCGB3A2/UGRP1, and FCRL3 and two thyroid-specific genes (TSHR and TG) (Fig. 1). Less consistent results have been obtained for IL2RA/CD25 and FOXP3, both key regulators of natural Tregs.

HLA

HLA molecules, as a part of the immunological synapse, play a central role in the human immune system by binding fragments of processed antigens in the form of peptides and presenting them on the surface of an antigen-presenting cell (APC) to the T-cell receptor (TCR). HLA molecules are also involved in T-cell selection in the thymus (Splint and Kishimoto, 2001). Due to its crucial role in the recognition of self- and foreign antigens and in maintaining central immune tolerance, it is not surprising that the HLA locus is linked to a variety of autoimmune diseases including AITD and GD. The contribution of the HLA region to various autoimmune disorders differs.
For example, in T1D, the HLA class II gene variants are the major susceptibility locus, accounting for ~30-40% of genetic risk (Davies et al., 1994). In GD susceptibility, HLA does not play a major role, accounting for only ~10-20% of genetic predisposition (Vaidya et al., 2002). However, it should be stressed that these estimates were made based on data produced in linkage analysis and before the discovery of several non-HLA susceptibility genes such as PTPN22, TSHR, and CD25. In early studies, association between HLA and GD was attributed to the HLA class I genes such as HLA-A and HLA-B, with ORs for GD ranging from 1.5 to 3.5 (Grumet et al., 1974; Farid et al., 1976) (Table 1). Further studies showed that the association between the HLA class II genes and GD is stronger than that between the HLA class I genes and GD (Bech et al., 1977) and is a result of the strong LD between these loci within the entire HLA region. Subsequently, among HLA class II genes, the strongest association has been shown for alleles DRB1*03 and DQA1*05 in various Caucasian populations, with a common susceptibility haplotype DR3 (DRB1*03-DQB1*02-DQA1*0501) (OR=3.1-3.8; Table 1) and a protective haplotype DR7 (DRB1*07-DQB1*02-DQA1*02) (Ban et al., 2002a; Simmonds et al., 2005). The frequency of DR3 was generally 40-55% in GD patients and ~15-30% in the general population, resulting in an OR for people with HLA-DR3 of 3-4 (Jacobson et al., 2008). Since T-cells recognize and respond to peptide antigens when presented by APCs bound to HLA class II pockets, it was proposed that certain HLA-DR alleles may permit self-antigenic peptides to fit into the peptide-binding pocket and to be presented more efficiently to T-cells (Nepom et al., 1996). The hypothesis was confirmed in several autoimmune diseases, most notably in T1D. A key role of the amino acid residue at position 57 of the DQbeta chain has been found in the genetic susceptibility to T1D (Morel et al., 1988).
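The odds-ratio arithmetic behind the DR3 figures quoted above can be sketched in a few lines. The 45% and 20% carrier frequencies used here are illustrative values taken from the middle of the cited ranges (40-55% in GD patients, ~15-30% in the general population), not exact figures from any study:

```python
def odds_ratio(freq_cases: float, freq_controls: float) -> float:
    """OR = odds of marker carriage in cases / odds of carriage in controls."""
    odds_cases = freq_cases / (1.0 - freq_cases)
    odds_controls = freq_controls / (1.0 - freq_controls)
    return odds_cases / odds_controls

# 45% DR3 carriers among GD patients vs. 20% among controls:
print(round(odds_ratio(0.45, 0.20), 2))  # -> 3.27, within the cited OR=3-4 range
```
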
A similar molecular mechanism explaining the predisposing or protective role of HLA molecules has been identified for GD susceptibility (Menconi et al., 2008). The presence of arginine at position 74 of the HLA-DRbeta1 chain (DRbeta-Arg74) has been shown to be a critical factor for conferring DR-mediated susceptibility to GD (Ban et al., 2004). In contrast, the presence of glutamine at position 74 of the DRbeta1 chain provides a protective effect (Simmonds et al., 2005). Structural analysis showed the unique role of position 74 in influencing the peptide-binding properties of the HLA molecule and presentation to T-cells. This position encompasses several peptide-binding pockets within the peptide-binding domain crucial for both T-cell receptor docking and antigen presentation (Chelvanayagam, 1997). Recently, Jacobson et al. (2009) found Tg peptides capable of being presented by HLA-DR pockets containing arginine at position beta 74. These findings suggest that the peptide-binding pocket structure and conformation play a major role in the etiology of several autoimmune diseases including T1D and AITD (Todd et al., 1987; Jacobson et al., 2008).

CTLA-4

CTLA-4 (also known as CD152) is another component of the immunological synapse. The CTLA-4 molecule is responsible for the negative regulation of TCR-mediated responses, and its function is opposite to that of the CD28 costimulatory molecule, which promotes T-cell activation (Walunas et al., 1996). CTLA-4 acts by delivering an inhibitory signal through its cytoplasmic domain, which can reverse the classic TCR-induced stop signal needed for physical interaction between T-cell and APC, thus reducing adhesion periods between these cells, which in turn decreases cytokine production and proliferation (Schneider et al., 2006). In addition to the cell-intrinsic action mediated by the membrane-bound full-length CTLA-4 (flCTLA-4), a soluble CTLA-4 (sCTLA-4)-dependent extrinsic model has been proposed (Qureshi et al., 2011).
The extrinsic mechanism of CTLA-4 action may involve stimulation of regulatory T cells (Tregs) or may be realized through the removal of costimulatory ligands (CD86) from APCs via transendocytosis (Qureshi et al., 2011). Levels of sCTLA-4 were shown to be elevated in several autoimmune diseases including AITD (Oaks and Hallett, 2000). Since CTLA-4 suppresses T-cell activation to control normal T-cell responses, it was postulated that CTLA-4 polymorphisms that reduce its expression and/or function might predispose to autoimmunity by creating overreactive T-cells (Chistiakov and Turakulov, 2003). The first evidence for association between CTLA-4 and GD was reported by Yanagawa et al. (1995). In fact, this study, which found a significant association between an (AT)n microsatellite in the 3' untranslated region (3'UTR) of CTLA-4 and GD, was the first report of an association between CTLA-4 and any autoimmune condition. In addition to the (AT)n microsatellite, several more functionally relevant polymorphisms at CTLA-4 have been widely evaluated for association with GD. It has been proposed that the long AT-repeat allele of the (AT)n microsatellite decreases the stability of CTLA-4 mRNA, blunting the inhibitory function of the protein and thus reducing control of T-cell proliferation (Takara et al., 2003). Another polymorphism is an adenine-to-guanine change in codon 49 (A49G, rs231775) causing an amino acid substitution (Thr17Ala) in the signal peptide (Donner et al., 1997). Compared to the Thr17 allele, the predisposing Ala17 variant of CTLA-4 has been shown to have altered posttranslational processing resulting in insufficient glycosylation of this molecular variant (Anjos et al., 2002). Although studies in multiple ethnic groups showed strong association between this marker and GD (Heward et al., 1999; Vaidya et al., 1999; Park et al., 2000; Chistyakov et al., 2000), the evidence for CTLA-4 Thr17Ala as a causal variant for GD on chromosome 2q33 was called into question by Xu et al.
(2002), who failed to show any significant influence of the codon 17 polymorphism on both the extrinsic and intrinsic actions of the recombinant human CTLA-4 transgene expressed in Jurkat T cells. Using re-sequencing and fine mapping of all common variants within the CTLA-4 gene, Ueda et al. (2003) reported a disease susceptibility locus located within a noncoding 6.1 kb region adjacent to the 3'UTR of CTLA-4. The susceptibility locus had four SNPs (CT60, JO30, JO31, and JO27-1), which showed the strongest association with GD, stronger even than the association of any other common SNP within the CTLA-4 gene including rs231775 and rs5742909. Surprisingly, the higher-risk allele G of the marker CT60 (+6230 G/A, rs3087243) was associated with lower mRNA levels of sCTLA-4. The correlation between the carriage of the CT60 polymorphism and serum concentrations of sCTLA-4 has not been confirmed in other studies (Anjos et al., 2005; Mayans et al., 2007). In a large-scale meta-analysis, Kavvoura et al. (2007) summarized data on 28 studies involving a total of 4,848 GD cases and 7,314 controls and reported significant association of the allele G of rs231775 and the allele G of rs3087243 with higher risk of GD (OR=1.49 and 1.45, respectively). A modest association with increased GD risk was found for the alleles G of markers JO31 and JO30, but not for the (AT)n microsatellite, the SNP C(-318)T, or JO27-1 (Kavvoura et al., 2007). It is known that CTLA-4 polymorphisms are associated with the production of thyroid self-antibodies in GD patients (Tomer et al., 2001; Zaletel et al., 2002), and may synergistically interact with the GD-predisposing variants HLA-A*02 and -DPB1*05:01 in the production of TSHR-blocking (TBII) antibodies (Takahashi and Kimura, 2010). However, to date, the true etiological variant of CTLA-4 is still unknown.
Perhaps the predisposition to GD within the CTLA-4 locus is determined by a complex interplay between the clusters of disease-associated markers in the 3' and 5' regions of CTLA-4 independently contributing to GD susceptibility. Interestingly, using commercial monoclonal antibodies against the extracellular domain of CTLA-4, Tector et al. (2009) failed to find sCTLA-4 itself in the CTLA-4 immunoreactive material isolated from the blood of patients with myasthenia gravis. These findings may reconcile the apparent discrepancy between reports of elevated levels of sCTLA-4 in plasma from patients with autoimmune disease and the report of decreased levels of the sCTLA-4 transcript among individuals with the CT60 allele of the CTLA-4 gene. Like the HLA region, CTLA-4 belongs to the general autoimmunity genes, for which association with the majority of autoimmune diseases has been found (Gough et al., 2005). The major role of this gene in thyroid-specific autoimmunity and other organ-specific and systemic autoimmune T-cell-mediated disorders arises from the central role of CTLA-4 in controlling TCR-dependent activation of T-cells and maintaining peripheral immune tolerance (Riley and June, 2005).

CD40

CD40, which belongs to the family of tumor necrosis factor receptors, is primarily expressed on the surface of B-lymphocytes and other professional and non-professional APCs (Banchereau et al., 1994), and plays a fundamental role in B-cell activation and antibody production (Armitage et al., 1993). The physiological ligand for CD40 is the CD154 (CD40L) molecule, which is expressed on the surface of activated T-helper cells (Hollenbaugh et al., 1992). In B-cells, CD40 ligation provides the necessary costimulatory signal for cell proliferation, immunoglobulin class switching, antibody secretion, prevention of apoptosis of germinal center B-cells, affinity maturation, and generation of long-lived memory cells (Chatzigeorgiou et al., 2009).
As a GD susceptibility gene, CD40 was found by fine mapping within the GD-2 locus on chromosome 20q11 linked to the development of GD (Pearce et al., 1999). In CD40, an etiological variant is represented by the functional SNP rs1883832 [C(-1)T], which is located at position -1 relative to the translation start and affects the Kozak sequence, which plays a major role in the initiation of translation. The genotype C/C of rs1883832 showed association with higher GD risk, and this association has been widely replicated in Caucasian and Asian populations (Kim et al., 2003; Ban et al., 2006; Kurylowicz et al., 2007) except for two studies in the UK population (Houston et al., 2004). Overall, a meta-analysis of a total of 1,961 affected patients and 1,960 control subjects revealed a significant but modest genetic effect of the allele C in GD susceptibility in Caucasians (OR=1.22) (Kurylowicz et al., 2007). Functional analysis showed that, compared to the allele T, the higher-risk allele C is associated with more efficient translation of CD40, reflected by a 20-30% gain in the production of CD40 in an in vitro translation system (Jacobson et al., 2005; Park et al., 2007). As mentioned above, CD40 is expressed in B-cells and non-professional APCs such as thyrocytes, i.e. in cell types involved in the pathogenesis of GD (Metcalfe et al., 1998). Therefore, increased expression of CD40 on B-lymphocytes can lead to enhanced production of anti-TSHR-stimulating antibodies (TSAbs), whereas increased expression of CD40 on thyrocytes can trigger an autoimmune response to the thyroid by resident T-cells. These mechanisms could operate simultaneously in the thyroid, thereby contributing to the etiology of GD. The finding of Jacobson et al.
(2007) that the CC genotype is more strongly associated with GD in a subset of GD patients who had persistently high levels of thyroid antibodies provides indirect evidence in support of the stimulatory effects of the C variant of CD40 on the production of thyroid antibodies. The association of CD40 with autoimmunity is not limited to GD only. Several studies showed that CD40 variants could be implicated in a set of autoimmune and proinflammatory conditions accompanied by activation of B-cells and propagation of B-cell autoreactive clones producing self-antibodies, such as asthma (Metcalfe et al., 1998), rheumatoid arthritis (RA) (Raychaudhuri et al., 2008), systemic lupus erythematosus (SLE) (Gaffney et al., 2006), and multiple sclerosis (MS) (ANZgene, 2009).

PTPN22

The PTPN22 gene lies on chromosome 1p13 and encodes the immune regulatory phosphatase LYP, which down-regulates T-cell activation by inhibiting TCR signal transduction through the interaction of LYP with several accessory molecules including the protein tyrosine kinase Csk and Grb2 (Cloutier and Veillette, 1999). Initial reports of association of the C1858T polymorphism (rs2476601), causing an amino acid change of an arginine to tryptophan at residue 620 (R620W) of LYP, with T1D (Bottini et al., 2004; Smyth et al., 2004) were rapidly extended by findings of positive associations not only with AITD (Smyth et al., 2004; Velaga et al., 2004) but also with SLE (Kyogoku et al., 2004), RA (Begovich et al., 2004), juvenile idiopathic arthritis (JIA), and Addison's disease (Lee et al., 2007). In many autoimmune diseases, PTPN22 represents the second most strongly associated locus after HLA, with ORs typically ranging from 1.5 to 1.9 (Criswell et al., 2005; Vang et al., 2007).
The functional R620W polymorphism resides in the P1 proline-rich motif of LYP, which binds with high affinity to the Src homology 3 (SH3) domain of the tyrosine kinase Csk, and hence affects the binding properties of LYP with this partner molecule in an inhibitory complex that regulates key TCR signaling kinases (Lck, Fyn, ZAP-70) (Bottini et al., 2004). The W620 variant disrupts the interaction between PTPN22 and Csk (Begovich et al., 2004) and also increases the phosphatase activity, which in turn suppresses TCR signaling more efficiently than the wild-type protein (Vang et al., 2005; Rieck et al., 2007). In fact, the R620W polymorphism is a gain-of-function mutation, with a 60% increase in the catalytic specific activity of the LYP 620W phosphatase compared to the LYP 620R variant (Vang et al., 2005). This results in enhanced down-regulation of TCR signaling followed by inhibition of the expansion of T-cells, weakening of positive selection in the thymus, and reduction of antibody production through lowered activity of helper T-lymphocytes (Hasegawa et al., 2004). It is speculated that lower T-cell signaling would lead to a tendency for self-reactive T-cells to escape thymic deletion and thus remain in the periphery. However, this theoretical possibility awaits experimental confirmation. Recent experiments in mice expressing the LYP variant homolog Pep619W showed a dramatic reduction in levels of the mutant (Pep619W) variant compared to levels of the wild-type Pep619R protein due to calpain 1-mediated proteolysis (Zhang et al., 2011). Similarly, compared to the LYP 620R protein, the human LYP 620W phosphatase was found to be sensitive to calpain digestion in vitro, which may explain lower levels of the enzyme in T- and B-cells of LYP 620W carriers. The reduced expression of LYP 620W was associated with lymphocyte and dendritic cell hyperresponsiveness, a mechanism by which LYP 620W may increase the risk of autoimmune disease.
These data could be supported by the observations of Zikherman et al. (2009), who reported hyperactivation of CD45 E613R B-lymphocytes carrying the mutation E613R in the juxtamembrane wedge domain of the CD45 molecule and the development of a B-cell-driven, lupus-like disease in Pep-deficient mice. Therefore, the role of PTPN22 in autoimmunity is not restricted to altering the function of T-lymphocytes, but also involves B-cells. Interestingly, the capacity of human LYP to inhibit the activity of the B-cell antigen receptor (BCR) has been reported (Rieck et al., 2007; Arechiga et al., 2009). Carriers of the autoimmunity-predisposing LYP 620W variant have a decrease in memory B-cells, which also exhibit impaired calcium flux upon BCR ligation, suggesting a B-cell-intrinsic defect in individuals who express the LYP 620W variant (Rieck et al., 2007). It seems that the R620W polymorphism, by suppressing TCR and BCR signaling, globally alters the maturation, selection, and function of both T- and B-lymphocytes, which predisposes to autoimmunity (Stanford et al., 2010). The PTPN22 R620W polymorphism displays strong association with GD across multiple Caucasian populations, as reflected by ORs ranging from 1.5 to 2.0 (Smyth et al., 2004; Velaga et al., 2004; Criswell et al., 2005; Skorka et al., 2005). However, this polymorphic variant is very rare or absent in Asian and African populations (Mori et al., 2005). For example, SNP rs2476601 has not been found in Japanese AITD patients (Ban et al., 2005). Whilst association of the rs2476601 SNP appears to be common to a number of autoimmune conditions, other independent associations within this gene region are being detected, with different patterns of association emerging in individual diseases (Carlton et al., 2005; Onengut-Gumuscu et al., 2006; Heward et al., 2007; Michou et al., 2007).
This includes disease-specific haplotypes providing both susceptibility to and protection from GD, suggesting that the mechanism by which PTPN22 confers susceptibility to GD may be different from that in, for example, T1D and RA.

IL2RA/CD25

Tregs are a unique population of T-lymphocytes involved in the regulation of T-cell activation (Paust and Cantor, 2005). Tregs are responsible for maintaining peripheral immune tolerance. Stimulation of Tregs results in the inhibition of murine experimental autoimmune thyroiditis (Gangi et al., 2005). Depletion of Tregs in mice makes animals more prone to experimentally induced GD (Saitoh and Nagayama, 2006; Nagayama et al., 2007), while Treg depletion in mice with induced GD causes switching of the disease pathogenesis to a Hashimoto's-like phenotype (McLachlan et al., 2007). These findings suggest an inhibitory role of Tregs against Graves' hyperthyroidism (Saitoh et al., 2007). Several subtypes of Tregs have been detected. One subset, the naturally occurring CD4+CD25+ Tregs, constitutively express CD25, CTLA-4, and the glucocorticoid-induced tumor necrosis factor receptor (Paust and Cantor, 2005). In GD patients, no alteration in the distribution of subpopulations of Tregs was found compared to controls (Pan et al., 2009). However, in GD, the function of CD4+CD25+ Tregs is impaired (Mao et al., 2011) and reduced (Wang et al., 2006). Natural Tregs are characterized by high levels of the alpha chain of the interleukin-2 (IL-2) receptor (IL2RA; also known as CD25) on their surface (Burchill et al., 2007). Together with two other subunits, the beta-chain (IL2RB, also known as CD122) and the common cytokine receptor gamma-chain (γc, also known as CD132), IL2RA/CD25 constitutes the IL-2 receptor molecule (Gaffen and Liu, 2004). The IL-2 receptor mediates the functional effects of IL-2, a cytokine that is vital in the regulation of the development of CD4+CD25+ Tregs (Chistiakov et al., 2008). Using the tagging SNP approach and a multilocus test, Brand et al.
(2007) showed significant evidence for association of the IL2RA/CD25 locus with GD (P=0.00045) in the British population. The findings of Brand et al. (2007) have recently been confirmed in a Russian dataset (Chistiakov et al., in press). We showed association of the haplotype AA, comprised of the minor alleles of two SNPs, rs11594656 and rs41295061, located upstream of the 5' promoter region of the IL2RA/CD25 gene, with increased risk of GD (OR=1.47). The carriage of the predisposing haplogenotype AA/AA correlated with elevated levels of soluble IL-2RA (sIL-2RA) in sera of both GD patients and healthy controls. This is the first evidence of association between IL2RA/CD25 variants and serum concentrations of the soluble IL-2RA form. In fact, IL2RA/CD25 may represent a general autoimmunity gene. In addition to GD, association between this gene and several more autoimmune diseases including T1D (Vella et al., 2005), RA (Kurreeman et al., 2009), MS (Matesanz et al., 2007), and JIA (Hinks et al., 2009) has been reported. However, distinct polymorphic variants of IL2RA/CD25 contribute to the pathogenesis of different autoimmune disorders (Maier et al., 2009b). Likely, the association of disease-associated markers in the IL2RA/CD25 region with serum levels of sIL-2RA could, at least partially, explain the contribution of this gene to autoimmunity. Elevated concentrations of sIL-2RA have been detected in several autoimmune diseases including GD (Zwirska-Korczala et al., 2004; Jiskra et al., 2009), thereby suggesting T-lymphocyte activation (Dedijca, 2001). Despite the lack of the transmembrane and cytoplasmic domains, sIL-2RA is able to bind IL-2 (Murakami, 2004). Indeed, elevated sIL-2RA could neutralize available IL-2, which is necessary for the activation of CD4+CD25+ Tregs. On the other hand, increased production of sIL-2RA is associated with enhanced proliferation and expansion of responder CD4+ T cells (Maier et al., 2009a).
Therefore, the correlation between the carriage of disease-associated variants of IL2RA/CD25 and increased levels of sIL-2RA may be related to a reduction in the inhibitory role of CD4+CD25+ Tregs and an increase in the activity of responder CD4+ T-cells (including self-reactive clones of T-lymphocytes); as a consequence, this imbalance will contribute to thyroid autoimmunity. Since IL-2 inhibits its own production (Villarino et al., 2007), the level of sIL-2RA could influence this self-inhibitory feedback and therefore IL-2 production. This represents a second putative mechanism by which increased sIL-2RA levels could promote thyroid autoimmunity, and could also at least partly explain the reduced levels of IL-2 observed in sera of GD patients (Eisenstein et al., 1994; Ward and Fernandes, 2000).

FCRL3

FCRL3 (Fc receptor-like 3, also known as CD307c) is a receptor containing immunoreceptor tyrosine-based activation motifs and immunoreceptor tyrosine-based inhibitory motifs in its cytoplasmic domain, making it important in the regulation of the immune system. The FCRL3 molecule shares significant structural homology with classical receptors for immunoglobulin constant chains (Fc receptors) (Miller et al., 2002). FCRL3 is found mainly on B-cells but also on T-cells. Among B-cell subsets, this molecule is present on mature, germinal center, memory, plasma, and bone marrow immature B-cells, suggesting a key role in the development, maturation, and function of B-lymphocytes (Matesanz-Isabel et al., 2011). The first evidence for association of FCRL3 with GD was obtained in Japanese (Kochi et al., 2005). The allele C of rs7528684, located at position -169 in the promoter of FCRL3, showed the strongest association with higher risk of GD (OR=2.15, P=8.5×10^-6).
The disease-associated variant was found to be functionally significant because it increased the affinity for the NFκB transcription factor and caused enhanced transcriptional activity of the FCRL3 promoter (Kochi et al., 2005). The association between different variants of FCRL3 and GD has since been replicated in an independent Japanese dataset (Kochi et al., 2005), a Chinese population (Gu et al., 2010), and several large-scale population studies in UK Whites (Simmonds et al., 2006; 2010; WCCT et al., 2007b; Owen et al., 2007). However, FCRL3 disease-associated variants in UK Caucasians were different from those found in Japanese. In Japanese, the susceptibility locus within the FCRL3 region has been mapped to the cluster of SNPs located in the 5' region of the gene. In contrast, in the UK datasets, implementation of the tag SNP approach and logistic regression showed that the association of rs3761959 (which tagged rs7528684) with GD is secondary to the rs11264798 and rs10489678 SNPs located in the LD block in the 3' region of FCRL3 (Owen et al., 2007). Further analysis revealed the primary contribution of the allele C of rs10489678 to GD susceptibility in the predisposing extended haplotype of FCRL3, and this effect is independent of the impact of the SNP cluster in the neighboring FCRL5 gene (Simmonds et al., 2010a). Overall, the available data suggest that genetic polymorphism(s) modifying susceptibility to GD do exist in the FCRL3 region, but the primarily associated variant(s) remain to be found. Furthermore, the FCRL3 gene has been reported to contribute to several autoimmune diseases including GD, SLE, and RA (reviewed by Chistiakov and Chistiakov, 2007; Kochi et al., 2010).
Again, compared to the Asian populations, other variants of FCRL3 are implicated in autoimmunity in Caucasians, since the marker rs7528684 associated with autoimmunity in Japanese repeatedly failed to show significant association with various autoimmune outcomes in Caucasian populations (Davis, 2007; Mao et al., 2010). The pathogenic activation of FCRL3 expression is suggested to lead to the down-regulation of BCR-mediated signaling, incomplete induction of anergy and deletion in autoreactive B-cells, and, finally, to a breakdown of B-cell tolerance (Kochi et al., 2009). Recently, high expression of FCRL3 has been found on 40% of natural CD4+CD25+CD127low Tregs that have a memory phenotype and a decreased response to IL-2 stimulation (Nagata et al., 2010). These cells also had a reduced capacity to suppress the proliferation of effector T-cells (Swainson et al., 2010). Thus, FCRL3 could contribute to the loss of self-tolerance and the induction of autoimmunity through at least two pathogenic mechanisms: by excessively inhibiting BCR signaling and by impairing the suppressive function of Tregs. Predisposing variants of FCRL3 and CD40 could cooperate in the breakage of B-cell tolerance, since stimulation of CD40 was shown to result in the up-regulation of FCRL3 expression through the TRAF6-NF-κB1-mediated signaling pathway (Kochi et al., 2005).
SCGB3A2/UGRP1
The secretoglobin 3A2 (SCGB3A2) gene encoding secretory uteroglobin-related protein 1 (UGRP1) resides on chromosome 5q12-q33, a region that showed linkage with GD in two Asian populations (Sakai et al., 2001; Jin et al., 2003). Initially, studies of positional candidate genes located in the susceptibility locus on chromosome 5q12-q33, including SCGB3A2, failed to show association with GD in Chinese, likely due to the small size of the population studied (Yang et al., 2005). However, using an extended dataset (over 2800 affected Chinese patients), Song et al.
(2009) found association of two polymorphisms (-112G/A (rs1368408) and -623 ~ -622 AG/T), both located in the promoter region of SCGB3A2, with GD (OR = 1.28 and 1.32, respectively). Furthermore, these SCGB3A2 variants constituted two higher-risk haplotypes associated with reduced SCGB3A2 gene expression levels in human thyroid tissue due to the lower transcriptional activity of the disease-associated variants (Song et al., 2009). Association between rs1368408 and GD has recently been replicated in two large Caucasian cohorts, including UK Whites (OR = 1.18, P = 0.007; Simmonds et al., 2010b) and Russians (OR = 1.33, P = 2.9×10⁻⁵; Chistiakov et al., 2011). The higher-risk allele A of the −112G/A variant of SCGB3A2 may potentially disrupt the binding site for CCAAT/enhancer-binding protein alpha (C/EBPα), which positively regulates transcription of SCGB3A2 (Tomita et al., 2008; Song et al., 2009). Consequently, compared to the allele -112G, the SCGB3A2 −112A variant displays a 24% decrease in promoter activity (Niimi et al., 2002), which results in lower levels of SCGB3A2 mRNA in the thyroid tissue and decreased concentrations of UGRP1 in sera of healthy subjects and individuals affected with GD and asthma (Inoue et al., 2008). At present, it is unclear how SCGB3A2 variants predispose to GD. In humans, this protein is predominantly expressed in the lung, although low-level expression was also found in thyroid and kidney (Niimi et al., 2002; Song et al., 2009). In lungs, UGRP1 is a ligand for the macrophage scavenger receptor with collagenous structure (MARCO), an important member of the innate immune system of the lung, where it binds inhaled particles including microbial pathogens and facilitates their clearance by the macrophage system (Areschoug and Gordon, 2009). Both MARCO and UGRP1 have been shown to play a key role in pulmonary inflammation including bronchial asthma and rhinosinusitis (Niimi et al., 2002; Thakur et al., 2009).
Probably, the involvement of UGRP1 in GD may be a consequence of systemic effects originating from the respiratory system, such as elevation in serum IgE, a hallmark of allergy. A correlation between the −112G/A polymorphism of SCGB3A2 and IgE concentrations in sera of healthy subjects has been observed (Chistiakov et al., 2011). A number of studies provide evidence that allergy-associated mechanisms can contribute to the pathogenesis of autoimmune diseases such as AITD (Tanda et al., 2009). However, further studies are needed to investigate the precise mechanism by which UGRP1 links allergic asthma and thyroid autoimmunity.
FOXP3
In addition to CD25, expression of the forkhead box P3 (FOXP3) is a molecular signature of natural Tregs. This gene acts as a key regulator of the development and function of natural Tregs (Zhang and Zhao, 2007). Foxp3-deficient mice develop a fatal lymphoproliferative disorder (Brunkow et al., 2001). The gene resides in a region on chromosome Xp11.23 that has been shown to be linked with GD (Tomer et al., 1999). Therefore, FOXP3 is an excellent positional and functional candidate gene for GD. In US Caucasians, family-based analysis showed association of a microsatellite inside the FOXP3 gene with AITD in a subset of patients with juvenile GD (Tomer et al., 2007). No association between FOXP3 and AITD has been found in a population-based study in the UK (Owen et al., 2006) or in Japanese cohorts. However, in a small independent Japanese dataset, an association of the -3279 C/A polymorphism (genotype AA) with GD in remission has been reported. The marker -3279C/A is functional, with allele A related to low translation of FOXP3. Defects in FOXP3 expression suppress the regulatory function of Tregs and therefore should positively correlate with a poor prognosis (relapse) of AITD (Mao et al., 2011). Thus, the data obtained on association between FOXP3 and GD are still inconsistent.
Additional population studies and functional analyses are required to replicate the findings on FOXP3 association with GD and to emphasize a role of Tregs in thyroid autoimmunity.
TSHR
TSHR, located on the surface of thyroid epithelial cells, is a Gs-protein-coupled receptor responding to thyrotropin (Akamizu et al., 1990). TSH is central to the regulation of the thyroid gland. Since anti-TSHR antibodies circulating in the serum of affected subjects are the hallmark of GD, it is not surprising that TSHR became the first gene (after HLA) to be tested for association with GD. The TSHR gene resides on chromosome 14q31 and comprises 13 exons (Kakinuma and Nagayama, 2002). Initial studies focused on three germline non-synonymous SNPs in the TSHR: D36H and P52T, both located in the extracellular domain of the receptor, and D727E, found in the intracellular portion of the molecule (Tonacchera and Pinchera, 2000). Despite the positive results of some studies (Cuddihi et al., 1995; Gustavsson et al., 1995; Chistiakov et al., 2002; 2004), subsequent case-control studies have largely rejected association with GD for any of these TSHR SNPs in Caucasians (de Roux et al., 1996; Kotsa et al., 1997; Allahabadia et al., 1998; Simanainen et al., 1999; Kaczur et al., 2000; Ban et al., 2002b). Nevertheless, genome-wide linkage analysis subsequently suggested a GD susceptibility locus in chromosomal region 14q31. This encouraged extension of the search for susceptibility loci to non-coding sequences within the TSHR gene. In Japanese, polymorphic markers within intronic regions of TSHR consistently showed associations with GD, including microsatellites (Akamizu et al., 2000) and haplotypes comprised of alleles of an SNP cluster in intron 7 (Hiratani et al., 2005).
In Caucasians, implementation of large population cohorts and a tagging SNP approach resulted in the identification of a higher-risk haplotype (OR = 1.7) spanning two LD blocks and containing SNP rs2268458 (located in intron 1) as the marker showing the strongest association with GD (OR = 1.31) at the TSHR gene region (Dechairo et al., 2005). Further analysis of a panel of 98 SNPs (including rs2268458) encompassing an 800-kb genomic region containing the TSHR gene revealed two markers in intron 1 (rs179247 and rs12101255) with the strongest association with GD (OR = 1.53 and 1.55, respectively) (Brand et al., 2009). Functional analyses showed association of both markers with reduced expression of the full-length TSHR mRNA relative to two truncated splice variants, which in turn could lead to increased shedding of a part of the TSHR receptor called the A-subunit (i.e., TSHR-A). The role of TSHR shedding in inducing thyroid autoimmunity is established (Chen et al., 2003; Chistiakov, 2003), and an increase in TSHR-A levels should contribute to the pathogenesis of GD. Association of these two SNPs was recently confirmed in an extended dataset of Europeans, with the primary role of marker rs12101255 in conferring GD susceptibility (Ploski et al., 2010). Evidence supporting TSHR as a GD susceptibility gene was also produced in a GWAS (WCCT, 2007b). The disease-associated haplotypes found in intron 7 of TSHR in Japanese (Hiratani et al., 2005) are awaiting replication in independent cohorts. Some data supporting association of intron 1 SNPs have been obtained in Asian populations, including marker rs2268474 in Japanese (Hiratani et al., 2005) and rs2239610 in an ethnically mixed Asian population from Singapore (Ho et al., 2003). A yet-undiscovered etiological variant in intron 1 of TSHR, in strong LD with rs12101255, is suspected to alter TSHR splicing.
The major splice variant of the TSHR, whose length is 1.3 kb, includes most of the extracellular domain of the TSHR (Graves et al., 1992). Other minor splice variants have also been discovered (Kakinuma and Nagayama, 2002).
TG
Tg, which is a major antigenic target for autoreactive antibodies in AITD, was considered an excellent candidate gene for AITD. Early genome-wide linkage scans identified the region 8q24 harboring the Tg gene as a major AITD susceptibility locus. These findings have been independently replicated in several ethnic groups including Caucasians (Collins et al., 2003) and Asians (Hsiao et al., 2007; Maierhaba et al., 2008). Further studies revealed three Tg non-synonymous amino acid substitutions (A734S, V1027M, and W1999R) associated with GD. It was suggested that these Tg variants may be implicated in AITD susceptibility by altering Tg processing in endosomes, causing production of a pathogenic Tg peptide repertoire. In support of this hypothesis, evidence for a gene-gene interaction between the predisposing variant HLA-DRβ1-Arg74 and the W1999R polymorphism of Tg has been found, resulting in a high OR of 6.1 for GD (Hodge et al., 2006). Subsequent immune-binding assays revealed only a small group of unique Tg peptides capable of binding to the HLA-DRβ1-Arg74 pockets (Jacobson et al., 2009). Specific binding of the peptide Tg.2098 to the HLA-DRβ1-Arg74 allele was able to stimulate T-cells from mice and humans with autoimmune thyroiditis, therefore suggesting that this peptide is a major T-cell epitope (Menconi et al., 2010).
CONCLUSIONS
Genes whose variants are involved in the pathogenesis of GD can be functionally divided into several groups. Since GD is a thyroid autoimmune pathology, the contribution of two thyroid-specific genes, Tg and TSHR, to its etiology perfectly explains the organ specificity of this disease.
The HLA, CTLA-4, and PTPN22 genes encode members of the immunological synapse formed between an APC (presenting thyroid-specific antigens) and a self-reactive T-helper cell, as well as a component of the complex of signaling kinases/phosphatases primarily segregated with the TCR molecule (Fig. 2). Susceptibility variants of both CD40 and FCRL3 are involved in functional support of antibody-producing autoreactive B-cell clones. CD25 and FOXP3 are central to the development and functioning of Tregs, whose regulatory activity is impaired and/or reduced in GD. The place and functional significance of SCGB3A2/UGRP1 in this mosaic awaits explanation. Despite this, the functional groups of known GD susceptibility genes thoroughly capture all major players of an autoimmune process. In the future, new susceptibility loci with smaller genetic effects on GD susceptibility are likely to be discovered. Genes detected in association studies as conferring a low relative risk (risk ratios < 3-5), such as PTPN22 and CTLA-4 in AITD, may contribute no more than 5% each to overall genetic susceptibility (Risch and Merikangas, 1996). Hence, 10-20 genes may be influencing the expression of AITD (Davies, 1998). Among the known GD-predisposing variants, only HLA-DR3 exhibits a very strong impact (OR > 5) in the genetics of GD, while the individual contribution of each non-HLA locus to the risk of GD is significantly weaker and typically does not exceed an OR of 2.0 (Pearce and Merriman, 2009). The genetic background has a very substantial influence on the etiology of GD, accounting for 70-80% of the disease risk (Brix et al., 2001). The inheritance of multiple genes with small additive effects cannot explain the high prevalence of AITD in the general population. Therefore, coinherited susceptibility variants should interact synergistically with each other, resulting in a combined OR that is significantly higher than the one expected from an additive effect alone.
Such an example of synergism in gene-gene interaction was observed between the Tg gene and HLA-DRβ1-Arg74 in GD susceptibility (Hodge et al., 2006). Another putative mechanism is genetic heterogeneity, which increases the genetic effect of a particular susceptibility variant in a subset of the GD subjects studied, while in the whole population of GD patients this effect is diluted, resulting in much smaller ORs.
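The odds ratios (ORs) cited throughout this review come from standard 2×2 case-control tables of allele or genotype counts. As a minimal illustration (the counts below are hypothetical and do not come from any study cited here), an OR and its 95% confidence interval can be computed by the Woolf (log) method:

```python
import math

def odds_ratio(case_exp, case_unexp, ctrl_exp, ctrl_unexp):
    """OR for carrying a risk allele in cases vs. controls, with a 95%
    confidence interval by the Woolf (log) method."""
    or_value = (case_exp * ctrl_unexp) / (case_unexp * ctrl_exp)
    # Standard error of ln(OR) from the four cell counts
    se = math.sqrt(1/case_exp + 1/case_unexp + 1/ctrl_exp + 1/ctrl_unexp)
    lo = math.exp(math.log(or_value) - 1.96 * se)
    hi = math.exp(math.log(or_value) + 1.96 * se)
    return or_value, (lo, hi)

# Hypothetical counts: risk-allele carriers / non-carriers in cases and controls
or_value, ci = odds_ratio(300, 200, 200, 300)
print(round(or_value, 2), tuple(round(x, 2) for x in ci))
```

A combined OR from two interacting loci (as for Tg and HLA-DRβ1-Arg74) would be compared against the product expected under independence; this sketch only covers the single-locus computation.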
Effects of Porcine Immature Oocyte Vitrification on Actin Microfilament Distribution and Chromatin Integrity During Early Embryo Development in vitro
Vitrification is mainly used to cryopreserve female gametes. This technique allows maintaining cell viability, functionality, and developmental potential at low temperatures in liquid nitrogen at −196°C. For this, the addition of cryoprotectant agents, substances that provide cell protection during cooling and warming, is required. However, they have been reported to be toxic, reducing oocyte viability, maturation, fertilization, and embryo development, possibly by altering cell cytoskeleton structure and chromatin. Previous studies have evaluated the effects of vitrification on germinal vesicle and metaphase II oocytes, zygotes, and blastocysts, but knowledge of its impact on further embryo development is limited. Other studies have evaluated the role of actin microfilaments and chromatin based on the fertilization and embryo development rates obtained, but without direct evaluation of these structures in embryos produced from vitrified immature oocytes. Therefore, this study was designed to evaluate how the vitrification of porcine immature oocytes affects early embryo development through the evaluation of actin microfilament distribution and chromatin integrity. Results demonstrate that the damage generated by the vitrification of immature oocytes affects viability, maturation, and the distribution of actin microfilaments and chromatin integrity observed in early embryos. Therefore, it is suggested that vitrification could affect oocyte repair mechanisms in those structures, this being one of the mechanisms that explain the low embryo development rates after vitrification.
INTRODUCTION
Currently, vitrification is mostly used to cryopreserve gametes and embryos.
It is intended to maintain cell viability, functionality, and developmental potential when cells are stored at low temperatures (Chian et al., 2004; Casillas et al., 2018). In recent years, vitrification has been a useful tool for assisted reproduction techniques (ART), and much of the scientific and technical progress in this field has been made for female gametes. In humans, it is considered an important resource in the treatment of reproductive conditions and infertility (Khalili et al., 2017), as well as a way to improve the reproductive capacity and gamete quality of economically important and endangered species (Mullen and Fahy, 2012). Cryoprotectant agents (CPAs) are substances that protect cells during cooling and warming. However, their use in high concentrations increases the risk of osmotic damage caused by their chemical components (Chian et al., 2004). Although substantial progress has been made to improve vitrification protocols through the use of co-culture systems (Jia et al., 2019), the Cryotech method (Angel et al., 2020; Nowak et al., 2020), the reduction of the volume of cryopreservation cell devices, and CPA selection (Sun et al., 2020), the recovery of morphophysiologically intact gametes after vitrification is still low due to the damage generated in cell structures, mainly the plasma membrane, cytoplasm, nucleus, and DNA (Chang et al., 2019). In this regard, it has been reported that the extent of the cell damage depends on the nuclear cell stage (Somfai et al., 2012; Egerszegi et al., 2013). During vitrification, the addition of CPAs is required for cell protection, and the selection of an appropriate vitrification procedure depends on the animal species, the cell type, and the chemical nature of the CPAs. In pigs, vitrification can cause alterations in actin microfilaments (MF) and chromatin (CHR), affecting oocyte viability, maturation, fertilization, and embryo development (ED).
Previous studies have evaluated the effect of vitrification on germinal vesicle (GV) and metaphase II (MII) oocytes, zygotes, and blastocysts in the same stage of development (Egerszegi et al., 2013). Other studies evaluated the role of MF and CHR based on the fertilization and ED rates, but they did not determine the alterations of these structures (Rajaei et al., 2005; Egerszegi et al., 2013). Therefore, this study was designed to evaluate how the vitrification of porcine immature oocytes affects early ED according to the distribution of actin MF and CHR integrity.
Ethics Statement and Animal Care
This study was approved under the regulations of the Ethics Committee for care and use of animals, Metropolitan Autonomous University-Iztapalapa Campus.
Experimental Design
Five replicates were performed for all experiments. After selection, the cumulus-oocyte complexes (COCs) were divided into two groups: (a) control group, fresh GV oocytes that underwent in vitro maturation (IVM) and were subsequently fertilized in vitro (IVF) for early ED, evaluated at two to four blastomeres (40 h) and five to eight blastomeres (68 h); (b) experimental group, GV oocytes that were vitrified, then matured in vitro in a co-culture with fresh granulosa cells, followed by IVF and early ED. In both groups, the viability of oocytes and embryos was evaluated by methyl tetrazolium (MTT) staining. IVM, IVF, and ED were evaluated by bisbenzimide (Hoechst 33342) staining. The analysis of actin MF distribution was carried out using phalloidin-fluorescein isothiocyanate conjugate (phalloidin-FITC), and that of CHR by Hoechst staining (Rojas et al., 2004).
Chemicals, Culture Media, and Culture Conditions
Unless otherwise stated, all reagents were purchased from Sigma Chemical Co. (St. Louis, MO, United States), and the different culture media were prepared in the laboratory.
For COCs collection and washing, Tyrode's medium containing 10 mM HEPES, 10 mM sodium lactate, and 1 mg/ml of polyvinyl alcohol (TL-HEPES-PVA) was used (Abeydeera et al., 1998). The medium for early embryo culture was North Carolina State University-23 (NCSU-23) medium supplemented with 0.4% BSA (Petters and Wells, 1993). All culture media and samples were incubated under mineral oil at 38.5°C with 5% CO2 in air and humidity at saturation.
Oocyte Collection
Porcine ovaries were obtained from pre-pubertal Landrace gilts at the "Los Arcos" slaughterhouse, Edo. de México, and transported to the laboratory in 0.9% NaCl solution at 25°C. The slaughterhouse is registered under health federal law authorization number 6265375. For COCs collection, ovarian follicles between 3 and 6 mm in diameter were punctured using an 18-gauge needle attached to a 10-ml syringe. Oocytes with intact cytoplasm and surrounded by cumulus cells were selected for the assays.
Vitrification and Warming
For vitrification, COCs were washed twice in VW medium and equilibrated in the first vitrification solution containing 7.5% dimethylsulfoxide (DMSO) and 7.5% ethylene glycol (EG) for 3 min. Then, COCs were exposed to the second vitrification solution containing 16% DMSO, 16% EG, and 0.4 M sucrose for 1 min, and at least nine oocytes were immersed in a 2-µl drop and loaded into the Cryolock (Importadora Mexicana de Materiales para Reproducción Asistida S.A. de C.V., Mexico). Finally, in less than 1 min, the Cryolock was plunged horizontally into liquid nitrogen at −196°C, and the COCs were kept vitrified for 30 min (Casillas et al., 2014). For warming, the one-step method was performed (Sánchez-Osorio et al., 2010). For COCs recovery, the Cryolock was immersed vertically in a four-well dish containing 800 µl of VW medium with 0.13 M sucrose. Immediately, oocytes were incubated in the same medium for 5 min and then recovered for IVM (Sánchez-Osorio et al., 2008).
In vitro Maturation
Control and vitrified-warmed COCs were washed in 500 µl of MM three times. Afterward, 30-40 oocytes were randomly distributed in a four-well dish (Thermo-Scientific Nunc, Rochester, NY) containing 500 µl of MM with 0.5 µg/ml of LH and 0.5 µg/ml of FSH (Ducolomb et al., 2009) for 44 h (Casas et al., 1999). Vitrified oocytes were matured in MM in a co-culture with fresh granulosa cells for 44 h (Casillas et al., 2014). The total numbers of evaluated cells in the control and vitrification groups were 256 and 143, respectively.
In vitro Fertilization
After IVM, oocytes were denuded mechanically with a 100-µl micropipette. Then, the oocytes were washed three times in MM and three times in mTBM in 500-µl drops covered with mineral oil. For IVF, 40-60 oocytes were placed in 50-µl drops of mTBM covered with mineral oil and incubated at 38.5°C with 5% CO2 and humidity at saturation for 1 h until insemination (Ducolomb et al., 2009). Semen was obtained by the gloved-hand method on a commercial farm; a 1:10 dilution was made with boar semen extender (MR-A, Kubus, S.A.) and transported to the laboratory at 16°C. Five microliters of semen was diluted in 5 ml of PBS-Dulbecco (Gibco; 1:1 dilution) supplemented with 0.1% BSA fraction V, 0.1 µg/ml of potassium penicillin G, and 0.08 µg/ml of streptomycin sulfate. It was centrifuged at 61 × g at 25°C for 5 min. The supernatant was diluted 1:1 with PBS-Dulbecco and centrifuged at 1,900 × g for 5 min. The supernatant was removed, and the pellet was resuspended in 10 ml of PBS-Dulbecco and centrifuged at 1,900 × g for 5 min. The pellet was resuspended in 100 µl of mTBM. From this solution, 10 µl was diluted 1:1,000 with mTBM to obtain a final concentration of 5 × 10⁵ spermatozoa/ml. Finally, 50 µl of the sperm suspension was co-incubated for 6 h with the matured oocytes for fertilization at 38.5°C (Ducolomb et al., 2009).
The fertilization rate was determined through pronuclei (PNs) formation, and the total numbers of evaluated cells in the control and vitrification groups were 126 and 95, respectively.
Embryo Development
After co-incubation, oocytes were washed three times in 50-µl drops of NCSU-23 medium (Petters and Wells, 1993) supplemented with 0.4% fatty acid-free BSA, placed in drops of 500 µl of the same medium covered with mineral oil in a four-well dish, and incubated for 16 h. The evaluation of early ED was performed after 40 and 68 h of incubation (Ducolomb et al., 2009). The total numbers of evaluated cells in the control and vitrification groups were 241 and 151, respectively.
Evaluation of Oocytes and Embryo Viability
Viability was analyzed in oocytes and embryos with MTT staining at T0 h after oocyte collection and vitrification, after 44 h of IVM, and after 40 and 68 h of early ED (Mosmann, 1983). Oocytes and embryos were stained with 100-µl drops of 0.5 mg/ml of MTT diluted in mTBM. After 1 h 30 min, oocytes and embryos were observed under a light microscope (Zeiss Axiostar). Cells showing a purple stain were considered alive, and colorless ones were considered dead (Figure 1A). The total numbers of evaluated cells in the control and vitrification groups were 386 and 381, respectively.
Evaluation of Oocyte Maturation
Maturation was evaluated by Hoechst staining. Oocytes were stained with 10 µg/ml of Hoechst for 40 min and observed using a confocal scanning laser microscope (Zeiss, LSM T-PMT). Maturation was evaluated at 44 h of incubation; oocytes with a germinal vesicle (GV) or in metaphase I (MI) were considered immature, and those in metaphase II (MII) with the first polar body were considered mature (Figure 2A).
Evaluation of in vitro Fertilization
Zygotes and embryos were stained with 10 µg/ml of Hoechst for 40 min and observed using the Zeiss LSM T-PMT.
To evaluate fertilization, oocytes with one pronucleus (PN) were considered activated (ACT) (Figure 3Aa), those with two pronuclei as monospermic (MSP) (Figure 3Ab), and those with more than two decondensed sperm heads (DH) or more than two pronuclei as polyspermic (PP) (Figure 3Ac). Oocytes in MII (first polar body) were considered not fertilized (UF) (Figure 3Ad).
Evaluation of Embryo Development
Early ED was evaluated 40 h after IVF for two to four cell embryos (Figure 4Aa) and 68 h after IVF for five to eight cell embryos (Figure 4Ac).
Hoechst Staining and Immunocytochemistry (Actin Microfilaments and Chromatin)
For CHR evaluation, embryos were stained with Hoechst and, for actin MF, by immunofluorescence with phalloidin-FITC, 1:350 in PBS. After 40 and 68 h of incubation, early embryos were washed three times in 500-µl drops of PBS-BSA; then, 300 µl of Hoechst was added and kept at 4°C for 45 min; afterward, 200 µl of 4% paraformaldehyde fixative solution was added and kept at 4°C overnight. After that, 200 µl of PBS-Triton X-100 1% permeabilizing solution was added at 4°C for 2 h and washed. Next, 200 µl of blocking solution was added, with 0.02 g/ml of PBS-BSA, 0.02 g/ml of skimmed milk, and 0.011 g/ml of glycine diluted in PBS, for 1 h at room temperature. For MF labeling, 200 µl of the phalloidin-FITC was added and kept at 4°C for 2 h; embryos were then transferred three times to 500-µl drops. All the incubations were performed in the dark. The washing of the embryos was made with PBS-BSA. Slide mounting was performed with PBS/glycerol (1:9), covered with a coverslip, and sealed with transparent nail polish (Rojas et al., 2004). Images were obtained using the Zeiss LSM T-PMT. The analysis was carried out by capturing Z-stack series through four sections covering the whole embryo. MF visualization (green) was by phalloidin-FITC with an excitation wavelength of 490 nm and an emission of 525 nm.
For CHR (blue), Hoechst had an excitation of 350 nm and an emission of 470 nm. The evaluation of images was performed using the ImageJ processor. For actin MF distribution evaluation, embryos were classified as embryos with cortical actin (CA) (Figure 5Aa), disperse actin (DA) (Figure 5Ab), and dispersed cortical actin (DCA) (Figure 5Ac). Embryos showing CA were considered of good quality and high developmental potential (Figure 5Aa); DCA was considered an indicator of medium embryo quality (Figure 5Ac) and DA of low embryo quality, with less developmental potential (Figure 5Ab). For CHR evaluation, two classifications in both groups were considered: embryos without damage (ND) and with damage (D) (Figure 6A). The ND CHR embryos presented well-defined nuclei (Figure 6Aa). D CHR embryos were considered when one or more abnormal chromatin (ACHR) structures were identified (Figure 6Ab). ND CHR embryos are related to good quality with a high probability of successful ED; D CHR embryos have less embryo development potential. For MF evaluation, the total numbers of evaluated cells in the control and vitrification groups were 61 and 35, respectively. For CHR evaluation, the total numbers of evaluated cells in the control and vitrification groups were 64 and 33, respectively.
Statistical Analysis
Statistical analyses were carried out using GraphPad Prism 8.2.1 (GraphPad Software Inc.). Data from the vitrified and control groups were compared with the non-parametric Mann-Whitney U test at a confidence level of P < 0.05, and percentage data are presented as mean ± standard deviation (SD) values.
Evaluation of Oocytes, Embryo Viability, and Oocyte Maturation
After collection (T0 h), 97% of the control oocytes were alive, but this percentage was reduced significantly (to 78%) after vitrification (P < 0.05). Oocyte viability after 44 h of in vitro maturation was 89% in the control, while in the vitrified group it was 63% (P < 0.05).
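The group comparison described under Statistical Analysis (run in GraphPad Prism in the study) can be sketched in plain Python. The per-replicate percentages below are hypothetical illustrations, not the study's raw data, and the p-value uses the normal approximation without tie correction:

```python
import math

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` vs. `b`, with a two-sided
    p-value from the normal approximation (no tie correction)."""
    n1, n2 = len(a), len(b)
    pooled = a + b
    def avg_rank(v):
        # Midrank: count of values strictly below v, plus half the tied block
        less = sum(1 for x in pooled if x < v)
        ties = sum(1 for x in pooled if x == v)
        return less + (ties + 1) / 2.0
    r1 = sum(avg_rank(v) for v in a)       # rank sum of sample a
    u1 = r1 - n1 * (n1 + 1) / 2.0          # U statistic for sample a
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u1, p

# Hypothetical per-replicate viability percentages (five replicates per group)
control = [94.0, 89.0, 92.0, 95.0, 90.0]
vitrified = [54.0, 63.0, 50.0, 58.0, 60.0]
u, p = mann_whitney_u(control, vitrified)
print(u, round(p, 4))
```

With small samples, an exact permutation p-value (as Prism computes for small n) would be preferred over the normal approximation used here.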
After 40 h of embryo development, 94% of embryos were alive in the control, and this decreased significantly in the vitrified group, down to 54% (P < 0.05). After 68 h of embryo development, viability decreased to 42% (P < 0.05) in the vitrification group compared with that of the control (Figure 1B). Figure 2B shows the percentage of oocyte in vitro maturation in both groups. Maturation (metaphase II, first polar body) was significantly lower in the vitrification group (40%) compared with the control (79%) (P < 0.05). A higher germinal vesicle rate (58%) was obtained in the vitrification group (P < 0.05) compared with that of the control (12%). Also, the percentage of metaphase I oocytes was significantly lower in the vitrification group (P < 0.05) compared with that of the control (4 and 12%, respectively). The percentages of early embryo development at 40 and 68 h of incubation in both groups are shown in Figure 4B. In the control group, a higher percentage of embryos with two to four cells (58%) was found compared with the vitrification group (33%) (P < 0.05). In vitrified oocytes, a higher percentage of undivided embryos (No Div) (63%) was obtained compared with that in the control (33%) (P < 0.05). However, the two groups were not statistically different in the production of five to eight cell embryos (P > 0.05) (9% control vs. 4% vitrified). Also, the percentage of embryos that reached five to eight cells after vitrification was significantly lower than that of two to four cells (P < 0.05).
Evaluation of Actin Microfilament Distribution and Chromatin Integrity in Early Embryos
Results in the control group showed a higher percentage of embryos with cortical actin compared with the vitrification group (P < 0.05) (67 vs. 29%). In the vitrified group, higher percentages of dispersed actin (32 vs. 11%) and dispersed cortical actin (40 vs. 23%) were obtained compared with the control (P < 0.05) (Figure 5B).
For chromatin evaluation, results indicate that in the control, a higher percentage of embryos with undamaged chromatin and well-defined nuclei was obtained compared with the vitrification group (P < 0.05) (77 and 41%, respectively). Also, there was a greater percentage of embryos in the vitrification group with chromatin damage, showing one or more scattered chromatin structures and no embryo division (61 and 23%, respectively) (Figure 6B). Figure 7 shows the merged images from microfilament and chromatin evaluation in early embryos. Pictures a, b, and c correspond to the control group and images d, e, and f to the vitrification group. Some embryos show damaged blastomeres, and others did not divide. It seems that the cortical distribution of actin microfilaments and the integrity of the chromatin are related to embryo quality. Oocytes with dispersed actin also showed abnormal chromatin distribution and even the absence of cell division (Figure 7a). In contrast, control embryos showed normal chromatin conformation (ND), with cortical actin or with some degree of actin dispersion (Figures 7a,c).
DISCUSSION
Over the past years, several methods for cryopreservation have been developed through the vitrification of immature (Casillas et al., 2018) and mature oocytes (Casillas et al., 2020), as well as of embryos at different developmental stages. In general, studies have shown that embryos have greater probabilities of survival after vitrification than immature and mature oocytes. Immature oocyte vitrification in women is important because ovarian hyperstimulation can be avoided. Also, in other mammalian species, it is possible to recover a greater number of GV oocytes than MII. It has been reported that vitrification success in immature oocytes depends on the species, so different strategies are used.
To evaluate the quality of embryos produced in vitro from vitrified immature oocytes, several aspects must be considered throughout oocyte maturation, fertilization, and ED, such as CPAs, containers, warming procedures, and recently the use of cumulus cell-oocyte co-culture (Casillas et al., 2015; Kopeika et al., 2015; de Munck and Vajta, 2017). Although there has been great progress in the knowledge of oocyte vitrification, the rate of embryos reaching the morula and blastocyst stages remains low; therefore, few births of live offspring from vitrified immature oocytes have been reported (Somfai et al., 2010). Several studies have used different approaches to explain the possible causes of the decrease in viable embryos produced from vitrified oocytes. One of the main parameters affected by vitrification is oocyte viability. After 44 h of IVM, viability in the vitrification group was 63%, lower than that of the control group, which was 89% (Casillas et al., 2014). According to the literature, this is the first study that directly evaluates the viability of embryos derived from vitrified immature oocytes. In several species, studies have reported only the percentage of ED as an indicator of this parameter (Rajaei et al., 2005; Somfai et al., 2010; Chatzimeletiou et al., 2012; Fernández-Reyes et al., 2012). In the present study, the results indicate that viability decreased significantly in embryos derived from vitrified oocytes compared with that of the control group (54 vs. 94%, respectively). This indicates that vitrification affects oocyte differentiation and ED in vitro. Although oocytes survived this process, their ability to carry out optimal ED was affected. However, the percentage of fertilized oocytes was similar in both groups, and the percentage of polyspermy was lower in vitrified oocytes compared with the control. In this study, there was a decrease in the IVM rate in vitrified oocytes compared with the control group. Similar results were previously reported (Casillas et al., 2020).
According to the literature, studies evaluating IVM in vitrified oocytes have reported different results. This may be due to the oocyte maturation stage before vitrification (GV), cell containers, types of cryoprotectants, temperatures, or different cooling and warming procedures (Fernández-Reyes et al., 2012; Casillas et al., 2014; Wu et al., 2017). In addition, Somfai et al. (2010) reported 77% of IVM in the control vs. 22% in the vitrified oocytes using the solid surface vitrification method. These results are similar to those obtained in the present study; however, they used maturation media supplemented with porcine follicular fluid. Actin is an essential component of the cytoskeleton that performs functions such as cell migration and division and the regulation of gene expression, which are basic processes for ED. Besides, microfilaments drive the rearrangement of the organelles involved in fertilization (Sun and Schatten, 2006), including the extrusion of the second polar body and the reorganization of the smooth endoplasmic reticulum for the generation of intracellular Ca2+ during oocyte activation (Chankitisakul et al., 2010; Bunnell et al., 2011; Gualtieri et al., 2011; Egerszegi et al., 2013; Martin et al., 2017). In mammals, the cortical distribution of actin microfilaments in mature oocytes is polarized, which is evidenced by swelling near the metaphase axis and less in the opposite cortical domain. In immature oocytes, the actin microfilament distribution does not appear polarized. This indicates that F-actin undergoes restructuring and polymerization during oocyte maturation (Coticchio et al., 2014, 2015b). Some studies have evaluated the effect of oocyte vitrification on the cytoskeleton, showing an interruption in the cortical microfilament network, as well as disorganization of the microtubule spindle, which led to chromosomal dispersion (Rojas et al., 2004; Chatzimeletiou et al., 2012).
These data could explain the possible mechanism of damage derived from immature oocyte vitrification. Our results showed significant differences in the percentage of early embryos with actin dispersion. Vitrification inhibits the correct polymerization of G-actin, reflected in MF breakdown in the cytoplasm and MF disruption in the cortical area (Baarlink et al., 2017; Misu et al., 2017). In the vitrification group, there was a high rate of No Div embryos. The percentage of embryos that reached five to eight cells was also lower after vitrification than that of two to four cells. This could be related to the lack of damage repair mechanisms in the oocytes. Some blastomeres in the early embryos showed lower cortical actin compared with that of the control group; besides, some showed disrupted actin polymerization (dispersed actin and dispersed cortical actin). An increase in blastomere DNA fragmentation in blastocysts has been attributed to the cryoprotectants (Rajaei et al., 2005). It has also been reported that the cytotoxicity of these agents occurs in a greater proportion in cells with high metabolic activity, such as immature oocytes or embryos (Lawson et al., 2011). Cryoprotectants interrupt the cortical microfilament network, causing spindle depolymerization and disorganization, which leads to chromosomal dispersion that may trigger aneuploidies during ED (Rojas et al., 2004; Chatzimeletiou et al., 2012). Actin microfilaments are also involved in the immobilization of mitochondria at the cell cortex or at sites with high ATP utilization (Boldogh and Pon, 2006). Depolarization and disorganization of the cytoskeleton will prevent the externalization of the meiotic organization toward the cortical zone and the alignment of the chromosomes on the equatorial axis in the oocytes (Gualtieri et al., 2011).
DNA is susceptible to a variety of chemical compounds and physical agents causing alterations in its conformation as a result of errors produced during replication, recombination, and repair (Coticchio et al., 2015a). Changes in chromatin are a result of histone-modifying enzymes, which alter its post-transcriptional activities, and of ATP-dependent chromatin remodeling complexes (Farrants, 2008). Reduced production of ATP in human vitrified oocytes can be associated with depolymerization of actin microfilaments, causing failure in the DNA repair system and leading to chromatin disorders (Manipalviratn et al., 2011). In the present study, this mechanism may explain the high proportion of embryos derived from vitrified oocytes showing some type of damage in the chromatin affecting the DNA repair system. It is important to highlight that several studies reported in the literature have evaluated the effect of vitrification in GV and MII oocytes, zygotes, and blastocysts at the same stages of development at which they were vitrified. In this study, the effect of vitrification was evaluated in the subsequent early ED. Previous studies in zygotes or blastocysts analyzed only the role of actin microfilaments and chromatin in the success of fertilization and embryo production (Wu et al., 2006; Somfai et al., 2010; Egerszegi et al., 2013); however, the distribution and the morphological characteristics of these structures in early ED blastomeres have not been evaluated. Therefore, the present study provides important information that reveals the damage caused by vitrification in immature oocytes and their subsequent early ED. CONCLUSION In conclusion, the results of this study indicate that the damage generated by the vitrification of immature oocytes affects viability, maturation, and the distribution of actin MF and CHR integrity, as observed in early embryos.
DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT This study was approved under the regulations of the Ethics Committee for care and use of animals; Metropolitan Autonomous University-Iztapalapa Campus. AUTHOR CONTRIBUTIONS AL developed the methodology, performed the experiments, analyzed the results, conducted the investigation, and prepared and wrote the original draft. YD also developed the methodology, performed the experiments, and analyzed the results. EC analyzed the results and reviewed and edited the manuscript. SR-M reviewed and edited the manuscript. MB and FC conceptualized the study and developed the methodology, software, data curation, prepared and wrote the original draft, conducted the visualization, investigation, supervision, validation of the study, reviewed, edited, and wrote the final manuscript. All authors contributed to the article and approved the submitted version.
Research on the Traffic Flow Control of Urban Occasional Congestion Based on Catastrophe Theory To address the problem of occasional congestion control, the cusp catastrophe theory is used to establish a catastrophe model of the urban road system under occasional congestion, finding breakpoints and analyzing stability after urban road system catastrophes by constructing the energetic function; based on the catastrophe characteristics of the urban road system, a feasible method of congestion control is discussed. The results show that the traffic flow control method based on the catastrophe characteristics of the urban road system can, in theory, effectively improve the efficiency of the road system. Finally, the applicability of the control method based on catastrophe characteristics is analyzed through examples under different occasional congestion situations. Introduction Occasional congestion refers to the phenomenon of traffic congestion caused by the decline of road capacity due to occasional accidents. Occasional congestion often adversely affects the efficiency of the urban road system, and improper control may even cause the congestion to spread to upstream intersections and form regional blockages. Therefore, research on the control of urban road congestion has important theoretical significance and practical value. In existing research on the control of occasional congestion, the goal is to minimize the delay time [1], limit the density [2], or maximize the capacity [3]; the control methods mainly adopt signal light control [4]. The above research mainly focuses on the single-lane situation. In this paper, for the multilane situation, taking the maximum capacity as the goal, a traffic flow control method based on the catastrophe characteristics of the urban road system is proposed, and the efficiency of the control method under different occasional congestion situations is discussed.
Catastrophe Theory and the Urban Road System Catastrophe theory is the French mathematician Thom's theory of discontinuous change, based on mathematical theories such as singularity theory and stability theory [5]. In catastrophe theory, potential refers to the ability of the system to develop in a certain direction. The independent variables of the potential function are the state variables used to describe the behavior of the system, and the factors that affect the change of the behavior of the system are the control variables. Existing studies have summarized seven elementary catastrophes when the number of state variables does not exceed two and the number of control variables does not exceed four: the fold catastrophe, cusp catastrophe, swallowtail catastrophe, butterfly catastrophe, hyperbolic umbilic catastrophe, elliptic umbilic catastrophe, and parabolic umbilic catastrophe [6]. For a system with a catastrophe phenomenon, as long as the number of its state variables and control variables is established, one of the corresponding seven elementary catastrophe functions can be selected as the catastrophe model of the system to analyze the catastrophe characteristics of the system. The basic idea of catastrophe theory is to use bifurcation theory to analyze the stability of the system, to classify breakpoints according to potential functions, to study the characteristics of discontinuous states near various breakpoints, to find the breakpoints of potential function catastrophes, and to analyze the system through the process of catastrophe from one stable state to another stable state [7]. Catastrophe theory has very strong applicability and has been applied to various fields since it was proposed. Cusp systems generally have five major characteristics: sudden jump, multimodality, unreachability, divergence, and hysteresis [6]. The road traffic system is a complex system described by multiple parameters.
Its operation process conforms to the characteristics of cusp catastrophes: sudden jump (the system transition from a noncrowded state to a crowded state is not a gradual process but a leap, a catastrophe), multimodality (the system has two steady states, crowded and noncrowded), unreachability (some areas in the system cannot be reached in practice), bifurcation (when the system is in a critical equilibrium state, only an unstable balance exists instantaneously; once it encounters disturbances from external factors, this balance may be destroyed, and the system may shift to a crowded state or a noncrowded state), divergence (in a critical state, small changes in the control variables will lead to a catastrophe of the state variable), and hysteresis (the direction of the catastrophe is related to the direction of the control variables, and a catastrophe approached from one direction differs from one approached from the other direction) [8]. This paper uses the cusp catastrophe theory to analyze the catastrophe characteristics of the urban road system under occasional congestion. Taking the occupancy rate of the traffic volume in the carrying capacity as the state variable, and the change rate of the traffic flow and the change rate of the road capacity as the control variables, the cusp catastrophe model of the urban road traffic system under occasional congestion is established. The impact of occasional congestion on the road traffic system is first manifested as changes in the control variables, which further cause the state variable to change in the multiple state space and may even make the system undergo a catastrophe. The cusp catastrophe model is used to analyze the catastrophe characteristics of the system, determining breakpoints and analyzing stability after the urban road system catastrophes. Finally, we give a feasible method for traffic flow control. State Variables and Control Variables of the Urban Road System.
In this paper, the occupancy rate of the traffic volume in the carrying capacity is selected as the state variable of the system. Traffic flow and road capacity are the two control variables of the system; in order to maintain the same dimension as the state variable, the change rate of the traffic flow and the change rate of the road capacity are selected as the control variables of the system, and the cusp catastrophe model is established to analyze the catastrophe characteristics of the road traffic system under occasional congestion. Definition 1. The occupancy rate of the traffic volume in the carrying capacity is taken as the state variable, denoted as η(t) = X(t)/Y(t), where X(t) is the traffic volume of the road system and Y(t) is the carrying capacity of the road system. Definition 2. The change rate of the traffic flow is regarded as a control variable of the system, denoted as c(t), where Q(0) is the traffic flow at the moment of the accident and Q(t) is the traffic flow at moment t. Definition 3. The change rate of the road capacity is taken as the other control variable of the system, denoted as λ(t), where C(0) is the road capacity at the moment of the accident and C(t) is the capacity of the road at moment t. In practice, urban roads often use multiple lanes in the same direction in urban planning. When accidents occur, lanes are occupied and the road capacity will inevitably decrease. Occasional accidents occur in different locations and have different impacts on road capacity. The impact value is mainly determined by the lane utilization coefficient. Take three lanes as an example. From the center line of the road to the rightmost lane, they are defined as lanes 1, 2, and 3, respectively.
The utilization coefficients of the lanes decrease gradually; the utilization coefficient of each lane is 1.00, 0.8-0.89 (take 0.87), and 0.65-0.78 (take 0.73), respectively. Then, when an accident occurs, the road capacity C is given by equation (4), where C0 is the design capacity of the road and P is the lane loss coefficient, expressed by the ratio of the number of lanes available to the total number of lanes [9]: P = (N − n)/N, where n is the number of damaged lanes and N is the total number of lanes. In equation (4), βj denotes the weight, which is the ratio of the sum of utilization coefficients of the available lanes to the total lane utilization coefficients when occasional congestion occurs in lane i: βj = (αi − α*i)/αi, where α*i is the sum of utilization coefficients of the lanes occupied or damaged when occasional congestion occurs in lane i and αi is the total lane utilization coefficient. Cusp Catastrophe Model of the Urban Road System under Occasional Congestion. Taking the occupancy rate of the traffic volume in the carrying capacity as the state variable, and the change rate of traffic flow and the change rate of capacity as the two control variables, the cusp catastrophe model of the urban road system under occasional congestion is established, and its potential function is E(η) = η⁴ + c(t)η² + λ(t)η. According to E′(η) = 0, the equilibrium surface (catastrophe manifold) is 4η³ + 2c(t)η + λ(t) = 0. According to E″(η) = 0, the singularity is 12η² + 2c(t) = 0. Combining equations (8) and (9) yields the bifurcation set: 8c(t)³ + 27λ(t)² = 0. The urban road system is a three-dimensional space composed of the state variable η(t) and the control variables λ(t), c(t). The equilibrium surface is a smooth surface containing folds (or pleats). The upper and lower sheets of the surface are stable equilibrium points.
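The equilibrium structure described above can be explored numerically. For a cusp potential of the standard form E(η) = η⁴ + cη² + λη (the form consistent with the bifurcation condition 8c³ + 27λ² = 0 used later in the catastrophe analysis), dividing E′(η) = 4η³ + 2cη + λ by 4 gives a depressed cubic whose discriminant equals −(8c³ + 27λ²)/16, so the number of equilibria changes exactly on the bifurcation set. A minimal sketch, with arbitrary illustrative control values rather than data from the paper:

```python
def bifurcation_value(c, lam):
    """8c^3 + 27*lam^2: negative inside the cusp region, zero on the
    bifurcation set, positive outside it."""
    return 8.0 * c**3 + 27.0 * lam**2

def n_equilibria(c, lam):
    """Number of real roots of E'(eta) = 4*eta^3 + 2*c*eta + lam = 0.
    Dividing by 4 gives eta^3 + (c/2)*eta + lam/4, whose discriminant
    equals -(8*c^3 + 27*lam^2)/16, so the root count changes exactly
    on the bifurcation set."""
    v = bifurcation_value(c, lam)
    if v < 0.0:
        return 3  # upper and lower stable sheets plus the unstable middle sheet
    if v == 0.0:
        return 2  # degenerate boundary case (repeated roots)
    return 1      # a single stable equilibrium

print(n_equilibria(1.0, 0.5), n_equilibria(-3.0, 0.5))  # → 1 3
```

Inside the bifurcation set three equilibria coexist, the middle one being the unstable, unreachable sheet; outside it only one stable equilibrium remains, which is why crossing the set produces a jump.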
In the middle sheet, the system moves in the direction which minimizes the potential function; in this state, the equilibrium point is unstable, so it is called the unreachable region, while in the upper and lower sheets, the potential function reaches its minimum value and the equilibrium point is stable, which is also the state in which the potential function is usually located. The bifurcation set is defined by the projection of the fold, which determines the area of catastrophic behavior [10]. When the control variables move outside the bifurcation set, the corresponding equilibrium point changes on the upper or lower sheet of the manifold, and the system is stable; when the control variables move through a bifurcation set point, the corresponding equilibrium point is at the edge of the upper and lower sheets of the manifold, and the system is about to undergo a catastrophe; when the control variables move within the inner region of the bifurcation set, the equilibrium point is on the middle sheet of the manifold, the system is in an unstable state, and it may jump to another state at any time. Catastrophe Analysis When using the cusp catastrophe model to analyze the catastrophe process of the road traffic system under occasional congestion, it is necessary to construct the energetic function of the system to describe the relationship between the variables. The energetic function is a composite form of the potential function, which has the same catastrophe characteristics as the potential function [6]. Therefore, by analyzing the energetic function of the system, the breakpoints of the system can be solved and the stability of the system can be discussed. On this basis, a feasible control method for the flow under occasional congestion can be obtained. The energetic function of the road traffic system [3] is given by equation (11), where m is the mass coefficient of the standard unit vehicle and V(t) is the average speed of the vehicle.
Speed Model Based on the Relationship between Traffic Flow Parameters. Traffic flow Q, average speed V, and density K are the three basic parameters that characterize traffic flow, and the basic relationship between the three is [11]: Q = VK. According to the Greenshields velocity-density linear model [12], V = aK + b, where a and b are coefficients (a < 0) that can be obtained through regression analysis of data. The flow-density relationship is thus obtained as Q = aK² + bK. Let dQ/dK = 0, that is, 2aK + b = 0; then when K = −b/2a and V = b/2, the flow reaches its maximum value Q_max = −b²/4a, and the road capacity C is formulated as C = −b²/4a. Let the length of the road be L; the time for a vehicle to pass in the free-flow state is expressed as T0 = L/b. Meanwhile, V = L/T and K = (V/a) − (b/a), and combining these with the flow-density relationship (12), we obtain Q = (b²/a)[(T0/T)² − (T0/T)]. Formula (14) can be regarded as a quadratic equation in T0/T, and the path resistance function can be obtained by solving it: T = 2T0/(1 + √(1 − Q/C)). The application range of the road impedance function model is 0 ≤ Q/C ≤ 1, with the degree of road load Q/C as the independent variable, denoting different vehicle flow speeds. When Q/C = 1, that is, T/T0 = 2, the vehicle travel time is twice the free travel time, and the road is in a saturated state [13]. Based on the road resistance function, the average speed V(t) of vehicles in the urban road system can be expressed as V(t) = (b/2)(1 + √(1 − Q(t)/C(t))). Catastrophe Analysis of the Urban Road System under Occasional Congestion.
From equations (11) and (16), the energetic function E(K(t), C(t)) of the urban road system is obtained (equation (17)). The set of breakpoints of the energetic function can be obtained from ∂E(K(t), C(t))/∂K(t) = 0, which can be simplified, and the breakpoints are obtained. Whether the second derivative is negative or not is the condition for judging the stability of the breakpoints: when ∂²E(K(t), C(t))/∂K(t)² > 0, the energetic function has a minimum point, and the breakpoint can be judged to be stable; when ∂²E(K(t), C(t))/∂K(t)² < 0, the energetic function has a maximum value, and the breakpoint can be judged to be unstable; when ∂²E(K(t), C(t))/∂K(t)² = 0, it is the inflection point of the energetic function with respect to density. Therefore, the break flow Q(i) of the urban road system at the catastrophe moment can be obtained. After the catastrophe, the potential function will continue to move continuously toward the minimum value until it is stable at time j; the stable traffic flow is then Q(j) = (9a/b²)C²(j) + 3C(j). Suppose that when an accident occurs at moment 0, the road capacity becomes smaller and the control variable λ(t) of the system becomes larger instantly. According to the Maxwell convention, the potential function will move to a minimum point, and c(t) also begins to change; at time i, the potential function reaches the bifurcation set; that is, when the control variables satisfy the bifurcation-set condition 8c(i)³ + 27λ(i)² = 0, the potential function suddenly becomes smaller, a discontinuity occurs, and the system jumps from the original stable state to another stable state. As the control variables continue to change, the potential function continues to move slowly and continuously toward the minimum value; the system stabilizes after reaching the equilibrium point j in the new stable area and attains the stable traffic flow Q(j) = (9a/b²)C²(j) + 3C(j) (see Figure 1).
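The catastrophe sequence just described can be sketched as a small numerical monitor. Under the Greenshields model V = aK + b (with a < 0), the capacity is C = −b²/4a; a catastrophe is flagged at the first step at which the control variables satisfy the bifurcation condition 8c³ + 27λ² ≤ 0, after which the stable flow Q(j) = (9a/b²)C²(j) + 3C(j) applies. The coefficients and the control-variable trajectory below are assumptions for illustration only, not values from the paper:

```python
def capacity(a, b):
    """Road capacity C = -b^2/(4a) from the Greenshields model V = aK + b (a < 0)."""
    return -b * b / (4.0 * a)

def bifurcation_value(c, lam):
    """8c^3 + 27*lam^2; zero on the cusp bifurcation set."""
    return 8.0 * c**3 + 27.0 * lam**2

def stable_flow(a, b, C_j):
    """Post-catastrophe stable flow Q(j) = (9a/b^2)*C(j)^2 + 3*C(j)."""
    return (9.0 * a / b**2) * C_j**2 + 3.0 * C_j

# Assumed regression coefficients (illustrative):
a, b = -0.5, 60.0
C0 = capacity(a, b)  # design capacity: 1800 veh/h

# Hypothetical trajectory of the control variables (c(t), lam(t)) after an accident:
trajectory = [(-0.5, 0.30), (-1.0, 0.35), (-2.0, 0.40)]
catastrophe_step = next(
    (i for i, (c, lam) in enumerate(trajectory) if bifurcation_value(c, lam) <= 0.0),
    None,
)
print(catastrophe_step)              # → 1 (first crossing of the bifurcation set)
print(stable_flow(a, b, C0 * 0.75))  # stable flow for a capacity reduced to 75%
```

In practice the trajectory (c(t), λ(t)) would be computed from measured flow and capacity after the accident; here it is hard-coded to show the detection logic.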
Traffic Flow Control Method Based on Catastrophe Characteristics of the Urban Road System Catastrophe theory has two main catastrophe conventions: the ideal delay convention and the Maxwell convention. The former holds that the system remains stable at the established equilibrium position until that equilibrium position disappears; the latter holds that the system always shifts toward the equilibrium position that makes the overall potential a global minimum. The road traffic system is affected by the control variables and its operational changes are rather random; therefore, the Maxwell convention is more suitable for the road traffic system. In the catastrophe model, the larger the potential function, the higher the operating efficiency of the urban road system. According to the Maxwell convention, the potential function always moves to the global minimum equilibrium position, from one stable area to another stable area, until it reaches equilibrium in the other area; that is, the system undergoes a catastrophe. At this point, the potential function can only move in the direction of its maximum with the help of external force, that is, by adopting a corresponding traffic flow control strategy to improve the operating efficiency of the urban road system. Based on the analysis of the catastrophe process of the urban road system, when the control variables cross the bifurcation set, the potential function suddenly becomes smaller, and the system suddenly changes from the original stable region to another stable region and continues to decrease in the new stable region until balanced. On this basis, the feasible control methods for the traffic flow are given: (1) Before a catastrophe, control the traffic flow so that it stays below the break flow, preventing the catastrophe from occurring and the potential function from decreasing sharply.
(2) After a catastrophe, restrict the traffic flow at a certain restriction rate down to the stable traffic flow, preventing the potential function from moving toward smaller values, in order to obtain the maximum capacity. According to formulas (22) and (23), the traffic flow restriction rate is obtained [3]. 6. Experiment Three occasional congestion situations are considered (see Table 1). Results and Analysis. The parameters describing the urban road system after urban road system catastrophes under the three occasional situations are calculated (see Table 2). Based on the parameters after the urban road system catastrophes, the flow restriction rates under the three occasional congestion situations are calculated separately, and a comparison is drawn between the operating efficiency of the urban road system before and after control, obtaining the efficiency of the traffic flow control (see Table 3). The analysis of the example shows that, under occasional congestion, the traffic flow control method based on the catastrophe characteristics of the road system can effectively improve the operating efficiency of the road system and improve the road capacity. This is more effective when the accidental accident has a small impact on the road capacity, in which case it can better control the congestion and improve the road capacity.
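Formulas (22) and (23) are not reproduced in this extract, so the sketch below uses an assumed definition of the restriction rate, r = (Q(i) − Q(j))/Q(i), i.e. the fraction of the break flow that must be held back to settle at the stable flow; both the definition and the numbers are illustrative, not taken from Tables 1-3:

```python
def restriction_rate(Q_break, Q_stable):
    """Assumed restriction rate: the fraction of the break flow Q(i) that
    must be held back to reach the stable flow Q(j). Illustrative only;
    the paper's formulas (22) and (23) are not reproduced here."""
    return (Q_break - Q_stable) / Q_break

# Illustrative numbers for a break flow of 1800 veh/h settling at 1350 veh/h:
print(restriction_rate(1800.0, 1350.0))  # → 0.25
```

A larger restriction rate corresponds to an accident with a larger impact on capacity, which is consistent with the paper's observation that flow control alone is less effective in that case.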
On this basis, the traffic flow control method is given: (1) Before a catastrophe, control the traffic flow so that it stays below the break flow, preventing the catastrophe from occurring and the potential function from decreasing sharply. (2) After a catastrophe, restrict the traffic flow at a certain restriction rate down to the stable traffic flow, preventing the potential function from moving toward smaller values, in order to obtain the maximum capacity. Through example analysis, it is found that when accidental accidents have less impact on road capacity, the traffic flow control method based on the catastrophe characteristics is more effective and can better control congestion and improve road capacity; when the road capacity is greatly affected, it is difficult to effectively improve the capacity of the urban road system only by controlling the traffic flow. Data Availability The data were obtained from experiments with the traffic flow simulation procedures. Conflicts of Interest The authors declare that they have no conflicts of interest.
Therapeutic radiographers’ delivery of health behaviour change advice to those living with and beyond cancer: a qualitative study Objectives Therapeutic radiographers (TRs) are well placed to deliver health behaviour change advice to those living with and beyond cancer (LWBC). However, there is limited research on the opinions of TRs around delivering such advice to those LWBC. This study aimed to explore TRs’ practices and facilitators in delivering advice on physical activity, healthy eating, alcohol intake, smoking and weight management. Setting and participants Fifteen UK-based TRs took part in a telephone interview using a semi‐structured interview guide. Data was analysed using the framework analysis method. Results Emergent themes highlighted that TRs are mainly aware of the benefits of healthy behaviours in managing radiotherapy treatment related side effects, with advice provision lowest for healthy eating and physical activity. Participants identified themselves as well placed to deliver advice on improving behaviours to those LWBC, however reported a lack of knowledge as a limiting factor to doing so. The TRs reported training and knowledge as key facilitators to the delivery of advice, with a preference for online training. Conclusions There is a need for education resources, clear referral pathways and in particular training for TRs on delivering physical activity and healthy eating advice to those LWBC. INTRODUCTION It is estimated that 40% of cancer cases are linked to unhealthy behaviours. 1 Based on evidence from systematic literature reviews and meta analyses, the World Cancer Research Fund (WCRF) recommend that individuals are physically active, limit consumption of energy dense foods, salty foods, red meat and avoid processed meat, eat more plant foods, maintain a healthy weight, limit alcoholic drinks and avoid tobacco to reduce their risk of cancer. 
2 Those living with and beyond cancer (LWBC) are also advised to follow these guidelines, due to increasing evidence that healthy behaviours may improve physical and psychosocial outcomes after a cancer diagnosis. [2][3][4][5][6][7][8] Despite the potential benefits of healthy behaviours, few people LWBC are meeting the WCRF recommended health behaviour recommendations. 9 10 Those LWBC report one key reason for not adopting healthier lifestyle behaviours is lack of advice and support from their healthcare team. 11 Healthcare professionals (HCPs) are well placed to bring about positive health behaviour changes among cancer patients. 12 A trial of brief advice among breast cancer survivors showed that a simple physical activity recommendation from a HCP doubled the percentage meeting national exercise guidelines. 12 Despite this, research to date among both HCPs and those LWBC consistently shows that few oncology HCPs offer guidance to oncology patients on healthy lifestyle behaviours. [13][14][15][16][17][18][19][20][21] Reported barriers among HCPs in providing health behaviour advice for those Strengths and limitations of this study ► This study provides an insight in therapeutic radiographers' views on all key modifiable health behaviours for those living with and beyond cancer. ► The participants worked in different radiotherapy departments, offering insight into the practices among therapeutic radiographers in the delivery of healthy behaviour advice from a wide range of hospitals. ► Whilst data saturation was reached, the sample size was small and therefore the findings may not be representative of the views of the wider therapeutic radiography workforce. ► The response rate was low (20.8%), therefore the participants might be more interested in the role of health behaviours in cancer survivorship, which might bias the responses towards a positive view on the role of therapeutic radiographers in delivering advice within their role. 
LWBC include believing that giving advice was not part of their role, lack of time with patients, lack of referral programmes, lack of resources such as education leaflets for those LWBC and lack of knowledge regarding guidelines and research findings. [16][17][18][19][20][21] A recent qualitative study with 21 oncology HCPs identified that advice on health behaviours provided to those LWBC focussed on general health and controlling side effects, with few HCPs advising on health behaviours in the context of improving survival outcomes. 20 While these studies provide useful insight into the practices and barriers among oncology HCPs, the participants within these studies were primarily oncologists and nurses, and the studies focussed on the provision of physical activity and weight management advice. There is limited research on the opinions of therapeutic radiographers (TRs) in delivering advice on health behaviours to those LWBC, despite at least 50% of cancer patients receiving radiotherapy as part of their cancer treatment. 22 TRs are the only health professionals qualified to deliver radiotherapy and play a central role in supporting cancer patients. 23 In the UK, the College of Radiographers recognise the importance of TRs in providing health behaviour advice to improve patient outcomes. 24 TRs are also seen as an integral part of the health force in driving improvements in well-being as outlined in the 2017 publication of "AHPs into Action, using Allied Health Professionals to transform health, care and well-being", which states that radiographers are key to implementing a preventative healthcare approach and that their expertise should be used to design and deliver health interventions. 25 TRs are ideally placed to deliver health behaviour advice, particularly through Making Every Contact Count (MECC).
26 27 MECC is a strategy whereby health professionals use every appropriate opportunity and interaction with patients to promote healthy behaviours and signpost to relevant healthcare services using an 'Ask, Advise, Act' framework. 27 MECC fits extremely well within the TRs' role, in which patient education is a key part of radiotherapy practice, with TRs providing care to the same patient every day, often over a number of weeks. 23 TRs therefore have the potential to make significant contributions in supporting positive health behaviour changes among those LWBC. However, despite these opportunities, one UK survey among 102 TRs identified that TRs rarely advise patients on the key modifiable health behaviours including smoking, alcohol, healthy eating and exercise. 15 The findings also showed lack of knowledge and training as barriers among TRs in delivering advice on these topics. 15 Similarly, focus group interviews with 38 TRs identified that lack of knowledge and training were barriers to the provision of smoking cessation advice. 28 Challenges remain in translating behaviour change interventions into existing care pathways and practices in a way that is appropriate for use by health professionals. 29 Understanding TRs' practices, and what support they need in delivering advice on the topics of physical activity, healthy eating, alcohol intake, smoking and weight management, could inform the development of interventions that will enable TRs in delivering advice on improving health behaviours to those LWBC. Qualitative research is appropriate for exploring the beliefs, experiences and motivations of individuals on specific matters and allows for more information and clarification. 30 Limited qualitative data exists on TRs' practices and views on delivering advice on these health behaviour topics.
This study therefore aimed to address this and, through a qualitative methodology, explore TRs' practices in delivering health behaviour advice, in addition to exploring the facilitators in delivering such advice. Preferences regarding training on delivering this advice were also explored.

Participants and recruitment
Participants were TRs working in the UK in a clinical role. They had provided their contact details on a previous online survey investigating TRs' practices in delivering health behaviour advice, agreeing to be invited for a follow-up telephone interview. An email was sent with an information sheet explaining the research and inviting these TRs. Those who agreed to take part signed a consent form prior to the telephone interview.

Data collection
Semi-structured individual telephone interviews were carried out between April and May 2019 by a lecturer in therapeutic radiography with an MSc who had completed qualitative interviewing as part of their training (NP). The interviewer had no previous relationship with the study participants. The topic guide (see online supplementary material 1) was based on the guide used within a previous study 17 which explored oncology HCPs' views on the provision of lifestyle advice to cancer patients. This guide was adapted for use among TRs, with additional questions added to assess preferences for training on delivering advice. The topic guide was piloted with two participants to check for comprehension of the questions. These data were included in the analysis because no substantial changes were required. The interviews lasted approximately 30 min (range: 20 to 40 min) and were audio-recorded, anonymised and transcribed verbatim. The transcripts were verified by NP against each recording to confirm accuracy. The aim was to carry out interviews until data saturation was reached. It was anticipated that 10 participants would be required to reach data saturation because it was a homogeneous group.
31 After 10 interviews were carried out, they were transcribed verbatim. Following familiarisation with the data, NP generated the initial codes and it appeared that saturation was reached after 10 interviews as no new codes occurred in the 10th interview. 32 A further five interviews were carried out to confirm this.

Patient and public involvement
Patient input was not used in the design of the research methods. However, the topic guide was piloted with TRs in the academic setting. Additionally, the topic guide was piloted with two participants and these were included in the analysis.

Analysis
The interview transcripts were analysed using the framework analysis method. 33 This method was chosen because it is an appropriate method for analysing homogeneous data and semi-structured interview transcripts; it is also appropriate when using inductive qualitative analysis. 33 A random selection of transcripts (n=3) was independently coded by AF to check for reliability. The researchers NP, AF and RJB met and agreed on a final coding list in an iterative process (AF and RJB are both experienced qualitative researchers and health psychologists). This agreed set of codes formed the analytical framework, which was then applied to all of the transcripts and the data summarised in a matrix using Microsoft Excel. Themes were generated by reviewing the matrix connecting the data between the participants and the codes. The completed consolidated criteria for reporting qualitative research checklist is available in the online supplementary material 2. 34 Themes are presented in the results with supporting quotes and the participant's identifier (table 1).

Participants
The radiotherapy radiography workforce census in the UK only reports the workforce's professional grade, and no other demographics. 35 In the UK, TRs' level of professional skills and knowledge are categorised by agenda for change professional band grades 5 to 8.
36 Therefore, in this study, the participants' gender and professional grade were collected; no other demographic information was collected (table 1). The response rate to taking part in the interview was 20.8%. Seventy-two TRs were emailed and invited to take part in an interview; 15 returned consent forms and completed the telephone interview. Fifteen interviews were conducted with 12 women and 3 men. The participants came from all regions of the UK, including England, Wales, Scotland and Northern Ireland.

Themes
Five main themes were identified: (1) TRs provide behaviour change advice to manage radiotherapy-related side effects; (2) TRs make judgements about when it is appropriate to deliver health behaviour advice; (3) Knowledge and training are key facilitators in the delivery of health behaviour advice; (4) TRs feel patients undergoing radiotherapy treatment seek guidance on health behaviours; and (5) TRs identify themselves as well placed to give health behaviour advice to patients.

TRs provide behaviour change advice to manage radiotherapy-related side effects
Most respondents reported that they only provided advice on health behaviours that they believed would minimise radiotherapy-related side effects. This meant smoking cessation and alcohol intake were the two health behaviours TRs mainly advised on. With head and neck patients, we give advice, particularly on smoking and drinking, obviously get worse side effects (TR 6). The only thing we do generally say is about drinking plenty of fluids, avoiding alcohol. But that's more to do with prostate side effects, bladder reactions and reducing gas (TR 14). Radiographers are comfortable talking about alcohol when it comes to managing side effects (TR 12). No TRs reported advising patients on healthy eating.
Some TRs mentioned advising patients on dietary intake, but this was for patients who are at risk of losing weight, for side effect management and the potential impact on accuracy of radiotherapy treatment delivery. Healthy eating, I don't tend to discuss too much. A lot of patients have difficulty eating and we are encouraging maintaining weight while on treatment (TR 5). I'm not very sure if healthy eating is important. Any patients where we're treating, lower GI or pelvis, we would advise them to avoid very high-fibre foods, spicy foods, that might make them have very loose bowels, but other than that we say more or less keep on your same diet. We wouldn't generally discuss a healthy diet as a standard for all patients. No (TR 15). Some TRs mentioned they advise patients to be physically active, however this was only in the context of managing radiotherapy and cancer-related fatigue. So exercise is one of my main ones that I focus on with all patients, particularly to help with their fatigue (TR 9). Exercise, I say that's its quite beneficial to help with fatigue (TR 12). I guess when we have patients come in, fatigue is one of the side effects, so we encourage our patients to remain active (TR 15).

TRs make judgements about when it is appropriate to deliver health behaviour advice
TRs explained only discussing health behaviours, particularly smoking and alcohol, with patients if there were evident indications of a problem. TRs also often reported making a judgement of whether it was appropriate to advise a patient on a particular health behaviour. So, quite often you can tell if a patient is a smoker, you can smell it, or you can tell by their skin (TR 11). I tend to give advice when you make a judgement of when it's appropriate, an example might be if a patient smelt of smoke (TR 12). Had patients come in and will smell of alcohol and at that time I'll say to the patient that it can exacerbate side effects (TR 15).
This meant TRs did not provide advice on health behaviours to every patient. But for those patients where it's not clearly going to benefit them to stop drinking, you would just mention it very briefly. Not every patient will have that information (TR 5).

Knowledge and training are key facilitators in the delivery of health behaviour advice
Delivery of advice matched by knowledge
The reported delivery of advice on health behaviours appeared to be matched by knowledge of the benefits among those LWBC. One participant explained how he only appreciated the importance of physical activity in cancer survivorship after attending a talk and being made aware of the evidence. My experience of appreciating the role exercise was from attending a talk. I suppose it was really just highlighting in the studies the benefits obviously of a healthy lifestyle and introducing physical activity for patients on treatment (TR 15). Healthy eating was a topic the participants felt particularly unqualified to deliver advice on, and reported lack of knowledge as a barrier to the delivery of advice on healthy eating. It's a difficult one, diet, I think. It's more a knowledge thing. If you don't have the knowledge about what you can and can't say, you're just not going to approach the subject (TR 12).

A need for continuous postgraduate online training
All those interviewed said they would welcome postgraduate training on delivering health behaviour advice. The majority expressed a preference for online training to help overcome the barrier of limited time among TRs to attend training. Online, you're not having to take time out of clinical practice, online is more accessible (TR 6). Participants also mentioned that online training allows for yearly updates and continuous professional development. I think it'd be good (online training) because you can do it in your own time. Because I think that's sometimes the problem. You have this training once and then maybe it never gets brought up again.
So it would be quite handy to have something small, every year, alongside all your other mandatory training (TR 12). Participants did acknowledge that face-to-face training allows for further questioning that's not possible with online training. I think one-to-one training, because you can ask questions that may not be covered within the online training (TR 1). To overcome the barrier of not all staff being able to attend face-to-face training, participants suggested it would be useful to train some TRs through face-to-face methods that they could then cascade to other TRs within the radiotherapy department. Maybe some face-to-face with some staff, that they could cascade down, might be useful as well (TR 4).

A need for training in the undergraduate setting
It was also suggested to incorporate training on delivering lifestyle advice into the undergraduate education programme. Certainly, get it into the undergraduate course to start with, making them aware it is part of the role (TR 9). It's still not something that I can say was primarily covered in the undergraduates' training about the benefits of healthy lifestyle, you know, there's no real formal education that I can see (TR 15).

TRs reported knowledge of resources and referral pathways as facilitators in the delivery of advice
Participants also felt knowledge of how to refer patients onto further support would enable them to have conversations on improving health behaviours, with some TRs reporting that lack of knowledge of resources and referral pathways are barriers to initiating a conversation on behaviour change. There needs to be more information available to professionals of where exactly you can refer patients to, whether that be website, whether that be an app (TR 13). That's the only reason why they [therapeutic radiographers] don't want to open these conversations up, because they don't know where to go with it or how to refer on (TR 9).
They also acknowledged that, given the short time they have with patients, having a resource to hand would make them more inclined to advise. Having something on a piece of paper, education and having the resources, if you can do it in 2 min you should be able to slip that in (TR 2). You don't always have that information at hand, so if it was readily available, I think we'd give out a bit more [health behaviour information] if it was just the case of pointing them in the right direction that would be a quick and easy thing to do (TR 3).

The benefit of incorporating patients' perspective into training
Participants also mentioned that getting patients' perspectives on receiving advice on improving health behaviours should be incorporated into training. I think that would be better coming from the patients themselves, rather than just feedback from what journals, and other literature says (TR 7). If there'd even be patients that would be willing to maybe just even be involved with staff training (TR 3).

TRs feel patients undergoing radiotherapy treatment seek guidance on health behaviours
Many of the TRs also described that patients often ask them for guidance around health behaviour changes, particularly on diet and exercise. This shows that patients see TRs as credible sources of information on health behaviours. We are getting asked the question more and more about weight loss, healthy living, wanting to exercise more (TR 4). It is quite a common thing to be asked at the end of treatment, not so much the smoking and alcohol, I have to say, but diet and exercise is certainly something that people commonly ask (TR 3).

TRs identify themselves as well placed to give health behaviour advice to patients
TRs acknowledged that they are a consistent healthcare member for patients undergoing radiotherapy and have many opportunities to deliver lifestyle advice. Therefore, TRs recognised that they are well placed to deliver health behaviour advice to patients.
We're in a unique position because we do see the same patient day after day and you do kind of start to develop a relationship with them (TR 10). I think we're well placed to help influence patients' behaviours and it's something we should be seen to encourage and report (TR 7). We're in the best position where we see the patients, for a number of weeks every day to encourage any changes (TR 8). From the interviews it appeared that many patients undergoing radiotherapy, excluding those at risk of malnutrition or significant weight loss, are primarily reviewed and assessed by TRs. This highlights that TRs are in an ideal position to deliver advice on health behaviours, particularly when asked about nutrition advice delivery. They routinely see the specialist radiographer, for the breast patients. But they don't have a dietitian appointment (TR 12). Prostate and breast are two tumour groups that are fully radiographer-led review and about 70% to 80% of our work load. They generally wouldn't be sent to a dietician (TR 15). Only have a dietitian on board for the head and necks (TR 9).

DISCUSSION
TRs in this study saw themselves as well placed to deliver health behaviour advice, but also reported that they do not routinely provide advice to all patients. TRs were particularly unlikely to provide advice on healthy eating and physical activity, and were more likely to provide advice on those behaviours they believed would minimise radiotherapy or cancer-related side effects. This is in line with previous research among TRs. 15 28 37 In one qualitative study, a key facilitator reported among TRs in delivering smoking cessation support to patients was knowledge of the link between smoking and toxicity. 28 Another qualitative study that explored allied health professionals' views regarding the provision of dietary advice to patients highlighted that TRs report giving dietary advice to help counteract the side effects of radiotherapy.
37 Additionally, in our study, if TRs did provide dietary advice, this tended to be general advice rather than cancer-specific advice on healthy eating. In some studies, oncology HCPs have reported they do not self-identify as the right person to provide lifestyle advice. 17 20 However, in this study TRs identified themselves as being well placed to deliver health behaviour advice and in a unique position as a consistent member of the multidisciplinary team providing care to patients. However, despite this, they do not feel qualified to deliver advice, particularly on the topic of healthy eating. In the UK, poor diet has the biggest impact on the National Health Service budget, greater than alcohol consumption, smoking and physical inactivity. 38 It has been noted that there are insufficient dietitians to provide dietary advice to all patients who may need dietary support. 39 In response to this, all HCPs are being asked to implement a preventative healthcare approach within their role, and the delivery of healthy eating advice is fundamental to this. 23 24 40 41 Key to achieving this is that TRs will have the skills, knowledge and behaviours to improve the health and well-being of individuals. 24 As with other oncology HCP groups, 16 17 20 37 this study identifies the need for education and training among TRs in delivering health behaviour advice, particularly on healthy eating and physical activity. This training should also address when and how to refer to other support if necessary, as this was identified as a key facilitator in the delivery of advice on health behaviours, particularly when time is a barrier to the delivery of this advice. [15][16][17] All those interviewed demonstrated that TRs would welcome training on delivering health behaviour advice and recommended it as a key facilitator in delivering advice, in addition to incorporating it into the undergraduate setting.
The need for postgraduate training among TRs in delivering health advice has also recently been reported by Charlesworth et al 28 in relation to the delivery of smoking cessation advice. Our findings from this study provide additional insight into TRs' preferences on the type of training on delivering lifestyle advice to those LWBC, with TRs demonstrating a preference for online training in the postgraduate setting. Among HCPs, online education has been reported to be as effective as face-to-face education. 42 Additionally, the use of online learning enables HCPs to carry out training at a time that fits in with clinical work. 43 44 TRs in this study identified this benefit of online learning in overcoming the limited time available for TRs to undertake continuous professional development and additional training. Interestingly, TRs in this study mentioned having patient input in the training would be helpful. While HCPs' input is key to the development of interventions, patient members play key advocacy roles and their input can enhance the outcomes of interventions. 45 Patient input may also help overcome the reported barrier of fear of causing offence to a patient, which has been reported as a barrier among oncology HCPs in delivery of health behaviour advice. 17 Those LWBC wish to receive advice on health behaviours from their healthcare team, 13 20 46 and this is of particular importance as the period following a cancer diagnosis has been shown to be a teachable moment and an ideal opportunity to motivate patients around the importance of healthy eating and physical activity. 47 48 This was made apparent in this study, whereby some TRs mentioned that healthy eating and exercise were the health behaviours patients ask for advice on more often, generally towards the end of their treatment. This further highlights the importance of supporting TRs in delivering evidence-based health behaviour advice to meet patients' needs.
TRs have a responsibility to educate patients on the importance of following healthy behaviours, given the increasing evidence showing that implementing healthy behaviours improves a number of physical and psychosocial outcomes after a cancer diagnosis. 2 3 Among premenopausal and post-menopausal women living with and beyond breast cancer, a systematic literature review and meta-analysis of 82 follow-up studies (n=213 075 breast cancer survivors) identified that being overweight increases the risk of all-cause and breast cancer mortality. 4 Being physically active after a cancer diagnosis is also correlated with improved survival and reduced recurrence. 5 6 49 While data is limited, emerging research suggests healthy dietary behaviours after a diagnosis may improve outcomes. 3 50 In a prospective observational study of 1009 patients with stage III colon cancer, a higher intake of a typical Western diet was associated with a threefold increased risk of disease recurrence and a 2.3-fold increased risk of all-cause mortality. 8 Additionally, those LWBC are at increased risk of developing cardiovascular disease, osteoporosis and diabetes, and healthy behaviours can reduce the risk of developing these diseases. 51 52 Of those interviewed in this study, it appeared that those with breast, prostate and colorectal cancer are primarily reviewed and assessed by TRs. Therefore, it is the responsibility of TRs to deliver advice on improving health behaviours to these patients. This is also particularly important because the strongest evidence for the benefits of diet and exercise is currently in breast, prostate and colorectal cancer survivors. 53 These are also the most common cancers in the UK and radiotherapy plays a key role in managing these cancers. 22 54 Therefore, with the right skills and knowledge, TRs could deliver advice on improving health behaviours.
Supporting self-efficacy among patients towards the end of their treatment, which very often takes place in the radiotherapy department, can be empowering for patients. Among those with prostate cancer, implementing dietary changes brought psychological benefit, as a method of coping and regaining control over their diagnosis. 46

Strengths and limitations
This is the first qualitative study among TRs to explore the provision of advice on all key modifiable lifestyle behaviours for those LWBC as per recommendations. 2 While the aim of qualitative research is not to generalise the findings, the sample size was small, and therefore the findings may not be representative of the views of the wider therapeutic radiography workforce. However, data saturation was reached, likely due to the homogeneous sample of participants. Additionally, the participants worked in different radiotherapy departments and therefore provide insight into the practices among TRs in the delivery of healthy behaviour advice from a wide range of hospitals. Also, the participants worked in cancer centres in England, Wales, Scotland and Northern Ireland, providing insight into the practices across the UK. Another limitation of this study is the low response rate (20.8%) and that the participants might be more interested in the role of health behaviours in cancer survivorship, which might bias the responses towards a positive view on this topic and the role of TRs in delivering advice within their role. Despite this, however, provision of health behaviour change advice was low, suggesting TRs may be even less likely to educate patients around the importance of healthy behaviours.

Future research
This study highlights the need for training and education among TRs on the delivery of health behaviour advice to cancer patients, both in the undergraduate and postgraduate setting, particularly on the topics of physical activity, healthy eating and weight management.
Higher Education Institutions have a responsibility in educating the Allied Health Professional workforce on implementing health promotion within their role. 55 Further research among pre-registration TR students and lecturers within therapeutic radiography should therefore explore how best to address this need. Future research among TRs should also use purposive sampling to identify the views and health promotion practices among those who may not have a primary interest in the area of health behaviours among those LWBC.

CONCLUSION
In conclusion, while the majority of TRs delivered some advice on health behaviours as part of their role, advice was mainly on smoking and alcohol intake. Most believed in the value of this advice in managing radiotherapy and cancer-related side effects. Provision of advice was lowest for weight management, healthy eating and physical activity. The findings show a need for training among TRs in delivering advice on improving health behaviours among those LWBC, with TRs reporting a preference for online training in the postgraduate setting.

Twitter Nickola D Pallin @NickolaPallin

Acknowledgements The researchers are grateful to the health professionals who participated in the study.

Contributors NDP had the original idea for the study and obtained the funding with AF, RJB and KPJ. NDP developed the design of the study, acquired the data, analysed and interpreted the data, drafted and revised the article and approved the final manuscript submitted. AF provided behavioural science expertise, contributed to the development of the study design and the recruitment approach, analysed and interpreted the data, reviewed, edited and approved the final manuscript. RJB provided behavioural science expertise, contributed to the development of the study design and the recruitment approach, interpreted the data and reviewed, edited and approved the final manuscript.
KPJ provided oncology expertise and intellectual input into the recruitment approach, design and approved the final manuscript. LC provided therapeutic radiography and public health expertise which informed the development of the study design and reviewed the article and approved the final manuscript. NW provided radiography and public health expertise and contributed to the development of the study design, reviewed the manuscript for important intellectual content and approved the final manuscript submitted.

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Patient consent for publication Not required.

Ethics approval The methodology for this study was approved by the Human Research Ethics committee of University College London (reference 12945/001).

Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement All data relevant to the study are included in the article or uploaded as supplementary information. Data can be obtained by the corresponding author on reasonable request.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.

Author note This work was undertaken in the Department of Behavioural Science and Health, University College London, London, UK. The lead author was based there as a pre-doctoral research fellow.
Pembrolizumab-Associated Seronegative Myasthenia Gravis in a Patient With Metastatic Renal Cell Carcinoma

Seronegative myasthenia gravis is a rare but potential adverse effect of immune checkpoint inhibition. There have been few but an increasing number of cases reported in recent years, and early recognition is important for prompt diagnosis and management. Here, we describe the case of a 65-year-old male with metastatic renal cell carcinoma on pembrolizumab diagnosed with new-onset seronegative myasthenia gravis and review literature on its management.

Introduction
Programmed cell death protein-1 (PD-1) is an inhibitory cell-surface receptor on T-lymphocytes that binds the ligands PD-L1 and PD-L2 [1]. PD-L1 and PD-L2 are expressed on the surface of T and B cells, dendritic cells, macrophages, and also on non-hematopoietic tissues, such as lung, endothelial, pancreatic islet, neurons, and keratinocytes [2]. A variety of malignancies have been found to express PD-L1, including non-small-cell lung cancer, ovarian, breast, cervical, colon, pancreatic, melanoma, gastric, and lymphoma. When PD-L1 on the surface of a tumor cell binds to PD-1, T-cell immune responses are downregulated, which enables survival of the malignant clone. Therefore, disruption of this binding releases the inhibition on T-cell activity, augmenting antitumor immune responses and clinical control of the tumor. This is the mechanism of action of checkpoint inhibitors, such as pembrolizumab and nivolumab, monoclonal antibodies that block signaling through the programmed death (PD-1) pathway for the treatment of various cancers [3]. With increasing use of these agents, there has been a growing number of reported patients diagnosed with myasthenia gravis (MG) while on checkpoint inhibitors, some with mortality rates as high as 30% [4]. Associated checkpoint inhibitors include pembrolizumab, nivolumab, and ipilimumab.
To date, there have been few reported cases of pembrolizumab-induced MG, with manifestations including ocular MG, myopathy, and respiratory distress requiring mechanical ventilation [5][6][7][8][9]. Here, we describe a case of pembrolizumab-induced seronegative MG in a patient with metastatic renal cell carcinoma. Case Presentation A 65-year-old male with type 2 diabetes mellitus and hypothyroidism was initially diagnosed with clear cell renal carcinoma five years prior and was treated with left nephrectomy, radiation therapy, axitinib, and pazopanib. Due to refractory metastatic lesions involving lung and bone, the patient was started on pembrolizumab. After his third infusion of pembrolizumab, he developed bilateral ptosis, diplopia, and dyspnea. Initially, the visual changes were attributed to macular edema/thickening on ophthalmology evaluation. Due to persistent symptoms, he was empirically started on prednisone 60 mg daily at an outside facility with some improvement in ocular symptoms. Pembrolizumab was discontinued, although there had been a favorable response since its initiation, with marked reduction in the size and number of lung metastases. Two months later, the patient was admitted to our facility for worsening dyspnea, diplopia, and bilateral ptosis despite ongoing prednisone. On admission, the patient was hemodynamically stable with a negative inspiratory force of -60 cmH2O. He was found to have bilateral ptosis, difficulty maintaining upward gaze, and incremental bilateral upper extremity weakness on repetitive testing, consistent with MG. Recent CT chest demonstrated a 5-mm pleural-based nodule in the left lower lobe, but no thymoma, and no further imaging was obtained. Serology testing for MG was negative for muscle-specific kinase (MuSK) antibody, striated muscle antibody, and acetylcholine receptor (AChR) antibodies, including the binding, blocking, and modulating antibody panel. Voltage-gated calcium channel type P/Q antibody was also negative.
He was started on intravenous immunoglobulin (IVIG) 2 g/kg over a five-day course, prednisone 50 mg daily, and pyridostigmine as part of his treatment guided by neurology. Pembrolizumab remained held since the initial onset of symptoms. His symptoms improved with the IVIG course and he was subsequently transitioned to pyridostigmine 60 mg three times daily to maintain response. His respiratory status remained stable on 2-4 liters supplemental oxygen via nasal cannula throughout his hospital stay. The patient tolerated IVIG, steroids, and pyridostigmine without adverse effects and was discharged on pyridostigmine with outpatient neuromuscular clinic follow-up for electromyography and nerve conduction testing. Discussion MG is an autoimmune phenomenon characterized by fluctuating muscle weakness involving ocular, bulbar, respiratory, and limb muscles. Pathogenesis involves antibodies affecting the interaction between presynaptic nerve endings and postsynaptic muscle fibers at the neuromuscular junction (NMJ), but seronegative cases also occur. Serological testing is largely focused on the presence of antibodies against the AChRs and MuSK. Up to 15% of patients do not have detectable AChR antibodies, of which about 40% have detectable MuSK antibodies [10]. Seronegative MG occurs in a small fraction of patients who lack any detectable AChR or MuSK antibodies. Treatment of seronegative MG is similar to other subtypes, involving symptomatic treatment with acetylcholinesterase inhibitors (such as oral pyridostigmine), immunosuppression with glucocorticoids or azathioprine, and immunomodulatory interventions with IVIG or plasmapheresis [11,12]. Thymectomy is a treatment option in select patients, especially with thymoma-associated MG. Pyridostigmine improves acetylcholine interaction with its receptor in the synaptic cleft, thereby improving symptoms. It is often started at 60 mg orally three times a day with increases of 30-60 mg until symptoms improve.
Cholinergic side effects involving the gastrointestinal tract, sweating, and bradycardia are possible, especially when the dose increases above 300 mg/day. Glucocorticoids, such as prednisone, are used with two approaches: slow induction with low-dose prednisone (10 mg orally/day with slow up-titration by 10 mg every 5-7 days) in milder cases, or quick induction with high-dose prednisone (1-1.5 mg/kg/day), which can result in worsening symptoms in the initial days of treatment. Other immunosuppressant agents used less frequently include azathioprine, mycophenolate mofetil, cyclosporine, and methotrexate. In patients with symptomatic MG not responding to immunosuppression, or severe MG with impending or ongoing myasthenic crisis, additional treatment is necessary with IVIG (2 g/kg over 3-5 days) or plasmapheresis. In this case, the patient was symptomatic despite ongoing treatment with high-dose steroids, which prompted additional treatment with IVIG. Given the patient's clinical presentation consistent with NMJ involvement and lack of central nervous system (CNS) or distal peripheral nerve involvement, further serologic assessment beyond the NMJ for concurrent paraneoplastic syndrome was deferred. Paraneoplastic neurologic syndromes have been associated with various onconeural antibodies resulting from cross-reactivity with tumor [13]. They can target the CNS (limbic encephalitis, cerebellar ataxia), peripheral nervous system (sensory neuropathy), or NMJ (Lambert-Eaton syndrome, MG). Though MG can independently manifest as a paraneoplastic syndrome, the association with de novo diagnosis after starting pembrolizumab suggests a likely checkpoint-inhibitor-associated presentation of MG. PD-1 inhibitors have been an important addition to treatment options in the field of oncology, but they are associated with various immune-related adverse effects including neurologic, gastrointestinal, pulmonary, endocrine, and other organ systems [14].
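The regimens described above (IVIG 2 g/kg split over several days; pyridostigmine up-titrated in 30-60 mg steps with caution above 300 mg/day) are simple weight- and ceiling-based arithmetic. The sketch below works through them; the helper functions and the 70 kg body weight in the usage note are purely illustrative assumptions, not values from the report.

```python
def ivig_course(weight_kg, dose_g_per_kg=2.0, days=5):
    """Split a weight-based IVIG course (2 g/kg over 3-5 days, as
    described above) into equal daily doses, in grams."""
    total_g = weight_kg * dose_g_per_kg
    return [total_g / days] * days

def pyridostigmine_titration(start_mg=60, step_mg=30,
                             doses_per_day=3, ceiling_mg_per_day=300):
    """List per-dose amounts, up-titrating by step_mg while the daily
    total stays at or below the ceiling above which cholinergic side
    effects (GI upset, sweating, bradycardia) become more likely."""
    steps, dose = [], start_mg
    while dose * doses_per_day <= ceiling_mg_per_day:
        steps.append(dose)
        dose += step_mg
    return steps
```

For an assumed 70 kg patient, `ivig_course(70)` gives five 28 g doses; with 30 mg increments, the titration stops at 90 mg three times daily (270 mg/day), since the next step would exceed 300 mg/day.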
PD-1 modulates against autoreactivity, and its inhibition may lead to increased autoantibody production. This mechanism may be similar to that of other checkpoint inhibitors (CTLA-4), which have shown enhanced T-cell response and increased anti-AChR antibody production [15]. No clear clinical risk factors have been identified for PD-1 inhibitor-associated MG. However, cancer patients receiving checkpoint inhibitor therapy with pre-existing autoimmune diseases have been shown to have exacerbation or immune-related adverse event rates as high as 75% [16]. In 2017, Makarious and colleagues reported 23 cases of checkpoint-inhibitor-associated MG, of which 72.7% were de novo with no prior history; the remainder were exacerbations [4]. MG-related mortality was reported to be as high as 30%. This highlights the importance of recognizing early manifestations of MG, frequent respiratory assessments, and collaborative management with neurology as the use of checkpoint inhibitors continues to grow in oncology. Conclusions In the setting of PD-1 inhibitory treatment, MG can be life-threatening. Early recognition through detailed history and physical examination should prompt early serologic testing for diagnosis. Prompt treatment, including discontinuation of the PD-1 inhibitor, steroids, acetylcholinesterase inhibitors, and IVIG, may be lifesaving. Management of PD-1 inhibitor-related MG requires close collaboration with neurology and oncology team members. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work.
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Caching for where and what: evidence for a mnemonic strategy in a scatter-hoarder Scatter-hoarding animals face the task of maximizing retrieval of their scattered food caches while minimizing loss to pilferers. This demand should select for mnemonics, such as chunking, i.e. a hierarchical cognitive representation that is known to improve recall. Spatial chunking, where caches with the same type of content are related to each other in physical location and memory, would be one such mechanism. Here we tested the hypothesis that scatter-hoarding eastern fox squirrels (Sciurus niger) are organizing their caches in spatial patterns consistent with a chunking strategy. We presented 45 individual wild fox squirrels with a series of 16 nuts of four different species, either in runs of four of the same species or 16 nuts offered in a pseudorandom order. Squirrels either collected each nut from a different location or collected all nuts from a single location; we then mapped their subsequent cache distributions using GPS. The chunking hypothesis predicted that squirrels would spatially organize caches by nut species, regardless of presentation order. Our results instead demonstrated that squirrels spatially chunked their caches by nut species but only when caching food that was foraged from a single location. This first demonstration of spatial chunking in a scatter hoarder underscores the cognitive demand of scatter hoarding. Introduction Scatter-hoarding animals face the formidable challenge of creating diverse, ephemeral cache distributions whose location they can remember accurately enough to retrieve later. It has been well-established that scatter-hoarding animals can remember the locations of caches they make (e.g. [1][2][3][4]), but they must also remember the contents of a cache, such as a black-capped chickadee (Poecile atricapillus) remembering whether a seed is shelled or unshelled [5].
Scatter hoarders, such as the western scrub jay (Aphelocoma coerulescens), can also remember when a cache was made, a form of episodic-like memory [6]. Hierarchically organizing caches by content should theoretically improve a scatter hoarder's ability to accurately recall cache locations. This cognitive process is known as chunking, where a chunk is a collection of items that have commonalities and discriminability from other chunks [7]. Spatial chunking has already been demonstrated to improve spatial recall in laboratory rats (Rattus norvegicus) retrieving three types of food rewards in a 12-arm radial maze. When food items of a certain type were consistently found in the same locations, the rats retrieved the rewards in order of food preference [8,9], similar to the behaviour of the black-capped chickadees [5]. The rats also retrieved preferred food items with fewer arm visits under these conditions [8]. If item type was switched but chunk integrity was maintained (i.e. food A was replaced with food B), rats retrieved preferred items in fewer visits compared to their performance when food type was randomly redistributed [9]. Such a hierarchical memory representation has also been demonstrated in songbirds in their organization of song syllables to be learned (e.g. [10]). We tested this hypothesis in wild fox squirrels, which cache and consume the large seeds of several tree species in their native eastern deciduous forest [11]. Fox squirrels and closely related eastern grey squirrels (S. carolinensis) respond adaptively to each nut that is encountered, adjusting eating and cache decisions to its relative value, the abundance of food, the type of food, and the perception of risk of pilfering [1,[12][13][14]. Squirrels cache preferred foods farther from the source (e.g. [13,14]) and at lower densities [15]. Finally, squirrels have been shown to encode and recall the spatial locations of their caches [16,17].
We tested the hypothesis that a scatter-hoarding fox squirrel employs a chunking strategy by allowing them to collect a series of four species of tree seeds, where both the location of the food source and the serial order of the nuts collected was systematically varied. We varied the complexity of the series to increase the cognitive load and measured the spatial overlap between caches of different nut species. We defined chunking as the creation of exclusive nut-species cache distributions that could not be explained by any other heuristic and predicted that chunking would vary with cognitive load. Material and methods The study was conducted on the University of California at Berkeley campus, with a population of marked, free-ranging fox squirrels, as described in earlier studies [12,13]. Forty-five individual squirrels (17 female and 28 male) participated in the study. We collected data from 10.00 to 16.00 between June 2012 and April 2014, avoiding the fall season when food is abundant due to tree mast. During the fall, squirrel caching behaviour is most stereotyped, and least variable [13]. For the eleven squirrels who participated in multiple sessions, the order of condition was randomized and predetermined, and at least one week passed between sessions. During each session, a squirrel was given a series of 16 individual nuts, in the shell, of four species that vary in weight, size and nutritional content [18]: almonds (Prunus dulcis, designated as A); hazelnuts (Corylus americana, H); pecans (Carya illinoinensis, P); and walnuts (Juglans regia, W). Previous studies demonstrated that squirrels show differential responding when caching almonds, hazelnuts and walnuts in comparison to other nut species, and in general show the ability to discriminate between different nut qualities, such as weight and perishability [12,13,19,20]. Each nut was weighed and assigned a unique code in order to include weight in analyses. 
Squirrels were given one nut per trial, either in runs of four (RUNS; four nuts of the same species, e.g. AAAAHHHHPPPPWWWW) or in a pseudorandom order (PSEUDO; 16 nuts, no species was given twice in a row, e.g. AWHPWHAPHWPAWHPA). If a squirrel ate a nut instead of caching, the nut was replaced with one of the same type until the squirrel cached again. Squirrels sourced nuts under one of two spatial conditions: Multiple Locations (MULTI), where the squirrel would be given the next nut in the location it had just cached, and Central Location (CEN), where the squirrel had to return to a single location to collect the next nut. One experimenter served as the feeder, offering the squirrel each nut in the sequence. A second experimenter recorded the starting location using a handheld GPS navigator (Garmin eTrex H or 10). A third experimenter recorded the location of the cache, which in the MULTI source was also the next starting location. The first experimenter gave the squirrel the next nut. The second and third experimenters then alternated recording cache locations, to allow for an adequate number of GPS waypoints to be recorded for each cache. To minimize observer effects, experimenters maintained a distance of 5-10 m from the squirrel. Data analysis Least-squares mixed models and MANOVA tests were performed in JMP Pro 12.0 (SAS, Cary, NC, USA). We included the factors Order (Runs/Pseudorandom) and Source (Multiple/Central), as well as their interaction. We also included sex as an independent variable, and squirrel identity as a random effect in all models to account for individual variability and repeated measures. As weight and nut type were highly correlated, with all nut types having significantly different weights from each other (F(3, 826.9) = 4789.05, p < 0.001), weight was not included in the models. Follow-up pairwise comparisons were conducted using Tukey's HSD. Geographical data were analysed using ArcGIS version 10.3 (ESRI, Redlands, CA, USA).
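The two presentation orders can be made concrete with a small generator. This is a sketch of one way to satisfy the PSEUDO constraint (16 nuts, four of each species, no species offered twice in a row), not the authors' actual randomization procedure:

```python
import random

SPECIES = "AHPW"  # almond, hazelnut, pecan, walnut

def runs_sequence(per_species=4):
    """RUNS condition: four nuts of the same species in a row."""
    return "".join(s * per_species for s in SPECIES)

def pseudo_sequence(per_species=4, seed=0):
    """PSEUDO condition: no species offered twice in a row.
    Greedy random draw, restarting on the rare dead end where only
    the last-offered species remains in the pool."""
    rng = random.Random(seed)
    while True:
        remaining = {s: per_species for s in SPECIES}
        seq = []
        for _ in range(len(SPECIES) * per_species):
            options = [s for s in SPECIES
                       if remaining[s] and (not seq or s != seq[-1])]
            if not options:
                break  # dead end: restart the outer loop
            pick = rng.choice(options)
            seq.append(pick)
            remaining[pick] -= 1
        if len(seq) == len(SPECIES) * per_species:
            return "".join(seq)
```

Any sequence the generator returns has four nuts of each species and no adjacent repeats, matching the constraint stated above.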
Based on the error between known distances and distances calculated by GPS receivers for 75 locations (mean error = 1.3 m, 95% CI (1.04, 1.65)), we created a buffer (1.65 m) around each waypoint that likely held the true cache. Four polygons were created including the waypoints for each squirrel's caches by nut species and alternately by sequence (the first four caches, second four caches, and so forth). We calculated the intersection of these polygons, measuring areas where there was no overlap, or overlap between two, three or four nut types or sequences of four caches. Overlap was assessed as per cent of each squirrel's total cache area during a session. Because the overlap effect could also arise from other cache heuristics, such as a squirrel sequentially using different locales for the next few caches, we also examined how much squirrels overlapped their caches by sequence rather than by species. Squirrels overlapped caches less by sequence when foraging in MULTI (F(1, 48.51) = 19.90, p < 0.001, r = 0.54; figures 1 and 2). There was no effect of Order, and no interaction effect between Source and Order. We compared the differences between overlap by nut species or sequence by Source for Pseudorandom data only. Results suggested an effect of species versus sequence, by Source, and their interaction (F(3, 24) = 25.62, p < 0.001). Squirrels overlapped caches more in the Multiple Source condition than Central Source (t(54) = 3.13, p = 0.003), and more by Species than Sequence (t(54) = −4.17, p < 0.001). We compared the difference between the Central and Multiple Sourced conditions when squirrels received nuts in Runs. By necessity, the data for nuts and sequence were the same. A non-directional t-test showed more cache overlap when squirrels received nuts from a Central location (t(23) = −3.37, p = 0.003). See figures 1 and 2.
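The buffer-and-intersect overlap measure can be approximated without GIS software. The sketch below rasterizes the 1.65 m cache buffers on a grid and reports shared area as a per cent of total buffered cache area, mimicking the polygon-intersection logic described above (the authors used ArcGIS); the coordinates in the usage note are illustrative, in metres.

```python
import math

GPS_BUFFER_M = 1.65  # upper 95% CI of the GPS error reported above

def overlap_percent(caches_by_species, buffer_m=GPS_BUFFER_M, step=0.25):
    """caches_by_species maps nut species -> list of (x, y) cache
    coordinates in metres. Returns the per cent of the total buffered
    cache area covered by two or more species, using a grid
    approximation of the polygon intersection."""
    pts = [p for coords in caches_by_species.values() for p in coords]
    x0 = min(x for x, _ in pts) - buffer_m
    x1 = max(x for x, _ in pts) + buffer_m
    y0 = min(y for _, y in pts) - buffer_m
    y1 = max(y for _, y in pts) + buffer_m
    covered_any = covered_multi = 0
    nx, ny = int((x1 - x0) / step) + 1, int((y1 - y0) / step) + 1
    for i in range(nx):
        for j in range(ny):
            gx, gy = x0 + i * step, y0 + j * step
            # count how many species have a buffered cache covering
            # this grid point
            n_species = sum(
                any(math.hypot(gx - cx, gy - cy) <= buffer_m
                    for cx, cy in coords)
                for coords in caches_by_species.values())
            covered_any += n_species > 0
            covered_multi += n_species > 1
    return 100.0 * covered_multi / covered_any
```

Two caches of different species at the same point give 100% overlap; caches 100 m apart give 0%.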
Finally, we also tested whether this separation of caches by nut species when foraging centrally could be achieved by a simpler heuristic, such as adjusting distance travelled from the food source based on nut size. For example, squirrels could be differentiating between the large nut species (pecans, mean weight 8.16 g; walnuts, 11.75 g) and the smaller nut species (hazelnuts, 3.22 g; almonds, 3.44 g). We found that the distributions of large nuts overlapped with each other more than the distributions of small nuts (52.33% versus 11.59%) in order RUNS (F(1, 27) = 5.26, p = 0.030, r = 0.38). In order PSEUDO, the overlap between large nuts (23.1%) and small nuts (21.29%) was similar. Discussion The present study provides the first evidence that a scatter hoarder could employ spatial chunking during cache distribution as a cognitive strategy to decrease memory load and hence increase accuracy of retrieval. When foraging from a central location, squirrels showed little overlap of caches by nut species, regardless of the order in which different food types were presented (figure 2d,e). Squirrels may thus be able to organize caches hierarchically by cache contents, i.e. spatially chunk cache locations, regardless of the order in which they have encountered different nut species under the natural conditions of a species-diverse deciduous forest. When foraging from multiple locations, by contrast, squirrels appeared to be using a different heuristic, caching to prevent overlap with areas they had previously cached in during that session rather than organizing caches by nut species (figure 2a,c). These observations suggest that when lacking the cognitive anchor of a central food source, fox squirrels use a different and perhaps simpler heuristic, to simply avoid the areas where they had previously cached.
Another explanation for the breakdown of chunking in this condition is not a constraint on memory capacity but instead an adaptive response to the higher energetic costs of spatial chunking when similar items are farther apart in space and time. This is because when encountering nuts pseudo-randomly, from multiple locations, the different items of one species might be more spatially dispersed than the same items encountered at a central source or encountered in runs. Because seed-bearing trees provide a superabundant food source, a caching squirrel that discovers this source may treat the chosen tree as a central place forager would, bringing seeds in a preferred direction towards the centre of its home range. When eating, squirrels often travel some distance from the food source, particularly in the face of competition [13,21,22]. But whether eating or caching, the distance travelled away from the food source will be influenced by the opportunity cost of leaving an ephemeral food patch to one's competitors for a long period of time. If the distance travelled is significant, returning to a central food source could be costlier than foraging from a new food source near where the squirrel just cached or ate. Thus, a tree squirrel may face a constant series of decisions: whether to forage from one site or many, similar to the multiple location foraging condition in our study. Our results may reflect the effects of these kinds of foraging decisions and show that squirrels may adjust behaviours dependent on how food is acquired. Finally, squirrels may be chunking by item value, and this value may be derived from other factors as well as nut species, such as the weight of that individual nut. Other studies have shown that squirrels modulate the distance they will carry a nut of a certain species, carrying larger individual nuts significantly farther than smaller individual nuts of the same species (e.g. [20,23]).
Our observation that squirrels, under certain conditions, categorize and respond to a nut as either large or small, also suggests that they could be organizing caches by even more subtle hierarchical structures than simply nut species. Ethics. This research was conducted under a protocol approved by the University of California at Berkeley Animal Care and Use Committee.
The Photoisomerization Pathway(s) of Push–Pull Phenylazoheteroarenes Abstract Azoheteroarenes are the most recent derivatives targeted to further improve the properties of azo-based photoswitches. Their light-induced mechanism for trans–cis isomerization is assumed to be very similar to that of the parent azobenzene. As such, they inherited the controversy about the dominant isomerization pathway (rotation vs. inversion) depending on the excited state (nπ* vs. ππ*). Although the controversy seems settled in azobenzene, the extent to which the same conclusions apply to the more structurally diverse family of azoheteroarenes is unclear. Here, by means of non-adiabatic molecular dynamics, the photoisomerization mechanism of three prototypical phenylazoheteroarenes with increasing push–pull character is unraveled. The evolution of the rotational and inversion conical intersection energies, the preferred pathway, and the associated kinetics upon both nπ* and ππ* excitations can be linked directly with the push–pull substitution effects. Overall, the working conditions of this family of azo-dyes are clarified and a possibility to exploit push–pull substituents to tune their photoisomerization mechanism is identified, with potential impact on their quantum yield. Introduction Molecular photoswitches can alter their chemical/biological functions by undergoing conformational, configurational, or structural changes upon application of light. Although nature exploits them to trigger key processes in living organisms, the synthetic analogs are increasingly used as memory devices, [1][2][3] actuators, [4] sensitizers, [5] and sensors. [6] Dyes based on the azo group are among the most investigated, the archetype being azobenzene (AB). [7,8] Its cis-trans photoisomerization is highly appreciated owing to its significant structural change.
[4] Also, AB has multiple functionalization sites, which led to the development of derivatives exhibiting improved thermal stability and visible light absorption. [9] All these advances fostered its application in the nascent field of photo-pharmacology. [10] Azoheteroarenes are the most recent derivatives investigated in the quest for better azo-based photoswitches. [11] Their main advantage with respect to AB relies upon their greater electronic and structural diversity, which allows for interesting functionality in their backbone. Examples are the possibility to achieve T-shaped Z-isomer structures with longer half-life times and better photostationary distributions, [12] the modulation of the hydrazone tautomerism to further tune their kinetics, [13] or the addition of metal-coordinating sites, which makes them ideal candidates to trigger spin transitions. [14,15] In AB, the photoswitch is triggered upon excitation to either of the productive nπ* and ππ* states (typically, S1 and S2). For decades, there has been controversy about its isomerization mechanism [7,16] with conflicting experimental [17][18][19][20][21] and theoretical [22][23][24][25][26] reports. The current consensus is that the photoisomerization occurs once the molecule is in S1, through an S1/S0 conical intersection (CoIn) with either rotational or inversion character. Specifically, it has been proposed that these CoIn are the extremes of a crossing seam connecting S1/S0, with rotation-like (inversion-like) structures at its lower (higher) energy end. [24,27] The preferred pathway depends on the excitation energy, [28] solvent, [28] pressure, [29] temperature, [30,31] and has an impact on the quantum yield: AB photoisomerizes with a higher quantum yield when excited to the nπ* than to the ππ* state, [30] which is attributed to the increased accessibility of the rotational pathway. [7,32,33] Azoheteroarenes are expected to follow a similar mechanism, but the picture is much less complete.
[11] So far, their photoisomerization has been investigated in systems based on indole, [3] pyridine, [34,35] pyrimidine, [36] and thiazole. [37] These few cases suggest that the structural diversity of azoheteroarenes creates a level of complexity similar to that of the AB derivatives. A major limitation of these investigations is that the experiments do not provide direct information on the relaxation pathways, whereas computations are often limited to exploring predefined regions of the ground- and low-lying excited-state potential energy surfaces (PES). [3,[34][35][36][37] Although this picture is informative, it is still insufficient to ascertain the effect of temperature and excitation energy on the chosen pathway, and hence on the photoisomerization quantum yield and kinetics. These are important aspects that deserve deeper computational analyses, which more closely mimic the actual experimental conditions. Results and Discussion In this computational work, we analyze the E-to-Z photoisomerization of three phenylazoheteroarenes: the unsubstituted 3-pyrazole (1) and 2-imidazole (2), and a derivative of the latter (2a), featuring DPO (2,5-diphenyl-1,3,4-oxadiazole) and thiazine as the phenyl and heteroarene substituents, respectively (see Figure 1). Heteroarenes have an increased push-pull character, by virtue of the stabilization of a resonant form. [11] Such stabilization is progressively stronger in compounds 1, 2, and 2a, which has important consequences on their thermal stability, and on the energy and nature of their productive nπ* and ππ* transitions. [38] Specifically, their properties are systematically found at the edge of the explored values derived from the screening of 512 phenylazoheteroarenes. This can be taken as an indication that these compounds are representative of the structural and electronic diversity present within phenylazoheteroarenes.
What remains to be known is the impact of the increase in push-pull character on the photoisomerization mechanism. A literature survey of push-pull AB derivatives reveals conflicting reports, with computational PES analyses favoring rotation as the single relaxation channel [39] (B3LYP/6-31G* level), or rotation and inversion upon S1 and S2 excitation, respectively [40] (CAS(6,5)/4-31G level), and experiments (fluorescence [41] and absorption spectroscopy [42,43]) favoring a unimodal relaxation through rotation. Yet, reported works on the topic remain scarce, and extrapolation to the realm of azoheteroarenes uncertain. Such a generalization is especially relevant given the possibility to use push-pull substituents to tune the photoisomerization quantum yield of azo-dyes. With this in mind, our goal is to identify the isomerization pathways, and the associated kinetics, of 1, 2, and 2a upon excitation to the productive nπ* and ππ* states. The initial assessment of the vertical nπ* (S1) and ππ* (S2) excitations of 1-2a at the respective E-isomer minima (Figure 1) shows no significant difference between 1 and 2, whereas the ππ* is significantly redshifted in 2a by the increased charge-transfer character from the thiazine to the azo group. [38] The shift is also clearly visible in the absorption spectra computed at the same level (i.e., the ωB97X-D/6-31G(d) level, see Figure 2 and Computational Details), which includes the conformational and vibrational transitions with the Nuclear Ensemble (NE) approach. [44] The ground, nπ*, and ππ* states are connected with each other through CoIn. At the CASSCF (i.e., SA3-CASSCF/6-31G*) level, three CoIn were identified for (unsubstituted) phenylazoindole photoswitches.
[13] These are: (i) CoInA, characterized by a CNNC torsion angle close to 90° (corresponding to the rotation), (ii) CoInB, which involves quasi-linear NNC angles (characteristic of an inversion), and (iii) CoInC, which features an intermediate torsion, a longer N=N distance, and CNN angles close to 100°. The former two (CoInA and CoInB) connect the PES of the ground (GS) and nπ* states, whereas the latter connects the nπ* and ππ* surfaces. CoInA was found below the nπ* excitation energy at Franck-Condon (FC), whereas CoInB is higher, and hence only accessible after excitation to ππ* or above. Accordingly, it was proposed that excitation to nπ* leads to CoInA, whereas excitation to ππ* leads to CoInC (ππ*/nπ*) and to CoInB (nπ*/GS) before reaching the GS. [13] The former pathway would lead to a higher quantum yield (QY) than the latter. [13] Such characterization is very similar to what is known for AB, [24,27,45] except for the proposed non-planar (i.e., twisted geometry) CoInC: the ultrafast decay from ππ* to nπ* in AB [19,20,46] suggests a structure close to the planar FC geometry instead. The CoInA and CoInB of 1, 2, and 2a were characterized here with the Tamm-Dancoff approximation (TDA) and ωB97X-D (see the Supporting Information for complementary ADC(2) computations) by using a static CoIn search method (see Computational Details). [47] Their structures feature the characteristic CNNC torsion (close to 90°) and NNC (quasi-linear) bending, respectively (see Table S2.1 in the Supporting Information). The energy of CoInA decreases by about 0.3 eV with the increase in push-pull character (see Figure 1). We associate this shift with the elongation of the N=N bond (see Table S2.1 in the Supporting Information), as a longer N=N distance facilitates the CNNC rotation towards CoInA. As expected (see above), CoInA is found below the S1 excitation energy at FC. In turn, CoInB lies at about 3.3 eV from the respective E-minima, and slightly above the S1 excitation energy at FC.
An exception is 2a, for which CoIn_B and S1 are almost degenerate. That suggests the opening of the inversion pathway upon S1 excitation, in contradiction to the expected mechanism. Overall, the two sets of CoIn reported herein (A and B) present very similar structural features to those reported in the existing literature for AB [22] and azoheteroarene derivatives. [13,35] None of the levels tested herein (i.e., ωB97X-D, ADC(2)) were able to locate the CoIn_C proposed in Ref. [13], not even for the same phenylazoindole. Although the identification of CoIn_C may be an artefact of SA3-CASSCF/6-31G* (see below), [48] TDA-ωB97X-D describes correctly the PES regions that correspond to the two dominant photoisomerization pathways. As such, we are confident that more sophisticated simulations based on molecular dynamics can be pursued at this level (see Computational Details). A swarm of Non-Adiabatic Molecular Dynamics (NAMD) trajectories based on Tully surface hopping was initiated in the nπ* and ππ* states for 1, 2, and 2a, and propagated for a maximum of 1000 fs, or until an S1–S0 energy gap below 0.1 eV was reached (see Computational Details). In the latter case, it is assumed that population transfer to the ground state will occur, leading to either of the two minima (E or Z). Note that although the termination criterion does not presuppose the character of S1 (nπ* or ππ*), in practice S1 is the nπ* state for all terminated trajectories. In general, we favor the nπ*/ππ* nomenclature to specifically refer to these states, and use the S1–S2 nomenclature when the state character is not relevant, only the order. Figure 3 shows the structures at which state crossings occur based on the two main variables: the CNN angle and the CNNC torsion measured as the deviation from planarity (i.e., |CNNC−180|).
We chose this CNNC metric as a result of 1–2a having no particular preference towards a clockwise or counterclockwise rotation about the CNNC dihedral, which results in a similar distribution of positive and negative CNNC dihedral angles. This is in contrast to some reported heteroarenes featuring a stereospecific relaxation mechanism. [49,50] In 1–2a, the relaxation from the ππ* to the nπ* state occurs at flat geometries similar to the E-isomer minimum (see gray circles). As mentioned before, flat geometries have been invoked to explain the ultrafast ππ*→nπ* decay in AB. [19,46] This point is thus reinforced by our simulations, and its validity seems to extend to azoheteroarenes, albeit in contradiction with the proposed non-planar CoIn_C of phenylazoindoles (see also Section S2 in the Supporting Information). [13] The CoIn connecting the nπ* and ground states combine both CNN inversion and CNNC torsion (see colored circles). In fact, the distribution of CoIn describes a crossing seam (as in AB [24,27]), with CoIn_A- and CoIn_B-like structures at the extremes (see Figure 3, Figure S3.4, and Table S3.2 in the Supporting Information). CoIn_B being higher in energy (see Figure 1), the inversion pathway is more often (but not exclusively) followed upon excitation to S2, whereas excitation to S1 predominantly leads to a rotational mechanism (see Figure 1). There are, however, many trajectories in which CoIn_B-like (CoIn_A-like) structures are reached upon excitation to S1 (S2), which indicates that the excitation energy does not completely discriminate between photodeactivation pathways. This point might, in fact, be at the heart of the controversy about the dominant mechanism in AB and azoheteroarene derivatives, and their strong dependency on external factors such as temperature, pressure, or solvent.

Figure 3. Geometries at which state crossings occur in 1, 2, and 2a. The CNNC angle is evaluated as the deviation from planarity, with 0° corresponding to the E-isomer, and 90° corresponding to CNNC of either +90° or −90°. In color, the geometries that reached an S1/S0 CoIn before the time limit (1 ps); the color code indicates trajectories initiated in S1 (red) and S2 (green). The black stars indicate the geometry of the CoIn obtained from static computations. In dark gray, all geometries at which a hopping between S1 and S2 occurred in the NAMD trajectories. All computations have been performed at the ωB97X-D/6-31G(d) level, and the quasi-degeneracy along the crossing seam is verified by additional CC2 and ADC(2) computations with the TZVP basis set (see Section S3 in the Supporting Information).

Finally, the vast majority of CoIn in 2a are rotational (see Figure 3). The reason is that the nπ* and GS PES are no longer quasi-degenerate in the region of the crossing seam associated with the inversion pathway (see Figure S3.4 in the Supporting Information). This explains why push-pull derivatives favor the rotational over the inversion pathway. This outcome could not have been anticipated from the energy maps in Figure 1. The energies of S1 and CoIn_A for 2a are similar to those of 1 and 2, and the lower S2 excitation energy is counterbalanced by a more stable and, thus, equally accessible CoIn_B. The low ratio of inversion-like CoIn in 2a is thus not suggested by the static picture. In addition to the structural aspects at the crossing points, we analyze the time at which they are reached in the NAMD simulations. Overall, the CoIn connecting the nπ* and ground states in 1, 2, and 2a are reached in approximately 500 fs, with significant differences depending on the compound and excitation energy (see t_CoIn in Table 1). The main steps are (i) the relaxation from the ππ* to the nπ* state, and (ii) the change in CNN and CNNC angles necessary to reach the crossing seam (see discussion above).
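The two geometric descriptors used in Figure 3 are straightforward to evaluate from Cartesian coordinates. The sketch below (plain Python, with a hypothetical atom ordering C-N=N-C) computes the signed CNNC dihedral and folds it into the deviation-from-planarity metric, so that 0° corresponds to the planar E-isomer and 90° to a CNNC of either +90° or −90°; it is an illustration, not the analysis code used in this work.

```python
import math

def dihedral_deg(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) for atoms ordered p0-p1-p2-p3."""
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    b0, b1, b2 = sub(p0, p1), sub(p2, p1), sub(p3, p2)
    norm = math.sqrt(dot(b1, b1))
    b1 = (b1[0] / norm, b1[1] / norm, b1[2] / norm)
    # components of b0 and b2 perpendicular to the central N=N bond
    v = tuple(b0[i] - dot(b0, b1) * b1[i] for i in range(3))
    w = tuple(b2[i] - dot(b2, b1) * b1[i] for i in range(3))
    return math.degrees(math.atan2(dot(cross(b1, v), w), dot(v, w)))

def planarity_deviation(cnnc_deg):
    """Deviation from E-planarity: 0 deg for CNNC = 180, 90 deg for +/-90."""
    wrapped = (cnnc_deg + 180.0) % 360.0 - 180.0  # fold into (-180, 180]
    return 180.0 - abs(wrapped)
```

Treating +90° and −90° identically is what makes the metric insensitive to the clockwise/counterclockwise ambiguity discussed above.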
The mechanism and kinetics of each individual step are better understood by considering four characteristic times (see Scheme 1): t_CoIn is the time until reaching a CoIn, t_S1 and t_S2 are the times spent in the S1 and S2 states, and t_Last is the time required to reach a CoIn after the last crossing to S1. The difference between t_S1 and t_Last (Δt in Table 1) reveals whether the system is retained in a region of the S1 PES with frequent crossings between the S1 and S2 states before it reaches a CoIn (see Computational Details). Both t_CoIn and t_S2 depend on the relative energy separation between the nπ* and ππ* states; in general, t_S2 decreases together with the energy gap. The stronger the push-pull character of a system, the more redshifted is the ππ* state, resulting in a smaller gap and a faster decay from the ππ* to the nπ* state (see t_S2 in Table 1). It is, however, interesting to observe that the overall photoisomerization process (t_CoIn) does not follow the same trend as t_S2, and this is for two reasons. In 1 and 2, the energy gap between the nπ* and ππ* states is sufficiently large so that the relaxation from the ππ* state is a one-way process. In other words, the trajectories proceed undisturbed along the nπ* PES towards a CoIn. In 2a, however, the two PES overlap more often, which leads to an increased probability of hopping back to the ππ* state, effectively delaying the evolution towards a CoIn. This is quantified by Δt in Table 1, and can also be verified in the time evolution of the S2 population (Figure S3.1 in the Supporting Information). The second reason for the slower photoisomerization of 2a is that its trajectories need longer times to reach a CoIn once on the nπ* surface, as quantified by t_Last (Table 1). [51] Such a behavior is surprising if one considers that the energetic profile summarized in Figure 1 places the S1 excitation significantly higher in energy than CoIn_A.
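As an illustration of how the four characteristic times defined above could be extracted, the following sketch assumes a hypothetical per-step record of the active state for one trajectory (2 = S2, 1 = S1), ending at the step where the CoIn criterion fired; the actual processing of the NAMD output may differ.

```python
def characteristic_times(records, dt):
    """Extract (t_CoIn, t_S1, t_S2, Delta-t) from one trajectory.

    `records` is a hypothetical per-step list of the active state index
    (2 = S2, 1 = S1), ending when the S1-S0 gap flagged a CoIn;
    `dt` is the classical time step in fs.
    """
    t_coin = len(records) * dt            # total time until reaching the CoIn
    t_s1 = records.count(1) * dt          # total time spent in S1
    t_s2 = records.count(2) * dt          # total time spent in S2
    # index of the last S2 -> S1 crossing (0 if the trajectory never left S1)
    last_cross = 0
    for i in range(1, len(records)):
        if records[i] == 1 and records[i - 1] == 2:
            last_cross = i
    t_last = (len(records) - last_cross) * dt
    return t_coin, t_s1, t_s2, t_s1 - t_last   # last entry is Delta-t
```

A trajectory that never revisits S2 after its first crossing gives Δt = 0, whereas frequent S1/S2 re-crossings (as discussed for 2a) yield a large Δt.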
Along this line, the PES of the nπ* state shows no energy barrier between the FC region and CoIn_A (see Figure S2.1 in the Supporting Information). The actual explanation for the longer t_Last in 2a is rooted in the CoIn structure distribution within the crossing seam. Generally, the trajectories that reach a CoIn with a pronounced rotational character display slower kinetics than those with a marked tendency toward inversion (see Figure S3.2 in the Supporting Information). This difference is a manifestation of the distinct timescales associated with the rotation and inversion towards the crossing seam. In particular, the inversion-like region of the seam is explored readily after populating S1. If a CoIn is not accessed therein, as in 2a, the system then evolves toward the rotation-like region, exhibiting slower photoisomerization kinetics. This perspective is in agreement with what has been characterized computationally for phenylazoheteroarenes, [37] namely that the evolution along the nπ* surface implies an initial flattening of the CNN angle (i.e., inversion), followed by the CNNC torsion (i.e., rotation). To summarize, push-pull derivatives undergo a faster decay from S2 to S1 but a slower evolution from S1 to S0 because of both the hopping back to ππ* and the longer time needed to reach the rotational CoIn. The clear preference of 2a for the rotational pathway potentially suggests that the increase in push-pull character may result in a photoisomerization process with a higher quantum yield (see above).

Table 1. (Top) Ratio (R) of trajectories reaching a CoIn before the time limit (1 ps). (Bottom) The characteristic times described in the main text and in Scheme 1 (in fs). Confidence intervals associated with these values, as well as their convergence with the number of trajectories, are given in Section S3.3 (in the Supporting Information).

Scheme 1. Schematic representation of the four characteristic times discussed in the main text to analyze the NAMD. These are computed as an average for each set of trajectories representing the S1 or S2 excitation of 1–2a.

Conclusion

We have characterized the static and dynamic photoisomerization pathways of three heteroarene derivatives (1–2a). The static CoIn search reports rotation-like (CoIn_A) and inversion-like (CoIn_B) conical intersections connecting the nπ* and ground states at around 2.3 and 3.3 eV above the E-minima, respectively. The results from the NAMD describe a crossing seam connecting CoIn_A and CoIn_B, similar to what has been reported for AB. [24,27] The decay from the ππ* to the nπ* state is more controversial. The non-planar CoIn_C located [13] by CASSCF for phenylazoindoles could not be identified in 1–2a (see Section S2 in the Supporting Information). However, the ultrafast (ca. 100 fs) relaxation observed from the ππ* to the nπ* state, which proceeds at planar geometries close to the E-isomer minimum, is in agreement with the literature on AB (100–300 fs). [19,52,53] The existence of CoIn_C may thus be an artefact of SA3-CASSCF/6-31G*. With the increase in push-pull character, the ππ* state of the heteroarene is progressively redshifted, leading to a stronger overlap with the nπ* state, which speeds up the decay towards nπ*. Once in the nπ* state, a further 200–600 fs are necessary to reach the crossing seam connecting the nπ* and ground states, close to the values reported for AB (ca. 500–1000 fs). [7,18,19,21] The actual amount of time depends on which region of the crossing seam is accessed, with the rotational mechanism displaying a slower nπ*-to-GS relaxation. The unsubstituted heteroarenes (1 and 2) exploit both pathways, with rotation and inversion being slightly preferred upon excitation to the nπ* and ππ* states, respectively.
In contrast, the push-pull derivative 2a exhibits a clear preference towards the rotational pathway upon excitation to both states, resulting in a slower photoisomerization than 1 and 2, as the process in 2a is further slowed down by population transfer back to the ππ* state. Overall, push-pull derivatives feature a faster decay from ππ* to nπ*, but a slower one from nπ* to the ground state. From a design perspective, push-pull derivatives may thus represent an appealing alternative to improve the photoisomerization quantum yield by virtue of their marked preference for the rotational pathway. It is worth noting that such a preference could not be anticipated based on the energy maps (Figure 1). This mismatch, as well as the significant differences between the static and dynamic pictures in describing the crossing region (CoIn vs. crossing seam), highlights the risk of drawing conclusions on the photoisomerization mechanism based solely on the energy of the relevant points on the PES, as is commonly done.

Computational Details

Minimal-energy crossing points were computed with CIOPT [47] interfaced with Gaussian 09 (G09). [54] Based on previous benchmarks, [38,55,56] we used TD-DFT within the Tamm-Dancoff approximation (TDA), the ωB97X-D functional, [57,58] and the 6-31G(d) basis set. The Non-Adiabatic Molecular Dynamics (NAMD) simulations were performed with Newton-X [59,60] interfaced with G09. [54] Additional computations at the ADC(2)/TZVP level can be found in the Supporting Information. The initial conditions were generated from the Wigner distribution based on the harmonic oscillator, with five states (S0–S4), a Lorentzian broadening of 0.1 eV, an anharmonicity factor of 3, and T = 300 K. From these initial conditions, we obtain (i) the absorption spectra and (ii) a set of geometries and velocities that could initiate the trajectories.
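The harmonic-oscillator Wigner sampling mentioned above can be sketched for a single mode. This is a minimal one-dimensional illustration in atomic units (ħ = 1, with k_B folded into the temperature), not the Newton-X implementation, which samples every normal mode and additionally applies the broadening and anharmonicity settings quoted above.

```python
import math
import random

HBAR = 1.0  # atomic units

def wigner_sample_ho(omega, mass, temperature, n_samples, seed=1):
    """Sample (q, p) pairs from the (thermal) Wigner distribution of a 1D
    harmonic oscillator. `temperature` is k_B*T in atomic units; T = 0
    recovers the ground-state Wigner function."""
    random.seed(seed)
    if temperature > 0.0:
        # thermal width factor coth(hbar*omega / (2 k_B T))
        factor = 1.0 / math.tanh(HBAR * omega / (2.0 * temperature))
    else:
        factor = 1.0
    sigma_q = math.sqrt(HBAR / (2.0 * mass * omega) * factor)
    sigma_p = math.sqrt(mass * HBAR * omega / 2.0 * factor)
    return [(random.gauss(0.0, sigma_q), random.gauss(0.0, sigma_p))
            for _ in range(n_samples)]
```

For each normal mode the positions and momenta are independent Gaussians whose widths reproduce the quantum (rather than classical Boltzmann) distribution, which is why Wigner sampling retains zero-point motion even at T = 0.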
The selected initial conditions are those in which the S1 and S2 excitation energy is centered at (and within ±0.1 eV of) the peak of the respective transition in the spectrum (see Figure 2). A swarm of 25 trajectories was initiated at each of S1 and S2 for 1 and 2 (see Section 2.1 in the Supporting Information). Owing to the much larger size of 2a, we reduced the number of trajectories to 20. Hence, a total of 140 trajectories were run. The trajectories were computed by using TD-DFT (within TDA) at the ωB97X-D/6-31G(d) level. NAMD were simulated with the fewest-switches surface hopping [61] corrected for decoherence effects (α = 0.1 Hartree). [62] Time-derivative couplings [63] were computed for all states except S0, which is excluded due to the difficulties of TDA in describing the multi-reference character of the electronic wavefunction near an S1–S0 CoIn. Moreover, such limitations imply that the trajectories must be terminated right before the conical intersection is reached, which means that photoisomerization quantum yields cannot be quantified. Accordingly, trajectories ran for a maximum of 1000 fs or until an S1–S0 energy gap below 0.1 eV was reached. In the latter case, it is assumed that the actual CoIn is very similar to the final geometry explored in the trajectory, and that it would be reached immediately afterwards. The selected time limit of 1000 fs is sufficient to allow most of the trajectories to reach the CoIn (see Table 1). Trajectories were propagated in the microcanonical (NVE) ensemble. The evolution of the kinetic and potential energy for each set of trajectories is shown in Figure S3.6 (in the Supporting Information). Integration was done with a time step of 0.5 (0.025) fs for the classical (quantum) equations. This setup has been successfully employed to study other small-size organic molecules. [64-68] Surface-hopping molecular dynamics exploits statistics to mimic the dynamics of nuclear wavepackets, [69-72] and hence we analyze the trajectories as a whole. The kinetics are assessed by using the characteristic times defined in Scheme 1. These are computed for each S1 and S2 excitation of 1–2a as an average over the trajectories that reach a CoIn before the time limit of 1000 fs. Should the trajectories be allowed to continue beyond 1000 fs, the associated times would change: t_CoIn and t_S1 would increase as the slower trajectories would start counting towards the average, whereas the change in t_S2 is harder to anticipate. As a general rule, the values are more representative when the ratio of trajectories that reached a CoIn within the time limit is closer to 1 (R in Table 1). Dataset: The dataset will be available upon publication at the Zenodo repository.
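The termination logic described in the Computational Details (a 1000 fs cap, or an S1-S0 gap below 0.1 eV) can be summarized in a minimal driver loop. Here `step_fn` is a hypothetical callback standing in for one electronic-structure evaluation plus one classical integration step; the real propagation is handled inside Newton-X.

```python
def propagate(step_fn, dt=0.5, t_max=1000.0, gap_threshold=0.1):
    """Minimal sketch of the trajectory-termination logic.

    `step_fn(t)` is a hypothetical callback that advances the trajectory by
    one classical time step (dt, in fs) and returns the current S1-S0 energy
    gap in eV. Returns (termination time in fs, whether a CoIn was flagged).
    """
    t = 0.0
    while t < t_max:
        gap = step_fn(t)
        t += dt
        if gap < gap_threshold:
            # assume the CoIn is essentially reached at this geometry
            return t, True
    return t, False  # time limit hit without reaching the crossing seam
```

Trajectories that return `False` here are the ones excluded from the averages behind the characteristic times, which is what the ratio R in Table 1 tracks.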
Addressing the road safety results impasse through an Outcome-Based Approach in the state of Penang, Malaysia

Introduction

In 2009, the World Health Organization (WHO) reported that more than 3,000 deaths occur on a daily basis and that nearly 1.3 million people die annually as a result of traffic collisions globally, thus making road accidents an international public health priority [1]. As a result, in 2010 the United Nations Road Safety Collaboration (UNSRC) launched the Decade of Action for Road Safety 2011-2020, with the aim of stabilizing and then reducing global road accident fatality trends by 2020 [2]. In addition, the global action plan recommends that countries develop their own national action plans for the decade such that they are consistent with, or can be carried forward into, the regional plan.
In response to the UNSRC's action plan, Malaysia has also introduced many programs and initiatives to reduce the rate of road accidents in the country. For example, the Road Safety Plan of Malaysia 2006-2010 and the Road Safety Plan of Malaysia 2014-2020 were established by the Malaysian Institute of Road Safety Research (MIROS) to serve as guidelines for all stakeholders and road safety authorities in order to achieve the reduction target [3,4]. To date, many actions, such as the establishment of a National Accident Database System, the Five Stages of Road Safety Auditing, the National Blackspot Program, the Road Safety Research and Evaluation, the Conspicuity Initiatives for Motorcycles, the National Targeted Road Safety Campaign, the Revision of the Road Transport Act (1999 Revision), the Integrated Enforcement, the New Helmet Standard MS1 1991 and the New Children's Motorcycle Helmet Initiatives, are among the numerous initiatives that have been implemented in Malaysia [5,6]. Even though the findings of many studies on the effectiveness of road safety intervention programmes have reported positive results with regard to the programmes and actions in the national road safety plan [7,8,9], Malaysia still has a long way to go to achieve its target of reducing the number of road accidents. This may be because those intervention programmes had not actually been successful in tackling the core issues or problems concerning road safety. It has been reported that some initiatives or countermeasures might heighten the perceived risk of being involved in road accidents for a short period of time only [10,11].
Therefore, the Penang State Government has made an effort to establish the Penang Road Safety Strategic Plan 2014-2020 [12], which corresponds with the Malaysian Road Safety Plan 2014-2020 and the Decade of Action for Road Safety. However, the strategic plan follows an outcome-based approach instead of an intervention-based approach. An outcome-based approach is a strategic plan that is designed after the main problems and issues have been identified, together with targets for the percentage reduction of the specific problems. Compared to an intervention-based approach, an outcome-based approach has clearer objectives.

Ultimately, this paper aims to foster road safety countermeasures in the state of Penang through the outcome-based approach. There are 31 outcomes that have been identified as yardsticks of road safety actions for interested parties in Penang. Based on these outcomes, it is hoped that the road safety implementation plans in Penang can be more effective in reducing the rate of road accidents in this state.
Addressing Current Road Safety Performance and Target

In order to produce an outcome-based strategic plan, current road safety performance needs to be addressed. It is claimed that there has been substantial improvement in road safety performance in Malaysia since the launch of the Road Safety Plan Malaysia 2006-2010, which is also reflected in many regions of Malaysia, including Penang. However, the statistical data in Table 1 show that the number of road crashes and fatalities continues to increase year by year for the state of Penang as well as for the whole of Malaysia. This contradicts the claim by many road safety studies that the implementation of conventional plans has been effective. Despite the many road safety initiatives introduced in the state of Penang in Malaysia, the results have not been encouraging. This is especially critical when the expectation of the Decade of Action for Road Safety is to halve all road safety indicators by the year 2020. The conventional intervention-based approach has been proven to have limited success and limited potential for the expected, more drastic results. A critical analysis of the matter suggested that the passion for road safety results had dwindled, as many players approach them as mere processes that may not necessarily provide the aspired road safety outcomes. This paper discusses the development of the Road Safety Strategic Plan for Penang State through the outcome-based approach. This pioneering effort begins by defining the appropriate goals for 2020, followed by describing four principal strategic pillars. The pillars are changing attitude, forgiving roads, safer for motorcycles, and enabling data and information. Each pillar has its own targeted outcomes, making a total of 28 outcomes identified. The appropriate activities and interventions in order to achieve the outcomes were also recommended. The remainder of the paper also discusses the implementation plan suggested to the authorities
and decision makers and other players in order to achieve the targeted road safety outcomes. Early results will also be highlighted in the paper. With the intention of setting the road safety target for Penang, the reduction in the number of accidents was calculated by using the road safety data of 2012 as the baseline. The estimates for road crash reduction from 2012 to 2020 are shown in Figure 1. The first phase of the Road Safety Strategic Plan for the state of Penang was implemented in 2014 and consisted of the establishment of serious-injury data and information in Penang. In order to achieve the target of the strategic plan, the first-phase activities focused on: (i) improving the attitude among data owners (such as PDRM, hospitals, municipal councils, etc.) towards the importance of data keeping and data sharing; and (ii) stimulating the effort to establish all baseline data in parallel with analysing the progress data. The house also comprises a roof, which encapsulates the ultimate goal of the road safety target for Penang, namely to achieve a reduction of at least 10% from the previous year in terms of the number of fatalities, number of serious injuries, number of crashes, number of motorcycle fatalities and number of pedestrian fatalities; and three foundations, namely governance, resources and talent.
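The "at least 10% reduction from the previous year" goal implies a compounded year-by-year target trajectory from the 2012 baseline. A minimal sketch (the baseline value used below is purely illustrative, not a figure from the plan):

```python
def projected_target(baseline, start_year, end_year, annual_reduction=0.10):
    """Year-by-year targets implied by an 'at least X% reduction from the
    previous year' goal, compounded from a baseline year."""
    targets = {start_year: baseline}
    value = baseline
    for year in range(start_year + 1, end_year + 1):
        value *= (1.0 - annual_reduction)  # compound the annual reduction
        targets[year] = value
    return targets

# Hypothetical baseline of 1000 crashes in 2012 (illustrative only):
# by 2020 the compounded target is 1000 * 0.9**8, roughly 430 crashes.
```

Compounding matters here: eight successive 10% cuts remove about 57% of the baseline, considerably more than a one-off 10% reduction.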
Establishment of Four Pillars

The most important elements in this framework are the strategic pillars that underpin the mission and approach to fulfil the road safety target in Penang. These pillars are: Pillar 1 - Enabling Data and Information, Pillar 2 - Changing Attitude, Pillar 3 - More Forgiving Roads, and Pillar 4 - Safer for Motorcycles. Programs and intervention plans have been identified for each pillar in order to comply with the objectives of the pillar. This strategic plan also outlines a comprehensive set of fifty-two activities that correspond with the outcomes from the workshops. The activities that have been identified also seek to engage the participation of all stakeholders in order to ensure that the road safety plan is optimized in this state. In addition, the establishment of this strategic plan is expected to coordinate all the actions, activities, programs and interventions that relate to road safety with the aim of achieving the target of road accident reduction in Penang.

RESULTS AND DISCUSSION

An outcome-based or performance-based approach is believed to offer a powerful and appealing way of reforming and managing the implementation of road safety plans in Penang. This section presents a brief outline of the outcomes of the four strategic pillars in the Road Safety Strategic Plan for Penang. The implementation of the strategic plan is suggested to be divided into two levels. First, the implementation will focus on the community level, at the stage of state legislative assemblies (DUN) and districts. Second, the implementation will focus on the respective owners at the state legislative assembly (DUN), district and state levels.
The Enabling Data and Information Pillar is the most crucial pillar in this strategic plan. Without appropriate data on crashes and fatalities, it would probably be impossible to measure the effectiveness of other initiatives. Therefore, the initial step in this strategic plan is to achieve the eight outcomes in Pillar 1. At the time of writing, the establishment of capacity building was still in progress, while the other pillars were automatically associated with Pillar 1. The stakeholders, who include the Penang State authorities, road safety practitioners, enforcement officers, hospitals, politicians, and interested parties, have formed a WhatsApp group. These stakeholders disseminate information on road crashes, fatalities, road users' attitudes and complaints to enable the responsible bodies, who are also members of this group, to respond immediately. Figure 3 below shows the road crash information that is being shared in this WhatsApp group. The outcomes in Pillar 1 will also influence the outcomes in the other pillars. Even though, for the time being, not all the outcomes have been fulfilled, the responsible bodies are making continuous progress with regard to the performance of the initiatives. Table 2 shows the outcomes that need to be achieved in each pillar.
CONCLUSION

An efficient road safety strategic plan is required in order to further reduce the level of crashes and casualties. In this respect, policy makers or the state government need to select optimal countermeasures, and this study has revealed the initiatives that have been carried out by the State Government of Penang. In this study, a strategic plan that uses the outcome-based approach is optimistically believed to be able to assist in prioritizing the road safety actions and initiatives in Penang State, albeit the implementation of the approach is still at an early stage. The outcome-based approach guidelines in the strategic plan should be used regularly, in order to understand the effectiveness of the plan. It is also believed that the outcome-based approach offers many advantages for achieving the target of road accident reduction. It emphasizes relevance in road safety countermeasures, and can provide a clear and unambiguous framework for the road safety strategic plan. It encourages all stakeholders to share their responsibilities and roles in road safety, and it can guide the effective assessment and evaluation of road safety compared to the conventional approach. In the outcome-based approach, the actions that need to be taken to tackle road safety problems are addressed by prioritizing the crucial components of the problem first.
In addition, the implementation of the strategic plan needs a team effort, and every responsible party must play their part in the road safety strategic plan. For example, the state government may introduce new policy or strengthen existing policy, but still needs to provide an effective enforcement mechanism to ensure that the plan works. Meanwhile, many of the road safety issues need the authorities' attention, especially those that relate to road planning, design and maintenance. It is therefore hoped that, with the strategic plan, the relevant authorities can be more focused and can provide better justification for all road safety issues. Moreover, the involvement of the community (community-based intervention), either through the community leaders or the community themselves, should be carried out effectively.

https://doi.org/10.26480/gwk.01.2017.21.24 This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fig. 1. Target for road crash reduction in Penang from 2012-2020 (Penang Road Safety Strategic Plan 2014-2020).

Fig. 2. The house of the Road Safety Strategic Plan for Penang State (Penang Road Safety Strategic Plan 2014-2020).

Figure 3. Example of a road crash report shared through the WhatsApp group.

Table 1. Comparison between road safety data for Penang and national data from 2005 until 2012 (Penang Road Safety Strategic Plan 2014-2020).

Table 2. Pillars and outcomes in the Road Safety Strategic Plan for Penang (recovered fragment): Reduction in the number of repeat offenders; 5. Good coordination among the authorities; 6. Upgrading public transport services; 7. Good road maintenance with a zero-defects aim; 8. Good design of roads, strong safety precautions with an 'error-proof' approach; 9. Pro-activeness towards addressing road defects, improving blackspots, and zero hazards during construction; 10. Reduction in the number of private vehicles on the road and increasing usage of public transport; 11. Implementation and enforcement plan for all appropriate supporting activities.
Detailed investigation of absorption, emission and gain in Yb:YLF in the 78–300 K range

We present spectroscopic measurements focusing on a detailed investigation of the temperature dependence of absorption, emission and gain in the uniaxial Yb:YLF laser gain medium. Measurements are carried out in the 78–300 K range, but we especially targeted our attention to the 78–150 K interval, which is the desired working range of liquid-nitrogen-cooled cryogenic Yb:YLF lasers/amplifiers. A tunable (770–1110 nm) Cr:LiSAF laser with around 100 mW continuous-wave output power and sub-0.2 nm bandwidth is used as an excitation source. The average power of the Cr:LiSAF laser is low enough to prevent heating of the sample, and its spectral flux (W/nm) is high enough to enable large signal-to-noise-ratio measurements. Measured absorption data are used to cross-check the validity of the emission measurements, while the measured temperature-dependent small-signal gain profile provides a second independent confirmation. The acquired absorption cross-section curves match the previous literature quite well, whereas the measured strength of c-axis emission is stronger than some of the earlier reports. Direct measurements of small-signal gain confirmed the emission cross-section data, where single-pass gain values above 50 have been measured for the 995 nm transition of the E//c axis at 78 K. We further provide simple analytic formulas for the measured temperature dependence of the absorption and emission cross sections.

Introduction

High-average- and high-peak-power laser and amplifier systems based on ytterbium-doped gain media are interesting tools for many applications, including nonlinear pulse compression [1], optical parametric chirped-pulse amplification [2], parametric-waveform synthesis [3], and table-top electron acceleration [4]. With its thermo-opto-mechanical strength, the Yb:YAG gain medium has become the workhorse of such applications [5-10].
However, the relatively narrow gain profile of Yb:YAG results in pulsewidths longer than 500 fs at room temperature, which lengthens further to a few picoseconds at cryogenic temperatures [11]. Yb:YLF (Yb:LiYF4) is an interesting alternative material with broad bandwidths that could ideally support 100-fs-level pulses both at room and cryogenic temperatures [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. Nevertheless, compared to Yb:YAG, Yb:YLF is not yet well studied, and many of its thermo-opto-mechanical parameters are not well known, or their temperature dependence is not well investigated. For a successful design of a laser/amplifier system, one of the first parameters a laser scientist needs to know in detail is the absorption, emission and gain profile of the laser medium at different temperatures. For cryogenic Yb:YLF laser/amplifier systems, our recent experimental work has shown that, during laser operation, the average temperature of the liquid-nitrogen-cooled Yb:YLF crystal varies in the ∼80-150 K range depending on the thermal load on the crystal [30]. However, in our design/simulation attempts [31], we have realized that the literature lacks temperature-dependent data even for these basic parameters (absorption, emission and gain): (i) data is plotted in small graphs with limited spatial resolution, (ii) the data is taken with low spectral resolution, (iii) the data is only given for a few selected temperatures, (iv) the data is only given in arbitrary units, and (v) sometimes the data available from various sources conflict with each other. Unfortunately, due to the exponential nature of the amplification process (e.g. in a regen one can easily have >100 round trips), even small uncertainties in laser parameters could result in significantly different performance estimates.
Hence, it is important to investigate the temperature variation of absorption, emission and gain in Yb:YLF carefully (especially in the 78-150 K range [30]), for the successful development of the next generation of cryogenic Yb:YLF laser and amplifier systems, which could potentially reach kW-level average powers along with pulsewidths in the sub-250-fs range. Such spectroscopic information could also be useful for the optical refrigeration community (that partially bases their efforts on anti-Stokes emission of Yb:YLF crystals) [32][33][34][35][36][37][38], in their attempts to develop nanometer-scale quantum optical devices [36]. Motivated by this need, we have focused our attention on careful measurement of the temperature dependence of absorption, emission and gain in Yb:YLF. As the main tool, we have used a home-built continuous-wave Cr:LiSAF laser that could be tuned from 770 nm to 1110 nm [39][40][41]: a span that covers both the absorption and emission range of Yb:YLF gain media. The experiments involved careful measurement of absorption and gain in Yb:YLF at different temperatures while using a tunable Cr:LiSAF laser as an excitation/seed source. We have further measured the emission spectrum of Yb:YLF, and calculated the emission cross section (ECS) spectra using the Füchtbauer-Ladenburg relation [42][43][44]. To our knowledge, we present here the first set of measurements on the polarization, temperature and wavelength dependence of gain in Yb:YLF. Furthermore, along with a few other studies, this is the first detailed report on the temperature dependence of absorption and emission in Yb:YLF. We also provide simple analytic formulas for the modelling of the measured temperature dependence of absorption and emission, as it is usually very hard to get quantitative information from the published data. The measured absorption cross section (ACS) curves in this study match the previous literature quite well and considerably extend the available temperature-dependent data.
Our emission cross section (ECS) measurements underline a stronger c-axis emission than what is reported in some of the earlier work, and complement the literature with detailed analysis. The observed discrepancy in ECS results motivated us to implement a two-step verification approach for the validation of our results. As the first approach, we have compared the emission measurement results with the independently taken ACS data via the McCumber relation [45][46][47][48]. As the second step, ECS data is used to calculate gain cross section (GCS) spectra, which is then cross-checked via direct small-signal gain measurements. Both schemes produced results that confirm each other quite well, as will be outlined in detail throughout the manuscript. The paper is organized as follows: In Section 2, we summarize the experimental setup and the methodology that is employed. In Section 3, we start with the presentation of absorption measurements. Section 4 presents the emission cross section measurement in great detail. In Section 5, we use the McCumber theory to validate our ECS measurements by comparison of absorption and emission data. In Section 6, we estimate gain profiles at different temperatures using the measured emission data, whereas in Section 7 we cross-check our results one more time by direct measurement of small-signal gain. Finally, in Section 8, we close with a brief discussion.

Figure 1 provides a sketch of the experimental setups used in this study for the emission, absorption and gain measurements. A 675 nm tapered-diode laser (TDL) [49,50] pumped continuous-wave (cw) Cr:LiSAF laser (Fig. 1(a)) was used in the spectroscopy experiments [39][40][41]. A 3-mm-thick crystal quartz birefringent filter (BRF) with an optical axis 45° to the surface of the plate was used to tune the laser wavelength [51]. The BRF was inserted at Brewster's angle into the laser resonator near the cavity high reflector mirror.
The free spectral range of the plate is greater than 350 nm, preventing wavelength jumps and instabilities. Via simple rotation of the BRF plate, the Cr:LiSAF laser provided broadly tunable cw output with up to 100 mW of output power in the 770-1110 nm range (usage of two different output couplers was required to cover the whole range) [52]. The spectral width of the Cr:LiSAF laser output was narrower than 0.2 nm (FWHM: full-width at half-maximum) in the whole tuning range (and below 0.05 nm in most of the range). Compared to setups that use lamp sources, the usage of the Cr:LiSAF laser enables a relatively high spectral flux (W/nm). Combined with a diffraction-limited beam profile, this enables measurements with a large signal-to-noise ratio. During the measurements, the Cr:LiSAF laser output is collimated using a 100 mm lens, and the beam diameter on the sample was around 1.35 mm for absorption and gain measurements. A smaller beam size of around 100 µm is used for the emission measurements to minimize self-absorption effects (the excited volume is smaller, and the collected fluorescence traverses a smaller path to reach the detector, which helps to minimize radiation trapping).

Fig. 1. A simplified schematic of the setups used in spectroscopy experiments: (a) A simple schematic of the tapered-diode laser (TDL) pumped cw Cr:LiSAF laser with broad tunability (770-1110 nm) that is used as the seed/excitation source, (b) Absorption measurement system (HR mirror is on a flip mount), (c) Fluorescence emission measurement system and (d) Small-signal gain measurement setup. The Yb:YLF crystal that is mostly used in measurements was an a-cut sample, with the axes oriented as shown. The dashed line in (d) indicates a relatively long distance, which is used to minimize ASE on the detector.
M1-M3: Cr:LiSAF laser cavity dichroic and high reflector mirrors, OC: Output coupler, BRF: birefringent tuning filter, HR: High reflector, PM1-PM2: Sensitive power meters, P: Film polarizer, Spec: Spectrometer used for measuring the emission spectrum, BD: beam dump, HWP: Half-wave plate, DM: Dichroic mirror, A: Aperture, Det: Detector used in gain measurements.

Several a-cut and c-cut Yb:YLF crystals with 1% Yb-doping and 2 cm length were available for the experiments. The crystals were indium soldered from the top side to a multi-stage pyramidal cold head, which was cooled to cryogenic temperatures by boiling liquid nitrogen using a vacuum-sealed dewar system. Silicon-based thermal sensors connected to the cold head near the crystal enabled real-time measurement of the cold head temperature with ±0.1 K accuracy. For temperature-dependent measurements, we have used the slow cooling cycle of the dewar (which typically took 6-7 hours) [53]. Care is taken during the measurements not to heat up the crystal, by minimizing the power of the excitation beam as well as the excitation duration. For the absorption measurements, a very simple setup is used, as shown in Fig. 1(b). The Cr:LiSAF output beam (diameter: 1.3 mm) is sent through the center of the Yb:YLF sample. The incident (I_i) and transmitted (I_t) power levels were measured carefully using a sensitive silicon photodiode power meter (Thorlabs S121C, resolution: 10 nW). The absorption of the Yb:YLF crystal at each wavelength is calculated after subtracting the background loss of the system, which includes small residual losses from the AR coatings of the Yb:YLF crystal surface, and dewar windows.
The sensitivity of the absorption measurements is limited by the intensity fluctuations of the Cr:LiSAF laser as well as by the accuracy of determining the background losses, which could be time dependent due to possible water condensation on the dewar inner windows or crystal surfaces (an almost leak-free home-made dewar with a pressure better than 10⁻⁸ mbar at 78 K is used to minimize such effects). At the same time, the relatively narrow linewidth of the Cr:LiSAF laser (which stayed below 0.05 nm in most of the tuning range) is adequate to resolve many of the peaks at most of the temperatures. Once the transmission (T) or absorption (A = 1 − T) of the sample is estimated at a specific wavelength and temperature, we have calculated the corresponding absorption cross section value (σ_a(λ, T)) using:

σ_a(λ, T) = −ln(T) / (N_Yb d),    (1)

where ln is the natural logarithm function, d is the thickness of the sample (20 mm in our case), and N_Yb is the density of active Yb ions (1.4 × 10²⁰ ions/cm³ for the 1% Yb-doped YLF [18]). As mentioned earlier, if the absorption is known, and if the details of the energy levels creating the transition are known, the McCumber relation could be used to estimate the emission cross section from the absorption cross section (or vice versa) using [45][46][47][48]:

σ_e(λ, T) = σ_a(λ, T) (Z_l / Z_u) exp[(E_zl − hc/λ) / (kT)],    (2)

where h is the Planck constant, k is the Boltzmann constant, c is the speed of light in vacuum, E_zl is the energy of the zero-phonon line transition, Z_u (Z_l) is the partition function of the upper (lower) laser manifold, and their ratio could be calculated using [54]:

Z_l / Z_u = Σ_i exp(−E_li / kT) / Σ_i exp(−E_hi / kT).    (3)

Here E_hi (E_li) are the corresponding individual intra-manifold energies of the higher- (lower-) lying laser levels. There are some minor differences in the reported values of energy levels for Yb:YLF in the literature [55][56][57], which might cause observable differences in the calculated ECS/ACS values due to the existence of exponentials in the McCumber relation (both in Eq. (2) and (3)).
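As a quick numerical illustration of Eq. (1), the conversion from measured transmission to absorption cross section can be sketched as follows (a minimal example; d and N_Yb are the values quoted above, while the transmission value is purely illustrative):

```python
import math

def absorption_cross_section(T, d_cm, N_per_cm3):
    """Absorption cross section (cm^2) from the background-corrected
    transmission T, sample thickness d (cm), and active-ion density N
    (ions/cm^3): sigma_a = -ln(T) / (N * d), i.e. Eq. (1)."""
    return -math.log(T) / (N_per_cm3 * d_cm)

# Values from the text: d = 20 mm, N_Yb = 1.4e20 ions/cm^3 (1% Yb:YLF).
d = 2.0          # cm
N_Yb = 1.4e20    # ions/cm^3
T_meas = 0.5     # illustrative transmission value
sigma_a = absorption_cross_section(T_meas, d, N_Yb)
print(f"sigma_a = {sigma_a:.3e} cm^2")  # ~2.5e-21 cm^2 for T = 0.5
```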
Here we have used the more recently reported values in [55]. The corresponding wavelength for the zero-phonon line energy (E_zl = 10293 cm⁻¹) is around 971.53 nm (an assignment made at 12 K [55]). In our measurements, we have positioned this transition at 971.7 ± 0.25 nm at 78 K and 972 ± 0.25 nm at 200 K, respectively. We have used Eq. (2) to transfer knowledge from ACS to ECS, and vice versa, to confirm and cross-check our measurements. The fluorescence emission spectra were measured at a 90° angle to the Cr:LiSAF beam propagation direction using a window at the side of the dewar (Fig. 1(c)). A thin film polarizer (Thorlabs LPNIRE100-B) was used for selecting the fluorescence emission in the relevant axis. In the first set of experiments, we have checked the excitation wavelength dependence of the fluorescence spectra, and we have observed that the fluorescence spectral shape does not change for pump wavelengths in the 900-1020 nm range (some materials like Alexandrite might show different emission spectra when excited at different wavelengths [58]). As a result, we have chosen to use around 20 mW of 930 nm Cr:LiSAF light in the fluorescence emission measurements, a wavelength on the shorter-wavelength side of the emission, which enabled easy separation of the background pump signal. A 3648-pixel CCD array (Toshiba TCD1304AP) based Ocean Optics spectrometer was used for recording the fluorescence spectra. The spectrometer had a spectral resolution of 0.1 nm in the 900-1060 nm range (the spectrometer recorded 3600 data points with an average wavelength separation of ∼0.05 nm). The calibration of the spectrometer (estimated to be better than ±0.25 nm) was first confirmed using Ar and Kr lamp lines.
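The McCumber conversion of Eqs. (2)-(3) can be sketched numerically as below. E_zl is the value quoted in the text; the intra-manifold Stark energies used here are illustrative placeholders rather than the tabulated values of [55], so only the structure of the calculation (not the numbers) should be taken at face value:

```python
import math

K_B_CM = 0.69503  # Boltzmann constant in cm^-1 per K

def partition_ratio(E_lower, E_upper, T):
    """Z_l / Z_u of Eq. (3); energies are intra-manifold Stark energies
    in cm^-1, measured from the lowest level of each manifold."""
    kT = K_B_CM * T
    Zl = sum(math.exp(-E / kT) for E in E_lower)
    Zu = sum(math.exp(-E / kT) for E in E_upper)
    return Zl / Zu

def mccumber_ecs(sigma_a, lam_nm, T, E_zl, E_lower, E_upper):
    """Emission cross section from absorption cross section via Eq. (2)."""
    kT = K_B_CM * T
    photon_E = 1.0e7 / lam_nm  # photon energy in cm^-1
    return sigma_a * partition_ratio(E_lower, E_upper, T) * \
        math.exp((E_zl - photon_E) / kT)

# Placeholder Stark-level energies (cm^-1), NOT the tabulated values of [55].
E_lower = [0.0, 216.0, 248.0, 480.0]   # 2F7/2 manifold (illustrative)
E_upper = [0.0, 123.0, 261.0]          # 2F5/2 manifold (illustrative)
E_zl = 10293.0                          # zero-phonon line energy (from the text)

lam_zl = 1.0e7 / E_zl                   # ~971.5 nm
sigma_e = mccumber_ecs(1.8e-20, lam_zl, 78.0, E_zl, E_lower, E_upper)
```

At the zero-phonon line the exponential term is unity, so the converted ECS reduces to σ_a multiplied by the partition ratio, which is a convenient self-check of the implementation.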
The overall spectral response of the measurement system was corrected using the known spectral response curves, and a fluorescence intensity accuracy better than ±20% is estimated for the 900-1025 nm range (the sharp drop of Si CCD array sensitivity creates higher uncertainty for wavelengths above 1025 nm). Note that care is taken to minimize self-absorption effects, which are known to produce errors in emission spectrum measurements in Yb-based systems. For that, a relatively low-doped (1% Yb) sample is used, and the Yb:YLF crystal is excited from its edge using a relatively small (100 µm) excitation beam. Furthermore, we have also corrected the spectra for self-absorption losses, similar to what is employed in [59], by calculating the self-absorption losses of the emission beam that reaches the spectrometer (absorption of ∼2 mm of 1% Yb-doped material is considered). Note that, despite the care, as will be discussed in detail later, we have still observed some small self-absorption-induced errors in the emission spectra, especially for measurements near room temperature: a lower ECS strength is measured at shorter wavelengths due to self-absorption. On the other hand, since the overlap of the absorption and emission bands decreases with decreasing temperature, errors due to the self-absorption effect were minimal in the 78-150 K range, which is the main region of interest for cryogenic laser/amplifier systems. This issue will be discussed in more detail while presenting the experimentally measured values. The normalized emission cross section (σ_e(λ)) curves are obtained by multiplying the measured fluorescence emission spectrum by a factor of λ⁵ (where λ is the emission wavelength) [42,53], and then renormalizing the curves.
The emission cross section in absolute units is calculated using the modified Füchtbauer-Ladenburg formula [42][43][44]:

σ_e^(a,c)(λ) = λ⁵ I_(a,c)(λ) / { 8π n² c τ_R ∫ (λ/3) [2 I_a(λ) + I_c(λ)] dλ },

where I_(a,c)(λ) are the measured emission intensities in the a and c axes of the uniaxial crystal, n is the average refractive index of the gain medium (∼1.46 [60] around 1 µm in YLF), and τ_R is the radiative lifetime of the upper laser level (²F5/2) involved in the transition. For Yb:YLF, due to the presence of radiation trapping (on top of possible non-radiative effects), it is difficult to accurately measure the radiative lifetime. One alternative solution is to obtain emission cross section values from the measured absorption data using the McCumber relation (Eq. (3)), and calculate the radiative lifetime using the Füchtbauer-Ladenburg formula given above. This approach yielded a value of ∼2040 ± 180 µs in a recent study [47] (a value of 2270 µs is found in an earlier study [48]). In our work, for the 1% Yb-doped YLF sample, we have measured fluorescence lifetimes of 1970 ± 50 µs at 78 K and 2120 ± 50 µs at 300 K, in a setup in which we tried to reduce radiation trapping effects [61]. We believe that the slightly longer fluorescence lifetime we measured at 300 K still might be due to the radiation trapping effect, which is hard to completely eliminate at room temperature. Overall, we have chosen to use a radiative lifetime value of 2000 µs in this study. We would like to also mention here that, after repeating the temperature-dependent measurement required for ECS determination 3-4 times, we have seen that, despite ultimate care in the fluorescence collection and measurement process, due to alignment sensitivities, it is very challenging to make accurate absolute measurements of I_a(λ) and I_c(λ).
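The Füchtbauer-Ladenburg normalization can be sketched numerically as below; the (2I_a + I_c)/3 polarization weighting for the uniaxial crystal is an assumption of this sketch, and the toy Gaussian spectra merely stand in for the measured intensities:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (written out explicitly for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fl_emission_cross_section(lam, I_a, I_c, n=1.46, tau_R=2.0e-3):
    """Polarization-resolved ECS (m^2) via a modified Fuechtbauer-Ladenburg
    normalization; lam in meters, tau_R = 2000 us as chosen in the text.
    The (2*I_a + I_c)/3 weighting for a uniaxial crystal is assumed."""
    c = 2.998e8  # speed of light (m/s)
    norm = 8.0 * np.pi * n**2 * c * tau_R * \
        _trapz((lam / 3.0) * (2.0 * I_a + I_c), lam)
    return lam**5 * I_a / norm, lam**5 * I_c / norm

# Toy Gaussian "spectra" standing in for the measured intensities I_a, I_c.
lam = np.linspace(900e-9, 1060e-9, 801)
I_a = np.exp(-((lam - 1016e-9) / 5e-9) ** 2)
I_c = 2.5 * np.exp(-((lam - 995e-9) / 2e-9) ** 2)
sig_a, sig_c = fl_emission_cross_section(lam, I_a, I_c)
```

A useful invariant is that the result is independent of the overall intensity calibration: scaling both I_a and I_c by the same constant cancels between the numerator and the normalization integral, which is exactly why only the relative spectral shapes (and their ratio between axes) matter.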
As discussed in detail in [47], we have perfected our alignment process to the best of our abilities (confirmed the setup with the homogeneous Yb:YAG crystal), and also used the measured temperature-dependent absorption data and the McCumber relation to reduce the uncertainties in our ECS calculations (by cross-checking the ECS and ACS data with each other). The setup that is used for the gain measurements is shown in Fig. 1(d). In the setup, the weak Cr:LiSAF beam plays the role of a low-power seed beam, which we use to measure small-signal gain. Two 2-inch 45° dichroic mirrors were used to couple in and out the 960 nm high-power pump source that is used for the main excitation of the Yb:YLF crystal. The 960 nm pump diode had an M² of around 220 and was focused to a spot diameter of 3.2 mm inside the gain media. The smaller Cr:LiSAF beam (1.3 mm) passes through the central region of the excitation area and sees an almost flat-top excitation profile. In small-signal gain measurements of the E//a axis, the gain medium is excited with 2 ms long pump pulses with 2 kW of peak power at 0.1 Hz repetition rate. Due to the low duty cycle employed, the average absorbed pump power was below 0.5 W in these measurements, and hence the heat load on the crystal is quite limited (we estimate an average temperature rise below 0.1 K at this average pump power [30], whereas instantaneous peak temperatures might be 2-3 K higher). For the gain measurements in the E//c axis, the pump pulse duration is reduced from 2 ms to 250 µs to minimize amplified spontaneous emission (ASE) effects observed in this higher-gain emission band. Note that, to minimize the role of ASE in gain measurements, we have put two small circular apertures and propagated the amplified Cr:LiSAF beam 2-3 meters (the ASE component is reduced by spatial filtering).
Furthermore, a background signal is also taken (while seeding the system above 1050 nm, a region outside of the gain bandwidth) and this small background is subtracted from the measurements. The Cr:LiSAF beam power is recorded again with the sensitive silicon photodiode power meter (Thorlabs S121C sensor in the PM100A power meter), and the analog output of the power meter with sub-1-µs response time was monitored by a fast 500 MHz oscilloscope and simultaneously recorded by the computer for gain calculations. The measured small-signal gain values are the maximum gain values of the system, acquired just after the tail of the 960 nm pump. To enable a further independent cross-check of the ACS/ECS results, we have compared the measured small-signal gain spectra at 78 K and 295 K with the gain spectrum estimated from the earlier emission and absorption measurements. The effective gain cross section (GCS) spectra (σ_g(λ, T)) of Yb:YLF are estimated using:

σ_g(λ, T) = β σ_e(λ, T) − (1 − β) σ_a(λ, T),

where β is the fractional population inversion level and all the other parameters are as defined above. The small-signal gain coefficient (g_0) could be calculated from σ_g(λ, T) using:

g_0(λ, T) = N_Yb σ_g(λ, T),

where N_Yb is the number density of the active Yb³⁺ ions in the YLF sample. Note that the single-pass fractional gain exp(g_0 d) could then be easily estimated (d: length of the gain medium). As a side note, for a long gain sample such as ours, the inversion will strongly vary with position: it will be higher at the front of the crystal, where most of the absorption happens, and will then decay towards the end of the crystal (the exact profile will depend on incident pump power, pump spot size, pump spectral width, absorption saturation effects, etc.). Hence, position dependence should also be included in the above discussion for an accurate calculation of Yb:YLF gain.

Absorption measurements

We start the presentation of our experimental results with Fig.
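The gain-cross-section and single-pass-gain relations above can be sketched as follows. σ_e for the 995 nm E//c line is the value quoted in the text, while β and σ_a here are illustrative placeholders, and a position-independent inversion is assumed (the text notes the real inversion profile varies along the crystal):

```python
import math

N_YB = 1.4e20  # active Yb ion density for 1% doping (ions/cm^3, from the text)
D_CM = 2.0     # crystal length (cm)

def gain_cross_section(beta, sigma_e, sigma_a):
    """Effective GCS (cm^2): sigma_g = beta*sigma_e - (1 - beta)*sigma_a."""
    return beta * sigma_e - (1.0 - beta) * sigma_a

def single_pass_gain(beta, sigma_e, sigma_a, N=N_YB, d=D_CM):
    """Single-pass fractional gain exp(g0*d) with g0 = N*sigma_g (cm^-1),
    assuming a uniform (position-independent) inversion beta."""
    g0 = N * gain_cross_section(beta, sigma_e, sigma_a)
    return math.exp(g0 * d)

# sigma_e for the 995 nm E//c line at 78 K is from the text; beta and
# sigma_a are placeholders for illustration only.
G = single_pass_gain(beta=0.15, sigma_e=9.3e-20, sigma_a=0.05e-20)
```

The two limits are a handy sanity check: β = 1 gives σ_g = σ_e (fully inverted medium), while β = 0 gives σ_g = −σ_a (pure reabsorption), which is the quasi-3-level behavior discussed in the text.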
2(a-b), which shows the measured absorption of the 1% Yb-doped, 2 cm long YLF sample at 78 K and 300 K, for the E//a and E//c axes, respectively. The corresponding absorption cross section values are then shown in Fig. 2(c-d), where we have used the known doping and thickness of the sample for the calculation. Note that, to enable better visibility, a second axis is used to show the ACS values at room temperature. Moreover, a 5-times vertically expanded version of the 78 K ACS curve is also shown to enable a zoomed-in view of the details of the measurement. The measured room-temperature absorption profile matches the measurement presented in [18] for a 5% Yb-doped YLF sample very well (difference smaller than ±5%), and is also in relatively good agreement with other measurements in the literature [21,47,48,55,59,63-65]. Note that the scale of the ACS graphs for the RT data is different for the different polarizations, and the absorption strength in the E//c axis is stronger compared to the E//a axis. To our knowledge, Fig. 2 contains the first detailed cryogenic absorption measurement of Yb:YLF for wavelengths longer than 975 nm: knowledge of absorption in this region is important in the determination of quasi-3-level operation parameters of Yb:YLF laser systems [10]. As outlined in [55], due to strong electron-phonon coupling, it is difficult to identify the origins of the absorption lines of Yb, especially at room temperature, and different absorption peaks appear for different polarizations. Here, in Fig. 2(e), we also provide the energy level diagram of Yb:YLF, where the data on the intra-manifold energies of the Stark levels are taken from [55]. Note that we have also included calculated wavelengths for transitions and Boltzmann occupancy factors for the different Stark levels at selected temperatures of 78 K, 150 K and 300 K [62]. Note that the origins of some of the absorption peaks can be easily understood by looking at the possible transitions in Fig. 2(e).
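Boltzmann occupancy factors of the kind listed in Fig. 2(e) can be reproduced along these lines; the ²F7/2 Stark energies below are illustrative placeholders, not the tabulated values of [55], so the printed numbers are only indicative of the trend:

```python
import math

K_B_CM = 0.69503  # Boltzmann constant in cm^-1 per K

def occupancies(E_levels, T):
    """Boltzmann occupancy factors of Stark levels (energies in cm^-1)
    at temperature T (K); the factors sum to unity by construction."""
    weights = [math.exp(-E / (K_B_CM * T)) for E in E_levels]
    Z = sum(weights)
    return [w / Z for w in weights]

# Placeholder 2F7/2 Stark energies (cm^-1); NOT the tabulated set of [55].
E_2F72 = [0.0, 216.0, 248.0, 480.0]
occ_78 = occupancies(E_2F72, 78.0)
occ_300 = occupancies(E_2F72, 300.0)
print([f"{o:.4f}" for o in occ_78])
print([f"{o:.4f}" for o in occ_300])
```

As expected from the discussion in the text, the lowest Stark level empties and the highest one fills as the crystal warms from 78 K to 300 K, which is the mechanism behind the temperature trends of the pump-band and long-wavelength absorption.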
On the other hand, some transitions, like the absorption peak around 935 nm, are believed to be vibronic side-peaks induced by strong electron-phonon coupling [55]. We refer the reader to Ref. [55] for a more detailed discussion of the origins of the transitions observed in the measured absorption. For cryogenic lasing/amplifier applications the ACS at 78 K is more interesting, and we see that, as the phonon energies decrease with decreasing temperature, due to reduced electron-phonon coupling effects, the transitions get stronger and narrower. The measured ACS spectrum at 78 K matches relatively well to most of the previously reported measurements in the literature [47,59,62], except for minor differences in peak values, which might be due to the limited resolution and dynamic range of different measurement systems. As an example, we have measured a peak absorption value of 5.6 × 10⁻²⁰ cm² for the 960.35 nm transition of the E//c absorption, where peak values of ∼5.9 × 10⁻²⁰ cm² [25], ∼5 × 10⁻²⁰ cm² [62], and 6.5 × 10⁻²⁰ cm² [59] are reported in earlier work. Similarly, for the same peak, we have measured a value of 2.5 × 10⁻²⁰ cm² for the E//a axis, which is in relatively good agreement with 2.7 × 10⁻²⁰ cm² of [62], ∼2 × 10⁻²⁰ cm² of [25], and ∼2.4 × 10⁻²⁰ cm² of [59]. As a side note: (i) our measurements differ from what is reported in [17,47], (ii) further cryogenic absorption data for Yb:YLF is also available in [56] (15-300 K, in arbitrary units), and in [21,55,64] (at 15 K, for un-polarized light). The difference of the 78 K absorption between the E//a and E//c axes is also noteworthy. First of all, the E//a absorption is broader and enables efficient pumping around 934 nm, 948 nm and 960 nm. Note that the 934 nm and 948 nm absorption transitions are rather weak in the E//c axis absorption.
As mentioned above, the 960 nm transition with a FWHM of ∼2 nm is strong for both axes (stronger in E//c) and enables efficient pumping of cryogenic Yb:YLF laser systems [26,27,29]. The strong zero-phonon absorption line around 971.7 nm, with measured peak values of ∼4 × 10⁻²⁰ cm² and ∼1.8 × 10⁻²⁰ cm² for the E//a and E//c axes, respectively, could ideally enable pumping with a lower quantum defect. On the one hand, this line is too sharp for high-power diode pumping at 78 K. On the other hand, under thermal load, the Yb:YLF crystal could easily reach 125-175 K temperatures, and the linewidth of the 971.7 nm line at these temperatures (∼0.5-1 nm) might be sufficient for the utilization of this pumping scheme. As discussed earlier, our absorption measurements are taken using the Cr:LiSAF laser, where measurement of the absorption spectrum at each temperature requires tuning the Cr:LiSAF laser wavelength to the desired point and measuring incident and transmitted powers, and the overall process takes time. As a result, we have only measured absorption spectra of Yb:YLF at 78 K and 295 K. However, we have measured the variation of absorption with temperature at selected wavelengths using the cooling cycle of the dewar, and the results are summarized in Fig. 3. Note that the measured variation of absorption in the pump wavelength region (960.4 nm and 971.7 nm) contains important information on the variation of pump absorption with temperature, whereas the measured variation of absorption in the lasing wavelength region (above 990 nm) provides important information on the amount of self-absorption losses. It is instructive to try to roughly understand the trends observed in Fig. 3. For that, we can use the energy level diagram that is shown earlier in Fig. 2(e). As an example, for wavelengths at the far edge of absorption (above 995 nm), the absorption originates mostly from the highest-lying Stark level of the ²F7/2 manifold.
At 78 K, this Stark level is mostly empty (holding about 0.1% of the ions), and as one heats up the crystal this level slowly starts to fill up: 0.78% and 5.57% occupancy at 150 K and 300 K, respectively. As a result, for wavelengths such as 1008 nm and 1018-1019 nm, the measured absorption is very low at 78 K, but it starts to slowly increase with temperature as the population of the highest-lying Stark level of the ²F7/2 manifold increases. As another example, around 960 nm the absorption is due to the transition from the lowest-lying Stark level of the ²F7/2 manifold. The estimated Boltzmann occupancy factor for this level decreases from around 97.24% to 57.03% as the temperature is increased from 78 K to 300 K. As a result, the measured absorption for this band decreases with temperature due to the phonon broadening of the transition. As a last example, we can look at the absorption near the zero-phonon line around 971.7 nm, where again the absorption is believed to originate from the lowest-lying Stark level of the ²F7/2 manifold. Similar to the 960 nm transition, the absorption at this wavelength steadily decreases with temperature for the E//a axis. On the other hand, for the E//c axis, the absorption first decreases, then saturates, and then even starts to increase with temperature. This shows that, as discussed in [55], even in a simple-looking system like Yb:YLF, where one has only two energy manifolds, due to strong electron-phonon coupling, it is not always easy to interpret the measured trends in absorption. We will continue this discussion in Section 5, where we will use the McCumber theory to calculate absorption spectra from the measured emission profile at different temperatures. Figure 4 shows the measured variation of the normalized emission cross section with temperature for the (a) E//c and (b) E//a axes, respectively. The peak of the emission in the E//c axis is around 995 nm.
For the E//a axis, the peak of the emission is around ∼971.7 nm at 78 K, but we have chosen to normalize the spectra with respect to the main emission band around 1016 nm. It is clear that, as the temperature increases, the emission peaks get broader and smoother. As an interesting point, it is already noticeable that the E//a emission band centered around 1016 nm possesses a broad bandwidth even at cryogenic temperatures, which we will elaborate on in greater detail soon [47]. First of all, when one looks from a general perspective, we see that the variation of the spectral shape of the emission with temperature is rather similar in all studies, except for a few works in which the resolution of the instrument created weaker lines with broader bandwidths and a smoother emission profile. However, there are some observable differences in terms of the absolute strength of the transitions, especially for the E//c axis, as we try to outline below.

Emission cross section measurements

For a more comprehensive comparison/investigation of the ECS data near the important laser bands, we have prepared Fig. 6, which shows the 1016 nm band of the E//a axis, along with the ∼995.2 nm and ∼1019.5 nm transitions of the E//c axis in greater detail. Starting from the broadband E//a transition (Fig. 6(a)), we see that the transition has a FWHM bandwidth of around 9.7 nm at 78 K, which extends to a FWHM of 11 nm at 125 K, and gets even broader as the temperature increases. Due to the broad bandwidth of this transition, earlier Yb:YLF amplifier work has been mostly based on this emission [23,27,28,67]. At 78 K, the peak value of the emission cross section in this transition is measured to be around 0.85 × 10⁻²⁰ cm², which is in good agreement with the values reported in the literature (∼1 × 10⁻²⁰ cm² [25,59], ∼0.75 × 10⁻²⁰ cm² [62], ∼0.88 × 10⁻²⁰ cm² [47]). For the E//c axis, usually the sharp peak around 995 nm is employed in cryogenic lasing experiments due to the much higher gain in this band [25,62].
We have measured a peak ECS value of 9.3 × 10⁻²⁰ cm² for the 995 nm line, which is in good agreement with the recent measurement (9.4 × 10⁻²⁰ cm²) by Korner et al. [47], but rather higher than what is reported in other studies (5.7 × 10⁻²⁰ cm² in [62] and ∼4 × 10⁻²⁰ cm² in [25,59]). Bensalah et al. measured a peak value of ∼12 × 10⁻²⁰ cm² for the E//c axis 995 nm transition at 12 K, which confirms the strength and sharpness of this peak at cryogenic temperatures [21,55,64], and supports the relatively high 78 K value presented in this work. When we look at the broader 1019.5 nm transition of the E//c axis, we find a FWHM value of 3.7 nm at 78 K, which extends to a FWHM of 6.5 nm at 125 K. The 1019.5 nm band of the E//c axis is not as broad as the 1016 nm E//a band, but we have measured a peak value of 2.7 × 10⁻²⁰ cm² for the 1019.5 nm peak at 78 K, indicating the presence of 2-3-fold higher gain in this band. The peak value we have measured at 1019.5 nm is slightly higher than earlier measurements in the literature (2 × 10⁻²⁰ cm² in [47], 1.8 × 10⁻²⁰ cm² in [25,59]). A peak ECS value of around 4 × 10⁻²⁰ cm² is reported at 12 K in [21,55,64], which again might be seen as a confirmation of the larger ECS value reported in our work. The recent detailed ECS results reported by Cante et al. for Yb:LLF [68], a crystal with quite similar spectroscopic behavior [21], also show a similar spectroscopic strength for the E//c axis. The lower values reported in some of the earlier Yb:YLF studies might be due to the limited resolution of the measurements or local heating of the sample, which could broaden and weaken the transition (even a few K could create a difference for some transitions).
As another note, when we look at the overall strength of the E//c and E//a axes (the integrated emission strength), we have measured a quite strong E//c axis in our measurements, with an integrated spectral intensity ∼2-2.5 times higher than the E//a axis (around 2 at room temperature, increasing monotonically to around 2.5 at 78 K). Looking at the literature, this observation is in good agreement with the spectra shown in [21,47,55,64], but some other studies report a slightly weaker E//c emission (a ∼1.5-2 ratio rather than a ∼2-2.5 ratio between the E//c and E//a emission). Unfortunately, it is rather difficult to get absolute measurements of ECS, and slight differences in the measured ratio of the different axes could also result in observed differences in reported ECS values. Hence, in the coming sections, the accuracy of our ECS measurements will be discussed by comparing the data with what is obtained from ACS using the McCumber relation, and also via comparison with direct small-signal gain measurements in the different axes. So far we have shown graphs of ECS(λ) at selected temperatures (Figs. 4-6), but it is also quite helpful to look at the same data from a different perspective, by plotting ECS(T) at selected wavelengths. For that purpose, Fig. 7 shows the measured variation of ECS with temperature at selected wavelengths (data acquired by vertically slicing the ECS data presented in Figs. 5 and 6). As expected, as the temperature increases, we see a decrease of the ECS value at the peaks of the 78 K emission and a slow increase of the ECS value at the dips of the 78 K emission. Basically, via increasing phonon energy and an increasing rate of electron-phonon coupling, the emission strength is redistributed: peaks get wider, dips start to fill, and the ECS spectra get smoother. If the initial emission peak is very sharp/strong (e.g. the 995 nm emission in the E//c axis), the slope of the decrease is faster.
If the initial strength is moderate (such as the 1019.5 nm line of the E//c axis and the 1016 nm line of the E//a axis), the reduction of the ECS value with temperature is slower, which might be advantageous in the design of efficient cryogenic lasers/amplifiers. To our knowledge, the data in Fig. 7, which provides important basic information for laser design, is presented for the first time for Yb:YLF in the literature. Hence, we close this section with Table 1, which provides equations for modeling the temperature dependence of the emission cross section of Yb:YLF at selected wavelengths. Note that a relatively complex functional form of

σ_e(λ, T) = a_0 + a_1 T + a_2 T^2 + a_3 T^3 + a_4 T^4 + a_5 T^5   (8)

has been chosen, and higher polynomial orders are included only if they are required to obtain a good fit to the experimentally measured data (a standard Boltzmann distribution formula did not provide a good match to some of the measured trends and hence is not preferred). Note again that the coefficients a_0-a_5 in Eq. (8) are wavelength-dependent fit parameters. Emission & absorption cross-check In this section, we will cross-check our independently obtained emission and absorption cross section results using the McCumber relation. To start with, Fig. 8 shows the calculated ACS curves (obtained from the ECS data presented in the earlier section using Eq. (2)). In general, the observed trend in the curves matches the data in the literature quite well. For a more direct comparison, we can compare the estimated ACS data with the direct ACS measurements that we have taken at 78 K and 300 K. For that purpose, Fig. 9 shows the measured ACS data at 78 K and 300 K along both axes together with the curves estimated from ECS (directly measured ACS data is shown with open markers). As a first observation, note that the ACS estimation via the McCumber equation is quite noisy and incorrect for wavelengths below 950 nm.
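Returning to Eq. (8): the wavelength-by-wavelength polynomial fit can be sketched numerically as below. The sampled ECS values are hypothetical placeholders loosely shaped like the 995 nm E//c trend, not the tabulated Table 1 coefficients.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical digitized ECS values (in 10^-20 cm^2) for the 995 nm E//c
# peak -- illustrative placeholders, NOT the tabulated Table 1 data.
T_K = np.array([78.0, 100.0, 125.0, 150.0, 200.0, 250.0, 300.0])
ecs = np.array([9.3, 7.1, 5.2, 4.0, 2.6, 1.8, 1.3])

# Least-squares fit of Eq. (8): sigma_e(T) = a0 + a1*T + ... + a5*T^5.
# Polynomial.fit works on a scaled domain internally, which keeps the
# high-order fit numerically well conditioned; convert() recovers the
# plain a0..a5 coefficients of Eq. (8).
fit = Polynomial.fit(T_K, ecs, deg=5)
a = fit.convert().coef  # a[0] = a0, ..., a[5] = a5

def sigma_e(temp_K):
    """Evaluate the Eq. (8) model at a given temperature (K)."""
    return fit(temp_K)

print("a0..a5 =", a)
print(f"sigma_e(125 K) ~ {sigma_e(125.0):.2f} x 10^-20 cm^2")
```

In practice one such fit is produced per tabulated wavelength, with the polynomial order raised only until the residuals flatten out, as the text describes.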
Basically, this region is the lower-wavelength edge of the ECS data, where the SNR of the ECS is low; combined with the limited dynamic range of the spectrometer and the exponential nature of the McCumber theory, this creates noisy data below 950 nm. Above 950 nm, we observe a relatively good fit to the measured ACS data at 295 K, until one reaches the longer edge of absorption around 1025 nm. In a similar manner, the 78 K curve matches the experimental data very well in the 950-1000 nm range for the E//a axis and the 950-985 nm range for the E//c axis. Essentially, once the measured ACS gets below 0.01 × 10^−20 cm^2, a measurement fluctuation of 0.1% in transmitted power creates up to ±5% error in the measured ACS values. As a result, as also discussed in previous literature, the ACS curves obtained from ECS measurements using the McCumber theory provide a better estimate for absorption on the longer-wavelength side of the absorption curves, whereas, on the short-wavelength side, direct absorption measurements provide much higher accuracy in ACS determination. As an overall observation, we see that the directly measured ACS data and the ACS data calculated from the measured ECS curve fit each other relatively well in the wavelength range where their agreement is expected, for both the 78 K and 295 K measurements. To cross-check the ACS and ECS curves via the McCumber equation at other temperatures, we have used the measured variation of the absorption data at selected temperatures (presented earlier in Fig. 3). Figure 10 shows the experimentally measured variation of ACS (open markers) together with the McCumber theory estimate based on the ECS measurements (solid curves). It is clear that a relatively good fit exists between the experimentally measured and calculated values, which provides a further sanity check for the earlier measured ECS data. We would like to finalize this subsection with Fig.
11, which shows the calculated variation of the Yb:YLF absorption cross section with temperature on the long-wavelength side of absorption, between 975 nm and 1050 nm. The graph is in logarithmic scale, as the absorption at this longer edge decreases sharply with temperature and is negligible for many applications. On the other hand, solid-state optical refrigeration studies with Yb:YLF [32][33][34][35][36][37][38], which are based on the anti-Stokes emission of Yb:YLF crystals, ideally need to achieve absorption at the longer edge (the optimum pump wavelength is around ∼1020 nm) [69]. Moreover, absorption in this region, which overlaps with the regular lasing region of Yb:YLF, appears as self-absorption loss in laser cavities and should be included in laser modelling. Figure 12 provides the same information from another perspective and shows the variation of ACS with temperature at several selected wavelengths. If we investigate the data in terms of their strength near the main lasing bands, we see that the absorption around the 995 nm peak of Yb:YLF is strong even at 78 K (2 cm of 1% Yb-doped YLF absorbs around 30%), and the laser is three-level even at cryogenic temperatures (see also the Boltzmann occupancy percentages presented in Fig. 2(e)). At the 1019.5 nm transition of the E//c axis and the 1016 nm transition of the E//a axis, the ACS curves are more than two orders of magnitude weaker at 78 K, enabling almost four-level lasing at cryogenic temperatures on these transitions. Note that our measurements for the 1020 nm E//c axis absorption of Yb:YLF match quite well with the earlier curve reported in [70]. To our knowledge, it is hard to find quantitative information in the literature on the variation of Yb:YLF ACS with temperature. Hence, as we did earlier with the ECS data, we close this section with Table 2, which provides equations for modeling the temperature dependence of the absorption cross section of Yb:YLF at selected wavelengths.
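The McCumber conversion used for Figs. 8-11 can be sketched as follows; this is a minimal single-wavelength version of the standard reciprocity formula, where the zero-phonon-line wavelength (~971.7 nm, quoted later in the text) and the upper/lower partition-function ratio (taken as 1 here) are assumed placeholder values:

```python
import math

H = 6.62607015e-34   # Planck constant (J*s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def acs_from_ecs(lam_nm, sigma_e, temp_K, zl_nm=971.7, zu_over_zl=1.0):
    """McCumber-reciprocity estimate of the absorption cross section from
    the emission cross section at one wavelength. zl_nm is the
    zero-phonon-line wavelength; zu_over_zl is the partition-function
    ratio, set to an assumed placeholder of 1 in this sketch."""
    e_photon = H * C / (lam_nm * 1e-9)    # photon energy (J)
    e_zeroline = H * C / (zl_nm * 1e-9)   # zero-line energy (J)
    return sigma_e * zu_over_zl * math.exp((e_photon - e_zeroline) / (KB * temp_K))

# Beyond the zero line the exponential suppresses absorption strongly at
# 78 K, reproducing the steep long-wavelength ACS roll-off of Fig. 8.
sa = acs_from_ecs(1019.5, 2.7e-20, 78.0)
print(f"estimated ACS at 1019.5 nm, 78 K: {sa:.2e} cm^2")
```

The exponential factor is also why the estimate amplifies noise on the short-wavelength side: a small error in ECS there is multiplied by a very large exponential term.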
Note that a similar functional form of

σ_a(λ, T) = b_0 + b_1 T + b_2 T^2 + b_3 T^3 + b_4 T^4 + b_5 T^5 + b_6 T^6   (9)

has been chosen, where the coefficients b_0-b_6 in Eq. (9) are wavelength-dependent fit parameters. Gain cross-section calculations In this subsection, we present calculated gain cross section (GCS) curves for Yb:YLF. Figure 13, which shows the variation of the GCS spectra with temperature for both axes at an assumed inversion level of 25%, can be used to initiate the discussion. As we have already discussed while presenting the ECS results, the 1016 nm emission band of the E//a axis provides a bandwidth of around 10 nm, which is already sufficiently broad to support sub-150-fs pulses either via mode-locking or via amplification. Note that the 1016 nm band is not only broad, but it also possesses a relatively flat gain profile, which minimizes gain-narrowing effects in amplifiers [27]. If the whole available gain bandwidth of the E//a axis could be used, for example in a Kerr-lens mode-locked cryogenic Yb:YLF laser, there is potential to generate sub-100-fs pulses. In contrast, the gain in the E//c axis of Yb:YLF is much higher, at the expense of reduced bandwidth. As an example, the 1019.5 nm emission band, which has around 2-3-fold higher gain than the 1016 nm band of the E//a axis, has a FWHM of ∼4 nm, but this can already support sub-300-fs pulses. As we can see, the 992.5 and 995 nm peaks of Yb:YLF are quite narrow and have very high gain at 78 K. Considering the additional advantage of a lower quantum defect (a 995 nm laser pumped at 960 nm), this line becomes very attractive for cw lasing [25], cw amplification [71], or Q-switched lasing with ns-level pulses [62]. One important point to underline here is that, in our earlier work, we have seen that, under thermal load, the average temperature of the Yb:YLF crystal increases by around 0.1-0.2 K per 1 W of absorbed power [30].
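The GCS curves discussed above follow from the standard quasi-three-level combination of emission and absorption cross sections. A minimal sketch, using assumed illustrative cross-section values (in units of 10^-20 cm^2, loosely based on the 78 K numbers quoted in the text) rather than the measured spectra:

```python
def gain_cross_section(sigma_e, sigma_a, beta):
    """Quasi-three-level gain cross section at fractional inversion beta:
    sigma_g = beta * sigma_e - (1 - beta) * sigma_a."""
    return beta * sigma_e - (1.0 - beta) * sigma_a

# Illustrative values (10^-20 cm^2), assumed for this sketch.
se_995, sa_995 = 9.3, 0.9       # strong residual absorption: 3-level line
se_1019, sa_1019 = 2.7, 0.01    # weak residual absorption: almost 4-level

for beta in (0.0, 0.1, 0.25):
    print(f"beta={beta:.2f}: "
          f"g(995 nm)={gain_cross_section(se_995, sa_995, beta):+.3f}, "
          f"g(1019.5 nm)={gain_cross_section(se_1019, sa_1019, beta):+.3f}")
```

With these numbers the 1019.5 nm line reaches transparency at a tiny inversion while the 995 nm line needs substantially more, mirroring the three-level versus near-four-level behavior described in the text.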
Hence, as mentioned earlier, under typical cryogenic laser operation, the average temperature of the Yb:YLF crystal is typically in the 125-150 K range (depending on absorbed pump power, pump spot size, etc.). It is therefore important to investigate the gain spectra of Yb:YLF in greater detail at slightly elevated temperatures. For that purpose, Fig. 14 shows the calculated gain of Yb:YLF at a temperature of 125 K, for inversion levels between 0 and 100%. Note that, at this temperature, the gain spectra of Yb:YLF become much smoother, at the expense of reduced strength. It is interesting to see that, at large inversion levels beyond 75%, the zero-phonon line around 971.7 nm also shows lasing potential for the E//a axis: ideally, a 960 nm pumped 971.7 nm cryogenic Yb:YLF laser would operate with a quantum defect of only around 1.2%. As another observation, note that even the 993.5-995 nm transitions are much smoother at these temperatures (∼125 K), and they also seem to have the potential to support sub-500-fs pulses upon mode-locking or amplification. Small signal gain measurements In this final section, we present direct gain measurement results for Yb:YLF and compare them with the gain cross section (GCS) estimates presented in the previous section. This provides another, alternative approach for investigating the error bars of the earlier emission measurements. To start with, Fig. 15 shows the measured gain spectra of Yb:YLF at (a) 78 K and (b) 295 K along with the calculated GCS profiles at the same temperatures, for an inversion level of 75%. This inversion level is close to the estimated inversion in the first few millimeters of the crystal along its length; the inversion then decreases as the beam propagates deeper into the gain medium due to the lower amount of absorption. The small Cr:LiSAF beam (1.3 mm) also passes through the center of the larger pump spot (3.2 mm) and therefore sees a sweet spot in terms of gain.
When we look at the measured gain profile at 78 K, we first notice that the measured gain in the E//c axis is much higher than the gain in the E//a axis, which confirms our estimates based on the earlier spectroscopic measurements. [Fig. 15 caption: Comparison of the measured variation of small-signal gain with the calculated gain cross section spectra in Yb:YLF for the E//c and E//a axes, at temperatures of (a) 78 K and (b) 295 K. The data is taken using a 2 cm long, 1% Yb-doped YLF sample at an incident pump energy of 0.5 J and 4 J for the 78 K and 295 K cases, respectively. An inversion level of 75% was used in the GCS calculation (this is the estimated average inversion level in the first few millimeters of the crystal). An average inversion of around 50% and 20% is estimated through the whole length of the crystal for the 78 K and 300 K cases, respectively.] Moreover, the measured gain spectra match the estimated GCS profiles relatively well for both polarizations, again confirming the flatness of the spectral response of the measurement system. Note that, for the E//c case, the measured gain is slightly higher around the 1019.5 nm peak than what the GCS curve estimates. We have also measured a single-pass gain above 50 near 995 nm, a much higher value than the GCS curve predicts. This data point is not shown in Fig. 15 on purpose (it would otherwise scale down all the other data), but it can be observed in Fig. 16, where we plot the variation of the measured small-signal gain with temperature at selected wavelengths for both axes. We believe the discrepancy between the measured and estimated gain around the 995 nm peak is again potentially due to the limited resolution of our emission-measurement spectrometer. A very high gain (43 dB in 14 passes) was also reported recently in a cw amplifier employing the 995 nm line of Yb:YLF (a 2.6 mW cw signal was amplified to the 40 W level), which confirms the strength and sharpness of this line at cryogenic temperatures [71].
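A back-of-the-envelope way to relate such a measured single-pass gain to an effective gain cross section is G = exp(N·σ_g·L). The sketch below uses the crystal length (2 cm) and gain (~50 near 995 nm at 78 K) from the text, but the ion density for 1% Yb doping is an assumed ~1.4 × 10^20 cm^-3, and a uniform inversion is assumed, neither of which is stated in the text:

```python
import math

# Link between a measured single-pass small-signal gain G and an
# effective (inversion-averaged) gain cross section: G = exp(N*sigma_g*L).
N_YB = 1.4e20   # ions/cm^3 for 1% Yb:YLF -- assumption, not from the text
L_CM = 2.0      # crystal length (cm), from the text
G = 50.0        # measured single-pass gain near 995 nm at 78 K

sigma_g_eff = math.log(G) / (N_YB * L_CM)  # effective sigma_g in cm^2
print(f"effective sigma_g ~ {sigma_g_eff:.2e} cm^2")
```

Because the real inversion decays along the crystal length, this single-number estimate averages over the inversion profile and will sit well below the peak GCS at the entrance face.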
It is also interesting to see the sharp decrease of gain with temperature, especially for the 995 nm transition: the measured gain decreases from a value of around 50 at 78 K to around 10 at 100 K and to around 4 at 125 K. It is important to note that the measured decrease in gain with temperature involves the coupling of many effects: (i) decreasing ECS with temperature, (ii) increasing absorption at the seed wavelength with temperature (increasing self-absorption losses), (iii) decreasing absorption at the pump wavelength, and (iv) increasing fluorescence lifetime with temperature due to the increasing role of radiation trapping. We have recently shown that the strong temperature dependence of the 995 nm transition can be used to accurately estimate the temperature of a Yb:YLF crystal with sub-1-K sensitivity [30]. In closing, we would like to briefly discuss Fig. 15(b), in which we observe a mismatch between the measured gain spectra and the calculated GCS curve at room temperature. Note that above 1015 nm the results agree very well; this is the region where the absorption band does not really overlap with the emission band, and self-absorption effects are minimal. On the other hand, we see that, despite the care taken to minimize self-absorption effects in our ECS measurements, the estimated gain profile does not match the measured gain, especially below 1000 nm. Hence, as discussed in the literature, the best way to obtain the correct ECS curve at elevated temperatures is to use absorption data along with the McCumber theory. Of course, this approach has its own problems, especially in estimating the ECS data at longer wavelengths. Another approach would be to use samples with Yb-doping levels of 0.1% or lower; however, the SNR might be an issue at such low doping levels.
In our work here, our focus was on cryogenic temperatures (78-150 K), at which the absorption and emission bands do not overlap as much due to the lower phonon energies; as a result, the measured ECS curves do not suffer from radiation-trapping errors as much (as confirmed in Fig. 15(a)). On the other hand, the measurement in Fig. 15(b) shows that the ECS and GCS curves presented in this work at elevated temperatures might show lower values in the shorter-wavelength region of the spectrum due to ECS/ACS overlap. Conclusions In conclusion, we have systematically studied the temperature dependence of absorption, emission, and gain in Yb:YLF in the 78-300 K range. Combining all three in a single study enabled comparison of independently taken experimental results with each other and helped us pinpoint possible error sources. To our knowledge, direct measurements of the polarization, wavelength, and temperature dependence of small-signal gain in Yb:YLF are presented for the first time in this study. We further provided simple analytic formulas for the measured temperature dependence of the absorption and emission cross sections. We have seen that the E//a axis of Yb:YLF shows broad gain bandwidth at the expense of lower gain, whereas the E//c axis combines moderate gain bandwidth with high gain. The spectroscopic results show that future generations of cryogenic Yb:YLF systems have the potential to combine sub-250-fs pulse durations with kW-level average powers.
Prostate Osteoblast-Like Cells: A Reliable Prognostic Marker of Bone Metastasis in Prostate Cancer Patients

The main aim of this study was to investigate the putative association among the presence of prostate cancer cells, defined as prostate osteoblast-like cells (POLCs), and showing the expression of typical morphological and molecular characteristics of osteoblasts, the development of bone metastasis within 5 years of diagnosis, and the uptake of 18F-choline evaluated by PET/CT analysis. To this end, prostate biopsies (n = 110) were collected, comprising 44 benign lesions and 66 malignant lesions. Malignant lesions were further subdivided into two groups: biopsies from patients that had clinical evidence of bone metastasis (BM+, n = 23) and biopsies from patients that did not have clinical evidence of bone metastasis within 5 years (BM−, n = 43). Paraffin serial sections were obtained from each specimen to perform histological classification and immunohistochemical (IHC) analysis. Small fragments of tissue were used to perform ultrastructural and microanalytical investigations. IHC demonstrated the expression of markers of epithelial-to-mesenchymal transition (VIM), bone mineralization, and osteoblastic differentiation (BMP-2, PTX-3, RUNX2, RANKL, and VDR) in prostate lesions characterized by the presence of calcium-phosphate microcalcifications and high metastatic potential. Ultrastructural studies revealed the presence of prostate cancer cells with an osteoblast phenotype close to microcalcifications. Noteworthy, PET/CT analysis showed higher uptake of 18F-choline in BM+ lesions with high positivity (≥300/500 cells) for RUNX2 and/or RANKL immunostaining. Although these data require further investigations into the molecular mechanisms of POLC generation and their role in bone metastasis, our study can open new and interesting perspectives in the management of prostate cancer patients.
The presence of POLCs along with prostate microcalcifications may become a negative prognostic marker of the occurrence of bone metastases. Introduction Metastasis to bone is a common feature in advanced prostate cancer (PCa) patients. PCa is one of the most frequent cancers in men and represents a great public health problem, with a total of 265,000 new diagnoses every year in both Europe and the United States of America [1]. Frequently, prostate cancer patients show osteoblastic bone metastatic lesions at diagnosis [1,2]. The evidence that prostate cancer cells in patients enter the circulation in large numbers but still preferentially colonise the bone has a number of implications. Prostate cancer cells have the ability to adhere to the main proteins of the extracellular matrix or of the bone marrow [2]. Also, the colonisation of bone by prostate cancer cells suggests that metastatic cells have morphological and/or molecular characteristics that make them capable of surviving in the bone [2][3][4]. The type of bone metastases formed in prostate cancer is a reflection of the local interaction between tumour cells and the bone remodeling system, a complex mechanism which remains to be fully characterized. Bone metastases in prostate cancer are most often osteoblastic (involving the deposition of newly formed bone), but can also be osteolytic (characterized by destruction of normal bone) or mixed. The development of either osteolytic or osteoblastic lesions results from functional interactions between tumour cells and osteoclasts or osteoblasts, respectively [5]. However, the mechanisms responsible for the formation of prostate cancer metastasis to bone are complex and certainly involve both osteoclast and osteoblast activity [6]. In this context, the binary classification between osteoblastic and osteolytic lesions represents two extremes of a continuum which involves dysregulation of the normal bone remodeling process and which is yet to be fully understood.
A detailed characterization of the osteoblastic-osteolytic spectrum and of premetastatic tumour cells could therefore pave the way both for the identification of early markers of bone metastasis and for novel drug targets to improve the quality of life of patients with advanced prostate cancer. As concerns the origin of metastatic cells, different hypotheses have been formulated. For a long time, the main theories of the formation of bone metastases contemplated the occurrence of specific genetic changes in primary cancer cells that thus acquired the ability to spread to and thrive in distant organs [7,8]. In this context, the epithelial-to-mesenchymal transition (EMT) could represent the key biological process adopted by epithelial cancer cells to promote tissue dissemination [9]. Of note, in our recent study, we demonstrated a putative association between the occurrence of EMT and the development of breast cancer cells showing an osteoblast-like phenotype in lesions with microcalcifications [10,11]. In addition, we observed that the presence of breast osteoblast-like cells (BOLCs) in infiltrating breast cancer was associated with the formation of bone metastatic lesions within 5 years from diagnosis [12,13]. The main aim of this study was to investigate the putative association among the presence of prostate cancer cells, defined as prostate osteoblast-like cells (POLCs) and showing the expression of typical morphological and molecular characteristics of osteoblasts, the development of bone metastasis within 5 years of diagnosis, and the uptake of 18F-choline evaluated by PET/CT analysis. Collection of Prostate Samples. In this study, we enrolled 110 patients undergoing prostate biopsies. From this selection, we collected prostate biopsies from each patient and, when available, data from PET/CT analysis. The study was approved by the Institutional Ethical Committee of the "Policlinico Tor Vergata."
Experimental procedures reported here were performed in agreement with the Code of Ethics of the World Medical Association (Declaration of Helsinki). All patients signed the informed consent prior to surgical procedures. From each sample, paraffin serial sections were used for both histological and immunohistochemical investigation. Also, 1 mm^3 fragments of tissue were studied by transmission electron microscopy and microanalytical analysis. Exclusion criteria were a history of previous or concomitant other neoplastic diseases, autoimmune diseases, chronic viral infections (HBV, HCV, and HIV), and any antitumoral treatment received before biopsy. Immunohistochemistry. To study the immunophenotypical profile of prostate metastatic cells, we performed immunohistochemical reactions to investigate the expression of the following biomarkers: vimentin (EMT), BMP-2, PTX-3, RUNX2, RANKL, and VDR (mineralization process). For antigen retrieval, 3 μm thick paraffin sections were treated with citrate pH 6.0 or EDTA citrate pH 7.8 buffers (95°C for 30 min). Then, the primary antibodies listed in Table 1 were incubated for 1 hour at room temperature. An HRP-DAB Detection Kit (UCS Diagnostic, Rome, Italy) was used to reveal the reaction of the primary antibodies with their specific targets. The immunohistochemical signal was assessed independently by two investigators by counting the number of positive cancer cells (out of a total of 500 in randomly selected regions). 18F-Choline PET/CT Analysis. Among the patients enrolled in the study, 11 were subjected to 18F-methylcholine (18F-choline) PET/CT analysis. Results of 18F-choline PET/CT were collected to verify a possible correlation between 18F-choline uptake in prostate tumours and the presence of POLCs. 18F-choline PET/CT analysis was performed as previously described [19,20]. For each patient, the standardized uptake values SUVmax and SUVaverage were recorded. Histological Classification.
Prostate biopsies were classified into 44 benign lesions (BL) and 66 malignant lesions according to the EAU-ESTRO-SIOG Guidelines 2017 [23]. We subdivided the malignant lesions into those (BM+, n = 23) taken from patients with clinical evidence of bone metastasis and those (BM−, n = 43) from patients without clinical evidence of bone metastasis after 5 years from diagnosis. From the radiological point of view, all metastatic sites showed typical characteristics of osteoblastic lesions. Calcifications were present in 38 out of 110 prostate biopsies. In particular, we observed psammoma bodies in 32% of BL, 87% of BM+, and 79% of BM−. Patient baseline characteristics are reported in Table 2. EMT Characterization. Immunohistochemical analysis of vimentin expression was performed in order to evaluate the number of prostate cells that acquired a mesenchymal phenotype (Figures 1(a)-1(c)), as shown in a recent study [24] (Figure 2(g)). In particular, the signal in BM+ appeared very intense both in the nucleus and in the cytoplasm (Figure 2(h)), while it was less intense and mainly nuclear in BM− (Figure 2(i)). Expression of POLC Biomarkers in Gene Expression Datasets. We examined the expression of the EMT and bone markers studied by IHC in public datasets comprising gene expression profiling data from patients with primary tumours or metastatic castration-resistant prostate cancer (CRPC). Individual gene comparisons did not show a univocal behaviour (Figure 3(a)). Only VDR expression was significantly upregulated in CRPC compared to primary tumours. Interestingly, however, the small set of genes was able to discriminate most of the primary tumours from metastatic CRPCs in unsupervised clustering (Figure 3(b)). Furthermore, analysis of a second dataset with annotated metastatic sites showed that the gene set expression was remarkably higher in tumour specimens taken from bone metastases compared to primary tumours and other metastatic sites (Figure 3(c)). (Figure 4(d)).
Finally, significant differences in VDR expression were observed (Micro+ 192.00 ± 18.02 vs Micro− 109.30 ± 19.14; p = 0.001) (Figure 4(e)). Ultrastructural Characterization of Prostate Cancer Cells. TEM analysis allowed us to characterize the ultrastructure of prostate cells in malignant lesions. Specifically, we observed both cuboidal and large spindle-shaped cells with abundant clear cytoplasm in BM+ (Figure 5(a)). Moreover, in these lesions, we identified several calcifications and prostate cancer cells with the morphological appearance of osteoblasts, containing cytoplasmic electron-dense granules made of HA (Figure 5(b)). In addition, EDX microanalysis demonstrated that all calcifications detected here were made of calcium phosphate (hydroxyapatite) (Figure 5(b)). 18F-Choline PET/CT Analysis. We collected PET/CT data from 11 patients: 5 BM+ and 6 BM− (Figure 5(c)). Despite the low number of patients, we found significant differences in both SUVmax and SUVaverage between BM+ and BM−. Discussion Prostate metastasis to the bone most often results in osteoblastic lesions, though it is known that prostate bone metastases can display both blastic and lytic characteristics during the early phases of their formation [25]. In addition, there is evidence that during the early phases of osteoblastic metastasis formation it is possible to observe osteolytic lesions, suggesting an overall increase of bone remodeling at these sites. The pathophysiology of bone metastases is frequently explained by the theory of the vicious cycle, proposed for the first time by Mundy and Guise [26]. According to this theory, cancer cells resident in bone cause bone destruction because they are capable of stimulating osteoclast activity. In return, cancer cells receive positive feedback from humoral factors released by the bone microenvironment during bone destruction and remodeling [27].
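Group comparisons such as the VDR example above (mean ± error with a p value) can be sanity-checked from the summary statistics alone. The sketch below computes Welch's t statistic, under two assumptions not stated in the text: that the reported ± values are standard errors (SEMs), and that the Micro+/Micro− group sizes are approximately 38 and 72 (taken from the calcification counts):

```python
import math

def welch_t(mean1, sem1, n1, mean2, sem2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    computed directly from group means and standard errors (SEMs)."""
    t = (mean1 - mean2) / math.sqrt(sem1 ** 2 + sem2 ** 2)
    df = (sem1 ** 2 + sem2 ** 2) ** 2 / (
        sem1 ** 4 / (n1 - 1) + sem2 ** 4 / (n2 - 1))
    return t, df

# VDR-positive cell counts from the text: Micro+ 192.00 +/- 18.02 vs
# Micro- 109.30 +/- 19.14; n = 38 / 72 are assumed group sizes.
t, df = welch_t(192.00, 18.02, 38, 109.30, 19.14, 72)
print(f"t = {t:.2f}, df ~ {df:.0f}")
```

A t statistic above ~3 with ~100 degrees of freedom is consistent with the reported p = 0.001 level of significance.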
Indeed, it is widely accepted that the bone microenvironment is crucial to the success of cancer cells in bone. In a recent study, we described for the first time the characteristics of prostate cells involved in the production of prostate calcifications, demonstrating their similarity with osteoblasts [24]. In addition, our research group described the presence of osteoblast-like cells in breast cancer (BOLCs), showing a correlation between the appearance of BOLCs in primary lesions and the development of bone metastases. Based on these studies, the main aim of this work was to investigate the possible correlation between the presence of prostate cancer cells showing expression of typical morphological and molecular markers of osteoblasts and the development of bone metastasis in prostate cancer patients within 5 years from diagnosis of the primary lesion. To this end, we collected 110 prostate biopsies (44 benign and 66 malignant lesions). Malignant lesions were subdivided into biopsies from patients with clinical evidence of bone metastasis (BM+, n = 23) and those from patients without clinical evidence of bone metastasis (BM−, n = 43). As already reported by Scimeca et al., we found a significant correlation between vimentin expression, one of the most important markers of mesenchymal cells [14], and the presence of prostate osteoblast-like cells (POLCs). Specifically, our data showed a significant increase of positive cells in the prostate cancer of the BM+ group as compared with BM−.
In addition, we proved that primary prostate cancer lesions of BM+ patients were characterized by the expression of osteogenic molecules able to induce osteoblast differentiation and to increase osteoblast functions such as mineralization. Among them, BMP-2 is a potent inducer of bone formation through the stimulation of osteoblast differentiation. BMP-2 exerts this effect via two types of serine/threonine kinase receptors: BMP-2 binds the type II receptor, which subsequently activates the type I receptor by direct association [28]. Our results showed an increase of BMP-2 expression in prostate malignant lesions. Conversely, the absence of significant differences in BMP-2 expression between BM+ and BM− suggests that it could be involved in the early phases of cancer transformation rather than in the metastatic process. In support of this, several studies demonstrated the ability of BMP-2 to induce malignant transformation of epithelial tissues [29][30][31]. However, we also demonstrated an association between BMP-2 expression and the presence of prostate calcifications, regardless of the lesion type. Thus, BMP-2, in association with the EMT phenomenon, may help induce mesenchymal-like cells to acquire an osteoblast phenotype. As concerns PTX-3, a multifunctional glycoprotein produced by a variety of cells [32,33], our results displayed a significant correlation between the presence of PTX-3-positive prostate cells and bone metastasis formation. Also, it is important to emphasize that the BM− group showed the same number of PTX-3-positive cells as BL, suggesting that the presence of PTX-3-positive cells could represent a reliable predictive element for the development of bone metastasis from prostate cancer. These data are in line with recent studies that demonstrated the involvement of PTX-3 in osteoblast proliferation, differentiation, and function [34][35][36], and in the formation of bone metastasis from breast cancer.
To further characterize the phenotype of POLCs, we investigated the expression of the main markers of osteoblasts: RUNX2, RANKL, and VDR. RUNX2 is the first transcription factor required for the determination of the osteoblast lineage [37]. In particular, RUNX2 is detected first in preosteoblasts, and its expression is upregulated during the early phases of osteoblast differentiation. In line with this, our results displayed an increase of RUNX2-positive prostate cells in malignant lesions with respect to BL, but no difference was observed between BM+ and BM−. Therefore, the acquisition of RUNX2 expression by prostate cells seems to be linked to cancer transformation rather than to the metastatic process. In agreement with the physiological role of RUNX2 in osteoblast function [38], we did not observe an increase of RUNX2-positive cells in Micro+ with respect to Micro− lesions. Indeed, mature osteoblasts lose the expression of RUNX2 during the mineralization phase of bone formation. Conversely, analysis of RANKL and VDR showed a putative correlation among the presence of RANKL- and/or VDR-positive prostate cancer cells, bone metastasis formation, and microcalcifications. As regards the formation of bone metastasis, the presence of RANKL-positive prostate cancer cells can trigger osteoclast activity by binding to the osteoclast receptor RANK [39]. Indeed, RANKL is a type II membrane protein expressed by osteoblasts that is able to induce osteoclast proliferation and function. In addition, at the primary lesion site, RANKL expression can reflect the presence of such cells. Analysis of this set of genes in patients with primary and metastatic prostate cancer further showed that deregulated expression of these markers of EMT, bone mineralization, and osteoblastic differentiation occurred preferentially in the setting of metastatic disease, and particularly at metastases in bone, further supporting their relevance as adverse prognostic markers.
It is important to highlight that in this study, POLCs were also characterized from the ultrastructural point of view. In particular, we observed the presence of cytoplasmic vesicles containing HA granules in prostate cancer cells showing an osteoblast phenotype (POLCs) [24,[40][41][42]. Of note, although preliminary, our data showed a significant correlation between the uptake of 18F-choline PET/CT and the presence of POLCs in prostate cancer tissues. If confirmed in a larger patient cohort, this evidence could provide the scientific rationale for the development of algorithms able to predict the metastatic potential of primary prostate cancer lesions by 18F-choline PET/CT analysis [43]. This study proposes a new cell type generated by a process of cell transdifferentiation and related to the formation of bone metastasis: the POLCs. Although our data require further investigations into the molecular mechanisms of both POLC generation and metastasization to the bone, this study opens new and interesting perspectives for the management of prostate cancer patients. The presence of POLCs could become a prognostic marker for the occurrence of bone metastatic disease.

Conclusion
The clinical course of metastatic bone disease in prostate cancers is often long, with patients experiencing sequential skeletal complications over a period of several years. These include bone pain, fractures, hypercalcemia, and spinal cord compression, all of which may profoundly impair the patient's quality of life. In addition, once prostate tumour cells are engrafted in the skeleton, curative therapy is no longer possible and palliative treatment becomes the only option. Thus, the identification of early markers of bone metastasis, and especially the characterization of the cells involved in the metastatic process, can lay the foundation for the identification of new tools for monitoring, prevention, or cure of bone metastatic diseases, providing support to physicians in the management of prostate cancer patients.
In this context, positron emission tomography (PET)/computed tomography (CT) has emerged as a significant and promising staging modality for primary, recurrent, and metastatic prostate cancer. More importantly, the identification of highly sensitive and specific radiotracers can expand the therapeutic/diagnostic perspectives for prostate cancer patients, "opening the way" for the development of new theranostic approaches. PSMA PET/CT ligands labelled with 18F and 68Ga have certainly revolutionized the management of metastatic prostate cancer, selecting patients who may benefit from targeted systemic radionuclide therapy. In a nuclear oncology theranostic design, 68Ga-PSMA already constitutes the diagnostic positron-emitting counterpart of the beta-emitter Lutetium-177 PSMA (177Lu-PSMA) [44] and the alpha-emitter Actinium-225 PSMA (225Ac-PSMA) [45]. Finally, the results reported here about the phenotypic characterization of POLCs could provide a scientific rationale for the development of theranostic anti-POLC radiomolecules for the cure and prevention of prostate cancer bone metastasis.

Data Availability
The data used to support the findings of this study are included within the article. Gene expression data from two studies in prostate cancer patients (references [21] and [22]) were retrieved from the cBioPortal platform. Expression of the selected genes was compared between primary tumours and metastatic CRPC and, for the second dataset, among primary and different metastatic sites.

Conflicts of Interest
The authors declare that there are no potential conflicts of interest relating to the manuscript.
Deep learning analysis of blood flow sounds to detect arteriovenous fistula stenosis

For hemodialysis patients, arteriovenous fistula (AVF) patency determines whether adequate hemofiltration can be achieved and directly influences clinical outcomes. Here, we report the development and performance of a deep learning model for automated AVF stenosis screening based on the sound of AVF blood flow, using supervised learning with data validated by ultrasound. We demonstrate the importance of contextualizing the sound with location metadata, as the characteristics of the blood flow sound vary significantly along the AVF. We found the best model to be a vision transformer trained on spectrogram images. Our model can screen for stenosis at a performance level comparable to that of a nephrologist performing a physical exam, but with the advantage of being automated and scalable. In a high-volume, resource-limited clinical setting, automated AVF stenosis screening can help ensure patient safety via early detection of at-risk vascular access, streamline the dialysis workflow, and serve as a patient-facing tool to allow for at-home self-screening.
INTRODUCTION
The arteriovenous fistula (AVF) is often touted as the "lifeline" for dialysis patients. According to the National Kidney Foundation (NKF), vascular access is globally ranked as a top priority for dialysis patients, healthcare providers, and clinical research 1. Preserving dialysis access is a high priority for providers and patients because the consequences of AVF dysfunction and subsequent access failure significantly contribute to patient morbidity and healthcare costs. Unfortunately, AVF dysfunction is not uncommon. One 5-year study from 2018 that analyzed AVF failures found a cumulative patency loss rate of 19.7% and 33.3% during the early and late period, respectively 2. According to the United States Renal Data System (USRDS), from 2016-2018, the cumulative incidence of loss of primary unassisted patency at 1 year was 51.8%, the loss of primary assisted patency was 19.0%, and the loss of secondary patency was 3.3%. It is well documented that the most common cause of AVF dysfunction and subsequent failure is stenosis and thrombosis [3][4][5][6]. One study found that the incidence of stenosis is 4.6-10.8%, and the incidence of thrombosis is 2.3-7.7% 7. Nearly all thrombosed AVFs have an underlying stenotic lesion 8. While patients on hemodialysis are in a general prothrombotic state, which increases the risk for stroke and ischemic heart disease, studies have found that vascular access-related complications are the leading cause of hospitalizations among dialysis patients 9. Once there is access site thrombosis, urgent intervention is required for salvage in order to prevent permanent loss of the AVF.
Vascular access complications, such as stenosis and thrombosis, are significant drivers of resource utilization, cost, morbidity, and mortality [10][11][12][13][14]. Screening for AVF stenosis improves the longevity of AVFs, reduces costs for healthcare systems, and improves the quality of life for patients. The current Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines recommend screening for AVF stenosis through "the examination and evaluation of the access by means of physical examination to detect clinical signs that suggest the presence of AV access flow dysfunction" 15. A lesion is considered clinically significant if it contributes to clinical signs and symptoms, such as arm swelling, prolonged bleeding after dialysis, or changes in the access bruit (rumbling sound) or thrill (tactile sensation), regardless of sustained changes in measurements such as access flow or venous pressures [16][17][18].

Auscultation (i.e., listening for internal body sounds) is a noninvasive method, compared to digital subtraction angiography or venous cannulation, and more convenient compared to ultrasound for detecting abnormal blood flow 19. Additionally, a change in access bruit or thrill may be one of the earliest clinical indicators that a stenosis is developing and can be measured using a low-cost and widely available digital stethoscope. However, the reality is that auscultation is a highly subjective physical exam technique and largely depends on the skill of the listener [20][21][22][23]. Since the timely diagnosis of stenosis is crucial for maintaining dialysis access, applying deep learning to AVF blood flow sounds can enhance the ability of healthcare providers to screen for AVF stenosis both reliably and efficiently.

In this Article, blood flow sounds are recorded using a digital stethoscope at six distinct locations along each patient's AVF. The overall schematic of our project is demonstrated in Fig. 1. We choose to pre-process the recorded one-dimensional blood flow audio signals into two-dimensional image representations to leverage the state-of-the-art models developed by the computer vision community. We trained our models using supervised learning with labels validated from concurrent duplex ultrasound. We found that these models could better predict patients with a stenosis compared to non-machine learning analyses of the same sound files. A deep learning model trained on normal and abnormal blood flow sounds that can identify AVF stenosis could establish a level of objectivity in the subjective interpretation of auscultated sounds via the extraction and quantification of relevant features from the blood flow audio signals. Deep learning has already been successfully utilized to help predict AVF failure and successful maturation based on various patient parameters 24,25. Additionally, deep learning affords a level of automation over the screening process. Our proposed technology could even serve as a patient-facing tool to allow for at-home, self-screening of AVF stenosis. This ability could be especially helpful in under-resourced areas where patients may not be receiving routine screening. The timely and accurate detection of AVF stenosis using deep learning analysis of AVF blood flow sounds can reduce downstream healthcare costs and, more importantly, improve the quality of life of patients.

Data
Table 1 summarizes the demographic and clinical characteristics of the patients enrolled in our study. Table 2 gives a breakdown of the distribution of stenotic and patent AVFs by location.
Frequency spectrums
To gain some intuition about how the blood flow sound differs by location along the AVF and how patent and stenotic sounds differ from each other at each location, we computed the averaged frequency spectrum across all patients in the training set. We also derived scalar metrics from the averaged frequency spectrums, including the area under the curve, peak frequency, maximum frequency, and full width at half max height. Fig. 2 displays the averaged frequency spectrums and quantitative scalar measures.

[Fig. 1 caption] Ultrasound imaging and blood flow velocities measured by concurrent duplex ultrasound were used to inform the binary ground truth label of either "Patent" or "Stenotic". The deep learning models are trained following the supervised learning paradigm. b The 6 locations along the arteriovenous fistula from where blood flow sounds are collected, numbered in increasing order from most distal to most proximal based on the anatomic definitions of the arm: artery, anastomosis (where the artery joins the vein), the distal vein, the middle vein, the proximal vein, and the arch of the vein. Shown in this illustration is the brachiocephalic fistula, but the brachiobasilic, radiocephalic, and radiobasilic fistulas are also studied in this paper. c, d Laminar flow through a patent arteriovenous fistula (AVF) generates a quiet "whooshing" sound. As an AVF develops stenosis, laminar flow will transition to turbulent flow. Increasing turbulent flow will result in an increased amount of higher frequency components in the generated sound. Clinically, the sound heard when auscultating a stenosed AVF is often described as a "high-pitched systolic bruit or thrill". The two image representations of sound explored in this study are the mel-spectrogram and the recurrence plot. The mel-spectrogram is generated from applying the short-time Fourier transform (STFT) to the waveform. The recurrence plot is generated from a recurrence quantification analysis (RQA) of the frequency spectrum, which is obtained from applying the Fourier transform (FT) on the waveform. The illustrative example patent and stenotic waveforms, frequency spectrums, mel-spectrograms, and recurrence plots seen here are taken from a patent and stenotic "proximal" vein, respectively.

Individual, location-based models
First, we studied binary classification of AVF blood flow sound at each location separately. We studied combinations of two different pre-processing methods with three different model architectures. The first method is to create a mel-spectrogram image representation of the blood flow sound using a short-time Fourier transform. For the spectrogram image, we also explore three different time resolutions at the maximum frequency resolution. The second method is to create a recurrence plot image representation of the blood flow sound by applying recurrence quantification analysis to the signal in the frequency domain. Each image representation of sound is then used to train the three different model architectures. The first model is a 6-layer convolutional neural network (CNN). The second model is a ResNet-50 CNN pre-trained on ImageNet. The third model is a vision transformer (ViT). We refer to these models as "location-based models" since they are only trained on sounds from a single, given location. Fig. 3 depicts the model architectures and a summary of the results for each pre-processing method and model architecture combination. For these individual, location-based models, we further study how important it is to contextualize these models with metadata regarding the anatomical origin of the artery and vein used to create the AVF. Results from these studies are depicted in Supplementary Figs. 12 & 13 and Supplementary Table 1.
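The two pre-processing pipelines described above can be sketched in plain NumPy. This is an illustrative sketch only: the window length, hop size, mel-band count, and recurrence threshold below are assumptions for the toy example, not the paper's actual settings.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filterbank (assumed parameters, not the paper's)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
        for b in range(left, center):       # rising slope of triangle i
            fb[i, b] = (b - left) / max(center - left, 1)
        for b in range(center, right):      # falling slope of triangle i
            fb[i, b] = (right - b) / max(right - center, 1)
    return fb

def mel_spectrogram(x, sr, n_fft=512, hop=128, n_mels=128):
    """STFT power -> mel projection -> log, yielding a (mel, time) image."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2  # (time, freq)
    mel = mel_filterbank(n_mels, n_fft, sr) @ power.T           # (mel, time)
    return np.log(mel + 1e-10)

def recurrence_plot(spectrum, eps):
    """Binary recurrence plot of a 1-D spectrum: R[i, j] = 1 iff |s_i - s_j| < eps."""
    d = np.abs(spectrum[:, None] - spectrum[None, :])
    return (d < eps).astype(np.uint8)

# Toy example: 1 s of a 100 Hz tone sampled at 4 kHz
sr = 4000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t)
S = mel_spectrogram(x, sr)                          # spectrogram image
spectrum = np.abs(np.fft.rfft(x))                   # frequency spectrum
R = recurrence_plot(spectrum, eps=0.1 * spectrum.max())  # recurrence image
```

Either image (`S` or `R`) can then be fed to the image classifiers discussed in the text; the diagonal of a recurrence plot is always 1 since every point recurs with itself.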
Universal model with and without location metadata
Next, we study the importance of contextualizing AVF blood flow sounds with location metadata. For this, we study the ViT architecture trained on the mel-spectrogram images. We refer to these models as "universal models" since they are trained on sounds from all the locations. In experiment II, we aggregate all the sounds from each location to train one ViT, but without any location metadata given to the model. In experiment III, we aggregate all the sounds from each location and supply location metadata to the ViT. We study various categorical encoding methods for encoding the location metadata, including ordinal encoding, one-hot encoding, and learned embeddings. Fig. 4 shows the results from training our universal ViT with and without location metadata.

Evaluation on held-out test set
Finally, we study how well our models perform on our held-out test set. In particular, we look at the individual, location-based ViT models trained on 368 × 128 spectrogram images, the universal ViT model trained on 368 × 128 spectrogram images with location metadata encoded via learned embeddings, and a non-deep learning, rule-based algorithm that classifies sound based on how loud the sound is as measured by the AUC of the frequency spectrum. For the two deep learning methods, the threshold that corresponds to the largest geometric mean of sensitivity and specificity based on the averaged ROC curve from 10-fold cross-validation was selected as the final threshold value. Fig. 5 shows confusion matrices stratified by location to allow for direct comparisons, along with the sensitivity, specificity, and F1 scores.

Patient level analysis
Lastly, we study how well the individual, location-based ViT models trained on 368 × 128 spectrogram images perform at the patient level. Fig. 7 shows the confusion matrix at the patient level and the sensitivity, specificity, and F1 scores.
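The threshold-selection rule described above — picking the operating point with the largest geometric mean of sensitivity and specificity — can be sketched as a simple search over candidate score thresholds. This is a minimal illustrative sketch on toy data, not the paper's implementation.

```python
import numpy as np

def best_threshold_gmean(y_true, y_score):
    """Return the score threshold maximizing sqrt(sensitivity * specificity)."""
    best_t, best_g = None, -1.0
    for t in np.unique(y_score):
        pred = (y_score >= t).astype(int)
        tp = np.sum((pred == 1) & (y_true == 1))
        tn = np.sum((pred == 0) & (y_true == 0))
        fp = np.sum((pred == 1) & (y_true == 0))
        fn = np.sum((pred == 0) & (y_true == 1))
        sens = tp / max(tp + fn, 1)   # true positive rate
        spec = tn / max(tn + fp, 1)   # true negative rate
        g = np.sqrt(sens * spec)
        if g > best_g:
            best_g, best_t = g, t
    return best_t, best_g

# Toy scores: stenotic sounds (label 1) tend to score higher
y_true = np.array([0, 0, 0, 0, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.4, 0.7, 0.8, 0.9])
t, g = best_threshold_gmean(y_true, y_score)
```

In practice the paper applies this criterion to the averaged ROC curve over 10 cross-validation folds rather than a single score vector.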
DISCUSSION
Examining the frequency spectrums in our illustrative examples of stenosis at each location (Supplementary Fig. 1b-f, Supplementary Fig. 2a-f), one can see that a stenosis is characterized by a "double-peak". The left (lower frequency) peak corresponds to diastole (when the heart's ventricles relax) and the right (higher frequency) peak corresponds to systole (when the heart's ventricles are contracting). During systole, there is a momentary increase in the velocity of blood flow all throughout the vasculature, including the AVF. The increased velocity through a stenosed AVF directly contributes to increasing the jet Reynolds number. The flow regime is more likely to transition to turbulent flow at the site of the stenotic lesion during systole because at baseline (during diastole) the stenotic lesion is already characterized by a higher Reynolds number by virtue of the diminished lumen diameter and its direct effect on increasing velocity. This increased propensity to develop turbulent flow during systole at the stenotic site is responsible for the second higher frequency peak seen in our frequency spectrums and clinically corresponds to the "high-pitched systolic bruit or thrill" heard during auscultation. A patent AVF is better able to accommodate the increased throughput of blood during systole, and the second higher frequency peak is not as prominent or is entirely absent. Supplementary Figs. 1 and 2 provide more illustrative examples of patent and stenotic frequency spectrums at each location.

To gain a better understanding of the data and to see how well these individual observations generalize, we computed the average frequency spectrum across all patients, stratified by location and patency status (Fig. 2). The "double-peaking" is not as distinct compared to the individual examples, likely because the higher frequency peaks blend together when averaged. However, the distributions do appear to be bimodal, correlating with systole and diastole of the heart cycle. On average, the stenotic frequency spectrums have higher AUC values compared to their location-controlled counterparts, at all five studied locations. The AUC of the frequency spectrum corresponds to energy, which we perceive as loudness. Additionally, on average, the stenotic frequency spectrums reach higher maximum frequencies compared to their location-controlled counterparts, at all five studied locations. This is consistent with higher degrees of turbulent flow (caused by the stenosis) resulting in higher frequency components in the generated sound. Finally, on average, the stenotic frequency spectrums all have peak frequencies that are right-shifted compared to the patent frequency spectrums, at all five studied locations, which correlates with the fact that even during diastole, blood is flowing faster at the stenotic site due to the reduced lumen size. In short, from our data we observe that, on average, blood flow through a stenotic lesion is louder and has a higher pitch, which is consistent with the clinical physical exam 19.

Through a series of experiments, we see if we can train a deep learning model to learn this difference in blood flow sound between a patent and stenotic AVF. In addition to the overall goal of building the best classifier, our experiments also help assess (1) how important it is to contextualize the sound with information about the location along the AVF from which the sound was sourced and (2) how important it is to contextualize the sound with information regarding the anatomical origin of the artery and vein used to construct the AVF.
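The scalar spectrum summaries used above (area under the curve, peak frequency, maximum frequency, and full width at half max) are straightforward to compute from a magnitude spectrum. A minimal sketch on a synthetic spectrum follows; the 1% floor used here to define the "maximum frequency present" is an assumption, not the paper's criterion.

```python
import numpy as np

def spectrum_metrics(freqs, mag):
    """Scalar summaries of a 1-D magnitude spectrum."""
    # Area under the curve via the trapezoid rule (perceived loudness/energy)
    auc = float(np.sum((mag[1:] + mag[:-1]) * np.diff(freqs)) / 2.0)
    # Frequency at which the spectrum peaks
    peak_freq = float(freqs[np.argmax(mag)])
    # Highest frequency with non-negligible content (assumed 1% noise floor)
    present = mag >= 0.01 * mag.max()
    max_freq = float(freqs[np.nonzero(present)[0][-1]])
    # Full width at half max height
    above = np.nonzero(mag >= mag.max() / 2.0)[0]
    fwhm = float(freqs[above[-1]] - freqs[above[0]])
    return auc, peak_freq, max_freq, fwhm

# Toy Gaussian-shaped spectrum peaking at 300 Hz with sigma = 50 Hz
freqs = np.linspace(0, 1000, 1001)
mag = np.exp(-0.5 * ((freqs - 300) / 50.0) ** 2)
auc, peak, fmax, fwhm = spectrum_metrics(freqs, mag)
```

On this toy spectrum the peak frequency is 300 Hz and the FWHM is close to the analytic value 2*sqrt(2 ln 2)*sigma ≈ 118 Hz.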
Experiment one allows a direct comparison of the three different model architectures and two different pre-processing methods explored. In experiment one, we build independent classifiers trained on patent and stenotic sounds at each location, testing every combination of the three model architectures with the two pre-processing methods. The three model architectures explored are a CNN, a ResNet-50 pre-trained on ImageNet weights, and a ViT. The two pre-processing methods explored are spectrogram images and recurrence plot images.

From experiment one, we observe that spectrogram images outperform the recurrence plot images, achieving higher AuROC and AuPRC values for each model architecture (note that the AuPRC values should be interpreted in the context of the true positive rate for each location, as precision and recall do not consider the true negative rate). The spectrogram images represent frequency as it varies with time, and so the spectrograms contain information from both the time and frequency domains. The recurrence plots are constructed from the frequency spectrum, and so the recurrence plots contain information only from the frequency domain. At first thought, it may be intuitive to believe that the differences between patent and stenotic sounds are only encoded in the frequency domain, as suggested by our analysis of the frequency spectrums of the sounds. However, the spectrograms outperforming the recurrence plots means there is also useful information encoded in the time domain that is helping the model learn the difference between patent and stenotic sounds. For the spectrogram images, we also explored three different time resolutions at a constant frequency resolution (374 × 128, 128 × 128, 32 × 128), and the best performing spectrogram resolution was the largest (374 × 128). Note that for the ViT, we resized the time resolution of 374 to 368 to be compatible with the 16 × 16 patch tokenization step. This further supports the argument that there are
distinguishing features in the time domain and is consistent with the general idea that the model performs better when given more information to learn from. In our patient population of mature fistulas, we do not expect there to be any changes in heart rate or blood pressure based on degree of AVF stenosis, and this is corroborated in Table 1. Thus, it seems unlikely that the time-dependent information being leveraged by the models is related to heart rate. We speculate that the time-domain phenomenon the models are learning is related to stenosed AVFs having higher blood flow velocities.

From experiment one, we also observe that the vision transformer outperforms both convolutional neural network architectures on the spectrogram images. The convolution operator aggregates information via spatial sliding windows or kernels, which use the same learned weights as they slide across an image. This architecture structurally introduces two important inductive biases inherent to CNNs: translational equivariance and locality. Pooling layers, used in conjunction with convolutional layers in our models, help the model achieve translational invariance. Translational equivariance and invariance mean that an object can be detected irrespective of its location in the image. The locality bias is the notion that closely spaced pixels are more correlated than pixels that are far away.

While spectrograms and natural images are both images from a data structure point of view (i.e., a grid of pixel values), the two images represent fundamentally different natural phenomena. The inductive biases of translational invariance and locality structurally built into the CNN architecture are not as suitable for processing and interpreting spectrograms. While translation invariance is a good assumption for natural images, whose axes convey a measure of physical distance (i.e., a cat in the upper left corner is the same as a cat in the lower right corner), the same is not true for spectrograms. A spectrogram conveys time on the x-axis and frequency on the y-axis. It may be a fair assumption that translational invariance applies to the time axis (i.e., a sound event happening at 5 s is the same as one happening at 10 s), but it does not make much sense to uphold translational invariance on the frequency axis because semantic meaning is encoded in the frequency domain. Furthermore, the spectral properties of sound are non-local. The pitch of a sound is determined by the fundamental frequency, while the quality or timbre of a sound is determined by its harmonics (the nth harmonic has a frequency F_n = nF_1, where F_1 is the fundamental frequency). The fundamental frequency and its harmonics are not locally grouped despite originating from the same sound source. For example, if the fundamental frequency is 100 Hz, then its harmonics are 200 Hz, 300 Hz, etc. The locality bias, again while useful for natural images, is not a good inductive bias for spectrogram images because the frequencies associated with a given sound event are non-locally distributed.
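The translational-equivariance bias discussed above can be checked numerically: for a circular convolution, shifting the input and then convolving gives the same result as convolving and then shifting the output. This is a toy 1-D demonstration of the property, not part of the paper's pipeline.

```python
import numpy as np

def circ_conv(x, k):
    """Circular convolution via the convolution theorem (FFT product)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, n=len(x))))

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)   # arbitrary 1-D "image row"
kernel = rng.standard_normal(5)    # small convolution kernel
shift = 7

# Equivariance: conv(shift(x)) == shift(conv(x))
conv_then_shift = np.roll(circ_conv(signal, kernel), shift)
shift_then_conv = circ_conv(np.roll(signal, shift), kernel)
```

The two results agree to numerical precision, which is exactly the property that is useful for natural images but, as argued above, inappropriate along the frequency axis of a spectrogram.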
The vision transformers, by using the self-attention mechanism, structurally lack these two inductive biases of translational invariance and locality, which are usually quite useful biases for natural images. Typically, the vision transformer must learn these inductive biases from the data itself; however, for spectrogram images it makes good sense to disregard these biases as they do not pertain to spectrogram images. The ViT is not structurally constrained to the inductive biases of translational invariance and locality like the CNN, which allows the model to explore the parameter space more freely to find a better set of generalizable rules for classifying spectrograms. This explains the superior performance of the ViT over the convolution-based neural networks in classifying the spectrogram images of blood flow sound. Moreover, the convolution operator is a local operator, meaning only information that falls within the predefined window size can be aggregated. ViTs maintain a global receptive field at every layer. Thus, ViTs can learn long-range dependencies and aggregate global information in early layers, resulting in improved performance 26.

[Fig. 5 caption] Here we used the averaged area under the curve (AUC) value of the averaged patent and stenotic frequency spectrums from Fig. 2 as a threshold for deciding how to classify each sound in the test set. For example, at the anastomosis site, the AUC of the averaged patent frequency spectrum is 2772 and the AUC of the averaged stenotic frequency spectrum is 4142. The average of the two AUC values is 3457. In the test set, if a sound has a frequency spectrum AUC greater than 3457, we classify the sound as stenotic, and vice versa. d Summary of sensitivity, specificity, and F1 score for the three approaches.
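The rule-based baseline quoted above reduces to a single threshold test. A minimal sketch using the anastomosis-site numbers given in the text (averaged patent AUC 2772, averaged stenotic AUC 4142):

```python
# Rule-based loudness classifier from the text: the decision threshold is the
# midpoint of the averaged patent and stenotic frequency-spectrum AUC values.
PATENT_AUC, STENOTIC_AUC = 2772.0, 4142.0
THRESHOLD = (PATENT_AUC + STENOTIC_AUC) / 2.0  # 3457 at the anastomosis site

def classify_by_loudness(spectrum_auc, threshold=THRESHOLD):
    """Sounds louder than the threshold are called stenotic."""
    return "stenotic" if spectrum_auc > threshold else "patent"

# Hypothetical test-set AUC values for illustration
labels = [classify_by_loudness(a) for a in (2500.0, 3457.0, 5000.0)]
```

In the paper this threshold is computed per location, since average loudness varies substantially along the AVF.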
After establishing that the ViT trained with 368 × 128 spectrogram images performs the best, we use this combination to understand how important location metadata is. From qualitative inspection of the averaged frequency spectrums in Fig. 2a-e, we see how each location's averaged frequency spectrum has a distinctive global shape, which suggests that the blood flow sounds differ from each other depending on the location. From Fig. 2f, we see that at the anastomosis site, the sounds have the largest average AUC value. The sounds have the smallest average AUC value at the venous arch location. In other words, the blood flow sound is loudest at the anastomosis and softest at the venous arch, again highlighting how the characteristics of blood flow sounds change as a function of location. Thus, it appears to be important to contextualize the blood flow sounds with location metadata.

We set out to experimentally confirm our observations through experiments I-III. In experiment I, we built independent classifiers, one for each location. In experiment II, we aggregate all the sounds from each location to train one ViT, but without any location metadata given to the model. In experiment III, we aggregate all the sounds from each location and supply location metadata to the ViT. Comparing the results between experiments II and III, we see that the AuROC and AuPRC improve from 0.68 ± 0.05 and 0.28 ± 0.09 (for the model lacking location information) to 0.82 ± 0.04 and 0.54 ± 0.08 (for the model considering location information), respectively. This jump in performance confirms the importance of accounting for the location along the AVF from which the sound was sourced. Using learned embeddings to encode the categorical location information gave us the best performance results. Supplementary Fig. 11 shows the results for integer encoding and one-hot encoding. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding "venous arch" as 1, 10, or 100) results in progressively improved performance metrics (Supplementary Fig. 11a-c). These results are counterintuitive because in theory it should not matter what the integer values are, since we are optimizing the same loss function in each case; the model can learn to increase or decrease the weights associated with location metadata and converge on the same solution. However, it seems that artificially increasing the importance of the location metadata at initialization (via larger integer values) leads to better performance. In the setting of limited data and computation resources, we speculate that increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum. The fact that we achieve progressively better results with increasing scalar integer encoding values further emphasizes the importance of contextualizing the sounds with location metadata.

Next, we seek to understand if it is important to contextualize the blood flow sound with metadata regarding the anatomical origin of the artery and vein used in the creation of the AVF. In this study, we used AVFs made from the brachial and radial artery, and the cephalic and basilic vein. In experiment IV, we test if a ViT can distinguish the brachial from the radial artery based on blood flow collected at the "artery" location. Results are shown in Supplementary Fig.
12. An AuROC value of 0.78 ± 0.11 suggests that there is a difference in blood flow sound between the radial and brachial arteries. The difference in sound likely stems from the fact that the brachial artery is almost two times larger than the radial artery and has thicker vessel walls 27,28. In experiment V, we test if a ViT can distinguish the cephalic from the basilic vein based on blood flow collected at the "arch" location. Results are shown in Supplementary Fig. 13. An AuROC value of 0.52 ± 0.13 suggests that there is not much difference in blood flow sound between a cephalic and basilic vein. The difference between the basilic and cephalic vein is only about 1-2 mm in most people, which likely explains the model's lack of ability to differentiate the sound of blood flow between the veins 29,30. In experiment VI, we test how well the individual, location-based ViTs perform when also given metadata regarding the anatomical origin of either the artery or the vein. We notice no improvement between the models given venous origin metadata in experiment VI compared with the models in experiment I (Supplementary Table 1), consistent with our model's lack of ability to discern cephalic from basilic vein in experiment V. Interestingly, despite our model being able to distinguish the radial from the brachial artery, there is no improvement between the models given artery origin information in experiment VI compared with the models in experiment I (Supplementary Table 1). Thus, the anatomical origin of the artery or vein seems to be unimportant in the context of building classifiers to identify AVF stenoses based on blood flow sound.

On evaluation on the held-out test set, we see that the individual, location-based ViTs outperform the universal ViT with location metadata (Fig. 5a, b). The individual, location-based models implicitly contextualize the sounds with location information since they are only trained on sounds coming from the given location. The individual, location-based ViTs can focus exclusively on learning the features that distinguish patent from stenotic at that given location. The "universal" ViT must learn a feature extractor that generalizes across all six locations, which likely hinders performance because the relevant features that define patent vs stenotic vary with location due to inherent differences in sound at each location. What it means to be "stenotic" at the "arch" location is different from "stenotic" at the "anastomosis" location, despite both receiving the same "stenotic" label. We can qualitatively see these differences in Fig. 2a, e. For example, on average, the blood flow sound is louder at a patent anastomosis site compared to a stenotic venous arch site.

In evaluation on our test set, we also tested a simple non-deep learning approach based on our conclusion that, on average, the blood flow through stenotic lesions is louder than through patent vessels (Fig. 2). For each location, the half-way point between the averaged patent frequency spectrum AUC value and the averaged stenotic frequency spectrum AUC value is used as a threshold for evaluating the test set. For the test set, sounds with frequency spectrum AUC values that fall above the threshold are classified as stenotic, and those with AUC values below the threshold are classified as patent. This approach gives us inferior results compared to the two deep learning approaches. While general spectral properties that correlate clinically seem to emerge from the averaged frequency spectrums, judging from both the large standard deviations in Fig. 2f and from visual inspection of the individual frequency spectrums in Supplementary Figs.
1 and 2, there seems to be a large degree of heterogeneity among the sounds on an individual level. This underscores the need for highly parameterized deep learning models over simpler rule-based algorithms for screening for AVF stenosis based on blood flow sound. Finally, we perform a patient-level analysis on our held-out test set using our best performing model, and we achieve a sensitivity, specificity, and F1 score of 0.924, 0.791, and 0.907, respectively (Fig. 6). As a reference for performance, a clinical trial that studied how well a single expert nephrologist could identify stenosis in hemodialysis arteriovenous fistulas based on a physical exam, also using ultrasound as the ground truth, reported a sensitivity of 0.96 and a specificity of 0.76 31. Thus, our model is able to screen for stenosis at a level comparable to that of an expert nephrologist performing a physical exam.

One of the limitations of this study is that we only studied brachial/radial-cephalic/basilic fistulas. Although these are the most common types of fistulas, other fistula types using other arteries and veins exist, and our conclusion that the anatomical origin of the artery and vein is not important may not generalize. Additionally, our model cannot be used to identify stenosis on the arterial side of an AVF, although this is much rarer than stenosis on the venous side. This is due to the lack of training data we have of arterial stenosis (only 6 examples). Furthermore, an important clinical implication of adjusting for class imbalance during our training process is that this can potentially cause the model to be mis-calibrated. In our case, we are using a weighted loss function (in essence, oversampling the minority class), which can potentially cause the model to be overconfident when making positive class (i.e., stenotic sounds) predictions. Empirically, from our calibration plots for the individual, location-based ViT shown in Fig. 6, we do see that our model tends to be overconfident, which is another limitation of our model. While overconfident predictions may be the result of our class imbalance adjustments, we find our class imbalance adjustments necessary to achieve good model discrimination. We show the ROC and PR curves from 10-fold cross-validation for the ViT trained on 368 × 128 spectrogram images without using a weighted loss function in Supplementary Fig. 14. Compared to the ROC and PR curves shown in Fig. 3c, we can see how adjusting for class imbalance improves model discrimination in our case. Another important limitation of this study is how we validated our data. Stenotic lesions were identified with duplex ultrasound. Clinically, a stenotic lesion identified on ultrasound does not always necessitate a percutaneous angioplasty (the procedure for treating a stenotic AVF). An important clinical question is when to intervene on a stenotic AVF once found. While our study demonstrates promise for using deep learning analysis of blood flow sound as a quick and economical screening tool for identifying the presence of stenotic lesions, future work correlating sound to AVFs that ultimately require percutaneous angioplasties may further improve the utility of such technology.
In summary, our study presents a novel, fast, and easy approach for screening for AVF stenosis in hemodialysis patients using deep learning to analyze the sound of AVF blood flow. The final models we recommend for deployment are the individual, location-based vision transformer models trained on 368 × 128 spectrogram images. Our preliminary model evaluation shows that this technology can screen for stenosis at a level comparable to that of a nephrologist performing the physical exam, but with the advantage of being automated and scalable. In routine practice, the onus of performing the physical exam to screen for stenosis during dialysis sessions typically falls on the dialysis technician. Thus, this technology could help dialysis technicians, who are often challenged with a high volume of patients each day, ensure patient safety while also streamlining workflows to reduce costs. The clinical implication is that our new screening tool can help catch cases of stenosis that may otherwise be missed due to understaffed dialysis centers (the patient to staff ratios at dialysis centers can exceed 90:1 and reach upwards of 300% of the limit recommended by the NKF) 32,33. Additionally, our technology could serve as an indirect gateway to ultrasound in the diagnostic workup. Instead of performing an ultrasound on every patient, routine screening can be done via our technology, and screening ultrasound is only performed on those flagged for potential stenosis, to help facilitate efficient resource allocation. Note that routine ultrasound screening is separate from the routine physical exam screening that is to be performed at each dialysis session. We foresee our technology facilitating the screening process that takes place at the dialysis sessions, and not being used as a complete replacement for ultrasound. There is potential for this technology to even be patient facing. The next step in implementation would be to deploy the model onto a server and create an API that will allow users to upload a sound and receive back a prediction. The next step in terms of validation of effectiveness and regulation would be to run a prospective clinical trial using our deployed model.

Turbulence induced sound
The sound produced by blood flowing through an AVF can be an important indicator of the AVF's patency status. Blood flow through a patent AVF is laminar and will create a quiet "whooshing" sound. A stenosed AVF can be conceptualized as a converging-diverging nozzle. Flow through a converging-diverging nozzle is characterized by the jet Reynolds number shown in Eq. 1:

Re = uD / ν, (1)

where u is the velocity, D is the jet diameter, and ν is the kinematic viscosity of the fluid. Experiments have shown that if Re exceeds about 2000, the jet flow will be turbulent 34. A stenosed AVF will have a reduced lumen diameter relative to a patent AVF. By conservation of mass and momentum, as the lumen diameter decreases, fluid velocity will increase. From the jet Reynolds equation, we can see that this inherent inverse relationship between velocity and diameter means that velocity and diameter have opposing effects in determining the overall Reynolds number. However, as an AVF develops stenosis, the velocity of blood flow will increase by a larger factor relative to how much the diameter will decrease. This can be understood from a simplified volumetric flow rate equation, Q = u₁πr₁² = u₂πr₂², where Q is the constant volumetric flow rate, u₁ is the fluid velocity at radius r₁, and u₂ is the fluid velocity at radius r₂, assuming an incompressible, Newtonian fluid, which is an acceptable assumption for blood 35. In this simplified model, a reduction in the lumen radius by a factor of 2 will result in an increase in velocity by a factor of 4.
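As a numeric sanity check of this reasoning, the jet Reynolds number and the continuity relation can be evaluated for illustrative values; the radii, velocity, and viscosity below are hypothetical order-of-magnitude numbers, not patient measurements:

```python
def jet_reynolds(u, D, nu):
    """Jet Reynolds number, Re = u * D / nu (Eq. 1)."""
    return u * D / nu

# Hypothetical, order-of-magnitude values (not patient data):
nu_blood = 3.3e-6          # kinematic viscosity of blood, m^2/s (approximate)
r1, u1 = 3.0e-3, 0.6       # patent lumen radius (m) and velocity (m/s)
r2 = r1 / 2                # stenosis halves the lumen radius...
u2 = u1 * (r1 / r2) ** 2   # ...so by Q = u1*pi*r1^2 = u2*pi*r2^2, velocity x4

re_patent = jet_reynolds(u1, 2 * r1, nu_blood)    # D = 2r
re_stenotic = jet_reynolds(u2, 2 * r2, nu_blood)
# Velocity rises 4x while diameter halves, so Re doubles overall and can
# cross the ~2000 turbulence threshold.
```

With these assumed values the stenotic Reynolds number crosses the ~2000 threshold while the patent one stays below it, matching the qualitative argument in the text.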
In other words, as an AVF develops stenosis, the increased fluid velocity u caused by the reduced diameter D will overall result in a net increase of the jet Reynolds number. Once the jet Reynolds number crosses a certain threshold (i.e., 2000), the flow regime will transition from laminar to turbulent. Turbulent flow produces a different sound compared to laminar flow. This concept of turbulence-induced noise is characterized by Lighthill's wave equation. Turbulent fluid flow collaterally generates pressure and density variations in the fluid, which in turn generate the pressure and density variations that we perceive as noise 36. Increasing turbulence will result in an increased amount of higher frequency components in the generated sound 37. Clinically, the sound heard when auscultating a stenosed AVF is often described as a "high-pitched systolic bruit or thrill" (Fig. 1c).

Data collection
A total of 433 patients with AVFs were enrolled in this study. All recordings were performed in the same clinical setting, which is an outpatient vascular ultrasound lab. The enrolled patients were visiting the clinic for routine ultrasound screening. Patients with AVFs post-ESRD and pre-ESRD (pre-emptively placed AVF in light of deteriorating kidney function) were included in this study. Patients with arteriovenous fistulas, created with either the radial or brachial artery and either the cephalic or basilic vein, were recruited for this study. On the arterial side, 80% of patients had fistulas created from the brachial artery; 20% of patients had fistulas created from the radial artery. On the venous side, 65% of patients had fistulas created from the cephalic vein, and 35% of patients had fistulas created from the basilic vein. In summary, four fistula variations are analyzed in this study: brachiocephalic fistulas (52%), brachiobasilic fistulas (28%), radiocephalic fistulas (13%), and radiobasilic fistulas (7%).
For each patient, blood flow sounds were collected at 6 different locations along the patient's AVF (Fig. 1b). Of the 6 sounds, one was collected from the artery, one was collected at the anastomosis site (i.e., where the artery has been surgically joined to the vein), and four sounds were collected along the vein. The locations were designated, from most distal to most proximal, as "arterial" for the artery, "anastomosis" for the anastomosis site, "distal" for the distal vein, "middle" for the middle vein, "proximal" for the proximal vein, and "arch" for the arch of the vein (i.e., the point along the fistula closest to the shoulder). Note we use the terminology "proximal" and "distal" based on the anatomic definitions of the arm. A total of 2565 AVF blood flow sounds were included in this study. Sounds were collected using a 3M Littmann Core digital stethoscope at a sampling rate of 4000 Hz. Each sound was recorded for 15 s. Sounds were collected over a two-year period from 2021 to 2023.

The sounds from the blood flow were labeled as "patent" (normal) or "stenotic" (abnormal). The labels are validated from concurrent duplex ultrasound (blood flow sound recorded by stethoscope and ultrasound imaging were done at the same time). The final label of "patent" vs "stenotic" at each location was determined after interpretation of the corresponding ultrasound imaging and velocity reports by a board-certified vascular surgeon. The diagnosis of stenosis is established when the measured blood flow velocity by duplex ultrasound is at least double that of a preceding segment. Our dataset included 2113 patent sounds (83%) and 452 stenotic sounds (17%). Note that for some patients only 5 sounds were collected. Instead of discarding an "incomplete" set, we kept them in the study to maximize the number of samples.
The data was divided into train, validate, and test sets. First, 20% of the data was randomly reserved to serve as the held-out test set for final model evaluation. Then 10-fold cross-validation was used within the training dataset (the remaining 80%). Cross-validation is used throughout the experiments (explained in more detail below) for model training, model hyperparameter tuning and optimization, and comparison among models. The splits are done at the patient level to prevent data leakage. Of the patients that do have a stenotic lesion, the vast majority will only have 1 stenotic lesion. There are a few cases where a patient has a stenotic lesion present at 2 separate sites; however, since the train, validation, and testing splits were done at the patient level, they would both appear in the same set.

Deep learning models
Three different deep learning models were explored in this study: a convolutional neural network (CNN) trained with no preset weights, a ResNet-50 pre-trained on ImageNet, and a vision transformer (ViT) with no preset weights. The CNN consisted of 6 convolutional layers. The number of filters used was 8, 16, 32, 64, 128, and 256 for the 1st through 6th layers, respectively. Each layer uses a rectified linear unit (ReLU) activation function. Following each convolutional layer was a max pooling and batch normalization layer. After the six convolutional layers, the feature vector is flattened via global average pooling. The feature vector is then fed into three fully connected layers consisting of 32, 16, and 1 node(s). The first two fully connected layers use a ReLU activation function, while the last node uses a sigmoid activation function to perform the final binary classification of "Patent" versus "Stenotic". This model was trained using an adaptive moment estimation (Adam) optimizer at a learning rate of 1 × 10⁻³. To address the issue of class imbalance, a weighted binary cross-entropy loss function, which gives more importance to the minority class (i.e., the stenotic sounds), is used to calculate the loss. The class weight ratios used mirror the inverse of the class distribution in the training set. The same weighted binary cross-entropy loss function is used with the other models as well. An illustration of the 6-layer CNN is shown (Fig. 4a).

The second model explored was a ResNet-50. In brief, a ResNet-50 is a CNN that is 50 layers deep with residual or skip connections that allow activations from earlier layers to be propagated down to deeper layers 38. For this model, we also leverage transfer learning by using a ResNet-50 pre-trained on ImageNet21k, a large dataset consisting of over 14 million natural images that belong to over 20,000 classes 39. One fully connected layer consisting of one node with a sigmoid activation function was added on top of the ResNet-50 to perform the final binary classification of "Patent" versus "Stenotic". This model was trained using an Adam optimizer over the weighted binary cross-entropy loss function. First, the ResNet-50 weights were kept frozen and only the final fully connected layer was trained at a learning rate of 1 × 10⁻³. Then the entire model (ResNet-50 plus the fully connected layer) was fine-tuned, trained at a learning rate of 1 × 10⁻⁵. An illustration of the ResNet-50 is shown (Fig. 4a).
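The class-weighting scheme described above can be sketched in plain NumPy using the class counts reported for this dataset; the exact normalization of the weights (the common `total / (2 * count)` convention used here) is an assumption, but the weights do mirror the inverse of the class distribution:

```python
import numpy as np

n_patent, n_stenotic = 2113, 452        # class counts reported above
total = n_patent + n_stenotic
# Weights mirror the inverse of the class distribution; the 1/2 factor is
# a common normalization convention, assumed here.
w_patent = total / (2 * n_patent)
w_stenotic = total / (2 * n_stenotic)

def weighted_bce(y_true, y_pred, eps=1e-7):
    """Weighted binary cross-entropy; stenotic (y = 1) is the minority class."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    w = np.where(y_true == 1, w_stenotic, w_patent)
    return float(np.mean(-w * (y_true * np.log(y_pred)
                               + (1 - y_true) * np.log(1 - y_pred))))
```

Because `w_stenotic > w_patent`, a missed stenotic sound is penalized more heavily than a missed patent sound, which is the intended effect of the weighting.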
The final model explored was a ViT. For our ViT, the model input is first tokenized into 16 × 16 patches. The patches are flattened and fed into a linear transformation layer to create a lower dimensional embedding, and combined with positional encodings, which are learnable embeddings. The embedded patches are then inputted into a sequence of 10 transformer encoders. Each transformer encoder is comprised of 2 subcomponents. For each encoder, the first subcomponent is a 6-headed multi-attention layer, which implements the multi-headed self-attention mechanism. The second subcomponent for each encoder is a fully connected feed-forward network using ReLU activation functions. After the 10 transformer encoders, the feature vector is flattened and passed to 3 fully connected layers consisting of 2048, 1024, and 1 node(s). The first two fully connected layers use a ReLU activation function, while the last node uses a sigmoid activation function to perform the final binary classification of patent versus stenotic. This model was trained using an adaptive moment estimation (Adam) optimizer at a learning rate of 1 × 10⁻³ over the weighted binary cross-entropy loss function. An illustration of the ViT is shown (Fig. 4a). All models are trained for 200 epochs, and the weights that correspond to the lowest validation loss are taken to be the final model weights.

Pre-processing
Our three chosen models work with two-dimensional image data, while our raw audio data is one-dimensional time-series data. To make our data compatible with our models, we first preprocess our audio data into two-dimensional image representations. Two different image representations of sound are explored in this study: Mel-scaled, decibel (dB)-scaled spectrograms and recurrence plots.
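As a sketch of the spectrogram half of this pipeline, the windowed-FFT step can be written in plain NumPy with the frequency axis expressed on the Mel scale, m = 2595·log10(1 + f/700); this is a simplified illustration assuming the study's parameters (Hann window 512, hop 256, 4000 Hz audio), and it omits the 128-band Mel filterbank and bicubic resizing for brevity:

```python
import numpy as np

def mel_spectrogram_db(x, sr=4000, n_fft=512, hop=256):
    """Hann-windowed STFT -> power -> dB, with a Mel-scaled frequency axis.
    Simplified sketch: the axis is relabeled in Mel units rather than
    reduced to 128 Mel bands via a filterbank."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (frames, n_fft//2 + 1)
    db = 10 * np.log10(np.maximum(power, 1e-10))       # amplitude -> dB scale
    freqs = np.fft.rfftfreq(n_fft, d=1 / sr)
    mels = 2595 * np.log10(1 + freqs / 700)            # Eq. 2, Mel scale
    return db, mels

x = np.random.default_rng(0).standard_normal(4000 * 15)  # 15 s at 4000 Hz
db, mels = mel_spectrogram_db(x)
```

In practice a library such as librosa provides the full Mel filterbank; the point here is only to make the window/hop/dB/Mel chain concrete.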
A spectrogram depicts the spectrum of frequencies of a signal as it varies with time. The x-axis represents time, the y-axis represents frequency, and the amplitude of a particular frequency component at a given point in time is represented by the intensity of color. The spectrograms are generated from the AVF blood flow sounds using short-time Fourier transforms as follows. First, the audio signals are windowed using a Hann window of size 512 and a hop length of 256. A 512-point fast Fourier transform is applied to each window to generate a spectrogram. The Mel-scaled, dB-scaled spectrograms are generated by logarithmic rescaling of the amplitude and frequency axes. The amplitude axis is converted to the dB scale. The frequency axis is transformed onto the Mel scale, characterized by Eq. 2:

m = 2595 log₁₀(1 + f/700), (2)

where f is frequency in Hz. The resulting Mel-scaled, dB-scaled spectrograms are 374 × 128 (time resolution × frequency resolution) in size. To study the effects of varying time resolution on the spectrogram image, spectrograms with dimensions 128 × 128 and 32 × 128 are also created using bicubic interpolation. The time domain encompasses 15 s.

A recurrence plot is an image that visualizes the set of all pairs in time (t_n, t_m) in which x(t_n) = x(t_m), where x is the system's trajectory vector through the phase space. The phase space is a multidimensional space that represents every possible state of a system, with each degree of freedom of a system represented as an axis 40. In this study, we generate recurrence plots of the frequency spectrum. First, a Fourier transform is applied over the entire audio signal to generate the frequency spectrum. Then the frequency spectrum is discretized. For example, let T = {t_0, t_1, t_2, ..., t_n, ..., t_N} represent the discretized points over which the frequency spectrum spans, separated by the interval δ. Then the trajectory of the frequency spectrum through the phase space is given by X = {x(t_0), x(t_1), ..., x(t_N)}.

Averaged frequency spectrums
An averaged frequency spectrum is computed
across all patients in the train and validate sets, stratified by label and location. Four spectral parameters are extracted from each frequency spectrum: total area under the curve (AUC), peak frequency, maximum frequency, and full width at half max (FWHM). Total area under the curve (AUC) is approximated using the composite trapezoidal rule for definite integrals, defined as ∫ₐᵇ f(x) dx ≈ (1/2) Σⱼ₌₁ⁿ (xⱼ − xⱼ₋₁)(f(xⱼ₋₁) + f(xⱼ)), with a partition length of 0.1 (i.e., xⱼ − xⱼ₋₁ = 0.1) and a frequency range (a-b) of 0-2000 Hz. Peak frequency (x_peak) is defined as the frequency value that corresponds to the peak of the highest amplitude. Maximum frequency is estimated as the highest frequency with amplitude greater than 0.1. Full width at half max (FWHM) is calculated as the horizontal frequency span at half of the maximum amplitude, FWHM = xₙ − xₘ, where xₘ and xₙ are the lowest and highest frequencies at which the amplitude equals half of the maximum.

A simple, non-deep learning approach is explored using the AUC values from the averaged frequency spectrums. For each location, the half-way point between the averaged patent frequency spectrum AUC value and the averaged stenotic frequency spectrum AUC value is used as a threshold for evaluating the test set. For the test set, frequency spectrum AUC values that fall above the threshold are classified as stenotic, and those with AUC values below the threshold are classified as patent.
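The four spectral parameters and the midpoint threshold rule can be sketched in a few lines of NumPy; the Gaussian "spectrum" below is a toy input, and the AUC pair is the anastomosis example reported in the Fig. 5 caption:

```python
import numpy as np

def spectral_params(freqs, amps):
    """AUC (composite trapezoidal rule), peak frequency, maximum frequency
    (highest frequency with amplitude > 0.1), and FWHM, as defined above."""
    auc = float(((amps[1:] + amps[:-1]) / 2 * np.diff(freqs)).sum())
    peak = freqs[np.argmax(amps)]
    f_max = freqs[amps > 0.1].max()
    span = freqs[amps >= amps.max() / 2]    # frequencies at or above half max
    fwhm = span.max() - span.min()
    return auc, peak, f_max, fwhm

freqs = np.arange(0, 2000.1, 0.1)                  # 0-2000 Hz, 0.1-Hz partition
amps = np.exp(-0.5 * ((freqs - 300) / 50) ** 2)    # toy Gaussian spectrum
auc, peak, f_max, fwhm = spectral_params(freqs, amps)

# Simple threshold classifier: midpoint of the averaged patent and stenotic
# AUC values (anastomosis-site values from the Fig. 5 caption).
patent_auc, stenotic_auc = 2772.0, 4142.0
threshold = (patent_auc + stenotic_auc) / 2        # above -> stenotic
```

For the toy Gaussian, the recovered peak sits at 300 Hz and the FWHM is close to the analytic 2σ√(2 ln 2) ≈ 117.7 Hz, which is a quick check that the definitions are implemented as stated.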
EXPERIMENTS
In experiment I, we build independent, location-based binary classifiers, one for each of the following locations: "anastomosis", "distal", "middle", "proximal", and "arch". In other words, each location-based model is trained only on sounds originating at the given location. Note we do not build a model for the arterial location, given we only have 6 examples of stenosis. For each location, we test the three different model architectures (a 6-layer CNN, a ResNet-50 pre-trained on ImageNet weights, and a ViT) with the two pre-processing methods (spectrograms and recurrence plot images). For the spectrogram images, we tested 3 different sizes of varying time resolution at the constant, maximum frequency resolution of 128: 374 × 128, 128 × 128, and 32 × 128. Note that for the ViT, the 374 × 128 spectrogram image is resized to be 368 × 128 to be compatible with the 16 × 16 patch tokenization step.

In experiment II, we test how well a ViT trained on 368 × 128 spectrogram images performs in classifying the blood flow audio signal as patent or stenotic using audio signals from all six locations, but without supplying the model with any metadata regarding which location the sound is sourced from.
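Structurally, experiment I amounts to partitioning the dataset by location and fitting one classifier per partition. A minimal sketch (the trainer below is a hypothetical stand-in, not the ViT described in the Methods):

```python
import numpy as np

LOCATIONS = ["anastomosis", "distal", "middle", "proximal", "arch"]

def train_location_model(spectrograms, labels):
    """Hypothetical stand-in for fitting one binary classifier per location.
    Here the 'model' is simply the mean stenotic-minus-patent spectrogram."""
    return (spectrograms[labels == 1].mean(axis=0)
            - spectrograms[labels == 0].mean(axis=0))

rng = np.random.default_rng(0)
labels = np.arange(20) % 2                 # toy balanced labels
dataset = {loc: (rng.standard_normal((20, 368, 128)), labels)
           for loc in LOCATIONS}

# Experiment I: one independent model per location, trained only on
# sounds originating at that location.
models = {loc: train_location_model(X, y) for loc, (X, y) in dataset.items()}
```

The point of the sketch is the data flow: no model ever sees a sound from another location, so location context is implicit in each classifier.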
In experiment III, we test how well a ViT trained on 368 × 128 spectrogram images performs in classifying the blood flow audio signal as patent or stenotic using audio signals from all six locations, this time with location metadata regarding where the sound is being sourced from explicitly fed into the model. This is accomplished by first encoding the categorical location information into some numerical representation, and then concatenating that numerical representation to the feature vector coming from the last transformer encoder layer. We explore three different methods of encoding the categorical location metadata: an ordinal encoding scheme where each location is encoded as an integer, one-hot encoding, and a learned embedding. For the learned embedding layer, a 6 × 4 embedding matrix E is learned as part of the training. Within the ordinal encoding scheme, we study the effects of using scalar multiples of the integer encodings. An illustration of this modified ViT architecture is shown in Fig. 4a.

In experiment IV, we test if we can build a binary classifier to distinguish if the blood flow audio signal is coming from either the radial or brachial artery. For this task, we train the ViT on spectrogram images using only patent radial and patent brachial sounds taken at the "artery" location.

In experiment V, we test if we can build a binary classifier to distinguish if the blood flow audio signal is coming from either the basilic or cephalic vein. For this task, we train the ViT on spectrogram images using only patent cephalic and patent basilic sounds taken at the "arch" location.
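The three encoding schemes from experiment III can be made concrete in NumPy; the 2048-dimensional feature vector below is a hypothetical placeholder for the flattened transformer output, and the random matrix stands in for the learned 6 × 4 embedding E:

```python
import numpy as np

LOCATIONS = ["artery", "anastomosis", "distal", "middle", "proximal", "arch"]

# Ordinal encoding: each location mapped to an integer (optionally scaled).
ordinal = {loc: i for i, loc in enumerate(LOCATIONS)}

# One-hot encoding: a 6-dimensional indicator vector per location.
one_hot = {loc: np.eye(len(LOCATIONS))[i] for loc, i in ordinal.items()}

# Learned embedding: a 6 x 4 matrix E trained with the model; random values
# stand in here for learned weights.
E = np.random.default_rng(0).standard_normal((6, 4))

feature_vec = np.zeros(2048)   # hypothetical flattened ViT feature vector
augmented = np.concatenate([feature_vec, E[ordinal["arch"]]])
```

Whichever encoding is chosen, the mechanism is the same: the numerical code is concatenated to the feature vector before the fully connected classification head.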
In experiment VI, we test how well a ViT trained on 368 × 128 spectrogram images performs in classifying the blood flow audio signals as patent or stenotic when also given information about the anatomical origin of either the artery or vein used in the creation of the fistula, for each location. This is accomplished in a parallel manner to experiment III, where first the categorical information about the anatomical origin of the artery or vein is encoded as different integers (1 for brachial artery, 0 for radial artery; 1 for cephalic vein, 0 for basilic vein), and then concatenated to the feature vector coming from the last transformer encoder layer. An illustration of this modified ViT architecture is shown in Fig. 4a.

Fig. 1 Schematic of overall project. a Sound of blood flow captured by digital stethoscope. The one-dimensional blood flow audio signal is preprocessed into two-dimensional image representations, which were used to train the deep learning models investigated in this paper. Ultrasound imaging and blood flow velocities measured by concurrent duplex ultrasound were used to inform the binary ground truth label of either "Patent" or "Stenotic". The deep learning models are trained following the supervised learning paradigm. b The 6 locations along the arteriovenous fistula from where blood flow sounds are collected, numbered in increasing order from most distal to most proximal based on the anatomic definitions of the arm: artery, anastomosis (where the artery joins the vein), the distal vein, the middle vein, the proximal vein, and the arch of the vein. Shown in this illustration is the brachiocephalic fistula, but the brachiobasilic, radiocephalic, and radiobasilic fistulas are also studied in this paper. c, d Laminar flow through a patent arteriovenous fistula (AVF) generates a quiet "whooshing" sound. As an AVF develops stenosis, laminar flow will transition to turbulent flow. Increasing turbulent flow will result in an increased amount of higher frequency components in the generated sound. Clinically, the sound heard when auscultating a stenosed AVF is often described as a "high-pitched systolic bruit or thrill". The two image representations of sound explored in this study are the mel-spectrogram and the recurrence plot. The mel-spectrogram is generated from applying the short-time Fourier Transform (STFT) to the waveform. The recurrence plot is generated from a recurrence quantification analysis (RQA) of the frequency spectrum, which is obtained from applying the Fourier Transform (FT) on the waveform. The illustrative example patent and stenotic waveforms, frequency spectrums, mel-spectrograms, and recurrence plots seen here are taken from a patent and stenotic "proximal" vein, respectively.

Fig. 2 Averaged patent and stenotic frequency spectrums across all patients, stratified by location. We computed the averaged frequency spectrum of blood flow sounds for patent (blue) and stenotic (red) fistulas across all patients in the training and validation sets (311 patients total) at a the anastomosis site, b the distal vein site, c the middle vein site, d the proximal vein site, and e the venous arch site. f Descriptive, numerical summary of the averaged frequency spectrums, including the area under the curve (AUC), peak frequency, maximum frequency, and full width at half max.

Fig. 6 displays calibration plots for each individual, location-based ViT model trained on 368 × 128 spectrogram images along with each model's Brier score.

Fig.
3 Schematic of model architectures and summary of results of location-based models. a The models explored in this study: a Convolutional Neural Network (CNN), a ResNet-50 pre-trained on ImageNet weights, and a Vision Transformer (ViT). b Summary of results of Experiment 1: independent binary classifiers to distinguish patent vs stenotic at each location. In experiment 1, we compare the three model architectures and the two pre-processing methods (spectrograms and recurrence plot images) at each location. For the spectrogram images, we tested 3 different sizes of varying time resolution at the constant, maximum frequency resolution of 128: 374 × 128, 128 × 128, and 32 × 128. *Note that for the ViT, the 374 × 128 spectrogram image is resized to be 368 × 128 to be compatible with the 16 × 16 patch tokenization step. For the recurrence plot images, we used a resolution of 128 × 128. Model performance is quantified by the area under the receiver operating characteristics curve (AuROC) and the area under the precision recall curve (AuPRC) from 10-fold cross-validation. c The ROC (top) and PR curves (bottom) for detecting stenosis at each location for the best performing model in Experiment 1: the ViT trained on 368 × 128 spectrogram images. The ROC and PR curves for the other model architectures and pre-processing methods are shown in the Supplementary Figs. 3-8. The gray shading represents ± 1 standard deviation. Variance is calculated from the 10 different folds used in the 10-fold cross-validation.

Fig.
4 Universal Vision Transformer with and without location metadata. a Modified ViT architecture that also takes an encoded categorical input (i.e., location metadata) via concatenation to the flattened feature vector coming out of the last transformer encoder layer. b The ROC (top) and PR curves (bottom) for Experiment 2: universal binary classifier to distinguish patent vs stenotic, with no location metadata. The 368 × 128 spectrogram images from every location are aggregated together and used to train the conventional ViT (Model 3) without supplying the model any metadata about the location from which the spectrogram is sourced. c The ROC (top) and PR curves (bottom) for Experiment 3: universal binary classifier to distinguish patent vs stenotic, with location metadata. The 368 × 128 spectrogram images from every location are aggregated together to train the modified ViT (shown here), this time with location metadata supplied to the model. The categorical location information is first one-hot encoded, then fed into an embedding layer that converts the one-hot encoded vectors into a dense numerical vector representation that is then concatenated to the flattened feature vector. The embedding layer is trained along with the ViT. The gray shading represents ± 1 standard deviation. Variance is calculated from the 10 different folds used in the 10-fold cross-validation. d Summary statistics of the universal model with and without location metadata. Results from other methods of encoding categorical information are shown in the Supplementary Fig. 11.

Fig.
5 Evaluation on held-out test set. a Confusion matrices for the individual, location-based ViT trained on 368 × 128 spectrogram images. b Confusion matrices for the "universal" ViT trained on 368 × 128 spectrogram images with location metadata. We stratify the results by location to allow for side-by-side comparison. c Confusion matrices for a simple, non-deep learning approach for detecting stenosis at each location. Here we used the averaged area under the curve (AUC) value of the averaged patent and stenotic frequency spectrums from Fig. 3 as a threshold for deciding how to classify each sound in the test set. For example, at the anastomosis site the AUC of the averaged patent frequency spectrum is 2772 and the AUC of the averaged stenotic frequency spectrum is 4142. The average of the two AUC values is 3457. In the test set, if a sound has a frequency spectrum AUC greater than 3457, we classify the sound as stenotic, and vice versa. d Summary of sensitivity, specificity, and F1 score for the three approaches.

Fig. 6 Calibration plots. Calibration plots for the individual, location-based vision transformer trained on 368 × 128 spectrogram images evaluated on the test set at each location: a anastomosis, b distal, c middle, d proximal, and e arch. The dotted black line represents a perfectly calibrated model. The solid orange line represents a logistic regression curve fitted to the points. f The Brier score for each individual, location-based vision transformer.

Fig.
7 Patient level analysis. a Confusion matrix for the individual, location-based vision transformer trained on 368 × 128 spectrogram images evaluated on the test set at the patient level. At the patient level, the patient is considered a "stenotic patient" if the patient has a stenotic lesion anywhere along their arteriovenous fistula. If the patient has no stenotic lesions anywhere, then the patient is counted as a "patent patient". For the predicted label for each patient, each individual, location-based model must predict patent at every location for the overall prediction to be a patent prediction. If any of the individual, location-based models predicts stenosis, then the overall prediction is counted as stenotic. b Sensitivity, specificity, and F1 score for the patient-level analysis.

The recurrence states of x(t_n) are states x(t_m) that fall within a given radius ε around x(t_n). The recurrence plot is constructed as an N × N lattice of squares with side length δ and with each coordinate axis reporting T. The value at coordinates (t_n, t_m) is given by the recurrence value function R(t_n, t_m) = Θ(ε − ||x(t_n) − x(t_m)||), where Θ is the Heaviside step function. The final recurrence plots are of size 128 × 128. All image representations (both recurrence plots and spectrograms) are normalized into the range [−1, 1] prior to input into the model.

Table 1. Clinical and demographic characteristics of the patients included in this study.

Table 2. Breakdown
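The recurrence-plot construction, thresholding pairwise distances with the Heaviside step function, can be sketched directly in NumPy; the sine "spectrum" is a toy input, and for simplicity the trajectory here is one-dimensional (scalar amplitudes) rather than a higher-dimensional phase-space embedding:

```python
import numpy as np

def recurrence_plot(spectrum, eps):
    """R(t_n, t_m) = Theta(eps - |x(t_n) - x(t_m)|): 1 where two discretized
    points of the frequency spectrum lie within radius eps of each other
    (Theta is the Heaviside step function), yielding an N x N binary lattice."""
    x = np.asarray(spectrum, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (dist <= eps).astype(np.uint8)

spec = np.sin(np.linspace(0, 4 * np.pi, 128))  # toy frequency spectrum, N = 128
R = recurrence_plot(spec, eps=0.1)
```

By construction the plot is symmetric with an all-ones main diagonal (every state recurs with itself), which is a quick structural check on the implementation.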
Anti-VEGF therapy selects for clones resistant to glucose starvation in ovarian cancer xenografts

Background
Genetic and metabolic heterogeneity are well-known features of cancer, and tumors can be viewed as an evolving mix of subclonal populations subjected to selection driven by microenvironmental pressures or drug treatment. In previous studies, anti-VEGF therapy was found to elicit rewiring of tumor metabolism, causing marked alterations in glucose, lactate and ATP levels in tumors. The aim of this study was to evaluate whether differences in the sensitivity to glucose starvation existed at the clonal level in ovarian cancer cells and to investigate the effects induced by anti-VEGF therapy on this phenotype by multi-omics analysis.

Methods
Clonal populations, obtained from both ovarian cancer cell lines (IGROV-1 and SKOV3) and tumor xenografts upon glucose deprivation, were defined as glucose deprivation resistant (GDR) or glucose deprivation sensitive (GDS) clones based on their in vitro behaviour. GDR and GDS clones were characterized using a multi-omics approach, including genetic, transcriptomic and metabolic analyses, and tested for their tumorigenic potential and reaction to anti-angiogenic therapy.

Results
Two clonal populations, GDR and GDS, with strikingly different viability following in vitro glucose starvation, were identified in ovarian cancer cell lines. GDR clones survived and overcame glucose starvation-induced stress by enhancing mitochondrial oxidative phosphorylation (OXPHOS) and both pyruvate and lipid uptake, whereas GDS clones were less able to adapt and died. Treatment of ovarian cancer xenografts with the anti-VEGF drug bevacizumab positively selected for GDR clones, which disclosed increased tumorigenic properties in NOD/SCID mice.
Remarkably, GDR clones were more sensitive than GDS clones to the mitochondrial respiratory chain complex I inhibitor metformin, thus suggesting a potential therapeutic strategy to target the OXPHOS metabolic dependency of this subpopulation.

Conclusion
A glucose-deprivation resistant population of ovarian cancer cells showing druggable OXPHOS-dependent metabolic traits is enriched in experimental tumors treated by anti-VEGF therapy.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13046-023-02779-x.

Background
Extensive genetic and phenotypic variation exists among tumors (inter-tumor heterogeneity) but also within tumors (intra-tumor heterogeneity). This diversity can in part be ascribed to the genetic instability that arises through various routes and can influence tumor evolution and patient outcome. Therefore, cancer is often composed of distinct subclonal populations that expand, evolve and undergo selection, continually adapting to the surrounding microenvironment [1, 2]. The subclonal architecture of cancer is dynamic and can vary during the disease course. Besides the cancer ecosystem, another type of selection is operated by cancer therapeutics: drugs or radiation may destroy cancer cells but also inadvertently provide a potent selective pressure for the expansion of resistant variants [1]. Anti-angiogenic therapy, such as the anti-VEGF monoclonal antibody bevacizumab, has been approved for treatment of cancer in specific clinical settings [3]. In the case of ovarian cancer, bevacizumab has been approved in clinical practice together with chemotherapy based on results from randomized clinical trials showing significant benefits in terms of progression-free survival, with acceptable tolerability and no detrimental effects on quality of life [4]. Importantly, it has been demonstrated that inhibition of tumor angiogenesis in mouse models generates a selective pressure, driving surviving cancer cells to
undergo metabolic reprogramming in order to adapt to the new hypoxic environment and eventually become resistant to therapy [5]. Along this line, our laboratory previously unveiled that bevacizumab caused glucose and ATP deprivation in experimental tumors, leading to AMP-activated protein kinase (AMPK) activation [6]. Glycolysis is generally fostered by anti-angiogenic therapy as the result of a long-term adaptive process [7], but different tumor areas can disclose prevalently glycolytic or oxidative metabolism, as reported by several studies [8-10]. These findings suggest that the harsh hypoxic and glucose-restricted microenvironment of tumors treated with anti-angiogenic therapy might select specific subpopulations of tumor cells endowed with a distinctive capability to resist glucose starvation. The concept that glucose concentrations are often decreased in solid tumors compared with the surrounding normal tissue is well established and has prompted investigation of the strategies adopted by cancer cells to maintain their metabolic requirements [11-15]. Previous studies have shown that marked differences can be found among cancer cell lines in their capability to survive glucose limitation, and an RNA interference screen pinpointed mitochondrial oxidative phosphorylation (OXPHOS) as the major pathway required for optimal proliferation under low glucose conditions [16]. Among many metabolic pathways, lipid metabolism is increased in rapidly proliferating cells because of their tremendous need for energy, membrane biosynthesis and generation of signaling molecules [17, 18]. In cancer cells, fatty acids (FAs) are available from either exogenous sources or de novo FA synthesis.
In this study, we evaluated whether differences in the sensitivity to glucose starvation existed at the clonal level in ovarian cancer cell lines, investigated the effect of anti-VEGF therapy on this phenotype in patient-derived xenografts and explored the basis of this phenomenon by multi-omics analysis. Moreover, we found significantly increased OXPHOS-dependent metabolic activities in GDR clones compared with GDS clones and observed that anti-VEGF therapy caused an imbalance of the GDR/GDS ratio in experimental tumors, with substantial enrichment of GDR clones.

Clones generation
Clonal populations were obtained from ovarian cancer cell lines including IGROV-1, OC316, SKOV3, A2780, OAW42 and A2774. In order to obtain single-cell clones, we used the limiting dilution method: by dispensing 100 µL per well of a suspension at a concentration of 100 cells/9.6 mL, we obtained, according to the Poisson distribution, at most 2 cells per well in around 91% of wells. Specifically, we obtained 35% empty wells, 37% containing a single cell, and 19% containing two cells. The wells containing more than 2 cells were around 9%. By dispensing instead 50 µL per well of the same suspension, we obtained at most 2 cells in around 98% of wells. Specifically, we obtained 59% empty wells, 31% single-cell wells, and 8% doublets. The wells containing more than 2 cells were only 2%. Once the clones covered > 80% of the well, they were detached and split into two wells of a new 96-well plate, which were cultured either in standard (11 mM) or low (0.2 mM) glucose medium. Through daily observations, we evaluated by microscopy the behaviour of clones under glucose starvation, and after 72 h we classified clones into two categories by evaluating the percentage of confluency with the Fiji cell quantification tool (National Institutes of Health, USA). GDS clones were characterised by a cell confluency below 20%, whereas GDR clones presented a cell confluency above 20% after 72 h under glucose starvation.
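The well-occupancy percentages quoted above follow directly from the Poisson distribution. As a sanity check, a short sketch (using only the cell concentration and dispensed volumes stated in the text) reproduces the reported figures:

```python
import math

def well_occupancy(cells_per_ml: float, volume_ml: float, max_k: int = 2):
    """Poisson probabilities of finding 0..max_k cells in one well,
    plus the probability of finding more than max_k cells."""
    lam = cells_per_ml * volume_ml  # mean number of cells per well
    probs = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(max_k + 1)]
    return probs + [1.0 - sum(probs)]

conc = 100 / 9.6  # cells per mL (100 cells suspended in 9.6 mL)

p0, p1, p2, p_more = well_occupancy(conc, 0.100)  # 100 µL per well
q0, q1, q2, q_more = well_occupancy(conc, 0.050)  # 50 µL per well
```

Rounding the probabilities to whole percentages gives 35/37/19/9% for the 100 µL dispensing and 59/31/8/2% for the 50 µL dispensing, matching the proportions reported above.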
In vivo experiments
For tumor establishment, 8-week-old NOD/SCID mice (Charles River, Wilmington, MA) were subcutaneously (s.c.) injected into both flanks with 0.3-0.5 × 10⁶ tumor cells mixed at 4 °C with liquid Matrigel (Becton-Dickinson; Franklin Lakes, NY). Tumor volume (mm³) was calculated as previously reported [19]. When tumors were about 150 mm³, anti-human VEGF mAb (bevacizumab) was administered intraperitoneally (i.p.) at 100 µg/dose bi- or tri-weekly, and mice were sacrificed 48 h after the last treatment. Control mice received i.p. injections of PBS. To obtain GDS and GDR clones from xenografts, we isolated ex vivo cell cultures from both bevacizumab-treated and control xenografts and calculated the percentage of GDS and GDR clones on the total number of clones obtained. The human patient-derived xenograft (PDX) model of ovarian cancer was reviewed and approved by the Veneto Institute of Oncology (IOV) Institutional Review Board and Ethics Committee (EM 23/2017) and was performed in accordance with the Declaration of Helsinki. The patient provided written informed consent to participate in this study. The patient-derived xenograft was previously established in our lab from cancer cells contained in ascitic effusions from a patient bearing epithelial ovarian cancer (EOC) [7, 20], and utilized in this study. The PDX was obtained by intraperitoneally injecting 1 × 10⁶ PDOVCA62 cancer cells in NOD/SCID mice. Two days after injection, mice were treated twice per week with i.p. injections of bevacizumab or PBS (control mice) until the ethical endpoint.
Cell viability
Cell viability of clones in the glucose deprivation assay in vitro and in PDX-derived ex vivo cultures was evaluated using the Annexin V/PI Staining Kit (Roche Applied Sciences; Penzberg, Germany). Cells were stained with 2 µl Annexin-V Fluos, 2 µl propidium iodide and 100 µl Hepes buffer, according to the manufacturer's instructions. Following a 15 min incubation in the dark, staining was blocked with 200 µl Hepes buffer. Labelled cells were analysed on an LSR II cytofluorimeter (Becton Dickinson). Cell viability of GDS and GDR clones in vitro, and of GDR clones cultured in FBS-deprived or oleic acid-enriched medium, was measured using the CellTiter 96® Aqueous One Solution Cell Proliferation Assay (MTS) following the manufacturer's instructions (Promega, Milan, Italy). For pyruvate deprivation experiments, GDS and GDR clones were cultured for 72 h under pyruvate deprivation and cell viability was measured using the Sulforhodamine B Assay Kit (SRB) following the manufacturer's instructions (Abcam, Cambridge, UK).
Seahorse analysis
Seahorse analysis allows dynamic measurement of the oxygen consumption rate (OCR) and extracellular acidification rate (ECAR). We applied this technique to investigate glycolysis and OXPHOS in GDR and GDS clones. To this end, 2.5 × 10⁴ cells per well (n = 5 replicates per sample) were plated in RPMI medium supplemented with 10% FBS. The following day, cells were placed in a running DMEM medium and pre-incubated for 30 min at 37 °C in atmospheric CO2 before starting Seahorse measurements. The cartridge of the instrument was loaded with four different metabolic inhibitors, dispensed at 20 min intervals: oligomycin, carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP), antimycin A and rotenone. Spare respiratory capacity was calculated by subtracting the mean basal respiration values from the maximum respiration values. Glycolytic capacity was calculated as the difference between the mean ECAR following the injection of oligomycin and the basal ECAR values.

Glucose and lactate measurements
GDS and GDR clones were cultured in 6-well plates under normal conditions and after 72 h supernatants were collected. Glucose and lactate concentrations were determined by colorimetric methods on an automated analyzer (Dimension RxL, Dade Behring, Milan, Italy). Values were normalized to cell number at the end of the incubation period.
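The two derived Seahorse metrics defined above are simple differences over segments of the OCR and ECAR traces. A minimal sketch, assuming the post-FCCP segment represents maximal respiration (the trace values below are illustrative, not measured data):

```python
def _mean(xs):
    return sum(xs) / len(xs)

def spare_respiratory_capacity(ocr_basal, ocr_post_fccp):
    """Spare respiratory capacity: mean maximal (post-FCCP) OCR
    minus mean basal OCR."""
    return _mean(ocr_post_fccp) - _mean(ocr_basal)

def glycolytic_capacity(ecar_basal, ecar_post_oligomycin):
    """Glycolytic capacity: mean ECAR after oligomycin injection
    minus mean basal ECAR."""
    return _mean(ecar_post_oligomycin) - _mean(ecar_basal)

# Illustrative trace segments (three readings each), not measured data
ocr_basal = [100.0, 102.0, 98.0]        # OCR before any injection
ocr_post_fccp = [176.0, 175.0, 174.0]   # OCR after FCCP uncoupling
ecar_basal = [20.0, 21.0, 19.0]
ecar_post_oligo = [35.0, 36.0, 34.0]    # ECAR after ATP synthase inhibition

src = spare_respiratory_capacity(ocr_basal, ocr_post_fccp)  # 175 - 100 = 75
gc = glycolytic_capacity(ecar_basal, ecar_post_oligo)       # 35 - 20 = 15
```

In practice these values come directly from the instrument's trace segments bracketing each injection; the sketch only makes the arithmetic explicit.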
Single-cell acidification assay
The single-cell acidification assay was performed to evaluate the secretion of acid from individual living tumor cells, as previously described [21]. Cells from IGROV-1 clones were split into small (picoliter/nanoliter) aqueous droplets in oil (making a water-in-oil emulsion) using microfluidic technology. Each droplet contains at most a single cell, and molecules secreted by this single cell are retained by the droplet. To screen droplets with higher throughput in a semi-automated way, we engineered an inverted microscope so that each droplet can be analysed using laser-induced fluorescence at approximately 1 kHz. For each droplet, the ratio of emitted fluorescence intensities at 580 and 630 nm is calculated in real time.

Mass spectrometry-based untargeted metabolomics
GDR and GDS cell clones were grown for 48 h in biological triplicate. At 48 h each clone was rapidly rinsed in saline solution (~ 2 s), aspirated, and metabolism was quenched by adding ~ 15 mL of liquid N2 to the dish. The plates were then stored at -80 °C, and extracted and analysed within seven days. Extraction was done by adding 1 mL of ice-cold 90% 9:1 MeOH:CHCl3 to each plate; cells were scraped and centrifuged at 4 °C for 15 min at 10,000 × g. Supernatants were collected for metabolomics analysis. Flow injection analysis high resolution mass spectrometry (FIA-HRMS) was used for untargeted metabolomics [22]. A portion of the extract (8 µL) was analysed on an LTQ-Orbitrap XL Mass Spectrometer (Thermo Fisher Scientific) equipped with an electrospray source operated in negative and positive modes. Each run was carried out by injecting 8 µL of sample extract at a flow rate of 50 µL/min (Agilent 1200 Series) of a mobile phase consisting of isopropanol/water (60:40, v/v) buffered with 5 mM ammonium at pH 9 for negative mode and methanol/water (60:40, v/v) with 0.1% formic acid at pH 3 for positive mode. Reference masses for internal calibration were used in continuous infusion during the
analysis (m/z 210.1285 for positive and m/z 212.0750 for negative ionization). Mass spectra were recorded from m/z 50 to 1,000 at a resolution of 60,000. Source temperature was set to 240 °C with 25 L/min drying gas and a nebulizer pressure of 35 psig. MS/MS fragmentation patterns of the significant features were collected and used to confirm metabolite identity. All data processing and analysis were done with Matlab R2016a (The Mathworks, Natick, MA) using an in-house developed script [23]. The statistical significance of each single metabolite between GDR and GDS clones was computed by univariate pairwise comparison with the Mann-Whitney-Wilcoxon test (JMP Pro 12, SAS). For biological interpretation of the metabolomic data, we mapped the significant metabolites onto biochemical networks using MetaboAnalyst (www.metaboanalyst.ca). Enrichment analysis (EA) tools were used to identify the metabolic pathways most likely to be involved in GDS/GDR differences.

Whole exome sequencing (WES) analysis
WES was performed on IGROV-1 (3 GDR and 3 GDS) and SKOV3 clones (3 GDS and 3 GDR). Genomic DNA was extracted with the Easy DNA kit (Life Technologies), quantified with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA) and subjected to quality control on agarose gel prior to enzymatic DNA fragmentation. SKOV3 samples were sequenced using the SureSelectXT Low Input Human All Exon V7 48.2 Mb kit (Agilent, Santa Clara, CA), whereas IGROV-1 samples were sequenced with All Exon V6, on a NextSeq 500 (Illumina, San Diego, CA) in paired-end mode (2 × 100 bp). Sequencing reads were then analysed with the Alissa Align & Call software (version 1.0, Agilent Technologies, Santa Clara, CA). Specifically, reads were trimmed and aligned to the human genomic reference hg19 (GRCh37) with default parameters. After read de-duplication, aligned reads were used for somatic variant calling with Alissa SNPPET according to default threshold values. Single nucleotide variant (SNV) annotation and selection was performed using the Alissa
Interpret module (version 1.0, Agilent Technologies, Santa Clara, CA). Briefly, exonic SNVs indicated as confident calls (PASS) were considered for further investigation and filtered considering a variant allele frequency cut-off value of 20%, excluding common polymorphisms (MAF > 1% in major population databases) as well as synonymous and intronic SNVs [30, 31].

Mitochondrial DNA sequencing
Mitochondrial genome sequencing was carried out using an NGS approach [24]. Briefly, the entire mtDNA molecule was amplified with two long PCR amplicons (9.1 kb and 11.2 kb), the library was constructed with the Nextera XT DNA Library Preparation Kit (Illumina, San Diego, CA) and sequenced on a MiSeq System (Illumina, San Diego, CA) using the 300-cycle reagent kit. Reads were aligned to the mtDNA reference sequence (NC_012920.1). Population frequencies and annotation of single nucleotide variants (SNVs) were recovered from the public database Mitomap (http://www.mitomap.org).
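The SNV selection criteria described above amount to a sequential filter over the called variants. A hedged sketch over a made-up variant table (the field names and records are illustrative assumptions, not the Alissa export schema):

```python
# Hypothetical variant records; keys and values are illustrative only.
variants = [
    {"gene": "MFSD4B", "filter": "PASS", "region": "exonic",
     "effect": "missense", "vaf": 0.42, "pop_maf": 0.0001},
    {"gene": "GENE_A", "filter": "PASS", "region": "exonic",
     "effect": "synonymous", "vaf": 0.35, "pop_maf": 0.0},     # dropped: synonymous
    {"gene": "GENE_B", "filter": "PASS", "region": "intronic",
     "effect": "intron_variant", "vaf": 0.50, "pop_maf": 0.0}, # dropped: intronic
    {"gene": "GENE_C", "filter": "PASS", "region": "exonic",
     "effect": "missense", "vaf": 0.12, "pop_maf": 0.0},       # dropped: VAF < 20%
    {"gene": "GENE_D", "filter": "PASS", "region": "exonic",
     "effect": "missense", "vaf": 0.44, "pop_maf": 0.05},      # dropped: MAF > 1%
]

def keep_variant(v):
    """Apply the stated filters: confident exonic calls with VAF >= 20%,
    excluding common, synonymous and intronic SNVs."""
    return (v["filter"] == "PASS"
            and v["region"] == "exonic"
            and v["vaf"] >= 0.20
            and v["pop_maf"] <= 0.01
            and v["effect"] != "synonymous")

selected = [v["gene"] for v in variants if keep_variant(v)]
```

Only the first record passes every criterion, mirroring how a single candidate such as the MFSD4B variant survives the cascade.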
then scanned on an Affymetrix GeneChip Scanner GCS3000, and the image (*.DAT) files were processed using the Affymetrix GeneChip Command Console (AGCC) software v.5.0 to generate cell intensity (*.CEL) files. Prior to transcriptional data analysis, chip quality assessment was executed using the Affymetrix Expression Console software v.1.4, and for every array all quality metrics were found within boundaries. Bioinformatic analysis was carried out in the R statistical environment using Bioconductor packages [25]. Data were preprocessed using the RMA algorithm [26]. Given the specific design of the experiments, devised to obtain comparisons both between independent groups of clones (GDR vs. GDS comparisons at a fixed time point) and between paired clones (over-time comparisons for GDR or GDS clones), the clone variable was treated as a random effect [27]. Differential expression analysis was performed using the limma package, by linear modelling, moderating the t-statistics by empirical Bayes shrinkage [28]. To correct for multiple testing, the Benjamini and Hochberg method was applied. Gene Set Enrichment Analysis (GSEA) was performed to evaluate the functional significance of curated sets of genes. Genes were ranked by moderated t-statistics and fast pre-ranked GSEA, as implemented in [29], was run against the KEGG and Reactome canonical pathways present in the "c2.cp.kegg" and "c2.cp.reactome" sub-collections of the Molecular Signatures Database. Gene sets with a Benjamini-Hochberg adjusted p-value < 0.10 were considered significantly enriched.
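The Benjamini-Hochberg adjustment applied above to control for multiple testing can be sketched in a few lines; in the actual analysis this is handled in R, and the p-values below are invented for illustration:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values: p_(i) * n / i over the
    ascending-sorted p-values, made monotone from the largest down."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):          # walk from the largest p to the smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = min(running_min, 1.0)
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.60]
adj = benjamini_hochberg(raw)
significant = [p < 0.10 for p in adj]  # the cutoff used for the GSEA gene sets
```

With these inputs the first four gene sets would pass the 0.10 FDR cutoff and the last would not.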
The day after, the membrane was washed and incubated for 60 min with a horseradish peroxidase-conjugated secondary antibody (goat anti-rabbit IgG-HRP, 1:5000, Santa Cruz Biotechnology; horse anti-mouse IgG-HRP, 1:5000, Cell Signaling Technology). Detection was attained using Western Lightning Plus ECL reagents (PerkinElmer); the signal for each of the proteins of interest was then acquired using a UVITEC Alliance (Cambridge, UK) and standardized on the signal of a housekeeping protein (tubulin or actin).

Reverse transcription-PCR and quantitative PCR
Total RNA was isolated using the RNeasy® Mini Kit (Qiagen, Venlo, Netherlands) according to the manufacturer's instructions. cDNA was synthesized from 0.3 to 1 µg of total RNA using the High Capacity RNA-to-cDNA Kit (Thermo Fisher). Real-time PCRs for SLC16A1 (MCT1) (forward 5'-TGT TGT TGC AAA TGG AGT GT-3', reverse 5'-AAG TCG ATA ATT GAT GCC CAT GCC AA-3') were performed using Platinum® SYBR® Green (Thermo Fisher) in an ABI Prism 7900 HT Sequence Detection System (Thermo Fisher). Results were analysed using the ΔΔCt method with normalization against HMBS gene expression.

Mitochondrial mass
Mitochondrial mass of GDS and GDR clones under normal conditions or upon glucose deprivation was measured using MitoTracker™ Deep Red FM (Invitrogen, Carlsbad, CA) following the manufacturer's instructions. Verapamil (1 µL/mL) was used to block probe leaking.

Statistical analysis
Results were expressed as mean value ± SD. Statistical comparison between two sets of data was performed using the unpaired Student's t test (two-tailed). Statistical significance is indicated as follows: *P < 0.05; **P < 0.01; ***P < 0.001. For in vivo experiments with PDX, survival was evaluated using the Kaplan-Meier method (log-rank test) with SigmaPlot software.
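The ΔΔCt quantification mentioned above reduces to a short calculation. A sketch with invented Ct values (SLC16A1 as target and HMBS as the reference gene, as in the text):

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change by the ddCt method: 2**(-ddCt), where each dCt
    normalizes the target Ct to the reference (housekeeping) gene."""
    d_sample = ct_target_sample - ct_ref_sample
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_sample - d_control))

# Invented Ct values: SLC16A1 (MCT1) vs. HMBS in a sample and a control clone
fold = relative_expression(ct_target_sample=24.0, ct_ref_sample=20.0,
                           ct_target_control=26.0, ct_ref_control=20.0)
# dCt_sample = 4, dCt_control = 6, ddCt = -2, so a four-fold increase
```

Note that the method assumes roughly equal (and near-perfect) amplification efficiencies for target and reference genes.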
Results

Selection of ovarian cancer cell clones based on their response to glucose limitation
To investigate cellular sensitivity to glucose starvation at the clonal level, we obtained populations of clones from several established ovarian cancer cell lines (IGROV-1, OC316, SKOV3, A2780, OAW42 and A2774). Single tumor cells were isolated from each cell line and different clones (range n = 20-35) were derived and expanded in vitro (Fig. 1A). Clones were subjected to a glucose limitation regimen and, after 72 h, it was possible to discern two types of clones based on their different morphology and degree of confluency. GDR clones survived under low glucose concentrations (0.2 mM), conserving their typical adhesive morphology and maintaining a monolayer pattern. On the contrary, GDS clones showed reduced confluency at microscopical observation (Fig. 1B) and disclosed compromised viability at the same time point (Fig. 1C). The proportion of GDR and GDS clones varied depending on the cancer cell line: the A2774 and A2780 cell lines contained an excess (> 80%) of GDR clones, whereas others, such as OC316, were composed entirely of GDS clones (Fig. 1D). The IGROV-1 and SKOV3 cell lines were selected for further investigations because of their balanced proportions of GDS and GDR clones (GDR/GDS ratio close to 1) (Fig. 1D). These findings demonstrated an unexpected clonal diversity in ovarian cancer cell lines, with different cell populations that responded oppositely to glucose starvation.

Anti-VEGF therapy selects GDR clones endowed with a more tumorigenic potential in vivo
Anti-VEGF therapy caused dramatic depletion of glucose and ATP in ovarian cancer xenografts [6]. To investigate whether anti-VEGF therapy perturbed the GDR/GDS ratio in tumor xenograft models, we generated IGROV-1 and SKOV3 tumor xenografts and treated them with bevacizumab (BEVA) for 4 weeks. At sacrifice, we derived ex vivo cell cultures from tumors and performed analysis of GDR and GDS clones (Fig.
2A). The mean percentages of GDR and GDS clones (50% ± 9.56%) in IGROV-1 control tumors were similar to the proportions found in the IGROV-1 parental cell line (55% GDR vs. 45% GDS), whereas the average proportion of GDS clones obtained from control SKOV3 tumors (64% ± 32.8% GDS vs. 34% ± 32.5% GDR) was slightly higher than the percentage found in the parental cell line (56% GDR vs. 44% GDS), due to two control tumors (C3, C4) being enriched for GDS clones (Fig. 2B, black bars). A remarkable imbalance was found between GDR and GDS clones in ex vivo cultures from BEVA-treated tumors: IGROV-1 xenografts showed a significant excess of GDR clones (73% ± 15.97%) with respect to GDS clones (27% ± 15.7%) in most tumors analyzed, and SKOV3 xenografts an almost total enrichment for GDR (97% ± 3.6%) with respect to GDS (3% ± 3.6%) clones (Fig. 2B, Suppl. Table 2). The tumorigenic potential of GDR and GDS clones derived from control tumors was further investigated. The in vivo growth rate of GDR clones was significantly higher compared with GDS clones in both IGROV-1 and SKOV3 xenografts (Fig. 2C). The increased capability of GDR clones to form tumors in vivo was likely not due to intrinsic differences in cell proliferation, because similar numbers of pHH3-positive cells were observed in tumors (Suppl. Figure 1A). Notably, microvessel density (MVD), which is often used as a surrogate for intratumoral angiogenic activity [30], was quantified in GDS and GDR tumor xenografts by IHC. The number of CD31-positive endothelial cells per area unit was higher in GDR-derived tumors when compared with GDS-derived tumors (Fig.
2D). Given the well-established role of angiogenesis in promoting tumor growth, it is tempting to speculate that the increased in vivo growth rate of GDR clones could in part be accounted for by their increased angiogenic capacity. No difference was found in apoptosis between GDR and GDS tumors, whereas SKOV3 tumors were less apoptotic than IGROV-1 ones (Suppl. Figure 1B). Finally, GDS-derived tumors presented larger necrotic areas with respect to GDR tumors (Suppl. Figure 1C). These results suggested that anti-VEGF therapy led to a marked enrichment of GDR clones that were fit to form tumors under the unfavourable nutrient conditions of the subcutaneous xenograft tumor microenvironment. Finally, we investigated whether therapy-associated enrichment of a glucose deprivation-resistant population could occur in a patient-derived xenograft (PDOVCA62) model of ovarian cancer. Bevacizumab significantly extended the survival of the mice from 29 ± 4 to 47 ± 6 days (mean ± SD) in the control and treated groups, respectively (Suppl. Figure 2A). As it was not possible to derive single-cell clones from PDOVCA62 cells, the viability of tumor cells from the ascitic fluid sample obtained at sacrifice was analysed under glucose deprivation in vitro. An enrichment of GDR cell populations in PDOVCA62 tumors from bevacizumab-treated mice compared with control mice (Suppl. Figure 2B) was observed, in line with the results obtained in s.c. tumor xenograft models.

GDR clones are endowed with an OXPHOS-dependent metabolic phenotype
The striking effects of bevacizumab treatment on the GDS/GDR ratio in tumors prompted us to investigate the metabolic features associated with the different phenotypes. To functionally investigate metabolic activity, measurements of OXPHOS and glycolysis were obtained in IGROV-1 and SKOV3 clones by using Seahorse technology. At the basal level, IGROV-1 GDR clones were characterized by higher OCR compared with GDS clones, whereas SKOV3 GDR clones were similar to GDS clones (Fig. 3A, B; Suppl.
Figure 3A, B). With regard to ECAR, no significant differences were found between IGROV-1 GDR and GDS clones, whereas SKOV3 GDS clones were characterized by higher ECAR compared with GDR clones (Fig. 3C, D; Suppl. Figure 3C, D). In response to the mitochondrial activity perturbation induced by FCCP, OCR significantly increased in IGROV-1 GDR compared with GDS clones, suggesting that IGROV-1 GDR clones are endowed with increased spare respiratory capacity (Fig. 3B), whereas SKOV3 GDR clones did not respond by increasing OCR (Suppl. Figure 3B). Oligomycin (a mitochondrial inhibitor of ATP synthase) triggered an increase in ECAR in both IGROV-1 and SKOV3 GDR and GDS clones (Fig. 3C, Suppl. Figure 3C) but, interestingly, GDR clones showed a higher maximal glycolytic capacity compared to GDS clones (Fig. 3D, Suppl. Figure 3D). Altogether, these observations suggested metabolic heterogeneity between IGROV-1 and SKOV3 clones at the basal level and, in particular, increased OXPHOS-dependent metabolism in IGROV-1 GDR compared with GDS clones, which was not shared by SKOV3 GDR clones. To measure the extent of acid secreted by GDR and GDS clones, we performed a single-cell acidification assay using tumor cells compartmentalized in picoliter droplets [21]. In the presence of glucose at the basal level, IGROV-1 GDR clones showed higher pH values compared to GDS clones, suggesting a lower acidification rate (Fig. 3E). Oligomycin treatment was able to induce a significant decrease in the pH value in GDR clones when compared to non-treated clones. However, the pH value of GDS clones was not significantly affected by oligomycin treatment (Fig. 3E).
Untargeted metabolic profiling was performed to further investigate the metabolic differences between GDR and GDS clones under basal growth conditions. The major metabolic differences between GDR and GDS IGROV-1 clones were alterations in the abundance of metabolites belonging to the Warburg effect (28 metabolites), such as glutamate metabolism, ammonia recycling, and the urea cycle (Suppl. File 1). In agreement with what was previously observed, all the mapped metabolites were involved in biochemical reactions involving the mitochondria. Glucose consumption and lactate production were measured in IGROV-1 and SKOV3 clones. IGROV-1 GDS and GDR clones demonstrated comparable levels of glucose consumption and lactate production (Suppl. Figure 3E), whereas SKOV3 GDS clones showed slightly increased glucose consumption and produced more lactate compared to GDR clones (Suppl. Figure 3F). Finally, we sought to investigate whether the metabolic differences observed in GDS/GDR clones from IGROV-1 and SKOV3 cells also existed in the ovarian cancer cell lines with an almost total enrichment towards GDS (OC316 cell line) or GDR clones (A2774 cell line). As previously reported [6], OC316 cells were endowed with both higher basal ECAR and glycolytic capacity values with respect to A2774 cells, indicating a more glycolytic metabolism upon glucose availability (Suppl. Figure 4A). Upon glucose deprivation, a higher percentage of cell death was observed in OC316 with respect to A2774 cells, in line with their GDS phenotype (Suppl. Figure 4B). Pyruvate deprivation induced a significant increase in the percentage of cell death only in the GDR-enriched A2774 cell line, but not in the GDS-enriched OC316 cell line, thus suggesting the dependence of A2774 cells on this nutrient, as previously observed for IGROV-1 GDR clones (Suppl. Figure 4C). Altogether, these findings suggest that certain metabolic traits of GDR/GDS clones can be found across different ovarian cancer cell lines.
The WASHC2A missense variant has been observed in various cancers as neutral, while the MFSD4B variant has been previously described in gastric and colorectal cancer [31, 32] and is likely to cause loss of function of the protein, which is predicted to enable glucose transmembrane transporter activity. By merging IGROV-1 and SKOV3 WES data, we observed a higher number of SNVs carried by SKOV3 clones with respect to IGROV-1 clones, possibly due to the different enrichment performance of the two exome panel versions. In any case, we did not detect any shared variants within the GDS or GDR groups, except for one variant common to all 12 samples (Suppl. Figure 6), SPIN2B c.136C > T p.Arg46Cys, with an overall VAF from 36.7 to 58.4%. In a previous study, mitochondrial DNA mutations accounted for limited resistance of cancer cell lines to glucose limitation [16]. Mitochondrial DNA analysis of IGROV-1 GDR and GDS clones showed 32 variants: 28 homoplasmic changes shared by all samples, 1 heteroplasmic change shared by all samples, and 3 heteroplasmic changes present in single samples (Suppl. Table 5). Two of them are coding variants, both absent from the Mitomap database (https://www.mitomap.org/MITOMAP), detected respectively in the MT-ND5 gene of IGROV-1 GDR_7 at a heteroplasmy level of 9% and in the MT-RNR2 gene of IGROV-1 GDS_9 at a heteroplasmy level of 21%. These mutations, however, were present at low levels and are therefore not likely to affect cell metabolism, thus indicating the lack of a peculiar genetic signature at the mitochondrial level that could have an impact on metabolic features. Altogether, these results indicate that genetic alterations are not likely to account for the marked difference in tolerance to glucose starvation observed in GDR versus GDS clones.
GDR clones exacerbate an OXPHOS-related transcriptional program to overcome the stressful stimulus of glucose deprivation
To investigate whether the metabolic differences observed in GDR and GDS clones were associated with distinct signatures at the transcriptional level, a differential expression (DE) analysis of GDR and GDS clones cultivated under either normal-glucose or low-glucose conditions was performed. Based on previous studies [32], two time points (6 and 24 h) were selected to investigate both early and late transcriptional effects of glucose deprivation in the IGROV-1 and SKOV3 models. Glucose deprivation caused massive modulation of genes in all clones (Suppl. Table 6) and, to characterize its action from a functional point of view, GSEA on KEGG pathways was conducted. The analysis over time, comparing cells at different time points during glucose starvation (6 or 24 h) with the basal condition (0 h), disclosed several altered KEGG pathways in both the GDR -glu vs. GDR +glu and GDS -glu vs. GDS +glu comparisons (Suppl. Figure 7 and Suppl. Figure 8, Suppl. File 2). DE analysis between GDR and GDS clones at the same time point did not show differentially expressed genes between the two populations, except for a few tens of probes for IGROV-1 at 24 h of glucose deprivation (Suppl. Table 6). However, the GSEA method showed several significantly altered KEGG pathways in GDR with respect to GDS clones (Suppl. File 2). We selected the commonly perturbed pathways between the two cell models in order to identify the functional processes characterizing GDR clones. Specifically, after 24 h of glucose deprivation, two downregulated pathways (KEGG N-glycan biosynthesis and KEGG protein export) and, more interestingly, seven upregulated pathways were shared (Fig. 4A). To compare the behaviour of IGROV-1 and SKOV3 GDR clones, the seven upregulated pathways were analyzed over time, highlighting a common modulation pattern of the same gene sets but with different timing (Fig.
4B). Specifically, IGROV-1 GDR clones disclosed a basal (0 h) upregulation of KEGG pathways strictly related to metabolic changes such as OXPHOS, Parkinson's disease, Huntington's disease and Alzheimer's disease, compared to GDS clones, thus confirming their previously demonstrated OXPHOS metabolic propensity. Moreover, after 6 h of glucose deprivation, IGROV-1 GDR clones showed an even stronger upregulation of OXPHOS-related pathways that was maintained after 24 h. In contrast, SKOV3 GDR clones showed significant activation of the same processes as IGROV-1 clones only after 24 h of glucose deprivation and not at the other time points analyzed. To better characterize the common transcriptional response between IGROV-1 and SKOV3 GDR clones, which were able to survive the stressful glucose deprivation stimulus, we selected the four pathways most strongly upregulated in GDR clones of both models (adjusted p < 0.0005) after 24 h of glucose deprivation and visualized the complex networks of interconnections among core enrichment genes (Suppl. Figure 9). Finally, to further explore the pathway crosstalk between the two models, the gene expression distribution of core enrichment genes was displayed by UpSet plot (Fig. 4C). Given the high overlap of core genes involved in the OXPHOS pathway and the other strongly upregulated KEGG pathways (Parkinson's disease, Huntington's disease and Alzheimer's disease; Suppl. Figure 9, Fig. 4C), the parallel activation of these pathways both in the IGROV-1 and SKOV3 models was expected (Fig. 4B). Core genes sustaining the significant activation of the OXPHOS pathway in GDR vs. GDS clones are listed in Fig. 4D and represent a promising starting point for the identification of a GDR signature in ovarian cancer.

Fig. 4 Activation of OXPHOS pathway characterizes GDR clones after 24 h of glucose deprivation. A GSEA enrichment plots showing the seven significantly upregulated pathways in GDR vs. GDS clones at 24 h of glucose deprivation in IGROV-1 (left panel) and SKOV3 (right panel). B Dot plot displaying the activation profiles of the shared pathways over time; the x-axis represents the time course (hours of glucose deprivation), the adjusted p-value (p.adjust) indicates the significance of each pathway, and GeneRatio the number of core enrichment genes/total number of genes in the pathway. C UpSet plots for the 4 most upregulated KEGG pathways shared between the IGROV-1 and SKOV3 models. This visualization allows discerning core genes that uniquely belong to a pathway or are shared between two or more pathways. On the y-axis the fold change distribution of core enrichment genes in the different groups is displayed. D List of the 37 genes belonging to the core enrichment of the KEGG OXIDATIVE PHOSPHORYLATION pathway for both the IGROV-1 and SKOV3 models. Genes in bold are in the core enrichment of 4 KEGG pathways: OXIDATIVE PHOSPHORYLATION, ALZHEIMER'S DISEASE, HUNTINGTON'S DISEASE and PARKINSON'S DISEASE.

SKOV3 GDR clones accumulate and metabolize external lipids, whereas GDS clones rely on accumulation and synthesis of new FAs

Glucose deprivation triggered the significant upregulation of common processes implicated in OXPHOS in both IGROV-1 and SKOV3 GDR clones but, interestingly, fatty acid metabolism was found to be a specifically upregulated process in SKOV3 GDR clones under glucose deprivation (Fig.
5A). To functionally validate this transcriptional modulation in detail, we examined whether survival of the SKOV3 GDR clones could rely on a different regulation of FA import from the extracellular medium. CD36 protein (a membrane receptor involved in the import of lipids) was heterogeneously expressed among GDR/GDS clones but was not modulated by glucose limitation (Fig. 5B). Interestingly, increased expression of CD36 in GDR compared with GDS clones was observed (Fig. 5B), suggesting that GDR clones might accumulate external FAs independently of glucose concentrations in the medium. CD36 expression is positively regulated by AMPK [33], and phosphorylated (Ser79) pACC is an established AMPK target [34]. Glucose deprivation activated the AMPK pathway in SKOV3 clones, as expected, but increased CD36 expression was not associated with higher basal AMPK activation in GDR clones (Suppl. Figure 10). In view of the well-known contribution of CD36 to lipid import, we speculated that GDR cells could import more FAs from the exogenous environment compared to GDS clones. Although no differences were found in the number of BODIPY+ cells between GDS and GDR clones in the presence of glucose (Fig. 5C), glucose deprivation induced a significant increase in BODIPY+ cells in GDR clones with respect to GDS clones, indicating an increment in lipid droplet (LD) formation (Fig. 5C). FA β-oxidation could be an important source of ATP in the absence of glucose and, in particular, we evaluated CPT1A as a key enzyme implicated in controlling fatty acid oxidation (FAO) [18]. Interestingly, a strong increase in CPT1A protein expression in GDR with respect to GDS clones, both in the presence and absence of glucose, was observed (Fig.
5D), suggesting sustained FA β-oxidation. To investigate the potential ability of GDR clones to better exploit extracellular lipids such as the serum lipoproteins and FAs contained in FBS [35], GDR clones were cultured in glucose-rich or glucose-poor medium supplemented with either 10% or 1% FBS. Interestingly, the reduction of FBS from 10% to 1% lowered viability of GDR clones (40% less), as expected, but supplementation of the medium with oleic acid (1.8 mM) was able to rescue this decrement (Fig. 5E). Finally, the inclination of clones to exploit de novo FA biosynthesis to fuel their metabolism upon glucose limitation was assessed. FAS protein expression was reduced in GDR compared to GDS clones, particularly upon glucose deprivation (Fig. 5F). Altogether, these results suggest that GDS and GDR SKOV3 clones preferred different processes related to FA metabolism when glucose was limited in the extracellular environment, with GDR clones more prone to accumulate and metabolize external lipids, whereas GDS clones relied on accumulation and synthesis of new FAs.

IGROV-1 GDR clones upregulate monocarboxylate transporter 1 (MCT1) and increase pyruvate import from the extracellular microenvironment upon glucose deprivation

To deepen the analysis of the OXPHOS dependency of GDR clones upon glucose deprivation (Fig. 6A), we investigated the role of MCT1 in GDS and GDR clones. MCT1 controls the reversible exchange of pyruvate and lactate between the cytosol and extracellular space in cancer [36], and it is involved in pyruvate import into the mitochondria, resulting in increased OXPHOS activity [37]. A significant upregulation of MCT1 transcript expression in both SKOV3 and IGROV-1 clones under glucose starvation was observed (Suppl. Figure 11), but the highest amount of MCT1 transcript was measured in IGROV-1 GDR clones under glucose deprivation (Fig.
6B). Although upregulation of MCT1 protein expression was induced in both GDR and GDS clones upon glucose starvation, a significant increase of MCT1 protein was observed in glucose-deprived GDR clones with respect to GDS ones (Fig. 6C). Pyruvate can putatively be used by GDR clones to maintain cell viability under glucose starvation. Although pyruvate deprivation affected the viability of IGROV-1 GDS clones, GDR clones suffered more in terms of viability when pyruvate was removed from the culture medium (Fig. 6D). In contrast, pyruvate depletion had marginal effects on SKOV3 GDR clones (Suppl. Figure 12). A further functional analysis, using GSEA on Reactome canonical pathways, was useful to finely inspect the OXPHOS dependency exhibited by GDR clones upon glucose deprivation. Upregulation of mitochondrial biogenesis and complex I biogenesis pathways in the IGROV-1 model was demonstrated (Fig. 6E) and, to better characterize the underlying mechanism, mitochondrial mass was used as a marker of mitochondrial biogenesis [38]. Glucose deprivation was associated with an increment in mitochondrial mass in both GDS and GDR clones, but glucose-deprived GDR clones were endowed with the highest mass (Fig. 6F), suggesting increased biogenesis of mitochondria as a response to glucose deprivation [39]. Finally, to target the OXPHOS metabolic dependency of GDR clones upon glucose deprivation, we employed a respiratory chain complex I inhibitor, metformin, which is known to decrease mitochondrial respiration and reduce glucose metabolism through the citric acid cycle [40]. Strikingly, upon glucose starvation GDR clones were more susceptible to metformin-induced cell death when compared to GDS clones (Fig. 6G). These results confirmed the OXPHOS dependency of IGROV-1 GDR clones when glucose is limited in the tumor environment and indicate a potential therapeutic strategy to target GDR clones.
Discussion

Low glucose and high lactate levels are common in the microenvironment of solid tumors [12] and can be exacerbated by anti-angiogenic therapy [6]. Previous studies have shown that cancer cell lines differ markedly in their capability to survive glucose limitation, and recognized OXPHOS as the major pathway required for optimal proliferation under low glucose conditions [16]. In this study, we performed a comprehensive analysis of the mechanisms of adaptation to glucose limitation used by clones derived from ovarian cancer cell lines. Established cancer cell lines were preferred due to the feasibility of obtaining cell clones, at variance with patient-derived cancer cells, whose survival and growth capacity in vitro are still limited. We identified two clonal populations, GDR and GDS, with different patterns of response to glucose limitation in vitro. We hypothesized that the GDR population could be enriched after anti-angiogenic treatment, which is known to exacerbate glucose starvation in tumors. As expected, this behaviour was observed in tumor xenografts formed by IGROV-1 and SKOV3 cells treated with bevacizumab, and also confirmed in a PDX ovarian cancer model, indicating that anti-angiogenic treatment favours survival and expansion of a glucose limitation-resistant phenotype in the tumor microenvironment. GDR clones disclosed OXPHOS-dependent metabolic traits compared with GDS clones and exhibited increased spare respiratory capacity, indicating a higher mitochondrial reserve. These features resulted in a more adaptive metabolic phenotype, enabling them to overcome the stress and cell-death stimuli derived from glucose deprivation, which were instead detrimental for GDS clones [41, 42]. Several studies demonstrated that the switch from glycolysis to OXPHOS is sufficient to induce acquired resistance to drugs [43-45]. We speculate that GDR clones could be more resistant to chemotherapy, although this was not directly investigated in our study.
The higher metabolic plasticity of GDR clones compared with GDS clones also emerged from Seahorse analysis. In line with this, we observed that OXPHOS perturbation induced by oligomycin treatment triggered an increment in the extracellular amount of lactate in GDR clones, suggesting an efficient adaptive metabolic apparatus able to cope with changes in nutrient availability. Conversely, GDS cell metabolism appears to be less adaptive, and OXPHOS blockade has relatively less impact on lactate production. The observation that OXPHOS inhibition was able to trigger a decrease in pH values in GDR clones highlighted their metabolic plasticity with respect to GDS clones, suggesting their ability to increase the extracellular acidification rate in response to oligomycin treatment. This metabolic adaptation is commonly observed also in other contexts and offers a therapeutic opportunity for specifically targeting resistant cells that survive glucose deprivation. In line with previous studies [46-48], we demonstrated that GDR clones exhibited stronger sensitivity to metformin-induced cell death upon glucose deprivation when compared to GDS clones (Fig. 6G), confirming the metabolic OXPHOS dependency observed in GDR clones when glucose is limited in the environment. These findings suggest that a combinatorial therapy using metformin and bevacizumab could delay the outgrowth of GDR clones and tumor growth. In this context, sporadic clinical observations suggest that these two drugs could indeed cooperate to cause tumor cell death [49]. The phenotypic features of GDR clones do not seem to be caused by underlying genetic alterations, because we did not find relevant mitochondrial DNA mutations in these clones, at variance with Sabatini et al.
who reported the presence of pathogenic mutations in the mitochondrial DNA of cancer cell lines sensitive to glucose limitation [16]. Although IGROV-1 GDS clones shared a missense variant in the nuclear-encoded gene MTG1 (mitochondrial ribosome-associated GTPase 1), we lack sufficient elements to attribute a functional impact of this variant on the two different GDS and GDR phenotypes and their different metabolic behaviors. Further characterization of the GDR and GDS populations was conducted at the transcriptional level in the IGROV-1 and SKOV3 models. Transcriptome experiments were designed to perform comparisons over time (6 h vs. 0 h, 24 h vs. 0 h) in both clones, and comparisons between GDR and GDS at fixed time points (0, 6 or 24 h). Longitudinal analysis revealed massive modulation of genes and pathways under glucose deprivation for 6 or 24 h in both clonal populations. Comparative transcriptome analysis (GDR vs. GDS) confirmed the activation of an OXPHOS-related transcriptional program in GDR clones as glucose levels decreased, thus reinforcing the findings obtained at the metabolic level. Moreover, our experiments showed different patterns of activation of the same OXPHOS-related pathways in IGROV-1 and SKOV3 GDR clones, with the former characterized by rapid activation (6 h) maintained over time (24 h), and the latter by delayed activation (24 h). Despite the similar behavior noticed at the transcriptional level in IGROV-1 and SKOV3-derived GDR clones, heterogeneous putative mechanisms of metabolic adaptation to glucose limitation were observed. In particular, SKOV3 GDR clones were more prone to accumulate and metabolize external lipids when glucose was deprived, as suggested by the increased expression level of CD36, a protein involved in the uptake of nutrients from the extracellular fluids [50]. On the contrary, SKOV3 GDS clones demonstrated a higher propensity for the synthesis of new fatty acids, as suggested by increased FAS expression levels. Although
these findings might reflect cellular adaptations to in vitro experimental conditions, they corroborate the recent observation that lipid uptake and storage are increased in tumor xenografts treated with bevacizumab [51]. Along this line, Hodakoski et al. recently reported that lung cancer cells exploit Rac-mediated macropinocytosis of extracellular proteins to survive states of glucose deprivation [52], and another study reported that human pancreatic carcinoma tumor samples, which are poorly vascularized and have decreased glucose levels, undergo high rates of macropinocytosis to maintain adequate intracellular amino acid levels [13]. On the other hand, IGROV-1 GDR clones hypothetically fueled OXPHOS upon glucose deprivation by upregulating the MCT1 transcript and protein, thus suggesting that pyruvate import from the extracellular microenvironment could be a fundamental process for adaptation to glucose deprivation. We indirectly confirmed the importance of this uptake in GDR clones by demonstrating that pyruvate depletion affected the viability of IGROV-1 clones under glucose deprivation. All these results suggest that adaptation to glucose starvation followed both common and different routes in SKOV3 and IGROV-1 cells, ultimately converging in the increased expression of peculiar transporters to facilitate the import of essential substrates. The response of cancer cells to glucose starvation has been extensively investigated by several previous studies, which highlighted multiple intrinsic mechanisms, in line with the results of our study [53-57]. It is also important to underline that, although here we focused on clones derived from established cancer cell lines, our group previously demonstrated that GDR/GDS populations exist also in OC patient-derived samples [58], thus underscoring the translational relevance of our findings.
Finally, the underlying mechanism by which angiogenic inhibitors lead to enrichment of a clonal population resistant to glucose deprivation deserves further investigation. The possibility of targeting this population with OXPHOS inhibitors as a novel therapeutic strategy could be useful to improve the efficacy of anti-angiogenic therapy in ovarian cancer.

Conclusion

In this study, we demonstrated metabolic heterogeneity at the clonal level in ovarian cancer cell lines by isolating different clonal populations which disclosed strikingly different viability following in vitro glucose starvation. GDR clones were enriched in experimental tumors by anti-VEGF therapy, suggesting their partial involvement in the onset of resistance to glucose deprivation after anti-angiogenic drug treatment. This therapy-induced clonal population was endowed with potentially druggable OXPHOS-dependent metabolic traits and could be targeted to counteract adaptive resistance to anti-angiogenic therapy in ovarian cancer.
Fig. 1 Generation of glucose deprivation resistant (GDR) or sensitive (GDS) clones. A Experimental workflow describing the protocol to generate GDR or GDS clones from ovarian cancer cell lines (IGROV-1, OC316, SKOV3, A2780, OAW42 and A2774). Clonal populations of cancer cells were obtained by following a protocol based on serial cell dilutions: 0.5, 1 or 2 cells per well were seeded in 96-well plates, and plates where clones developed in < 50% of wells were selected for further expansion and cultured either in standard (11 mM) or low (0.2 mM) glucose medium. B Representative images of SKOV3 clones cultured in standard or low glucose conditions for 72 h and then classified into two arbitrary categories, GDR and GDS clones, by microscope observation. C SKOV3 GDR and GDS clones were cultured either in standard or low glucose medium and after 72 h cell death was measured using Annexin V/PI staining (*p < 0.05). D Proportions of GDR and GDS clones in ovarian cancer cell lines: all clones analyzed were subjected to a glucose limitation regimen (0.2 mM) and the percentages (%) of GDR and GDS clones were calculated as the GDR/total number of clones ratio and the GDS/total number of clones ratio, respectively. The GDR/GDS ratio was calculated as the GDR percentage/GDS percentage ratio.
Fig. 5 Lipid uptake and catabolism in SKOV3 GDR and GDS clones. A GSEA enrichment plot showing the significant upregulation of KEGG FATTY ACID METABOLISM in GDR with respect to GDS clones at 24 h of glucose deprivation in the SKOV3 model. On the y-axis the running enrichment score (ES) is shown and on the x-axis the genes (vertical black lines) in the pre-ranked list belonging to the gene set. The colored band at the bottom denotes the degree of correlation of genes with the GDR phenotype (red) or the GDS phenotype (blue). B Western blot representing CD36 protein expression levels in SKOV3 GDS and GDR clones with or without glucose deprivation and relative quantification of CD36 abundance. Tubulin was used as loading control (4 GDS vs. 4 GDR representative clones) (**p < 0.01). C Lipid droplet (LD) amount in SKOV3 GDS and GDR clones measured by BODIPY staining, with or without glucose deprivation. Normalization was done on standard cell culture conditions (+ glu) (5 GDS vs. 5 GDR representative clones) (*p < 0.05). D Western blot representing CPT1A protein expression levels in SKOV3 GDS and GDR clones with or without glucose deprivation and relative quantification of CPT1A abundance. Tubulin was used as control (4 GDS vs. 4 GDR representative clones) (**p < 0.01). E Cell viability of GDR clones cultured in standard medium (10% FBS), serum-deprived medium (1% FBS) and serum-deprived medium supplemented with oleic acid (OA, 1.8 mM). Normalization was done on the enriched medium culture condition (***p < 0.001). F Western blot representing FAS protein expression levels in SKOV3 GDS and GDR clones with or without glucose deprivation and relative quantification of FAS abundance. Tubulin was used as loading control (4 GDS vs. 4 GDR representative clones) (**p < 0.01).
Fig. 6 Evaluation of OXPHOS dependency in IGROV-1 GDS and GDR clones. A GSEA enrichment plot showing the significant upregulation of the KEGG OXIDATIVE PHOSPHORYLATION pathway at 24 h of glucose deprivation in IGROV-1 GDR versus GDS clones. On the y-axis the running enrichment score (ES) is shown and on the x-axis the genes (vertical black lines) in the pre-ranked list belonging to the gene set. The colored band at the bottom denotes the degree of correlation of genes with the GDR phenotype (red) or the GDS phenotype (blue). B MCT1 mRNA levels in IGROV-1 GDR and GDS clones under normal and glucose deprivation conditions (4 GDR and 5 GDS clones). Columns show mean values ± SD of two replicates. β2-microglobulin was used as housekeeping gene (*p < 0.05). C Western blot representing MCT1 protein expression levels in SKOV3 GDS and GDR clones with or without glucose deprivation and relative quantification of MCT1 abundance. Tubulin was used as control (5 GDS vs. 5 GDR representative clones) (**p < 0.01). D Representative images of IGROV-1 GDS and GDR clones cultured under pyruvate and glucose starvation (left panel) and related cell viability (right panel). The graph represents the mean values ± SD of 3 GDS and 6 GDR clones after 72 h of glucose starvation (-G + P) or glucose/pyruvate starvation (-G -P) (*p < 0.05). E Gene set enrichment analysis was performed on the Reactome canonical pathways present in the Molecular Signatures Database. The enrichment plots show the significant upregulation of two Reactome pathways related to mitochondrial and complex I biogenesis in GDR vs. GDS at 24 h of glucose deprivation in the IGROV-1 model. F Mitochondrial mass of IGROV-1 GDS and GDR clones with or without glucose deprivation. The graph represents mitochondrial mass normalized on normal conditions (+ glucose) (4 GDS vs. 4 GDR clones) (*p < 0.05). G Effect of metformin treatment on cell death of IGROV-1 GDS and GDR clones under glucose starvation (5 GDR vs.
4 GDS clones). Results are represented as fold change by normalizing the values obtained with metformin treatment (1 mM) for 48 h on the glucose starvation condition (-G + Met/-G) (*p < 0.05).
Implementation of multilingual support of the European Multicenter Study about Spinal Cord Injury (EMSCI) ISNCSCI calculator

Objectives: Since their introduction, electronic International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) calculators have evolved into powerful tools providing error-free ISNCSCI classifications in education, research and clinical practice. For increased accessibility and dissemination, multilingual support is mandatory. The aim of this work was to set up a general multilingual framework for the freely available ISNCSCI calculator (https://ais.emsci.org) of the European Multicenter Study about Spinal Cord Injury (EMSCI).

Methods: The graphical user interface (GUI) and PDF export of the ISNCSCI worksheet were adapted for multilingual implementations, and their language-dependent content was identified. These two steps, called internationalization, have to be performed by a programmer in preparation for the translation of the English terms into the target language. The step following internationalization is called localization and needs input from a bilingual clinical expert. Two EMSCI partners provided Standard Mandarin Chinese and Czech translations. Finally, the translations were made available in the application.

Results: The GUI and PDF export of the ISNCSCI worksheet were internationalized. The default language of the calculator is set according to the user's preferences, with the additional possibility of manual language selection. The Chinese as well as the Czech translation were provided freely to the SCI community.

Conclusions: The possibility of multilingual implementations independent from software developers opens the use of ISNCSCI computer algorithms as an efficient training tool on a larger scale.
INTRODUCTION

The International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) [1,2] published by the American Spinal Injury Association (ASIA) and the International Spinal Cord Society (ISCoS) is based on a standardized clinical segmental examination of sensory and motor function. It provides classification procedures for determining the level and the severity of a spinal cord injury (SCI). The ISNCSCI examination consists of a bilateral manual muscle test of 5 myotomes on the arms (C5-T1) and 5 myotomes on the legs (L2-S1), a bilateral sensory test for light touch appreciation as well as pin prick discrimination in 28 dermatomes (C2-S4-5), and an anorectal examination of voluntary anal contraction of the external sphincter and deep anal pressure sensation. Intensive training has been strongly recommended to ensure a reliable examination [3,4]. Based on this clinical examination, several classification variables have been defined describing the location of the SCI (neurological level of injury, motor and sensory levels), the severity of the SCI (complete versus incomplete, ASIA Impairment Scale) and the zones of partial preservation in cases with missing sensory or motor function in the most sacral segments. The classification steps are relatively complex, and the rules have undergone several revisions and updates [1]. Thus, appropriate training is recommended to ensure a correct determination of the ISNCSCI classification variables, particularly of motor levels and the ASIA Impairment Scale [5-8]. Misclassification rates of manually determined motor levels range between 15% [6] and 38% [9] for clinicians not specifically trained in classification. Due to the complexity of the classification rules and the difficulties in manual classification, the need for electronic ISNCSCI calculators was identified during the last decade [10]. Since then, ISNCSCI calculators have evolved into powerful tools.
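As a minimal illustration of the kind of rule such calculators automate, the determination of a sensory level can be sketched as below. This is an illustrative simplification only, not the validated EMSCI or Rick Hansen algorithm: it ignores not-testable scores and the "INT" designation for fully intact sensation, and the helper function and example data are assumptions for this sketch.

```python
# Illustrative sketch (NOT the validated EMSCI algorithm): the sensory level
# is taken here as the most caudal dermatome with intact (score 2) light
# touch AND pin prick, with all more rostral dermatomes intact as well.
DERMATOMES = ["C2", "C3", "C4", "C5", "C6", "C7", "C8",
              "T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8", "T9",
              "T10", "T11", "T12", "L1", "L2", "L3", "L4", "L5",
              "S1", "S2", "S3", "S4-5"]  # the 28 ISNCSCI sensory dermatomes

def sensory_level(light_touch, pin_prick):
    """light_touch / pin_prick: dicts mapping dermatome -> score (0, 1 or 2)."""
    level = None
    for d in DERMATOMES:  # walk rostral to caudal
        if light_touch[d] == 2 and pin_prick[d] == 2:
            level = d
        else:
            break  # first impaired dermatome ends the run of intact ones
    return level  # None means even C2 is impaired; simplification: no NT, no INT

# Example: intact sensation down to C6, light touch impaired from C7 caudally.
lt = {d: 2 for d in DERMATOMES}
pp = {d: 2 for d in DERMATOMES}
for d in DERMATOMES[DERMATOMES.index("C7"):]:
    lt[d] = 1
print(sensory_level(lt, pp))  # C6
```

Even this toy rule shows why misclassification is common by hand: the real standard adds not-testable scores, motor levels and completeness rules on top of it.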
Two of them, the ISNCSCI computer algorithm [11] of the Rick Hansen Institute (https://www.isncscialgorithm.com/) and the ISNCSCI calculator of the European Multicenter Study about Spinal Cord Injury (EMSCI) [8] (https://ais.emsci.org), have been validated using large datasets and are freely available online. Both calculators contain a sophisticated inference logic for the consistent classification of datasets containing not-testable scores, which can be very challenging and time-consuming for clinicians to classify manually. In general, calculators assist clinicians and researchers in data management, in improving data quality and with up-to-date ISNCSCI classifications based on the most current ISNCSCI [8]. Considering the relevance of ISNCSCI as an internationally accepted and widely used assessment tool for clinical decision-making and research, there is a need for multilingual versions of the calculator beyond the established English version. In computing, multilingual support is typically divided into the processes "internationalization" and "localization", which are methods to adapt computer software to different languages, regional differences and technical requirements [12]. Internationalization (frequently abbreviated to the numeronym I18N) describes the process of designing a software application in a way that allows potential adaptation to various languages and regions without changes to its core algorithmic parts [13]. Although this process needs to be performed only once, it is most often very time-consuming. This applies particularly to small software projects when internationalization was not considered by the software developers from the beginning, and it might result in considerable restructuring of the software. Localization (numeronym L10N) is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text [13].
This task is typically less time-consuming than the I18N task, and translational rather than developer skills are needed. Whenever a user interacts with the ISNCSCI calculator in a language other than English, I18N and L10N are involved. This includes the web form to enter the ISNCSCI scores, the website in which the ISNCSCI web form is embedded, and the generated reports, e.g., the PDF worksheet based on the entered scores. Only if all these components of the calculator are internationalized and localized can the user choose among different languages. The aim of this work was (1) to internationalize the ISNCSCI calculator provided by EMSCI with fully freely available tools for the support of multiple language versions, and (2) to localize the calculator with two languages. With the help of two EMSCI partners (Department of Rehabilitation Medicine, Peking University Third Hospital, Beijing, China; Spinal Cord Unit, Department of Rehabilitation and Sports Medicine, 2nd Faculty of Medicine, Charles University and University Hospital Motol, Prague, Czech Republic), the freely available EMSCI ISNCSCI calculator web application was translated into Standard Mandarin Chinese [14] and into Czech.

METHODS

From a technical perspective, the calculator consists of a front end containing the graphical user interface (GUI) and a back end for data storage and management and for performing all ISNCSCI classifications. Details on the ISNCSCI calculation algorithms used in the back end are published elsewhere [8]. For this work, multilingual support was implemented for the GUI and the ISNCSCI worksheet export, because all other parts were developed in English as the internationally accepted language of informatics. The ISNCSCI booklet updated 2015, revised 2011 [15] was used as the basis for translations of the GUI and worksheet (REV 04/15). The "GNU gettext" toolkit [16] and the tinygettext toolkit [17] were used for I18N and L10N, respectively.
The "GNU gettext" toolkit provides the general internationalization framework, file formats and tools, whereas tinygettext is used for the actual localization of the application.

Internationalization prerequisites

The first step in the internationalization of software is the selection of a suitable character encoding to be used throughout the application. For some languages, e.g., English, a condensed character set like the ASCII (abbreviated from American Standard Code for Information Interchange) [18] encoding is sufficient. ASCII represents a fixed-length (7-bit) encoding originally developed from telegraph code and consists of 95 printable characters (letters, digits, punctuation marks and a few miscellaneous symbols). However, this widely used encoding is not suitable for an internationalization project, because the number of available characters is optimized for Latin-based languages and especially for English. Unicode is an industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems. The latest (May 2019) Unicode standard 12.1 contains nearly 138,000 characters (https://unicode.org/versions/Unicode12.1.0/). UTF-8 (abbreviated from Universal Coded Character Set Transformation Format, 8-bit) [19] is the most frequently used Unicode character encoding and was chosen to replace the already implemented ASCII encoding because of its backward compatibility with ASCII, which minimizes changes in the source code.

Internationalization of the ISNCSCI PDF worksheet

As the electronic ISNCSCI worksheet (Fig. 1A) is only available as a Portable Document Format (PDF) [20] file, the first processing step for internationalization was the conversion from the PDF format into a more editable file format. In general, PDF is a file format independent of application software, hardware and operating system, mainly intended for storage, viewing and printing, but not for editing [20].
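The encoding prerequisite described above, replacing ASCII with UTF-8, can be demonstrated in a few lines. The Czech sample string is an assumption chosen for illustration:

```python
# Why ASCII had to be replaced: Czech (and Chinese) text cannot be encoded
# in 7-bit ASCII, while UTF-8 handles both and stays byte-identical to
# ASCII for plain English strings (the backward compatibility that
# minimized source-code changes).
english = "Motor level"
czech = "Poranění míchy"   # illustrative Czech text ("spinal cord injury")

# For pure ASCII text, UTF-8 produces exactly the same bytes.
assert english.encode("utf-8") == english.encode("ascii")

try:
    czech.encode("ascii")
except UnicodeEncodeError:
    print("ASCII cannot represent Czech diacritics")

# Latin letters with diacritics occupy 2 bytes each in UTF-8.
print(len(czech), "characters,", len(czech.encode("utf-8")), "bytes in UTF-8")
```

The same mechanism scales to Chinese, where most characters occupy 3 bytes in UTF-8, again without affecting the ASCII-only parts of the source code.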
To generate an editable worksheet representation, we chose to convert the PDF format into the Scalable Vector Graphics (SVG) file format. SVG is a text-based open vector image format for two-dimensional graphics with support for interactivity and animation [21]. INKSCAPE (free and open source software licensed under the GNU General Public License (GPL), https://www.inkscape.org) was used to convert the PDF version of the worksheet into SVG. The SVG worksheet representation was divided into two layers: The first layer contains all static, language-independent content. This includes mainly the dermatome map, the grids and all boxes for the examination scores. In the second layer, all translatable content is bundled. In general, typical word processing tasks like text selection or copy & paste might not be possible in PDF documents, because words might not be represented as grouped characters (called a "string" in computing), but rather as a bunch of single, unlinked characters. In particular, the original ISNCSCI worksheet PDF contained only unlinked characters. Accordingly, the most labor-intensive step was to build strings from the single characters. For every string, the exact horizontal and vertical position on the worksheet as well as the font properties (type, size, and weight) were determined to assure the same worksheet design in the SVG document as in the original PDF document. For quality control and to identify layout errors, the new SVG document was overlaid on the original PDF document. In an iterative process, erroneous items and differences in the SVG document were adjusted by the authors CS and JS until the senior author RR was not able to detect any differences by visual inspection. In a final step, every string representing an English word in the SVG document was marked as translatable to allow for replacement during the localization process. Internationalization of the EMSCI ISNCSCI calculator GUI The EMSCI ISNCSCI calculator GUI (Fig.
1B) can be divided into more or less static content, which is served from hypertext markup language (HTML) files (disclaimer, welcome, history, team and manual text pages), and dynamically created content (main calculator user interface and examples page). These two groups of content are mapped into two processes in I18N (Fig. 1B): The HTML files with static content are translated en bloc. An international language code suffix is added to the filename containing the translated text, e.g., the Chinese translated welcome page is saved as "Welcome_zh.html" where "zh" is the international (ISO 639-1) language code for Sino-Tibetan/Chinese languages. The international language code for Czech is "cs". Accordingly, the Czech welcome page is named "Welcome_cs.html". For the dynamically created content like in the ISNCSCI worksheet, all translatable content is marked for the subsequent localization process. Internationalization at run time of the EMSCI ISNCSCI calculator When a user visits the calculator website for the first time, the default language is set according to the user's web-browser settings. Technically, web-browsers send a configurable weighted list of preferred languages upon requesting a website [22], e.g., in the list "cs;q=0.8,en-US;q=0.6,en; q=0.4." Czech ("cs") is preferred over American English ("en-US") and over English in general ("en"). The calculator selects the supported language having the highest weight. If none of the browser's announced languages is supported, English is chosen as default language. The selected language is stored as a cookie in the web browser and used for subsequent visits. Additionally, links to manual language selection are provided in the user interface (Fig. 2). A click on one of these links sets the language and immediately reloads the site to apply the language change. The PDF of the ISNCSCI worksheet is produced during runtime of the web application (Fig. 
1C) by converting the translated SVG document to a PDF document using a non-interactive command line call to INKSCAPE. The generated worksheet consists of three layers: The first layer contains all static, language-independent elements (grids, dermatome charts, etc.). The second layer contains all translatable items and is dynamically created for the selected language. The third layer is also dynamically created and contains the examination scores as well as the calculated ISNCSCI classification results, which however are not translated at this time (e.g., "AIS B" stays "AIS B" in every language). Internationalization - message catalog The English words and phrases marked as translatable in the language-dependent parts of the GUI and the ISNCSCI PDF (Table 1) were automatically collected from the source code by the "xgettext" tool. This message catalog is a major outcome of the internationalization process and provides the basis for the subsequent localization processes. Localization of the EMSCI ISNCSCI calculator For localization, this catalog was translated using Poedit (Copyright 2016 Václav Slavík, https://poedit.net), which is a free cross-platform graphical gettext translation editor. For each supported language, a so-called portable object (*.po) file is derived from the message catalog's portable object template (*.pot) containing all above-mentioned English words and phrases. The editable files are named using the international language code as prefix, e.g., zh_CN.po for the Chinese portable object file. For the translation of the static HTML files, any text editor with Unicode (UTF-8) support can be used. Translation The Spinal Cord Unit at the Department of Rehabilitation and Sports Medicine of the 2nd Medical Faculty and University Hospital in Motol was responsible for the translation into the Czech language. The process of the translation followed recommendations from Biering-Sørensen et al. [23], and a back translation of the Czech version into the original language was performed by an independent person.
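The language negotiation described earlier, choosing the supported language with the highest weight from the browser's Accept-Language list and falling back to English, can be sketched as follows; the set of supported languages is an assumption for illustration:

```python
def pick_language(accept_language: str, supported=("en", "cs", "zh")) -> str:
    """Select the supported language with the highest browser q-weight."""
    candidates = []
    for item in accept_language.split(","):
        parts = item.strip().split(";")
        tag = parts[0].strip().lower()
        q = 1.0  # per HTTP, a missing q-value means q=1
        for p in parts[1:]:
            if p.strip().startswith("q="):
                q = float(p.strip()[2:])
        # reduce regional variants like "en-US" to the base language "en"
        base = tag.split("-")[0]
        if base in supported:
            candidates.append((q, base))
    if not candidates:
        return "en"  # default when no announced language is supported
    return max(candidates)[1]

print(pick_language("cs;q=0.8,en-US;q=0.6,en;q=0.4"))  # selects "cs"
```

With the example list from the text, Czech (q=0.8) outranks American English (q=0.6) and English (q=0.4); an unsupported-only list such as "de;q=0.9" falls back to English.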
The translated 2013 ISNCSCI revision was published in a peer-reviewed Czech journal [24]. All subsequent changes, as well as the translation of the document for this manuscript, have been regularly re-checked and adapted using the same translation procedure. The Chinese translation was first conducted for the 2015 update of the ISNCSCI booklet [25], including the worksheet. This translation was reviewed by a native speaker from an independent institution in the field of SCI medicine. All additional translations for the GUI of this project have been conducted by three experienced experts, but no formal back translation or independent review was conducted. Author NL managed the Chinese translation, authors JK and RH the Czech translation. None of them had a professional software development education. Several reviewing rounds between the programmers (CS and JS) and the translators were needed until all spelling, grammar and language issues (phrases like "sensory level" and "motor level" versus the three single words "sensory", "motor" and "level") had been resolved. In each round, either the programmers had to solve I18N problems in the application and/or the translators had to solve L10N problems. Eventually, the translators had to approve the correctness of the translations in the application, which concluded the reviews. RESULTS The GUI of the EMSCI ISNCSCI calculator and the PDF export of the ISNCSCI worksheet were successfully internationalized. As a proof-of-concept of the multi-language support, two non-English versions of the EMSCI ISNCSCI calculator were implemented. First, a Chinese version was launched at the annual meeting of the American Spinal Injury Association (ASIA) 2016 in Philadelphia, USA [26] and is freely available online (https://ais.emsci.org). The collaboration for the Czech translation was established at the annual meeting of the International Spinal Cord Society (ISCoS) 2016 in Vienna, Austria [27].
This translation will be incorporated into the next major update of the EMSCI ISNCSCI calculator. The Chinese translation covers the complete website, including the GUI as well as the PDF export of the front side of the ISNCSCI worksheet. At the current stage, the Czech translation covers only the ISNCSCI calculator GUI and the PDF export. Screenshots of the Chinese and Czech calculator GUIs are shown in Fig. 2A, B. The corresponding PDF exports of the ISNCSCI worksheet are depicted in Figs. 3 and 4, respectively. The I18N message catalog consists of 182 English words and phrases, which need to be translated once into the respective language (Table 1). The one-time translation of these 182 terms is estimated to take less than 1 h. This step could be performed so quickly because the Chinese as well as the Czech ISNCSCI-relevant translations were already available from other projects [24]. A Chinese translation of the ISNCSCI worksheet officially endorsed by ASIA and ISCoS is already available [25]. After the translation of the message catalog, only small adaptations to the SVG worksheet were necessary to avoid overlapping of neighboring elements. The Chinese I18N needed some special processing, e.g., smaller and only normal-weighted fonts. This is in contrast to the original English version, which contains a mix of oblique, normal, italic and bold-weighted fonts. For the Czech I18N, only a smaller font size for the total light touch, pin prick and motor scores was necessary, together with slight modifications of the position of some headings. DISCUSSION With this work, the technical basis for multilingual versions of the electronic ISNCSCI calculator has been established. As a proof-of-concept, two versions in the Chinese and Czech languages were implemented. The concepts of internationalization and localization allow for adding a new language to the calculator in the most effective way, because the translators can operate mostly independently of software developers.
All software tools needed for the internationalization of the EMSCI ISNCSCI calculator are free and open source, ensuring that the calculator can be offered to the SCI community for free as well as on a long-term basis. A Chinese localization fosters the use of ISNCSCI calculators for the education of Chinese-speaking clinicians among the approximately 900 million first-language Chinese speakers. A website available in the users' native language not only increases the perceived usability [28], but also enables users who have problems correctly understanding English terms and phrases, or who even experience language anxiety [29], to make full use of this website. Although English is the internationally accepted world language, in particular in research [30], there is still a need for translations of assessments into local languages, even in countries where English represents the second language [14]. The main goal of the EMSCI ISNCSCI calculator is to support clinicians and researchers in the correct classification of difficult ISNCSCI cases. It has to be clearly stated that it is not intended as a replacement of the manual classification. It is essential that clinicians maintain their classification skills to identify those elements of the examination that are most crucial for a correct classification of an individual with SCI [7]. As an additional benefit of the internationalization of the EMSCI ISNCSCI calculator, high-quality electronic ISNCSCI worksheets in SVG format in the respective language can be created very easily. These translated ISNCSCI worksheets represent an important part of the translation process of ISNCSCI documents such as the ISNCSCI booklet [15]. The I18N has a technical limitation regarding text directionality, i.e., whether text is written right-to-left (dextrosinistral), left-to-right (sinistrodextral) or boustrophedon (the text direction alternates after each row).
At this time, only sinistrodextral languages written from left to right are supported. The translators cannot upload their translations directly into the calculator; this needs to be done manually by a system administrator. Future work This work is based on the ISNCSCI version revised 2011 and updated in 2015 [15]. For the next release of the calculator, an upgrade to the current 2019 ISNCSCI revision [1] is planned. However, the upgrade will not only include work on the internationalization part but also major changes to the algorithmic/computational framework, e.g., regarding the non-SCI taxonomy [31] and the revised Zones of Partial Preservation. The EMSCI ISNCSCI calculator represents a major component in a wider context to improve the quality of ISNCSCI under the umbrella of the EMSCI network [7][8][9][10][32][33][34][35]. The calculator has been actively maintained and developed since 2003. The long history as well as the membership of two authors (CS and RR) in the ASIA International Standards Committee shows that there is an intention for long-term continuation of this work. With the implementation of the multilingual support, it is planned to introduce more languages into the EMSCI ISNCSCI calculator in the future so that more people can get access to educational tools supporting the correct classification of ISNCSCI examinations. We believe that not only the front side of the ISNCSCI worksheet should be part of the translation process, but also the back side, containing a very condensed set of ISNCSCI examination and classification rules. The implementation of this feature is planned for a future release. CONCLUSION The EMSCI ISNCSCI calculator has been internationalized and successfully localized to Chinese and Czech versions.
As part of the internationalization process, a framework for implementation of multilingual versions based on static content and message catalogs used for dynamic website content has been set up, which does not depend on the support of software developers. This contributes to an increased dissemination of electronic ISNCSCI computer algorithms. With the introduced version, the algorithms are easier to access for better training of examiners and for correct classification of ISNCSCI datasets.
Serological and molecular detection of bovine leukemia virus in cattle in Iraq
Bovine leukemia virus (BLV) is highly endemic in many countries, including Iraq, and it impacts the beef and dairy industries. The current study sought to determine the percentage of BLV infection and persistent lymphocytosis (PL) in cattle in central Iraq. Hematological, serological, and molecular observations in cross breeds and local breeds of Iraqi cattle naturally infected with BLV were conducted in the peripheral blood mononuclear cells of 400 cattle (340 cross breed and 60 local breed) using enzyme-linked immunosorbent assay and polymerase chain reaction (PCR). On the basis of the absolute number of lymphocytes, five of the 31 positive PCR cases had PL. Among these leukemic cattle, one case exhibited overt neutrophilia. Serum samples were used to detect BLV antibodies, which were observed in 28 (7%) samples. PCR detected BLV provirus in 31 samples (7.75%). All 28 of the seropositive samples and the 3 seronegative samples were positive using PCR. Associations were observed between bovine leukosis and cattle breed, age and sex. Age-specific analysis showed that the BLV percentage increased with age in both breeds. Female cattle (29 animals; 7.34%) exhibited significantly higher infectivity than male cattle (two animals; 4.34%). In conclusion, comprehensive screening for all affected animals is needed in Iraq; programs that segregate cattle can be an effective and important method to control and/or eliminate BLV. INTRODUCTION Enzootic bovine leukosis (EBL) is the most important type of bovine lymphotropic retrovirus infection; this disease is caused by bovine leukemia virus (BLV). 1,2 According to the International Committee on Taxonomy of Viruses classification scheme, BLV belongs to the genus Deltaretrovirus of the family Retroviridae. 3 BLV has many structural and functional characteristics in common with human T-lymphotropic viruses.
4 BLV causes chronic infection in cattle and can develop into three possible pathological forms. Most BLV-infected animals appear asymptomatic (clinically healthy). 5 Approximately one-third of infected animals develop persistent lymphocytosis (PL) due to polyclonal proliferation of B lymphocytes, and 0.1%-10% develop lymphoid tumors, primarily B-cell lymphosarcomas. 6,7 BLV affects the health of infected animals and impacts the beef and dairy industries. 8 BLV infects approximately one-third of the adult dairy cattle in the United States and is a major cause of the loss of export markets for breeding cattle. 9 Direct economic losses are incurred because of death, decreases in milk productivity, fertility and life span, and condemnation at slaughter. 8,10 BLV is widespread in cattle populations worldwide, and infection remains endemic in many countries. 11 BLV has a high prevalence in South America, some Asiatic and Middle Eastern countries, and Eastern Europe. [12][13][14][15] The most important source of BLV transmission from infected cattle is via blood lymphocytes and other tissue products. 16 Close-contact transmission, hematophagous flies and iatrogenic transfer through the use of contaminated veterinary instruments are all well-documented sources of BLV transmission between infected and non-infected cattle. [17][18][19] Although in utero transmission of BLV from a cow to its fetus can occur, it is relatively rare. 20 BLV transmission via colostrum, milk and artificial insemination has also been reported. [20][21][22] Agar gel immunodiffusion is the prescribed diagnostic test for international trade and the most common method used to detect BLV-specific antibodies. Several enzyme-linked immunosorbent assays (ELISA) are also available to diagnose BLV. 23,24 Agar gel immunodiffusion is as sensitive as the routinely used indirect ELISA test; it is highly specific, reliable and easy to perform. 25 PCR has also been described as a diagnostic test.
26 PCR is a useful method to detect recently infected animals before seroconversion. It can also be used to confirm neonatal BLV infection because serological tests cannot differentiate between antibodies produced de novo in response to infection and those that are transferred passively in colostrum. 27,28 The prevalence of BLV worldwide varies widely between countries; prevalence has been found to be as low as 5% in Cambodia and Taiwan 19 and 17% in Turkey 29 or as high as 83.9% in the US and 25.7% in Canada. 30,31 Because BLV impacts the Iraqi economy yet has been neglected in Iraq, this study sought to conduct a comprehensive molecular and seroepidemiological screening of infected animals to establish a provision for disease control and eradication. Ethics statement The Baghdad University College of Veterinary Medicine Review Board and the Institutional Review Board of the Iraqi Center for Cancer and Medical Genetic Research approved this study. Consent was obtained from the farm owners before animal sampling. Animals The animals examined were dairy cattle raised on private dairy farms located in two governorates, Al Qadisiyah and Al Mouthanna, as well as animals from one station dairy. A total of 400 cross-breed cattle (Friesian with native cattle) and local-breed cattle (native cattle) were investigated. The samples were divided into 227, 78 and 95 cattle from the Al Qadisiyah, Al Mouthanna and station dairy herds, respectively. The cattle were older than six months and were selected based on clinical signs indicating that they may have had BLV. The cattle were divided into two age groups: ⩾ two years old and < two years old. The study began in March 2014 and was completed in December 2015. Sample collection Registered veterinarians obtained peripheral blood aseptically from the jugular vein with and without anticoagulant using a vacutainer system in two sterile vacuum tubes. The samples were then refrigerated and stored until they arrived at the laboratory.
Hematological examination Fresh peripheral blood samples were used for total leukocyte and differential leukocyte counts, obtained by standard veterinary procedures. 32 The leukocyte count was performed manually using a hemocytometer counting chamber to determine the number of leukocytes per 1 μL of blood. The differential leukocyte count was performed using a thin blood smear stained with a Diff-Quik stain kit (Syrbio, Switzerland); a differential count of 200 leukocytes was performed, and the presence of atypical leukocytes was considered a positive case. Serum preparation and serological test The blood samples were centrifuged for 10 min at 2000 rpm; serum was then collected in 1.5 mL Eppendorf tubes and stored at − 80°C until further analysis. All 400 serum samples were tested for BLV using a Svanovir BLV-gp51-Ab ELISA test kit (Svanova Biotech AB, Uppsala, Sweden). The procedures were performed according to the manufacturer's instructions. Polymerase chain reaction (PCR) Genomic DNA was extracted from peripheral blood mononuclear cells (PBMCs) after isolation from whole blood using the Histopaque 1077 density gradient technique (Sigma-Aldrich, Steinheim, Germany). DNA extraction was performed using a Magnesia Genomic DNA Whole Blood Kit and a Magnesia Automated DNA Extraction machine (Anatolia Geneworks, İstanbul, Turkey). All DNA extraction steps were performed according to the manufacturer's instructions. The extracted DNA samples were quantified and stored at − 86°C until used. The extracted DNA samples were used as a template to detect BLV proviral DNA by single PCR using two sets of primers, a pol1 primer set and an env1 primer set. In the pol1 set, the forward primer was 5′-CGG GAT TGA TCA CCC CGG AA-3′ (546-565), and the reverse primer was 5′-GGA CTC CGT CGG GAA GGT T-3′ (1033-1052). These were based on conserved regions of the 3′ end of the pol gene using an online program (OligoAnalyzer 3.1, Integrated DNA Technologies, Inc., Coralville, IA, USA) to amplify a 507-bp fragment.
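As a quick consistency check, both amplicon sizes follow from the primer coordinates given in the text (first base of the forward primer to last base of the reverse primer, inclusive):

```python
def amplicon_length(fwd_start: int, rev_end: int) -> int:
    """Inclusive span between the first base of the forward primer
    and the last base of the reverse primer."""
    return rev_end - fwd_start + 1

# pol1: forward 546-565, reverse 1033-1052  -> 507 bp
# env1: forward 5099-5120, reverse 5521-5542 -> 444 bp
print(amplicon_length(546, 1052))   # 507, pol1 product
print(amplicon_length(5099, 5542))  # 444, env1 product
```

This reproduces the 507-bp pol1 product stated here and a 444-bp env1 product from the coordinates given below.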
The final reaction volume was 25 μL, consisting of 12.5 μL of KAPA2G Robust HotStart ReadyMix (1×) (Kapa Biosystems, Cape Town, South Africa) containing 0.2 mM/L of each dNTP, 3 mM/L of MgCl 2 and 1 unit of Robust HotStart DNA polymerase, 1.5 μL (0.6 μM) of each primer, 4 μL of the extracted DNA sample and 5.5 μL of PCR-grade water. Using a SureCycler 8800 Thermo Cycler (Agilent Technologies, Santa Clara, CA, USA), the thermocycling conditions were as follows: initial denaturation at 95°C for 5 min, followed by 35 cycles of denaturation at 95°C for 20 s, annealing at 61°C for 20 s and extension at 69°C for 1 min, with a final extension at 69°C for 3 min. The second primer set, for the env1 gene, was based on previous publications. 33,34 In the env1 set, the forward primer was 5′-CCC ACA AGG GCG GCG CCG GTT T-3′ (5099-5120), and the reverse primer was 5′-GCG AGG CCG CGT CCA GAG CTG G-3′ (5521-5542). The PCR mixture (25 μL) consisted of 12.5 μL of 1× polymerase buffer, 2 mM MgCl 2 , 0.2 mM of each dNTP, 1 μL of 0.5 mM of each primer, 4 μL of 0.5 mg of DNA and 7.5 μL of PCR-grade water. Amplification conditions started with initial denaturation at 94°C for 3 min, followed by 35 cycles of denaturation at 94°C for 1 min, primer annealing at 65°C for 20 s and elongation at 72°C for 30 s. The final elongation step lasted 5 min. The final amplified products were detected by electrophoresis through a 1% agarose gel containing ethidium bromide in Tris/Borate/EDTA buffer (90 mM Tris-borate and 2 mM EDTA). DNA was visualized using a VISION Gel Documentation system (Scie-Plas, Cambridge, UK). Clinical findings Most BLV-infected cattle were clinically asymptomatic during examination and exhibited only nonspecific findings, such as emaciation, rough hair coat and pale mucous membranes. Hematological findings According to cattle age, an absolute lymphocyte count of more than 8000 lymphocytes/μL was diagnosed as leukemic leukosis (persistent lymphocytosis: PL).
Of the 31 BLV-positive cattle, the total leukocyte counts of five animals (29.0%) were 16 950, 10 900, 15 000, 29 250 and 15 950 leukocytes/μL. The absolute lymphocyte counts for these samples were 8390, 8271, 9636, 21 090 and 10 148 lymphocytes/μL, respectively; thus, PL was diagnosed. Based on the European hematological diagnostic guidelines (Table 1), one sample with a serological reaction had neutrophilia with increased band cells, whereas the remaining samples (22 of 31; 70.9%) were within normal limits. A small number of atypical lymphocytes were observed in some of the BLV-positive cattle. No hematologic evidence of bovine leukosis was observed in the BLV-negative cattle, but cattle samples with leukocytosis with lymphocytosis or neutrophilia whose serological and molecular results were negative were excluded. Leukocytosis in these animals might have been due to other diseases, such as blood parasites (theileriosis and anaplasmosis), which had appeared in the blood smears of some of these cattle. Serum antibodies against BLV were detected in 28 (7%) of the 400 samples. The seroprevalence rates of BLV in the three locations were 8.8% for the Al Qadisiyah herd, 2.6% for the Al Mouthanna herd and 6.3% for the station dairy herd (Table 2). These sample locations were significantly different (P = 0.01) (χ 2 test). The serum samples exhibited varying degrees of reactivity in the ELISA; of the 28 seropositive samples, seven exhibited a strong positive reaction, and the remaining 21 samples reacted weakly. Polymerase chain reaction (PCR) Using two sets of specific BLV primers, the PCR results confirmed the BLV serological results. Of the 400 examined DNA samples, 31 (7.75%) were positive, and all seropositive samples that underwent PCR analysis tested positive for both primer sets. In addition, the three (0.8%) samples that were seronegative with ELISA were positive using PCR (Table 3).
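The reported rates can be reproduced from the sample counts; note that the per-herd positive counts below are inferred from the stated percentages and herd sizes (227, 78 and 95 animals), so they are an illustration rather than the study's raw data:

```python
def prevalence(pos: int, n: int) -> float:
    """Percentage of positive animals, rounded to one decimal place."""
    return round(100 * pos / n, 1)

# Overall results as reported: 28/400 ELISA-seropositive, 31/400 PCR-positive.
assert prevalence(28, 400) == 7.0
assert prevalence(31, 400) == 7.8   # reported as 7.75%

# Herd-level positives inferred from the reported rates (assumed counts).
for pos, n in [(20, 227), (2, 78), (6, 95)]:
    print(prevalence(pos, n))   # 8.8, 2.6, 6.3
```

The inferred herd counts (20 + 2 + 6) also sum to the 28 seropositive animals reported overall, which supports the consistency of the published figures.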
The PCR results confirmed the presence of the 507-bp pol1 and 444-bp env1 fragments, which were compared against the standard ladder bands (Figure 1). The epidemiological results are summarized in Table 4. The highest percentage of infection (8.3%; five animals) was observed among the local-breed cattle; in the cross-breed cattle, the percentage of infection was slightly lower (7.6%; 26 animals) (P ⩽ 0.05). Among the two age groups, only two of the 70 cattle in the < two-years-old group were positive, whereas 29 of the 330 cattle (8.8%) in the ⩾ two-years-old group were positive; this difference was significant (P ⩽ 0.05). The females exhibited a higher percentage of infection (7.34%; 29 animals) compared with males (4.34%; two animals). DISCUSSION This study is the first in the middle Euphrates region in Iraq to confirm the presence of BLV in all cattle breeds and to describe its epidemiology. The percentage of BLV infection was 7.75% in the Iraqi cattle examined here, as measured by PCR. In neighboring countries, the percentage of BLV infection is 17% in Iran 35 and 11% in Turkey 36 and Jordan. 37 The present study demonstrated that the percentage of BLV infection in Iraqi cattle is lower than in other countries, such as Korea (35%), 38 Tanzania (36%), 39 China (21.24%) 40 and Thailand (32.5%), 41 whereas the percentage in Iraq was higher than the 3.67% reported in some parts of Turkey. 42 Differences in the percentage of BLV infection are likely to occur between countries and between locations within the same country. A previous serological study in Baghdad, Iraq, reported a BLV infection frequency very similar to that reported here (7%). 43 The prevalence of infection observed here does not necessarily represent the actual prevalence of BLV infection in the studied areas because most samples were not randomly collected; instead, they were selected based on clinical signs indicating that the animal was most likely infected with BLV.
The low occurrence of infection among Iraqi cattle might be related to certain conditions and management practices in the dairies investigated here, because herd size has an important role in BLV prevalence. 44 A high prevalence of BLV in cattle is mostly related to high-density cattle populations and poor sanitation conditions; close physical contact and contaminated biological materials appear to be required for BLV transmission. 45 Recent studies have shown that BLV sequences, which can be classified into seven distinct genotypes, are circulating in cattle from the US and South America. 46 A high percentage (67.8%) of the seropositive samples showed weak reactions using ELISA; most weak reactivity is likely due to the variable specificity of the serum antibodies or the presence of preexisting low levels of antibodies due to a latent natural infection in these animals. 47 The low reactivity could also be because the serum antibodies possess low-affinity constants, which can lead to dissociation of the antigen-antibody complex during the multiple washing steps of the assay and consequently result in weak positive reactivity in some cases. In addition, the probability of different antigenic strains of BLV with relative immunogenicity is characteristic of weak reactions using serological tests. 48 Serological tests have been used extensively to identify BLV-infected cattle worldwide due to their rapidity, cost-effectiveness and easy interpretation. 46 BLV status conversion is detected via PCR assay more rapidly than via ELISA in recent infections, before the development of antibodies, in doubtful reactions and in weak positive ELISA reactions. 26 Thus, PCR amplification was used to examine seropositive samples. The provirus integration of BLV has been investigated in PBMCs isolated from cattle blood. 49 In the present study, the use of PCR in the preliminary field screen further demonstrated the advantages of PCR as a BLV detection technique.
PCR was performed based on primer sites within conserved regions of the pol and env genes that flank a region of variability. We hypothesized that by basing the primer design on the conserved regions, the assay would be able to detect a variety of serologically different BLV strains. 50 BLV infection in cattle results in a strong, permanent antibody response to the BLV antigens weeks after infection, and some infected cattle may carry the provirus without detectable antibody titers. 32 All serologically positive samples and the three negative samples were positive using PCR; thus, our results are in close agreement with those reported by Brandon et al., 51 who noted that the sensitivity of PCR permitted the detection of bovine leukemia provirus in 6.8% of serologically negative BLV-exposed cattle. Leukocyte counts in some of the BLV-infected cows were significantly higher due to significantly higher lymphocyte counts, which has also been described in a study that found 8800 (56%) to 9595 (67.5%) of the BLV-infected cows had PL. 52 Weight loss, poor hair coat and weakness are the clinical signs most commonly associated with BLV infection in cattle. In the present study, these abnormal clinical signs were recorded during the animal examination and sample collection, and our findings were similar to observations by others. 52 The present study also demonstrated an association between age and BLV infection. Infection most commonly occurred in animals ⩾ two years old, and the incidence of infection increased with age. Virus detection in colostrum and milk suggests a role of both colostrum and milk in disease transmission. 52,53 The higher prevalence of BLV in older animals could be caused by several different factors. When animals are housed in the same free-stall barns, close contact among animals could increase transmission.
As a cow ages, the likelihood of sufficient contact with an infected animal and transmission of the infection from infected herd mates increases. In addition, older animals are more susceptible to infections. 4 These results were similar to those of other studies. 35,51 The present study provides evidence that BLV infection in Iraqi cattle is endemic. More attention to this disease is required to establish effective prevention and control measures, and a comprehensive screening of all animals should be conducted on all Iraqi cattle farms. Programs to segregate infected animals and eliminate transmission can be effective and are particularly important for controlling the spread of BLV because no effective vaccines are available. 54,55
Investigating the Attitude of Domestic Water Use in Urban and Rural Households in South Africa

South Africa is a semi-arid, water-stressed country. Adequate measures should be put in place to prevent water wastage. This paper aims to assess domestic water wastage and determine the proper attitude towards household water management in rural and urban communities in South Africa. This study was conceptualised in two stages. Firstly, critical observations were used to examine the attitude of households towards water usage in both urban and rural communities (Durban and Thohoyandou, respectively). Secondly, structured questionnaires and interviews were used to identify the factors that influenced the participants’ attitudes towards domestic water usage. This study concludes that, irrespective of the literacy level, accessibility to limited water supply, information available through advertisements about water scarcity, and better water management in an urban community, the rural community has a better attitude towards domestic water usage and water management. The result (83.3%) also indicated that the rural community strongly agreed to be water savers in their homes. However, in the urban community, the results from the participants were somewhat evenly distributed; the participants strongly agreed and disagreed at 36.2% and 32.2%, respectively. Other results of the study also showed that variables such as family upbringing, inaccessibility of domestic water, and advertisement play a major role in influencing the attitude of the rural community to water usage. These variables were statistically significant at p < 0.001. However, the immediate environment was shown to be not statistically significant (p = 0.911). Based on the study results, it is recommended that households should be encouraged to generate greywater collection systems to reduce water use and improve water reuse.
The government could introduce a rationed allocation (shedding) of domestic water in urban communities to draw attention to the prevalence of water scarcity in the nation.

Introduction

Freshwater is an essential natural resource and is vital for sustaining life and supporting the development of ecosystems and recreational sources. Therefore, water should be used sustainably. The ideology of the sustainability of freshwater resources is using the resource to meet the needs of the present generation without compromising the needs of the future generation [1]. However, while water is a renewable resource, it is finite; thus, there are certain consequences of meeting the needs of the present and future generations, since water demand often far exceeds its availability [2]. Globally, the demand for domestic water has continued to increase due to population growth, urban migration, food demand, standard of living, and global wealth. There have been several solutions to this issue, for instance, on the macro scale through major desalination and wastewater treatment plants, construction of more dams, tapping of underground water supplies, and recycling industrial wastewater, and on the micro or domestic scale, through the installation of water tanks, recycling household greywater, and other domestic practices. There are also several initiatives to mitigate the consumption of domestic water by using efficient fittings within homes and by encouraging changes in gardening practices. These practices have had little or no impact on a widespread shift in water use attitude and behaviour [3]. The increase in global water use aggravates water scarcity conditions, especially in arid nations such as South Africa, where precipitation is lower, which limits available surface water and affects the attitudes of individuals towards the use of available water resources.
As the global household demand for water usage continues to rise, a question arises concerning scarcity and the attitudes of households towards water use. South Africa is a water-scarce country, the 30th driest in the world, and is already feeling the pressure of the prediction that the country's water demand will outweigh its availability by 2030 [4]. It is therefore important not to misuse the available domestic water. The focus on domestic water management and ways of reducing water wastage should be reinforced from the household level. Inadequate rainfall, climate change, a rapidly growing population, a growing economy, and other issues contribute to the unavailability of water resources [5]. This has increased the rationing of domestic water in several places, and it is a more common practice in rural areas. The rationing of domestic water is likely to become a fact of life as time goes on, affecting the urban centres likewise. Moreover, water conservation measures must be implemented in all aspects of life as a matter of urgency [6]. Government and relevant stakeholders can facilitate sustainable water management practices in many ways, through supply augmentation (water recycling and desalination) and demand management practices aimed at reducing consumption. The change in the attitude and behaviour of individuals towards the use of water is also a vital and essential tool for water management [1][2][3]. South Africa is currently exploiting 98% of its available freshwater resources, and about 45% of its domestic water cannot be accounted for [7]. An average South African household uses about 250 L/day of water, which is more than the average amount of water (173 L/day) recommended globally [8]. Despite South Africa's severe drought, with most metropolitan areas instituting water restrictions, many South Africans still consume more water than the global average.
The need for domestic freshwater by households includes drinking, hygiene, washing, cooking, gardening, and productive purposes such as farming, livestock, forestry, fisheries, and small-scale industries. However, only 24%, if not less, of rural households in South Africa have access to piped water [9]. Wasteful water practices continue to loom in both urban and rural communities, especially in South African households. A study by the Council for Scientific and Industrial Research (CSIR) showed that increasing access to water leads to increased water usage. This is particularly predominant in urban households; nonetheless, this wastage is caused by non-maintenance of household infrastructure, especially in poor households [10]. Therefore, rather than focusing on increasing water supply, it is critical to focus on the behaviour and attitude of individuals towards the consumption of water, as well as on how to mitigate high water usage in both urban and rural communities, in turn reducing water demand and drastically high water use across the country. Water use and the demand-side policy responses to mitigate future domestic water variability will benefit from a deeper understanding of current household water use, perception of water use, attitude towards water use, and factors influencing household water use. Such a deeper understanding will address key questions such as the following:
1. Do South Africans understand that the country is semi-arid with limited water resources?
2. When asked to reduce the amount of water being used, would people know which behavioural changes are more effective than others?
3. What makes people use water in the manner that they do, and what will motivate them to change how they use water?
4. Are people conscious of the amount of actual water used, and do their perceptions of water use correspond with actual water use?
It is on the aforementioned premise that this paper seeks to understand the behaviour and attitude towards water use in the surveyed communities. The results of the study will enable relevant stakeholders to inculcate a proper attitude towards water management for South Africans living in rural and urban communities to minimise high water usage. Recommendations will be given on how to save water. This paper will also help proffer possible solutions to save water and a positive step towards an instrumental water management strategy in South Africa.

Location of Study

The study was conducted in two distinct locations in South Africa: Thohoyandou (Vhembe District) in Limpopo Province and Durban (eThekwini District) in KwaZulu-Natal Province (Figure 1). Languages spoken in the district include Xitsonga (27%) [11]. It is mostly a rural-based district, and households are headed by females while the males migrate to cities for work [12].
According to Vhembe District Municipality [11], approximately 128,372 households in the district have access to water under the Reconstruction and Development Program (RDP), which aims to improve the standard of living for South Africans. Furthermore, 22,835 households have access to water via boreholes, springs, rivers, streams, dams, rainwater, and water vendors. The district has about 11 dams, which are the major sources of domestic water. The district strives to provide free basic water to low-income households. The province is a typical developing area and is regarded as one of the poorest regions of South Africa, with an immense gap between poor and rich residents, especially in the rural areas. Most of the households in the rural areas, which comprise much of the population of the province, depend on pension grants, government grants, and remittances from family members who migrated to other provinces to work. Household wealth is relatively lower compared to other municipalities in South Africa [13]. Thohoyandou is the largest town in the Vhembe district; it is mainly made up of rural townships and is surrounded by several rural villages situated on the outskirts of the built-up community. The underdeveloped rural area experiences periodic droughts and environmental degradation due to poor water supply and infrastructural development. Despite recent improvements, rural people still use water directly from streams for household purposes, which include washing of laundry, car washing, and bathing directly in the river. The area's population is approximately 95.5% Black African, 0.2% Coloured, 4.1% Indian/Asian, 0.2% White and 0.1% other races [14]. The Durban community is a coastal urban city located in eThekwini Municipality in the eastern part of South Africa. The city is the third most populous city in South Africa and the largest city in KwaZulu-Natal Province.
According to the last census by Statistics South Africa [15], Durban has a population of approximately 3.44 million people, with a race makeup of 51% Black African, 8.6% Coloured, 24% Indian, 15.3% White, and 0.9% other races. Durban Metropolitan City covers an area of 1370 km² and stretches 72 km along the Indian Ocean and 52 km inland. The Durban Metro Water Service (DMWS) is in charge of water supply, sanitation and solid waste and currently serves 360,000 households with metered water connections. However, approximately 43% of these households lack household connections. Stolen water is also very common in Durban; approximately 35% of the city's water is stolen or given out through illegal means. The households, therefore, depend on standposts, many of which were inherited by the DMWS from the previous administration. Meanwhile, there are approximately 10,000 to 20,000 illegal water connections in the city's piped system [16].

Research Approach

This study employed a quantitative and qualitative approach, conducted in two stages. Stage 1: A reconnaissance survey of two weeks (4 July to 18 July 2019) was conducted to identify and understand the demography of the participants to be included in the study. During the reconnaissance survey, two researchers went to both communities to survey the area. Upon the ground survey, a simple random sampling technique was deemed the best fit to identify the participants for this study [17]. Due to the similar water management and challenges experienced in both communities, the participants were randomly selected from both communities. Ten households in the rural community and ten households in the urban community were selected as observation participants for the study, as shown in Table 1. Durban is an urban community, while Thohoyandou is a rural community.
Furthermore, data collection was done through an observational technique and structured interviews to understand which of the two communities (urban or rural) has a better attitude towards water management [18]. Stage 2: A reconnaissance survey of one week (22-29 July 2019) was conducted to identify possible participants. A reconnaissance survey is a site investigation carried out at the preliminary stage, before other stages begin. It involves a field trip to the site where further investigation is to be carried out and gives details of landforms and other structures above ground that may form an obstacle for an installation. Raosoft software, developed by Raosoft, Inc. (Seattle, WA, USA), was used to arrive at a sample size [19]. This statistical software is used for calculating sample size and comprises a database management system of great strength and reliability that also communicates with other proprietary formats. The Raosoft database is an extremely robust, proven system with high data integrity and security. The software assumes a margin of error of 5% and a confidence level of 92% to ascertain the amount of uncertainty to be tolerated. The combined population size of both communities is then inputted into the software. The sample size n and margin of error E are given by

x = Z(c/100)^2 r(100 - r)
n = N x / ((N - 1) E^2 + x)
E = sqrt[(N - n) x / (n(N - 1))]

where N is the population size, r is the fraction of responses that you are interested in, and Z(c/100) is the critical value for the confidence level c [19]. After running the Raosoft software for this study, a total of 307 participants were drawn, covering both rural and urban communities. However, 229 respondents participated fully, with 102 and 127 participants from the rural and urban communities, respectively. A random sampling technique was used to identify the participants for the study.
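As a cross-check on the reported figure, Raosoft's sample-size calculation can be reproduced directly. This is a minimal sketch, assuming a 50% response distribution and a large combined population; neither value is stated in the text.

```python
from math import ceil
from statistics import NormalDist

def raosoft_sample_size(N, r=50.0, confidence=92.0, E=5.0):
    """Raosoft-style sample size.

    N: population size; r: response distribution (%);
    confidence: confidence level (%); E: margin of error (%).
    """
    # Two-tailed critical value Z(c/100) from the standard normal distribution.
    z = NormalDist().inv_cdf(0.5 + confidence / 200)
    x = z * z * r * (100 - r)
    # Finite-population sample size, rounded up to whole participants.
    n = N * x / ((N - 1) * E * E + x)
    return ceil(n)

# With the study's 5% margin of error and 92% confidence level, a large
# population yields the 307 participants reported.
print(raosoft_sample_size(10**7))  # 307
```

For smaller populations the finite-population correction shrinks the required sample, which is why the large-population assumption matters for recovering 307 exactly.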
The participants were selected based on their geographical location; all participants currently lived in Durban or Thohoyandou. Structured questionnaires and interviews were administered in the English language, but where unavoidable, the local dialects (mostly Tshivenda and IsiZulu) of the participants were used. A quantitative approach was used to find out the factors that influenced their attitude towards domestic water wastage.

Data Collection

An in-depth, conscious observation process was conducted with the participants to collect data from their homes. The observation process was conducted for 30 days (4 August to 3 September 2019). Daily, the participants were asked to identify which water variables were applicable in their households during the data collection process. The water variables guide used for stage 1 included water reuse, water-saving attitude, water availability, water management and frequency of water use. In stage 2, five students were trained to facilitate the administration of the questionnaires. The variables guide used for this stage included family upbringing, immediate environment, education, advertisements and inaccessibility/scarcity of water. To ensure quality assurance and quality control regarding the reliability and validity of the data to be collected, the questionnaires were pretested before the main survey, with adjustments and corrections made based on the received responses to improve clarity.

Data Analysis

Thematic and content analyses were used to analyse the qualitative data obtained [20]. Moreover, the IBM Statistical Package for Social Science (SPSS) for Windows, version 26 (IBM Corp., Armonk, NY, USA) was used to analyse the quantitative data obtained. The data derived from the participants were validated by checking that all questionnaires were correctly filled, checking for errors, and ensuring that the data were sufficient to generalise the findings of the study.
Descriptive statistics analysis using percentages and frequencies was performed to describe and conceptualise the data. A normality test was conducted to determine the type of statistical analysis that best fit the data derived. Subsequently, after considering the assumptions for analysing variables from two independent groups, an independent-samples t-test was performed to compare the difference in means between the two groups. To find the correlation between the different variables, Spearman's rho correlation analysis was conducted.

Ethical Clearance

This study was approved by the ethical committee of the University of Venda (certificate number: SES/21/ERM/06/0306). To avoid harm to the participants, a consent form was handed to them, explained to them and signed by them before data collection. The participants were guaranteed strict confidentiality and anonymity of the data they provided. Participation was voluntary, and where a household declined to participate, another household was randomly selected.

Stage 1

The notion of domestic water running out and the sense of water as a precious commodity was the main concern for some of the participants in the rural community. It was observed that most participants in rural households reuse water compared to the participants living in urban areas. A grandmother in the rural community showed her concerns; she said, "Water is scarce and is very precious to us, and we never know when the taps are going to run again." One of the mothers also said, "We have to save as much water as possible whenever the municipality gives us water." A child from the rural household noted, "We do not waste water in our house; otherwise we will be in trouble with Mama." A study of residential water consumption trends in Cape Town, South Africa, found that washing of clothes was perceived to be the highest water use activity compared to other water use activities in informal settlements [21].
The result of a study in rural southwest Victoria, Australia, indicated that water usage coupled with water-saving devices and multiple saving behaviours (such as using water-saving tanks in their homes) reduced water wastage, unlike residents who had a high supply of water, did not have water tanks, and still believed that they did not waste water [22]. With regard to washing clothes, this study found that, in most of the rural households, the water used for washing clothes is either stored or reused to flush the toilet or pit system. This was because the participants in the rural community did not have easy access to water for domestic use, so they had to reuse water for different purposes. However, all participants in the urban households pour away the water used for washing clothes. One of the participants in the rural community explained that they do not have enough water supply given to them by the municipality, "so we have to manage the water we have." The urban community was more relaxed and less conscious of water wastage, although parents sometimes complained about water wastage because of compounding water tariffs and bills. A household of four living in the urban community was observed, and it was found that the inhabitants showered at least twice a day, once in the morning and once in the evening. Moreover, while brushing their teeth, the children left the bathroom sink tap running. Washing of clothes was done with ease, with the tap running at all times. According to one of the children, "There is always water in our house, this is why I leave the tap running while washing my teeth." Another participant said that the "tap in the bathroom is broken and the water comes out with much pressure, so I have to lower the tap and let it flow slowly till I finish with my laundry." The used water was poured into the bathroom sink and not reused.
The urban community was observed to be high water users compared to the rural community. Water leakage from broken taps, frequent showering and washing of clothes by family members, despite the rapid increase in water scarcity, were some of the observations. The urban community perceived domestic water as a right and the responsibility of the government to provide. However, the households that saved water in the urban community did that because of accumulating water bills and not because of the looming presence of water scarcity in the country. On the contrary, Gilbertson et al. [2] explained that households in Australia with a good supply of water could also conserve water by making sure that taps do not drip, having a dual flush toilet, using the washing machine only when it is full and using minimal water for cleaning. It should be noted that the rural community had major problems with the easy access and regular availability of domestic water supply compared to the urban community. It is clear that the easy access and ready availability of domestic water highly influence both communities on water conservation. The participants living in the rural community had a better water-saving attitude than the participants living in the urban community. This is mostly a result of the high scarcity/inaccessibility of domestic water in the rural community. Meanwhile, the participants living in the urban community expressed their water culture based on the premise of continuous flow and regular availability of water. The government has maintained a high delivery standard for water supply in the urban community. However, key steps to water management and water usage were also limited in the urban community, for example, regulated tap mouth, reuse of greywater after washing, prompt fixing of leaking pipes and fines to households that flout the water restriction rules by the government. 
The fear of the community running out or not receiving water was also prominent in the minds of most participants living in the rural community. It was observed that most of the participants living in urban areas have domestic water readily available compared to the participants from rural households. For example, some homes in the rural community receive water only on Friday (for one hour), Saturday (10 a.m.-12 p.m.) and Sunday (10 a.m.-12 p.m.). This makes the families very conscious about saving enough water in the event when the community taps stop running. Bathing, washing, cooking, among other activities, are carried out with all consciousness because of poor access to water. Furthermore, a household living in the rural community receives water only on Mondays. Sometimes, throughout the whole week, water is not received at all. This has made some wealthy families drill personal boreholes for easy access to water. However, the water from these boreholes is salty and unsuitable for some domestic purposes such as drinking. One of the participants in the rural community explained that as a household chore that is often directed to the females in the family, due to domestic water needs, the females have to walk long distances to get to the community borehole for clean water. They ensure that the water tanks are always filled with water at all times from the community tap. A household of six members (one grandparent, two parents and three grandchildren) living in the rural community was observed and it was discovered that the grandchildren bathed only in the morning with a small amount of water before going to school. Water is so precious to them that any form of wastage by the grandchildren will result in scolding by the parents. Grandma says, "Water is very scarce in our community, and it takes much effort to get clean water. Sometimes I fear this water will stop running in our community." 
Overall, the rural community were seen to be water savers and more conscious about their water use than the participants from the urban community. Similarly to the above result, a study conducted in Nepal, Kathmandu Valley, which suffers from poor water management, showed that the households in the Valley engage in five main types of water-coping behaviours, which are collecting, pumping, treating, storing and purchasing [23]. These coping strategies include walking far distances for water, construction of private wells and boreholes, rainwater harvesting and purchase of water from vendors and neighbours. The study further showed that these coping behaviours were converted into monetary value and that the poorest households incurred four times less coping cost than the top 20% households with higher earnings [23]. With minimal availability of water for bathing, washing, rinsing of clothes and cooking, life still goes on for the participants living in the rural community. Unlike the participants observed in the urban community, they have a high number of running taps in almost every room of the house and can afford to shower twice a day, rinse their clothes with two or three full buckets of water, and even have lawns and gardens to water. The urban area participants were observed to have various water source outlets in their homes such as kitchen sink taps, toilet sink taps, showers, water closets and backyard taps. If only the urban community could learn and emulate some water-saving culture from the rural community, water problems would be reduced to a minimum in the country. Some water conservation strategies include installing storage tanks, rainwater harvesting, reduced number of taps, reduced use of water in homes, and reducing the amount of water being poured away. Urban areas across the world with good water supply have shown that household water use will continue to rise as long as global urbanisation continues to increase. 
Better water supply drives up demand. Moglia et al.'s [24] review shows to what extent water conservation strategies have influenced water use in urban areas, with references from the US, the UK, Australia and Spain. These countries sometimes experience drought/water scarcity but enjoy better water management. The review showed that rainwater harvesting is the most effective among the conservation strategies; however, it comes with significant investment cost and requires ongoing maintenance and operation by the households. Meanwhile, public awareness and media campaigns were the most consistent and effective water conservation mechanism, especially during water crisis periods such as drought. Moving from fixed pricing to volumetric pricing was also a considerable strategy in terms of water savings. The review also calls for more effective water pricing whereby households that use larger amounts of water are billed higher. However, this has been criticised as unfair because the amount of water consumed is heavily influenced by the size of the household, and larger households are often associated with lower socioeconomic position. The strategies of Moglia et al. [24] are similar to the results derived from this study; yet it is argued that water conservation strategies are unique and specific to different locations due to their varying water challenges.

Stage 2 Experiment

The experiment looked into variables such as family upbringing, advertisements, the immediate environment and inaccessibility/scarcity of water to understand why there are discrepancies between rural and urban communities in domestic water usage. The demographic characteristics of the participants were analysed to understand the social characteristics of the participants used in this study. Table 2 reveals the results obtained from the study. Table 2 shows that there was no statistically significant difference between the two communities for gender (p = 0.980).
This implies that gender type in both communities (Thohoyandou and Durban) does not influence water wastage, as water is essential to all genders. This finding aligns with the result obtained by Graymore and Wallis [22]. However, more female respondents were observed in both communities because of their availability and readiness to participate in the study. The statistical difference between the two communities for age was not significant (p = 0.513); participants aged 21-30 years old (44.1%) were the most dominant age group in the rural community that participated in this study. However, participants aged 16-20 years old (39.4%) were the most dominant in the urban community. Although most of the participants in the rural community were in the age group of 21-30, the majority of them had attained their highest educational qualification at the secondary level. On the contrary, most of the urban participants attained their highest educational qualification at the tertiary level. Consequently, the statistical difference between the two communities for educational attainment was statistically significant (p < 0.001). Moreover, most of the rural respondents have lived more than 15 years in the community; however, the urban participants have lived six years or more in the community. The statistical difference between the communities for how long the respondents have lived in the area was statistically significant (p < 0.001). This implies that the level of education plays a significant role in water consumption in their various communities. However, the rural community participants were less educated than those in the urban community but were more prudent with water usage (more water-wise). This could be a result of irregular supply of water, old water infrastructure, lack of proper maintenance of water facilities, among other factors.
Water Saver

As indicated in Table 3, the participants' perceptions regarding household water-saving show a statistically significant difference between the two communities (p < 0.001), rejecting the null hypothesis. This implies that there is a significant difference in the mean of the participants' perception towards water saving between the two communities. The results of the survey in the rural community show that 83.3% strongly agreed to be water savers in their homes. This could be a result of the inaccessibility to or scarcity of water in their community. However, in the urban community, the results from the participants were somewhat evenly distributed; the participants strongly agreed and disagreed at 36.2% and 32.2%, respectively. This may be attributed to their households receiving a constant water supply and paying some of the municipality's water charges. All tests were performed at a significance level of 0.05 (p < 0.05).

Family Upbringing

Family upbringing shows how the participants are influenced by other family members (especially parents) in their behaviour and perception regarding water issues. The role of parents in the family influences the behaviour and perception of other family members. Table 3 indicates that the statistical difference between the communities for family upbringing was statistically significant (p < 0.001). This implies that there is a significant difference in the mean of the participants' perception towards family upbringing and water use between the two communities. Consequently, most of the participants in the rural and urban communities (74.5% and 56.7%, respectively) strongly agreed that their attitude towards water use was influenced by their family members. The rural community was more influenced by family upbringing than individuals in the urban community. This result is in line with the findings of [25], which show that parenting and its effect on children are clear evidence that parents influence the behaviour/perception of their children.
Immediate Environment Although individuals seem to be influenced by their immediate environment, parenting still plays a significant role in an individual's attitude towards water use. Table 3 shows that there was no statistically significant difference in the immediate environment between the two communities (p = 0.911). This implies that there is no significant difference in the mean of the participants' perception towards their immediate environment and water use between the two communities. Most participants from both communities strongly agreed that the immediate environment influences the way water is used. Moreover, the stage 1 experiment showed that the immediate environment strongly influences the attitude of individuals towards water use. In the stage 1 experiment, a household of six people living in the rural community did not receive water at all in their community and had to walk long distances to get clean water. The grandmother described how the neighbour influenced her: "My neighbour wakes up very early in the morning to fetch clean water from the other streets. Also, my neighbour installed a big water tank to save water whenever tankers selling water come around to sell water." Ellen and Turner [26] showed that one's immediate environment (neighbourhood) plays a vital role in the development of one's behaviour at the different stages of one's life. Moreover, Larson and Brumand [27], in their study of paradoxes in landscape management and water conservation ("Examining Neighbourhood Norms and Institutional Forces"), showed a significant relationship between water conservation and neighbourhood pressure, and that the neighbourhood positively influences the attitude towards water use in the community. This study agrees with the reviewed literature and shows that in the rural community, the immediate environment strongly influences the attitude towards water use.
Furthermore, similar results were obtained in the urban community. Therefore, the highly significant impact of the immediate environment on water use contributes to the reason why the participants living in the rural community are better water savers. Moreover, the immediate environment and the availability of water play a significant role in explaining why the participants living in the urban community are heavy water users. Inaccessibility/Scarcity Scarcity/inaccessibility reflects the accessibility of water in both communities. The statistical difference between the communities in inaccessibility was statistically significant at p < 0.001. This implies that there is a significant difference in the mean of the participants' perception towards scarcity/inaccessibility between the two communities. As indicated in Table 3, 83.3% of the rural participants strongly agreed with the issues of water scarcity in their community. However, 78% of the participants in the urban community strongly disagreed that there is water scarcity in their community. The participants identified that water sometimes stops running when the municipality undertakes a water purification routine. Overall, the urban community was not ignorant of the water scarcity in the country (Table 3). Advertisement Advertisement highlights the rate of awareness/education concerning water scarcity in South Africa. The statistical difference between the communities in advertisement was statistically significant at p < 0.001. This implies that there is a significant difference in the mean of the participants' perception towards advertisement and water use between the two communities. As shown in Table 3, the participants from the urban community were more aware of water scarcity (76.4%) than the participants in the rural community (23.5%). Furthermore, scholars have indicated that the provision of information improves water conservation habits [28].
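Between-community differences of the kind reported above are commonly tested on the response counts. The sketch below runs a chi-square test of independence on a hypothetical 2x2 table of agree/other counts; the numbers are illustrative only, not the study's raw data, and the study itself does not specify which test produced each p-value.

```python
# Chi-square test of independence on a 2x2 contingency table.
# Counts below are hypothetical, chosen only to mimic the reported pattern
# (rural respondents agreeing far more often than urban respondents).

def chi_square(table):
    """Return the chi-square statistic for a list-of-rows contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: rural, urban; columns: "strongly agree", "other".
observed = [[83, 17],
            [36, 64]]
print(round(chi_square(observed), 2))  # → 45.83, far above the df=1 critical value
```

A statistic this large against one degree of freedom corresponds to p < 0.001, i.e. the two communities' response distributions differ significantly.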
However, despite the constant advertisements on television, workshops and billboards by the government and other water stakeholders concerning water wastage and water management, the participants living in the urban community were observed to have a poorer attitude towards water wastage than the participants living in the rural community. Moreover, a Spearman's rho correlation (Table 4) was run to determine the relationship between advertisements and water savers. This was done to understand how perceived water savers are associated with advertisements on water-related issues. The result was statistically significant (p < 0.001), and there was a low negative correlation between advertisement and water saving (rs = 0.306). The more the advertisements on water issues, the more likely it is that water will be saved in communities. Sarkar et al. [29] emphasised that the introduction of environmental education would be an effective tool for water resource management. Publication of leaflets, books and posters, and workshops in the regional language focusing on environmental strategies, would be a viable tool for water management. Nieswiadomy [30] researched price structure, conservation and education to estimate urban residential water demand, and discovered that public education had a significant impact on reducing water wastage. However, in this study, the participants living in the rural community had a lower level of education but still had a better attitude towards water saving and wastage than those from the urban community. Therefore, this study contradicts Nieswiadomy's study [30] by arguing that water wastage and water management cannot be entirely reduced by one's level of education, but rather by experience, the environment and other major drivers. Moreover, a Spearman's rho correlation (Table 5) was tested to determine the relationship between scarcity/inaccessibility of water and water use.
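Spearman's rho, the statistic used in Tables 4 and 5, is simply the Pearson correlation computed on ranks (with tied values sharing their average rank, which matters for Likert data). A self-contained pure-Python sketch on illustrative vectors, not the study's data:

```python
# Spearman rank correlation: Pearson correlation computed on average ranks.

def average_ranks(values):
    """Ranks starting at 1; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 0-based positions i..j, shifted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone vectors give rho ≈ 1; reversed vectors give rho ≈ -1.
print(spearman_rho([1, 2, 2, 4, 5], [2, 4, 4, 7, 9]))  # ≈ 1.0
```

Because rho is computed on ranks rather than raw values, it captures monotone association without assuming the Likert responses are interval-scaled.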
The result was statistically significant (p < 0.0001), and there was a low positive correlation between inaccessibility of water and water use (rs = 0.342). The more access the participants have to water, the more likely it is that less water will be saved in the communities. A study by Jacobs-Mata [10] reviewed several studies investigating the interrelationship between individuals' attitudes towards water use and socio-demographic factors such as income, education, political affiliation, family size, type of dwelling, water inaccessibility, advertisement and home ownership. Some of the results showed that income and water conservation have a positive correlation. Another study described the opposite for income, alongside an inverse relationship between education levels and water conservation [24]. Moreover, some other studies reported that, in general, water conservation activities are normally associated with higher-income groups. Furthermore, the review also highlighted that individuals who are more educated, have smaller families, have smaller properties and own their own homes conserve more water than others [5]. To understand the dynamics of water use and how best water conservation problems can be solved, several factors can be considered, such as income, water tariffs, family size and family upbringing, among others. However, this problem is site-specific; that is, the water use problem is unique to different locations and at different times. This is because different countries have their own unique water problems and coping mechanisms in place to face water scarcity or reduce water use. Conclusions The results of the study indicated that the participants from rural households have a better water-saving attitude than the participants from urban households. This was a result of the scarcity of water resources in the rural community. This study shows that the availability of water does influence water use attitude.
The urban community needs to learn and inculcate the water-saving culture of the rural community. This would reduce water use holistically in the country. It is, therefore, necessary for the government and other stakeholders to place a greater emphasis on implementation campaigns to encourage water conservation, especially in urban households. By understanding what drives attitude and what incentivises better water conservation practices, government officials can implement more suitable and targeted water conservation management interventions. It is also argued that while the results of this study may be true, water conservation remains context-specific and is dependent on other underlying factors not mentioned in this study. For instance, higher-income households may consume more water than lower-income households with restricted water access. Increasing water tariffs and bills, especially in the urban community, was a major determinant of the participants' water conservation attitudes. However, the rural community enjoyed a free water supply from the community taps and did not need to worry about water bills. Knowing this, it could be ideal to point out that altering the attitudes of South Africans towards water conservation can be achieved on a micro scale, based on an understanding of the problems associated with each specific community, especially issues concerning water bills. Furthermore, the study shows that variables such as family upbringing, the immediate environment and water scarcity had a positive influence on both communities' responses towards water use issues. However, advertisement did influence the water consumption of the participants in the urban community but had no significant influence on the participants in the rural community. The rural participants highlighted that they did not have enough advertisements about water scarcity.
Based on the findings of the study, it is recommended that the government implement a policy on regulated water outlets, such as taps, showers and bathing outlets, with a standardised minimum water flow. Households should be encouraged to install greywater collection systems to reduce water wastage and improve water reuse. The government should also provide proper water management and distribution systems, especially in rural communities, so that piped water can reach every household. More workshops, advertisements and seminars should be introduced as a way of conducting educational and enlightenment campaigns regarding water scarcity, water wastage and greywater collection and reuse. The government could introduce a rationed allocation (shedding) of domestic water in urban communities to draw attention to the prevalence of water scarcity in the nation. Research should be conducted on cheap and easy purification and distribution of other sources of water, for example seawater, given the rapid loss of global freshwater. Sustainable water quality programmes should be implemented in rural communities to combat high salinity and fluoride levels in water bodies, and more such programmes should be put in place for the purification and sustenance of the nation's dams. The number of water outlets in households can also be reduced to save water. Informed Consent Statement: The authors state that they have obtained appropriate institutional review board approval, as outlined in the Declaration of Helsinki, for all human or animal experimental investigations. A signed informed consent document has been obtained from all participants included in the study. Data Availability Statement: Available on request from the corresponding author.
Cellular response to 5-fluorouracil (5-FU) in 5-FU-resistant colon cancer cell lines during treatment and recovery Background Treatment of cells with the anti-cancer drug 5-fluorouracil (5-FU) causes DNA damage, which in turn affects cell proliferation and survival. Two stable wild-type TP53 5-FU-resistant cell lines, ContinB and ContinD, generated from the HCT116 colon cancer cell line, demonstrate moderate and strong resistance to 5-FU, respectively, markedly-reduced levels of 5-FU-induced apoptosis, and alterations in expression levels of a number of key cell cycle- and apoptosis-regulatory genes as a result of resistance development. The aim of the present study was to determine potential differential responses to 8 and 24-hour 5-FU treatment in these resistant cell lines. We assessed levels of 5-FU uptake into DNA, cell cycle effects and apoptosis induction throughout treatment and recovery periods for each cell line, and alterations in expression levels of DNA damage response-, cell cycle- and apoptosis-regulatory genes in response to short-term drug exposure. Results 5-FU treatment for 24 hours resulted in S phase arrests, p53 accumulation, up-regulation of p53-target genes on DNA damage response (ATF3, GADD34, GADD45A, PCNA), cell cycle-regulatory (CDKN1A), and apoptosis-regulatory pathways (FAS), and apoptosis induction in the parental and resistant cell lines. Levels of 5-FU incorporation into DNA were similar for the cell lines. The pattern of cell cycle progression during recovery demonstrated consistently that the 5-FU-resistant cell lines had the smallest S phase fractions and the largest G2(/M) fractions. The strongly 5-FU-resistant ContinD cell line had the smallest S phase arrests, the lowest CDKN1A levels, and the lowest levels of 5-FU-induced apoptosis throughout the treatment and recovery periods, and the fastest recovery of exponential growth (10 days) compared to the other two cell lines. 
The moderately 5-FU-resistant ContinB cell line had comparatively lower apoptotic levels than the parental cells during treatment and recovery periods and a recovery time of 22 days. Mitotic activity ceased in response to drug treatment for all cell lines, consistent with down-regulation of mitosis-regulatory genes. Differential expression in response to 5-FU treatment was demonstrated for genes involved in regulation of nucleotide binding/metabolism (ATAD2, GNL2, GNL3, MATR3), amino acid metabolism (AHCY, GSS, IVD, OAT), cytoskeleton organization (KRT7, KRT8, KRT19, MAST1), transport (MTCH1, NCBP1, SNAPAP, VPS52), and oxygen metabolism (COX5A, COX7C). Conclusion Our gene expression data suggest that altered regulation of nucleotide metabolism, amino acid metabolism, cytoskeleton organization, transport, and oxygen metabolism may underlie the differential resistance to 5-FU seen in these cell lines. The contributory roles to 5-FU resistance of some of the affected genes on these pathways will be assessed in future studies. Background 5-fluorouracil is a chemotherapeutic drug used worldwide in the treatment of metastatic colorectal cancer, either alone or in combination with irinotecan, a topoisomerase I inhibitor. 5-FU is considered to be a purely S phase-active chemotherapeutic agent, with no activity when cells are in G0 or G1 [1]. It is well-established that treatment of cells with 5-FU causes DNA damage, specifically double-strand (and single-strand) breaks, during S phase due to the misincorporation of FdUTP into DNA [2,3]. However, damage to DNA can occur in all cell cycle phases in proliferating cells, and the repair mechanisms involved vary in the different phases of the cell cycle [4,5]. DNA damage checkpoint pathways in G1, S, and G2 couple DNA damage detection to inhibition of cell cycle progression, activation of DNA repair, maintenance of genomic stability, and, when damage is beyond repair, to initiation of cellular senescence [6].
The position of tumor cells in the cell cycle and the ability to undergo apoptosis in response to drug treatment together play an important role in the sensitivity of tumor cells to chemotherapy. 5-FU has a complicated mechanism of action, with several enzymes involved in its metabolic activation [7]. It inhibits thymidylate synthase as its main mechanism of action, leading to depletion of dTTP. Overexpression of thymidylate synthase has been shown to be associated with 5-FU resistance in colorectal cancer [8,9], but it is also likely that other alterations, for example, to crucial genes on cell cycle and apoptotic regulatory pathways, underlie the development of resistance. Two independent 5-FU-resistant cell lines, designated ContinB and ContinD, were recently generated from parental HCT116 colon cancer cells via continuous exposure to 5-FU, and characterized for genotypes, phenotypes, and gene expression associated with the generation of 5-FU resistance [10]. Compared to parental HCT116 cells, the resistant cell lines demonstrated moderate (ContinB) to strong (ContinD) resistance to 5-FU and up-regulation of TYMS. Cellular phenotypes such as reduced apoptosis and more aggressive growth relative to the parental HCT116 cell line characterized both resistant cell lines. This was consistent with up-regulation of apoptosis-inhibitory genes (IRAK1, MALT1, BIRC5), positive growth-regulatory genes (CCND3, CCNE2), DNA repair genes (FEN1, FANCG), and metastasis signature genes (LMNB1, F3, TMSNB), and down-regulation of apoptosis-promoting genes (BNIP3, BNIP3L, FOXO3A) and negative growth-regulatory genes (AREG, CDKN1A, CCNG2, GADD45A) in one or both resistant cell lines. Both 5-FU-resistant cell lines retained the wild-type TP53 genotype characteristic of the parental HCT116 cells [10]. In the present work, the cellular responses of HCT116 parental and 5-FU-resistant cell lines to short-term 5-FU treatment were characterized and compared.
Given that the 5-FU-resistant cell lines displayed reduced apoptosis and more aggressive growth phenotypes compared to the parental cells as a consequence of resistance development, it was of interest to determine potential differential responses to 5-FU during short-term 5-FU challenge. We investigated cell cycle effects and apoptosis induction throughout treatment and recovery periods for each cell line, as well as changes in expression levels of DNA damage response-, cell cycle- and apoptosis-regulatory genes (among others) that occur within the first 24 hours in response to 5-FU treatment. Characterization of the cellular responses to short-term drug treatment in resistant colorectal cancer cells will facilitate a better understanding of the multiple mechanisms involved in drug response and development of 5-FU resistance. Cell proliferation, cell cycle distribution, and apoptosis during recovery from 5-FU treatment Cells were treated with 5-FU for 24 hours and followed for up to 40 days in drug-free medium to determine cell counts, cell cycle distributions, and apoptotic fractions (Figure 1). Cell counts decreased during the early period of recovery following drug removal (up to and including day 6) for all cell lines, after which point they flattened out for the parental and ContinB cells (Figure 1). After day 7, the cell counts for ContinD increased steadily. The cell counts for ContinB and parental cultures began to increase after about 15 and 20 days, respectively. Cell cycle analyses were performed to elucidate the patterns and timeframes of cell cycle progression during recovery in each 5-FU-treated cell line (Figure 2). After an initial accumulation in S phase during the first 24 hours of drug treatment (see later), the S phase fractions decreased in all cell lines during the early period of recovery following drug removal, concomitant with increases in the G1 and G2/M fractions.
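Cell cycle distributions like those in Figure 2 are derived by gating cells on measured DNA content: roughly 2N for G1, roughly 4N for G2/M, and intermediate values for S phase. The sketch below is a deliberately simplified gating scheme; the gate boundaries and cell values are made up for illustration, and real flow-cytometry analysis fits overlapping populations rather than using hard cutoffs.

```python
# Classify cells into cell cycle phases from normalized DNA content,
# where 1.0 ~ 2N (G1 peak) and 2.0 ~ 4N (G2/M peak).
# Gates are illustrative; sub-G1 debris handling is omitted for brevity.

def phase_fractions(dna_content, g1_gate=(0.9, 1.1), g2m_gate=(1.9, 2.1)):
    counts = {"G1": 0, "S": 0, "G2/M": 0}
    for c in dna_content:
        if g1_gate[0] <= c <= g1_gate[1]:
            counts["G1"] += 1
        elif g2m_gate[0] <= c <= g2m_gate[1]:
            counts["G2/M"] += 1
        else:
            counts["S"] += 1  # intermediate DNA content: replicating cells
    n = len(dna_content)
    return {phase: k / n for phase, k in counts.items()}

cells = [1.0, 1.05, 1.5, 1.7, 2.0, 0.95, 1.3, 2.05]
print(phase_fractions(cells))  # {'G1': 0.375, 'S': 0.375, 'G2/M': 0.25}
```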
Overall, the 5-FU-resistant cell lines had the smallest S phase fractions (Figure 2b) and, in the case of the ContinD cell line, the largest G2/M fractions (Figure 2c). The S phase fractions increased again at 8 and 15 days for ContinD and ContinB cells, respectively. The cell cultures were eventually allowed to reach full confluence, evidenced by an increase in the G1 fraction and decreases in the S and G2/M phase fractions in all the cell lines. 5-FU-induced DNA damage resulted in large differences in apoptosis induction in the HCT116 cell lines during treatment and recovery (Figure 3), with the highest levels of apoptosis observed in the parental cells and the lowest in the ContinD cells. Following removal of 5-FU at 24 hours, apoptosis levels increased in the parental and ContinB cells until they peaked at over 80% on day 10, after which they decreased to 30%. On day 15 the apoptotic fractions began to increase again, but only for the parental cells, peaking at about 80% on day 20, after which the levels gradually decreased to control levels at day 24. There was no 5-FU-induced increase in the fraction of apoptotic ContinD cells (compared to the levels of spontaneous apoptosis in the untreated cells). 5-FU incorporation and cell cycle progression during the initial 24 hour treatment period The large differences in cell growth and apoptosis during recovery suggested that there might be differential responses to 5-FU in the cell lines during the first 24 hours of treatment. The reduced apoptosis of ContinB/D cells in response to 5-FU compared to the parental cells could have been due to decreased incorporation of 5-FU into DNA. At 8 hours, the parental cell line incorporated more [6-3H]5-FU into DNA than did either of the resistant cell lines, but the differences were not significant (Figure 4). At 24 hours, the ContinD cell line showed the highest levels of 5-FU incorporation into DNA, whereas the ContinB cell line had the lowest levels (p < 0.05).
However, neither of the two resistant cell lines showed significant differences in incorporation relative to the parental cells. We further investigated whether there were differences in growth or cell cycle progression during the first 24 hours of 5-FU treatment. The growth of HCT116 parental cells was completely inhibited at 24 hours (Figure 5). The cell number increased after 24 hours of 5-FU treatment for the ContinD and, to a smaller degree, for the ContinB cell lines, but less than in the corresponding controls. However, since the fraction of apoptotic cells was increased at 24 hours for the parental (and to a smaller degree ContinB) cells (Figure 3), some cells in these cultures may still have divided during the 24 hour period, in agreement with the non-zero mitotic fractions observed at 8 hours (Figure 6). No mitotic cells were observed at 24 hours. The distribution of cells in the G1, S, and G2(/M) phases of the cell cycle was measured by staining for DNA content (Figure 7). A G1(/S) arrest occurred in the parental and ContinB cells at 8 hours after 5-FU addition, evidenced by a larger fraction of cells in the G1 phase. At 8 hours, the size of the G1 fraction in 5-FU-treated ContinD cells was similar to that measured for its untreated control. S phase fractions in all 5-FU-treated cell lines were equivalent in size and similar to those measured in the respective untreated controls at the 8 hour timepoint. The sizes of the G2(/M) fractions in 5-FU-treated parental and ContinB cell lines were smaller than their respective controls at 8 hours, but in ContinD cells the size of the G2 fraction was similar to that measured for the untreated controls. At 24 hours, the G1 fractions in all 5-FU-treated cell lines were smaller and the S phase fractions were larger compared to their respective untreated controls, indicating release of the arrested cells at the G1/S boundary and movement into S phase. The cell cycle histograms show directly synchronized populations of cells in S phase caused initially by the G1/S arrest and subsequent release of these cells into S phase (Figure 7). Parental HCT116 cells had the largest S phase accumulation (80% S phase cells), whereas ContinB and ContinD cells had smaller S phase accumulations (70% and 52%, respectively) compared to 25% in the respective untreated controls (Figure 7c). The G2 fractions in the 5-FU-treated cells at 24 hours were smaller relative to those measured for untreated control cells, probably reflecting the slow movement of cells through S phase. ContinD cells had the largest G2 fraction compared to the other 5-FU-treated cell lines.
[Figure 1 caption: Cell counts during recovery periods following drug removal. Cell counts were measured throughout the respective recovery periods for each cell line following a shift to drug-free medium at 24 hours (Day 1). The dashed (---) line shows the number of viable cells in untreated exponentially-growing cultures. Figure 2 caption: Cell cycle distributions during recovery periods following drug removal.]
Expression of DNA damage response, cell cycle-regulatory, and apoptosis-regulatory genes Since neither the incorporation of 5-FU nor differences in cell cycle arrest could explain the large differences in 5-FU resistance and induction of apoptosis, we investigated the gene expression patterns of the cell lines in response to 5-FU challenge. Table 1 summarizes the microarray gene expression data for altered genes localized to DNA damage response, cell cycle-regulatory and apoptosis-regulatory pathways. The alterations in gene expression levels are in response to 5-FU treatment, but information about whether these genes were altered as a consequence of resistance development [10] is also included. For some genes, protein levels were measured in addition to gene transcript levels at 8 and 24 hours (Figure 8).
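The up- and down-regulation calls summarized in Table 1 are threshold calls on log2(treated/control) expression ratios: the paper scores a gene as up-regulated at log2 ratio >= 1 and down-regulated at <= -1, i.e. at least a two-fold change in either direction. A minimal sketch of this scoring rule; the intensity values below are hypothetical, not the study's measurements:

```python
import math

# Score genes from log2(treated/control) expression ratios, using the
# paper's convention: log2 ratio >= 1 is up-regulated, <= -1 is down-regulated.
# All intensity values here are invented for illustration.

def score_gene(treated, control, cutoff=1.0):
    ratio = math.log2(treated / control)
    if ratio >= cutoff:
        return "up-regulated", ratio
    if ratio <= -cutoff:
        return "down-regulated", ratio
    return "unchanged", ratio

for gene, (t, c) in {"CDKN1A": (420.0, 100.0),
                     "PLK": (45.0, 200.0),
                     "TP53": (110.0, 100.0)}.items():
    call, r = score_gene(t, c)
    print(f"{gene}: log2 ratio {r:+.2f} -> {call}")
```

Note that the 1.4- to 4.8-fold protein changes reported below correspond to log2 ratios of roughly 0.5 to 2.3, so a gene can change reproducibly at the protein level while falling short of the two-fold transcript cutoff.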
At 8 hours, p53 protein levels were 2.0-, 1.8-, and 1.4-fold higher in 5-FU-treated parental, ContinB and ContinD cells, respectively, relative to their untreated controls (Figure 8a). At 24 hours, p53 protein levels had increased further relative to control levels; levels were 2.3-, 3.0-, and 2.1-fold higher in 5-FU-treated parental, ContinB and ContinD cells, respectively. A number of important genes located on DNA damage response pathways were scored as altered in the 5-FU-treated cells relative to the untreated control cells following exposure to 5-FU. The ATF3, GADD34, GADD45A, PCNA, and TP53I3 genes were all up-regulated at 8 and/or 24 hours in response to 5-FU treatment in all HCT116 cell lines relative to untreated controls. GADD45A transcript levels were highest in 5-FU-treated ContinB cells at 24 hours, nearly 10-fold higher than in the untreated control. (GADD45A expression levels measured by real-time RT-PCR correlated well with those measured using the 13 K microarrays (r = 0.83, p < 0.05).) However, GADD45A protein levels increased only 10% in the treated parental and ContinB cell lines at 8 hours, and in ContinD cells they had actually decreased by about 10%.
[Figure 3 caption: Apoptotic fractions during recovery periods following drug removal.]
Cell cycle alterations at 8 and 24 hours after drug addition were reflected in altered expression patterns of genes involved in cell cycle progression in the treated HCT116 cell lines compared to untreated controls. Cell cycle- and growth-regulatory genes such as AREG, CCND3 and CDKN1A were up-regulated in all 5-FU-treated cell lines compared to untreated controls at either 8 or 24 hours or both, whereas down-regulation of CCNB1 was detected in ContinD cells only. CCNC, CCNG1, and CDC25B were down-regulated in parental and ContinB cells, whereas RAN was down-regulated in ContinB cells (Table 1).
There was good correlation between CDKN1A expression measured by real-time RT-PCR and that measured using the 13 K microarrays (r = 0.7, p < 0.05). CDKN1A protein levels at 8 hours were 1.9-, 2.6-, and 1.7-fold higher in the treated parental, ContinB, and ContinD cells, respectively, relative to their untreated controls, whereas the corresponding levels were 2.8-, 3.8- and 2.0-fold higher at 24 hours (Figure 8c). MYC was also induced in response to 5-FU treatment in all HCT116 cell lines; protein levels at 8 hours were 2.3-, 2.4-, and 3.2-fold higher in the treated parental, ContinB and ContinD cells, respectively, compared to their untreated controls, and at 24 hours, these levels had increased 2.9-, 2.4- and 4.8-fold compared to their respective controls (Figure 8d). MYC protein levels did not correlate with transcript levels, since the MYC transcript was down-regulated in parental and ContinB cell lines (with slight down-regulation in ContinD cells) (Table 1). The cell cycle-regulatory genes CDC6, CDCA5, PDAP1, PDXP, PVT1, and RARRES2 were altered in the parental HCT116 cell line but not in either of the 5-FU-resistant cell lines in response to short-term drug treatment. The S-phase regulatory gene PPP2CB was up-regulated in all cell lines, whereas RRM2 was up-regulated in parental and ContinB cells. MCM3 was down-regulated in all cell lines, consistent with reduction or cessation of replication activity. In agreement with the reduced entry into mitosis, spindle-checkpoint and mitosis-regulatory genes such as BUB1, BUB1B, NEK4, PLK and STK6 were all down-regulated at 8 and/or 24 hours in these cell lines in response to 5-FU. Table 1 summarizes the expression levels of apoptosis-regulatory genes that were altered at 8 and 24 hours following drug addition in the HCT116 parental and resistant cell lines. The apoptosis-inhibiting genes AVEN and SERPINB2 were up-regulated at both 8 and 24 hours.
The p53-regulated apoptosis-promoting gene FAS was up-regulated in each cell line, but the lowest FAS levels were seen in ContinD cells. The apoptosis-promoting gene BNIP3L was down-regulated in all cell lines, while the apoptosis-promoting gene CASP3 was down-regulated in ContinB cells only. Some of the genes whose expression levels had been altered as a consequence of resistance development were further altered in response to short-term 5-FU treatment (8 or 24 hours) (Table 1), e.g. AREG, ATF3, BNIP3L, CCND3, CCNG1, CDKN1A, CHC1, GADD45A, MCM3, PLK, and STK6. Interestingly, some genes that were initially down-regulated as a result of resistance development (AREG, CDKN1A, GADD45A) were up-regulated in response to short-term 5-FU treatment. The opposite was also true; some genes that were initially up-regulated as a result of resistance development (CHC1, MCM3, PLK, and STK6) were down-regulated in response to short-term drug treatment. Differences in 5-FU-induced gene expression in 5-FU-resistant cell lines Having discussed genes specifically involved in DNA damage response, cell cycle and apoptosis regulation, we next focused on the genes that showed the largest differences in 5-FU-induced expression in the cell lines with different resistance levels (Table 2). A set of genes coding for guanine nucleotide-binding proteins (G proteins), which integrate signals between membrane receptors and downstream effector proteins, showed marked differential expression after 5-FU treatment in the 5-FU-resistant cell lines. GNL3 was down-regulated only in ContinD cells, while GNL2 was down-regulated only in ContinB cells. Neither was altered in the parental cell line in response to 5-FU. Other genes involved in nucleoside/nucleotide metabolism were also differentially expressed. were altered in ContinD cells. VPS52 was down-regulated in parental HCT116 cells.
Some of the genes whose expression levels were altered in response to short-term 5-FU treatment had also been altered as a consequence of resistance development, e.g. IVD and TAX1BP1. Discussion Cell cycle progression after DNA damage is regulated by checkpoint controls in the G1 or G2 phase of the cell cycle. Additionally, S phase progression is reduced, but not entirely halted, after DNA damage [11]. Arrest in G1 and G2 allows repair prior to replication and mitosis, respectively. Failure to repair can result in apoptosis, mitotic catastrophe, or senescence [6]. In the present work, we wanted to elucidate potential differential responses to 8 and 24-hour 5-FU treatment in the HCT116 parental cell line and its 5-FU-resistant derivatives. We assessed several cellular phenotypes in an effort to clarify potential differences: levels of 5-FU uptake into DNA, and cell cycle effects and apoptosis induction throughout treatment and recovery periods for each cell line. Each cell line incorporated 5-FU into DNA, but levels of incorporation were not significantly different between the cell lines at either 8 or 24 hours. 5-FU led to a G1(/S) arrest at 8 and 24 hours, consistent with the results of previous studies [7,12,13]. The G1 arrest was most pronounced in ContinD cells at 24 hours, whereas the S phase arrest was most pronounced in parental HCT116 cells at 24 hours.
[Table footnotes: a Log2 ratios from 13 K cDNA microarrays (DNR); b log2 ratios from 8.5 K oligonucleotide microarrays (Affymetrix); na = gene not on array; nd = not detected; genes scored as up-regulated (log2 ratio ≥ 1) or down-regulated (log2 ratio ≤ -1) are in bold print; c information from NCBI Entrez Gene. Figure caption: DNA damage response and cell cycle-regulatory protein and transcript levels in 5-FU-treated parental and resistant HCT116 cell lines.]
It also appeared that ContinD cells had a higher tendency to arrest in G2. Cell counts began to decrease immediately for parental and ContinB cells following 5-FU removal from the media, whereas this decrease was delayed by 24 hours for ContinD cells. Decreases in cell numbers following drug removal were consistent with cessation of mitotic activity at 24 hours and with subsequent high levels of apoptosis (for parental and ContinB cell lines) during recovery. The pattern of cell cycle progression during recovery demonstrated consistently that the smallest S phase fractions and the largest G2(/M) fractions were measured in the 5-FU-resistant cell lines. The levels of apoptosis were dramatically lower in the ContinD cell line relative to the other two cell lines, a pattern that persisted throughout the recovery period. Since this cell line also experiences a dramatic cell loss (>95%, Figure 1) that is not the result of apoptosis, it may be that cell death in this cell line occurs via necrosis. In any event, this cell line had the fastest turnaround time, in that it recovered exponential growth within 10 days, compared to 20 days for the ContinB cell line and closer to 30 days for the parental cell line. The G1(/S) arrest in these cell lines was accompanied by increases in p53 protein levels and induction of CDKN1A transcripts and protein, suggesting that the arrest could be p53-mediated. p53 is known to play a central role as a mediator of the DNA damage response/cell cycle arrest and in apoptosis induction [1,14,15]. There was little agreement between p53 protein levels and TP53 transcript levels, since the latter were either unchanged or down-regulated at 8 and 24 hours in each cell line.
However, p53 protein activation occurs by protein stabilization (and phosphorylation) rather than by increased transcription [4]. Since these cell lines have wild-type TP53 [10], and since CDKN1A transcript and protein are induced after irradiation with ionizing radiation (unpublished results), the p53 response appears to be normal in the resistant cell lines as well as in the parental cell line. 5-FU treatment for 24 hours resulted in up-regulation of p53-target genes on DNA damage response/repair (GADD45A, XPC [16], PCNA [17], TP53I3, and ATF3), cell cycle-regulatory (CDKN1A), and apoptosis-regulatory (FAS) pathways in the parental and resistant cell lines. Differential down-regulation of cell cycle-regulatory genes known to be repressed by p53, e.g. PLK, CCNB1, CCNB2, and TOP1 [18], was also demonstrated in these cell lines. Successful detection of known p53-target genes by the microarrays used in the present work indicated that we had a good system for identifying p53-responsive genes. Apoptosis induction also appeared to be p53-mediated, as the p53-dependent apoptotic promoter FAS was up-regulated [19,20] in these cell lines in response to 5-FU treatment. Furthermore, induction of apoptosis is substantially reduced in these cell lines following knockdown of p53 (manuscript in preparation). Alterations in gene expression levels on cell cycle-, apoptosis-, and DNA damage response-regulatory pathways in the present study provided little explanation for the differential resistance to 5-FU seen in the three cell lines, especially that seen in the ContinD cell line compared to the other two cell lines. Many of the same genes were altered in response to 5-FU in all three cell lines, with only small differences in expression levels measured. Additionally, some of the genes that were up-regulated in response to short-term drug treatment had originally been down-regulated as a consequence of resistance development [10], e.g.
CDKN1A, GADD45A, and AREG (Table 1), underscoring the difficulty in elucidating their role in/contribution to an overall resistance phenotype, and the intricacy of drug resistance generally. However, when we considered other cellular regulatory pathways that were affected in response to short-term drug treatment, we found that genes involved in nucleotide binding and nucleotide metabolism, mRNA processing, cytoskeletal organization, amino acid metabolism, signal transduction/transport, and oxygen metabolism were differentially altered in the three cell lines (Table 2). Some of the affected genes were altered in the parental cell line but not in the resistant cell lines (mRNA processing genes), or in one or both resistant cell lines but not in the parental cell line (amino acid and nucleotide metabolism genes). Such gene alterations may provide important information about pathways that are activated in response to 5-FU in cells that are already resistant to the drug, information which may have useful implications for the design and modification of current chemotherapeutic regimens.

Conclusion

Our gene expression data suggest that altered regulation of nucleotide metabolism, amino acid metabolism, cytoskeleton organization, transport, and oxygen metabolism may underlie the differential resistance to 5-FU seen in these cell lines. Future work will involve RNA interference studies to assess the contributory roles and importance of some of the altered genes to 5-FU resistance.

Cell counts

Trypsinized cell suspensions were counted using a standard Trypan Blue viability assay. Cell counts were performed at 0, 8, and 24 hours following addition of 5-FU to the medium. For recovery assays, cell counts were also done at successive 24-hour intervals until the cells had regained exponential growth. After cell counting, the same cell suspensions were then fixed in 80% ethanol for subsequent cell cycle analyses.
Cell cycle analyses and quantification of apoptosis

Trypsinized cell suspensions were fixed in 80% ethanol. The samples were then placed at -20°C until cell cycle analysis. Nuclei were isolated from fixed cell suspensions, stained with propidium iodide (50 μg/ml), and the samples analyzed for DNA content using a FACSCalibur laser flow cytometer (Becton Dickinson Immunocytometry Systems, San Jose, CA). Pulse-processed fluorescence signals were used to exclude doublets and aggregates from analyses. Ten thousand events were acquired for each sample. Percentages of cells in the G1, S, and G2M phases of the cell cycle were quantified using WinCycle software (Phoenix Flow Systems, San Diego, CA). Quantification of 5-FU-induced apoptosis during treatment and recovery periods in each cell line was done using the sub-G1 peaks from the cell cycle analyses measured during these periods.

Mitotic cell discrimination

Percentages of mitotic cells in control and 5-FU-treated cell cultures were determined using a flow cytometric method to discriminate mitotic cells, as described previously [21]. Trypsinized cell suspensions were centrifuged at 1000 rpm for 5 minutes, washed once with PBS, and resuspended in 750 μl of a cooled detergent buffer (0.1% NP40, 6.5 mM Na2PO4, 1.5 mM KH2PO4, 2.7 mM KCl, 137 mM NaCl, 0.5 mM EDTA, pH 7.2). After 5 min. on ice, the cells were fixed by adding 250 μl of 4% formaldehyde to give a final concentration of 1%, mixed well, and allowed to fix for a minimum of 1 hr. on ice. Samples were then centrifuged at 1200 rpm for 5 minutes and the pellets resuspended in the detergent buffer containing 5 μg/ml propidium iodide and 100 μg/ml RNase A. Samples were analyzed on a FACSCalibur laser flow cytometer, and percentages of mitotic cells were measured using correlated DNA content/forward scatter distributions.
Microarray hybridization and data analysis

RNA isolation, preparation of Cy3- and Cy5-fluorescently-labeled cDNA samples, and subsequent hybridization to 13 K microarrays were done as described previously [22]. Thirty micrograms of total RNA of control and drug-treated cells were used for the Cy3- and Cy5-labeled samples, respectively. The 13 K cDNA microarrays were prepared at the Radiumhospital microarray core facility, and information about them can be found at their website [23]. Hybridized slides were scanned using a ScanArray 4000 laser scanner at 10 μm resolution (Packard BioChip Technologies, Billerica, MA). Spot and background intensities, and the standard deviations of these, were quantified using QuantArray software (Packard BioChip Technologies). Bad spots and regions with high unspecific binding of dye were manually flagged and excluded from the analysis. Background-subtracted intensities less than two times the standard deviation of the local background were assigned this value to avoid zero or negative values in the ratio calculations. Weak spots with background-subtracted intensity less than two times the standard deviation of the local background in both channels were excluded. Total intensity normalization of the data was performed [24]. Genes in the 5-FU-treated HCT116 cell lines that had two-fold expression level changes (signal log2 ratios ≥ 1 or ≤ -1) relative to corresponding untreated controls at 8 hours or 24 hours were scored as up-regulated or down-regulated, respectively, as a result of drug treatment. At the 8-hour timepoint following 5-FU addition, a total of 88, 99, and 10 genes were scored as altered in HCT116 parental, ContinB, and ContinD cells, respectively. At 24 hours, these numbers had increased to 218, 323, and 89 for the same cell lines.
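As a minimal sketch of the two-fold scoring rule described above (in Python; the gene names and ratios in the example are hypothetical, for illustration only, not measured values):

```python
def score_gene(log2_ratio, threshold=1.0):
    """Score a treated-vs-control log2 expression ratio with the
    two-fold rule: >= +1 is up-regulated, <= -1 is down-regulated,
    anything in between is left unscored."""
    if log2_ratio >= threshold:
        return "up-regulated"
    if log2_ratio <= -threshold:
        return "down-regulated"
    return "unchanged"

# Hypothetical example ratios (illustration only):
ratios = {"GENE_A": 1.8, "GENE_B": -1.4, "GENE_C": -0.3}
calls = {gene: score_gene(r) for gene, r in ratios.items()}
```

A log2 ratio of exactly 1 corresponds to a two-fold change, so the thresholds are inclusive, matching the ≥ 1 / ≤ -1 criteria stated in the text.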
A text-tab-delimited file of all gene expression data (log2 ratios) for the 5-FU-treated parental and drug-resistant HCT116 cell lines (relative to their respective untreated controls) is available upon request. Gene expression data were sifted using GenMAPP version 2.0 (Gene MicroArray Pathway Profiler) software [25]. Use of this program facilitated an immediate and comparative overview of genes scored as up-regulated or down-regulated (signal log2 ratios ≥ 1 or ≤ -1, respectively) on specific pathways in response to 5-FU treatment in parental and 5-FU-resistant HCT116 cells. We focused on altered genes located on DNA damage stimulus response, cell cycle (general regulation, S phase and M phase regulation) and apoptosis-regulatory pathways.

Real-time RT-PCR

Expression levels for two genes, GADD45A and CDKN1A, were determined by real-time RT-PCR for HCT116 parental, ContinB, and ContinD treated and untreated control cells at all treatment timepoints using TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA). 200 ng of total RNA was subjected to real-time RT-PCR using an ABI PRISM Sequence Detection System following the manufacturer's protocols, in order to confirm the 13 K microarray results. Primers are available upon request. The 18S gene was used as an endogenous control for equal amounts of RNA used.

Western analyses

Scraped cell suspensions, including floating cells that had loosened from the monolayer during the course of 5-FU treatment, were centrifuged and the pellets heated in standard Laemmli buffer containing PMSF. Protein concentrations were quantified (BioRad, Hercules, CA), and protein samples (15 μg) and Precision Protein molecular weight standards (6.5 μg, BioRad) were separated by SDS-PAGE (10% or 12% gels) and transferred to PVDF membranes (BioRad).
Western blotting was performed using mouse monoclonal antibodies against human p21WAF1 and p53 (clones EA10 and Pab1801, respectively; Calbiochem, San Diego, CA), MYC (clone 6E10, Cambridge Research Biochemicals, USA), GADD45A (C-4, Santa Cruz Biotechnology, San Diego, CA), and actin (C-2, Santa Cruz Biotechnology; used as a loading control). An amplified alkaline phosphatase staining procedure (BioRad) was used to detect the separated proteins. Expression levels were quantified using Un-Scan-It gel software version 5.1 for Windows (Silk Scientific Inc., Orem, Utah).
Influence of blood group, Glucose-6-phosphate dehydrogenase and Haemoglobin genotype on Falciparum malaria in children in Vihiga highland of Western Kenya

Background: Genetic diversity of ABO blood group, glucose-6-phosphate dehydrogenase (G6PD) deficiency and haemoglobin type, and their ability to protect against malaria, vary geographically, ethnically and racially. No study has been carried out in populations resident in malaria regions in western Kenya.

Method: A total of 574 malaria cases (severe malaria anaemia, SMA = 137; non-SMA = 437) seeking treatment at Vihiga County and Referral Hospital in western Kenya were enrolled and screened for ABO blood group and G6PD deficiency, and haemoglobin genotyped, in a hospital-based cross-sectional study.

Result: When compared to blood group O, blood groups A, AB and B were not associated with SMA (P = 0.380, P = 0.183 and P = 0.464, respectively). Further regression analysis revealed that carriage of the intermediate status of G6PD was associated with risk of SMA (OR = 1.52, 95% CI = 1.029–2.266, P = 0.035). There was, however, no association between AS or SS and severe malaria anaemia. Co-occurrence of haemoglobin type and G6PD status, i.e. AA/intermediate, was associated with risk of SMA (OR = 1.536, 95% CI = 1.007–2.343, P = 0.046), while carriage of AS/normal G6PD was associated with protection against SMA (OR = 0.337, 95% CI = 0.156–0.915, P = 0.031).

Conclusion: Results demonstrate that blood group genotypes do not influence malaria disease outcome in this region. Children in Vihiga with blood group O have some protection against malaria. However, the intermediate status of G6PD is associated with risk of SMA. Further, co-inheritance of sickle cell and G6PD status are important predictors of malaria disease outcome. This implies combinatorial gene function in influencing disease outcome.
Background

Despite the expanded use of proven malaria control strategies, over 216 million malaria cases and 0.5 million malaria-related deaths have been reported worldwide [1]. Africa continues to account for about 90% of all malaria cases and deaths worldwide [1]. This high rate of malaria-related morbidity is greatly driven by malaria vaccine escape and drug-resistant mutants resulting from host immune and drug pressure [2]. Therefore, many investigations have been conducted to find out whether or not ABO blood group antigens, glucose-6-phosphate dehydrogenase (G6PD) deficiency and haemoglobin genotypes are associated with susceptibility, resistance, or severity of P. falciparum malaria [3]. Rosetting is characterized by the binding of P. falciparum-infected red blood cells to uninfected red blood cells to form clusters of cells that are thought to contribute to the pathology of falciparum malaria by obstructing blood flow in small blood vessels [4]. Rosetting parasites such as P. falciparum form larger, stronger rosettes in non-O (A, B or AB) blood groups than in group O red blood cells [5,6], suggesting that rosetting phenotypes correlate with severe falciparum malaria [7]. On the other hand, it has been reported that rosettes form better depending on the blood cell type, with blood cell types A and B having higher chances of forming rosettes [5,6]. Some studies have reported the absence of a significant association between ABO blood group and malaria [8], while others have reported a higher frequency of malaria episodes in blood group A, AB, and B individuals compared with individuals of other blood groups [9]. Impaired rosette formation, due to increased sickling or reduced expression of erythrocyte surface adherence proteins in P. falciparum-infected HbAS and HbSS red blood cells, contributes to protection against malaria [10,11].
Cytoadherence enables parasites to sequester in the vasculature and avoid clearance by the spleen, leading to endothelial activation and associated inflammation in the brain and other organs, important in the progression to severe malaria anaemia [12]. However, the link between haemoglobin AA, AS, and SS type and the incidence of malaria parasitaemia or immunity to malaria is still unclear in Kenya. Although some studies have reported that the haemoglobin S genotype plays a significant role in modulating malaria in children, others have not [13,14]. Increased oxidative stress impairs P. falciparum growth as well as accelerating ring-stage erythrocyte senescence, promoting phagocytic clearance and eryptosis of parasitized G6PD-deficient cells [15][16][17], supporting the protection hypothesis. However, there is conflicting information on the effect of G6PD variants on falciparum malaria. Some studies have shown that G6PD-normal individuals are more vulnerable to Plasmodium falciparum malaria than G6PD-deficient and heterozygous individuals [18], whereas others have reported equal vulnerability among the various G6PD types [14]. Protection by the AS haemoglobin genotype and G6PD deficiency against falciparum malaria is usually thought to act independently [17,19]. In Mali, the hemizygous G6PD (A−) condition in male children, and the sickle cell trait in female children, is associated with protection against severe malaria anaemia [20]. Heterozygous G6PD (A−) interfered with the protective effect of haemoglobin AS in females, while there was no evidence of negative epistasis between sickle cell trait and G6PD (A−) heterozygosity in males of the same population [20]. To our knowledge, however, no study has reported on the concurrent effect of haemoglobin and G6PD variants on malaria.
Taken together, variations in reports on the association between red blood cell, haemoglobin, and G6PD type and malaria disease progression show the complexity of interaction between parasite and host genetics and immunity factors [21,22]. Moreover, the acquisition of relative immunity with age greatly confounds the influence of ABO blood group, G6PD and haemoglobin genotype on malaria [22]. Since young children have under-developed immunity against malaria, and the genetic diversity of G6PD, ABO blood groups and sickle cell trait and their ability to protect against malaria vary by region [23][24][25][26], it is important to determine the association between G6PD, ABO blood group and haemoglobin genotype and malaria among children in Kenya. As such, the present study determined the effect of blood group, glucose-6-phosphate dehydrogenase and haemoglobin genotypes on P. falciparum malaria in children in Vihiga highland of western Kenya.

Study area and design

A cross-sectional study targeting children less than 3 years old seeking treatment at Vihiga County Referral Hospital, Vihiga, western Kenya was carried out. Study participants were categorised as severe malaria anaemia (SMA; Hb < 5.0 g/dL, with any parasite density) and non-severe malaria anaemia (non-SMA; Hb ≥ 5.0 g/dL, with any parasite density). There has been a marked increase in malaria in the Vihiga highland, nearly 1.3 times the overall rate in Kenya, largely due to the rise of drug-resistant strains of P. falciparum parasites [27,28]. The ecology of the Vihiga highlands of Kenya supports stable transmission (thus is holoendemic), and increasing population pressure has led to agricultural changes creating ideal conditions for malaria vector proliferation [29]. Finally, the Anopheles mosquitoes here are generally highly anthropophilic, rather than zoophilic, and are thus efficient human malaria vectors in the Vihiga highland [30].

Sample size determination

The sample size was determined using the formula n = Z²pq/d² [31].
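As a sketch of the calculation, with the conventional inputs Z = 1.96 (95% confidence), p = 0.5 (maximum variability, q = 1 − p) and d = 0.05 (5% margin of error) — assumptions on our part, since the text does not state its inputs — the formula reproduces the reported figure:

```python
def cochran_sample_size(z=1.96, p=0.5, d=0.05):
    """Cochran's formula n = Z^2 * p * q / d^2 for estimating a
    proportion: z is the standard-normal quantile for the desired
    confidence, p the expected proportion (q = 1 - p), and d the
    desired absolute precision."""
    q = 1.0 - p
    return (z ** 2) * p * q / (d ** 2)

n = cochran_sample_size()  # 384.16, i.e. about 384 participants
```

Taking p = 0.5 maximizes p·q and therefore gives the most conservative (largest) sample size when the true prevalence is unknown.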
The optimum sample size estimated was n = 384; therefore, this study required at least 384 study participants.

Study participants and sample collection

Children were recruited via a random sampling method and stratified into either severe malaria anaemia (SMA) or non-severe malaria anaemia (non-SMA) [32], or uninfected healthy controls. Socio-demographic information such as age, sex and anti-malarial therapy was collected using a questionnaire (see Supplementary I). Plasmodium falciparum malaria-positive children who had received anti-malarial treatment within 48 h prior to the microscopical confirmation of their blood slides for malaria parasites, and children co-infected with P. falciparum and other species of Plasmodium, Human Immunodeficiency Virus type 1 (HIV-1), Hepatitis B virus (HBV) or Hepatitis C virus (HCV), were excluded from the study. Approximately 2.0 mL of blood was collected in an anticoagulant tube from each study participant and used for HIV-1/2, HBV and HCV serological testing [33], haemoglobin measurement, and microscopic malaria diagnosis. Haemoglobin measurements were determined using an Hb HemoCue 301 (Kuvettgatan 1, SE-262 71 Ängelholm, Sweden) within 10 min from the time of blood collection to minimize variability in the measurements. The system was calibrated every morning before sample analysis.

Malaria diagnosis

Thick and thin blood films were prepared from venous blood, stained with 10% Giemsa stain for 10 min and examined under a microscope. Parasite densities were calculated from the thick films by the WHO method (parasite count × 8000 divided by the number of WBCs counted, which was 200) [32], and the thin films were used to establish the species of the parasites present. Films were classified as negative when no parasites were seen after two hundred microscopic fields had been examined. For purposes of quality control, slides were cross-examined independently by a senior microscopist and the results compared.
ABO blood group determination

ABO blood group typing was performed by forward grouping using commercial antisera (Biotech Laboratories Ltd., Ipswich, Suffolk, UK) according to the manufacturer's protocol. Briefly, one drop of whole blood was placed in three different places on a grease-free clean glass slide labelled A, B and D. A drop of anti-A serum was added on the area labelled A, anti-B on B and anti-D on D. The blood cells and the antisera were mixed using a wooden applicator stick. The slide was then tilted to check for agglutination and the result recorded accordingly.

Hemoglobin genotyping

Haemoglobin genotypes were determined by cellulose acetate electrophoresis with Titan III plates according to the manufacturer's protocols (Helena Bio-Sciences, Oxford, United Kingdom). Haemolysates prepared from blood samples and Hemo AFSC controls were dispensed onto the acetate paper, and haemoglobin variants separated by electrophoresis with an alkaline buffer at pH 8.6. The plates were then stained using Ponceau S stain, and haemoglobin genotypes scored using the Hemo AFSC control.

G6PD genotyping

Glucose-6-phosphate dehydrogenase (G6PD) deficiency was determined by a fluorescent spot test (Trinity Biotech Plc., Bray, Ireland) as per the manufacturer's protocol. Blood was hemolyzed and spotted onto a filter paper. Assay solution containing glucose-6-phosphate and oxidized nicotinamide adenine dinucleotide phosphate (NADP+) was added, and samples were excited with ultraviolet (UV) light at 340 nm. Based on the presence or absence of fluorescence emissions, the samples were scored as normal (high emission), intermediate (moderate emission), or deficient (no emission).

Statistical analysis

Statistical analysis was performed using SPSS version 19.0 (IBM Corp., NY, USA). Proportions of sex between groups were compared by Chi-square test.
Age, haemoglobin, parasitaemia, white blood cell (WBC), glucose and red blood cell (RBC) levels between groups were compared using the Mann-Whitney U test. Odds ratios were calculated with 95% confidence intervals (CI) using logistic regression analyses. Statistical significance was set at P ≤ 0.05.

Demographic and laboratory parameters

The demographic and laboratory measurements of the study participants are presented in Table 1. A total of 574 children were enrolled into this study, comprising 137 with severe malaria anaemia (SMA; Hb < 5.0 g/dL with any parasite density) and 437 with non-severe malaria anaemia (Hb ≥ 5.0 g/dL with any density parasitaemia). Sex distribution was not significantly different across the study groups (P = 0.540). Those with SMA (11.3 (11.6) months) were comparatively younger than those with non-SMA (14.6 (10.5) months), P = 0.004. Haemoglobin concentrations were lower in children with SMA (4.2 (1.3)) relative to the non-SMA group (8.5 (2.2)), P < 0.001. White blood cell concentrations were higher in the SMA group (13.5 (8.8)) relative to the non-SMA group (12.1 (6.2)), P < 0.001. Furthermore, RBC counts were lower in children with SMA (2.4 (0.8)) when compared to the non-SMA group (3.5 (0.7)), P < 0.001. Glucose concentrations were, however, similar between the two groups, P = 0.584.

Influence of glucose-6-phosphate dehydrogenase on P. falciparum malaria infection in children under 3 years in Vihiga County, Kenya

Before determining the influence of G6PD on malaria infection, we performed prevalence analysis (Table 2). The data show that the normal G6PD genotype accounted for 360/574 (62.7%), the intermediate genotype for 201 (35.0%), and G6PD deficiency for 13/574 (2.3%). Glucose-6-phosphate dehydrogenase status showed a significant difference between the two study groups. Carriage of the intermediate status of G6PD was higher in the SMA group relative to the non-SMA group. Moreover, there were more individuals who were deficient in G6PD in the non-SMA group compared to the SMA group (P = 0.040).
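For orientation only: the study's odds ratios come from multivariate logistic regression in SPSS (adjusting for age and sex). A crude, unadjusted odds ratio with a Wald 95% CI can nevertheless be sketched from a 2×2 table; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Crude odds ratio from a 2x2 table and its Wald 95% CI:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: 10 exposed cases, 20 exposed controls,
# 5 unexposed cases, 40 unexposed controls -> OR = 4.0
or_, lo, hi = odds_ratio_wald_ci(10, 20, 5, 40)
```

An interval excluding 1 corresponds to P < 0.05, mirroring the significance criterion used in the results; the regression-based ORs in the tables additionally control for confounders, which this crude estimate does not.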
Further regression analysis is presented in Table 4. Importantly, the SS variant occurred in 3 children (2.2%) in the SMA group and 1 (0.2%) in the non-SMA group. These differences were statistically significant (P = 0.001). When multinomial logistic regression was done with AA as the reference group, AS showed a marginal association with protection against SMA (OR = 0.116, 95% CI = 0.012–1.124, P = 0.063). The SS group could not be run in the model to give a statistically meaningful result (Table 4).

Influence of co-inheritance of haemoglobin and G6PD genotypes on P. falciparum infection in children under 3 years in Vihiga County, Kenya

The distributions of the co-inheritance of the Hb and G6PD variations are presented in Table 5. Multinomial regression analysis of the co-inheritance of the sickle cell trait and the G6PD variations revealed that AA/intermediate, with reference to AA/normal G6PD, had increased risk of SMA (OR = 1.536, 95% CI = 1.007–2.343, P = 0.046). In addition, AS/normal showed protection against SMA (OR = 0.337, 95% CI = 0.156–0.915, P = 0.031). These findings show an important influence of co-inheritance on malaria disease outcome.

Discussion

Malaria remains one of the most important causes of morbidity and mortality in children living in endemic areas [1]. The highest mortality occurs in young children with under-developed protective immune mechanisms against the parasite. It has been previously shown that severe malaria anaemia accounts for 12% of all malaria-related deaths [34]. A minority of children with certain red blood cell variants have a natural biological advantage thought to partially hamper parasite growth [21]. Therefore, it was imperative to investigate the effect of ABO, G6PD and haemoglobin variants on highland malaria in Kenyan children resident in western Kenya. In the present study, we did not find a significant association between carriage of the blood group genotypes and severe malaria outcome in children.
This finding is, however, inconsistent with previous studies involving Ghanaian children [9], in which blood group O showed partial protection. It has been shown that blood group O protects against malaria through reduced rosetting [35] and by inducing high levels of anti-malarial IgG antibodies, which directly inhibit parasite invasion or growth in erythrocytes, or act indirectly by a mechanism involving cooperation between parasite-opsonising antibody and monocytes [36,37]. Certain studies have reported an absence of association between the ABO blood group system and P. falciparum malaria infection among children in Nigeria [8]. Other previous in vitro erythrocyte preference assays demonstrated that P. falciparum parasites prefer type O over type A erythrocytes [9]. These findings countered the known protective effect of group O against severe malaria, but additionally emphasised the complexities of host-pathogen interactions and the need for highly quantitative and scalable assays. However, recent meta-analysis data involving 1923 articles obtained from the databases demonstrated that ABO blood group may not affect susceptibility to asymptomatic and/or uncomplicated P. falciparum infection, but showed that blood group O primiparous women appeared to be more susceptible to active placental P. falciparum infection [38]. Additional meta-analyses demonstrated that the difference in the level of P.
falciparum parasitaemia was not significant among individuals with blood group A or non-O compared with blood group O. Furthermore, the difference in haemoglobin level among P. falciparum-infected individuals was also not significant between those with blood group A, B or AB versus those with blood group O [39]. These inconsistencies may be due to the geographic and ethnic distribution of the different ABO genetic blood polymorphisms associated with malaria protection in the regions [25]. Our current study did not show a significant association between the haemoglobin genotypes and malaria disease outcome. The findings of this study are consistent with previous studies reporting no association between haemoglobin AA, AS and SS genotypes and malaria infection among children in Nyando County, Kenya [14]. However, these findings are inconsistent with other previous findings on the Kenyan coast and in Siaya, which showed that children with haemoglobin AA have an increased risk of developing severe malarial disease relative to children with haemoglobin SS in Kenya [13,[40][41][42]. These differences may be attributed to the extreme variability of α-globin and β-globin malaria resistance genes, attributed to high amounts of standing variation or high mutation rates rapidly producing new adaptive alleles [43]. However, this phenomenon was not investigated in the current study. The findings of this study showed that the intermediate status of G6PD was associated with SMA. In addition, the carriage of other G6PD variations did not show any association with malaria disease outcome. It is also important to note that the genetic impact on infectious diseases is multigenic, and a single gene might not reveal the actual effect. This is partly consistent with previous studies in Kilifi, Kenya, and Tanzania, which showed protection with this particular variant in females but not males [18,25].
However, other studies in Mali involving children with severe malaria indicated that hemizygous males and, possibly, homozygous but not heterozygous females are protected from malaria [20], suggesting that the distribution of G6PD polymorphism is influenced by geographic area, sex and ethnic group, and that the genetic impact on infectious diseases is multigenic, such that no single gene might reveal the actual effect. We show in the current study that the co-inheritance of the haemoglobin AA variation and G6PD intermediate status is associated with risk of severe malaria, while carriage of the sickle cell trait and normal G6PD is associated with protection against severe malaria. This reveals the effect of one gene on another and that genes act in combination. This is an important finding whose mechanisms are worth investigating. It is important to outline the limitations of our study. Several red blood cell polymorphisms, including those linked to pyruvate kinase, complement receptor-1 and haemoglobinopathies such as thalassaemia traits, have a role in the clinical outcome of malaria [42] and might be present in the population. Even though a prospective study would have been important in determining the effect of inherited blood disorders on malaria, this cross-sectional study demonstrated associations between ABO, G6PD and haemoglobin type and malaria in Kenyan children.
Conclusion

This study reveals that blood groups do not have a significant influence on malaria disease outcome. Haemoglobin AS offers rather a marginal protection against severe malaria compared to normal haemoglobin. Carriage of the heterozygous G6PD type compared to normal G6PD is associated with risk of severe malaria anaemia. The co-inheritance of Hb and G6PD variations is an important predictor of malaria disease outcome in this region. The findings of this study underline the potential of the genome-wide association approach to provide candidates for the development of control measures against malaria in humans.