The functional generalized least squares (FGLS) regression model between two centered variables (E(y) = 0, E(X) = 0) states that

y = ⟨X, β⟩ + ϵ = ∫_T X(t) β(t) dt + ϵ,    (2)

where β ∈ L²(T) and ϵ is now a random vector with mean 0 and covariance matrix Ω = E(ϵϵ′). This model includes, as special cases, many other models, all of them based on Ω = Ω(ϕ) = σ² Σ(ϕ), where ϕ is the parameter associated with the dependence structure of Ω. Some classical examples are the following:

- Equi-correlated model: Var(ϵ_i) = σ² and Cov(ϵ_i, ϵ_j) = σ² ϕ, i ≠ j, with ϕ ∈ (−1, 1).
- Heteroskedastic block model: Ω = diag(σ_1² I_{n_1} | σ_2² I_{n_2} | ⋯ | σ_p² I_{n_p}) with n_1 + n_2 + ⋯ + n_p = n.
- AR(1) model: ϵ_i = ϕ ϵ_{i−1} + ε_i with |ϕ| < 1, E(ε_i) = 0, Var(ε_i) = τ² and Cov(ε_i, ε_j) = 0, i ≠ j, giving Ω = τ²/(1 − ϕ²) · (ϕ^{|i−j|})_{i,j=1}^{n}. The variance structure is also known for every ARMA(p,q) model.
- Spatial correlation model: Ω = σ² (ρ(d(s_i, s_j))), where s_i, s_j are the locations of observations i and j, and ρ is the spatial correlation function.
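As an illustration (not from the paper's own code), the AR(1) covariance matrix above can be built directly; only numpy is assumed:

```python
import numpy as np

def ar1_cov(n, phi, tau2=1.0):
    """Covariance of a stationary AR(1): Omega = tau^2/(1-phi^2) * (phi^|i-j|)."""
    idx = np.arange(n)
    return tau2 / (1.0 - phi**2) * phi ** np.abs(idx[:, None] - idx[None, :])

Omega = ar1_cov(4, 0.5)   # 4 x 4 covariance with phi = 0.5, tau^2 = 1
```

The same construction generalizes to any Σ(ϕ) whose entries are an explicit function of |i − j|.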
The preceding equations can be expressed in matrix notation using the evaluations on a grid {a = t_1 < ⋯ < t_M = b} of length M as X = CΨ and B = b′φ, where X is the n × M matrix with the evaluations of the curves on the grid, C is the n × Kx matrix with the coefficients of the representation in the basis, and Ψ is the Kx × M matrix with the evaluations of the basis elements on the grid. Similarly, B is the 1 × M matrix with the evaluation of the β parameter on the grid, φ is the Kβ × M matrix with the evaluations of the basis {φ_j} on the grid, and b is the vector of the coefficients of β in the basis.
With this notation, the terms {⟨X_i, β⟩}_{i=1}^{n} can be approximated by C Ψ φ′ b = Z b which, in essence, is a reformulation of a classical multivariate linear model that approximates the functional model. Here, the matrix Z accounts for all the approximation steps made with the available information: the bases chosen for X and β together with the selected numbers of components, Kx and Kβ.
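A minimal numeric sketch of the construction of Z, assuming simple polynomial bases and a Riemann quadrature weight for the inner product (none of these particular choices come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, Kx, Kb = 50, 100, 5, 4
grid = np.linspace(0, 1, M)

Psi = np.vstack([grid**k for k in range(Kx)])   # Kx x M: basis for X on the grid
phi = np.vstack([grid**k for k in range(Kb)])   # Kb x M: basis for beta on the grid
C = rng.normal(size=(n, Kx))                    # n x Kx: curve coefficients

w = (grid[-1] - grid[0]) / (M - 1)              # quadrature weight for <., .>
Z = C @ (w * Psi) @ phi.T                       # n x Kb design matrix: Z b ~ <X, beta>
```

The resulting Z has one row per curve and one column per β-basis element, exactly the shape a classical multivariate linear model expects.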
The GLS criterion can be employed to jointly estimate all the parameters associated with the model and can be expressed as:

min_{Kx, Kβ, b, ϕ} GLS = min_{Kx, Kβ, b, ϕ} (y − Zb)′ Σ(ϕ)^{−1} (y − Zb),

where the parameters Kx and Kβ related to the bases for X and β are typically chosen a priori, taking into account, for instance, the quality of the data and its representation on the discretization grid, or other considerations related to the data-generating process (smoothness, physical restrictions, interpretability, …). The direct minimization of GLS is usually not affordable even when only the parameters b and ϕ are considered. The generalized cross-validation (GCV) criterion has been widely used to this end despite not being the right criterion for dependent errors. We use the generalized correlated cross-validation (GCCV) as a better alternative. This criterion is an extension of GCV to the context of correlated errors proposed by Carmack et al. . It is defined as follows:

GCCV(Kx, Kβ, b, ϕ) = Σ_{i=1}^{n} (y_i − ŷ_{i,b})² / (1 − tr(G)/n)²,

where G = 2HΣ(ϕ) − HΣ(ϕ)H′ takes into account the effect of the dependence, the trace of G is an estimate of the degrees of freedom consumed by the model, and H is the hat matrix. The important advantage of this criterion is that it is rather easy to compute because it avoids computing the inverse of the matrix Σ. Even so, the complexity of the GLS criterion depends on the structure of Σ, which can sometimes be hard to minimize or computationally expensive.
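The GCCV formula can be sketched as follows (an assumed implementation using an ordinary least squares hat matrix; note that with Σ = I and a projection hat matrix, tr(G) reduces to tr(H), so GCCV coincides with GCV):

```python
import numpy as np

def gccv(y, Z, Sigma):
    """GCCV of Carmack et al.: sum of squared residuals over (1 - tr(G)/n)^2,
    with G = 2*H*Sigma - H*Sigma*H'. No inverse of Sigma is needed."""
    n = len(y)
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # hat matrix of the OLS fit
    G = 2 * H @ Sigma - H @ Sigma @ H.T
    resid = y - H @ y
    return np.sum(resid**2) / (1 - np.trace(G) / n) ** 2

rng = np.random.default_rng(0)
y = rng.normal(size=10)
Z = rng.normal(size=(10, 2))
val = gccv(y, Z, np.eye(10))   # with Sigma = I this equals plain GCV
```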
We implement the function fregre.gls (and predict.fregre.gls), which estimates (and predicts) the functional regression model with correlated errors; see S1 Appendix. The fregre.gls function calls the gls function of the nlme package, so the correlation structures allowed are those programmed by the original authors of the package .
The above GLS criterion is employed to jointly estimate all the parameters associated with the model: Kx, Kβ, b and ϕ. One possibility to alleviate the computational burden is to separate the estimation of the dependence structure (ϕ) from the parameters associated with the regression (Kx, Kβ, b) in an iterative way (called iGLS), as is done in multivariate regression. iGLS has been proven to be equivalent to classical GLS (see, for instance, ). Additionally, the method can consider more flexible dependence models (for instance, selecting the order of an AR instead of fixing it in advance) that avoid the risk of misspecifying the dependence structure. We extend this procedure to functional regression in the following iterative procedure (called functional iGLS):

1. Begin with a preliminary estimate ϕ̂ = ϕ_0 (for instance, ϕ_0 = 0). Compute Ŵ.
2. Estimate b_Σ = (Z′ŴZ)^{−1} Z′Ŵy.
3. Based on the residuals ê = y − Z b_Σ, update ϕ̂ = ρ(ê) (and consequently Ŵ), where ρ depends on the dependence structure chosen.
4. Repeat steps 2 and 3 until convergence (small changes in b_Σ and/or ϕ̂).
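The iterative scheme above can be sketched for AR(1) errors (a toy implementation for illustration, not the fda.usc code; the lag-1 autocorrelation of the residuals plays the role of ρ):

```python
import numpy as np

def igls_ar1(y, Z, max_iter=50, tol=1e-8):
    """Functional iGLS sketch with AR(1) errors: alternate the GLS estimate
    of b with a moment estimate of phi from the residuals."""
    n = len(y)
    idx = np.arange(n)
    phi, b_old = 0.0, None                       # step 1: phi_0 = 0
    for _ in range(max_iter):
        Sigma = phi ** np.abs(idx[:, None] - idx[None, :])  # AR(1) correlation
        W = np.linalg.inv(Sigma)                 # step 2: weight matrix
        b = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y)
        e = y - Z @ b                            # step 3: residuals
        phi = e[:-1] @ e[1:] / (e @ e)           # lag-1 autocorrelation
        if b_old is not None and np.max(np.abs(b - b_old)) < tol:
            break                                # step 4: convergence
        b_old = b
    return b, phi

# Toy data with known coefficients and AR(1) noise (phi = 0.6)
rng = np.random.default_rng(0)
n = 300
Z = rng.normal(size=(n, 2))
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + 0.1 * rng.normal()
y = Z @ np.array([1.0, -2.0]) + eps
b_hat, phi_hat = igls_ar1(y, Z)
```

Replacing the lag-1 autocorrelation with any user-defined ρ(ê) yields the flexibility discussed below.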
The estimation of the functional parameter β̂(t) through b_Σ is done in step 2, separately from the estimation of the dependence structure ρ in step 3. This allows the flexibility of including any type of dependence structure designed by the user (for instance, with particular restrictions) that is typically not included in the usual packages (like nlme).
We implement the function fregre.igls (and predict.fregre.igls), which estimates (and predicts) the functional regression model with correlated errors using the iterative scheme (iGLS). We have developed the following two simple structures for Σ in the fda.usc package to fit serial dependence:

- In the iGLS-AR(p) scheme, the procedure automatically fits, in each iteration, the autoregressive order p of the errors defined by ϵ_i = Σ_{j=1}^{p} ϕ_j ϵ_{i−j} + ε_i, where ε_i ∼ N(0, σ²).
- In the iGLS-ARMA(p,q) scheme, the user must specify the orders p and q of the autoregressive moving-average (ARMA(p,q)) model, which fits the serial error dependence defined by ϵ_i = Σ_{j=1}^{p} ϕ_j ϵ_{i−j} + Σ_{j=1}^{q} θ_j ε_{i−j} + ε_i, where ε_i ∼ N(0, σ²). This structure is also provided by the nlme package, but with a restriction: all parameters on the AR side must be smaller than one in absolute value. This rule clearly does not cover all the possible stationary models of that order (it only does for ARMA(1,q)).
For these structures, we have used the basic functions ar and arima of the stats package to fit the AR(p) and ARMA(p,q) models, respectively. The users can define their own functions or use other well-known functions that exactly fit the situation at hand.
We used the two functional linear models (FLM) included in to compare the effect of the temporal dependence. Specifically, we generated nB = 1000 replicates of size n = 100 from the FLM y = ⟨X, β⟩ + ϵ, where X is a Wiener process observed on a grid of length M = 100 in the interval [0, 1] and ϵ is an AR(1) process with autoregressive parameter ϕ and variance Var(ϵ) = snr · Var(⟨X, β⟩), where snr is the signal-to-noise ratio. For each sample, ten future values, denoted y_{n+h}, h = 1, …, 10, were generated to check the predictive ability of the proposal.
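The data-generating mechanism can be sketched as follows (with a hypothetical β(t) = sin(2πt) for illustration only; the paper's actual β functions are those of the cited reference):

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, phi, snr = 100, 100, 0.5, 0.1
t = np.linspace(0, 1, M)

# Wiener process on the grid: cumulative sum of Gaussian increments
X = np.cumsum(rng.normal(scale=np.sqrt(1 / M), size=(n, M)), axis=1)

beta = np.sin(2 * np.pi * t)          # hypothetical beta, for illustration only
signal = X @ beta / M                 # <X_i, beta> by a Riemann sum
sd = np.sqrt(snr * np.var(signal))    # Var(eps) = snr * Var(<X, beta>)

# AR(1) errors with the target marginal standard deviation
eps = np.zeros(n)
eps[0] = rng.normal(scale=sd)
for i in range(1, n):
    eps[i] = phi * eps[i - 1] + rng.normal(scale=sd * np.sqrt(1 - phi**2))

y = signal + eps
```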
Scenario (a) corresponds to a β parameter that has an exact representation with respect to the first three theoretical principal components of the Wiener process. In contrast, the β parameter of scenario (b) cannot be well represented using a small number of theoretical principal components. In both scenarios, we used two types of basis for representing X and β: the empirical principal components basis derived from the sample (FPC) and cubic B-splines (BSP) at equispaced knots in [0, 1]. The same basis was employed for both representations, i.e. in this case Ψ = φ and Kx = Kβ. The optimal number of components (Kβ) was selected using the GCCV criterion in the range 1–8 for FPC and 5–11 for BSP.
Tables 1 to 4 summarize the results for the first model (a), showing, respectively, the average number of components selected using the GCCV criterion, the mean square error (MSE) of the estimation of β, the MSE of the estimation of ϕ, and the mean square prediction errors (MSPE) for horizons 1, 5 and 10. In these results, LM denotes the estimation through a classical functional linear model, whereas GLS and iGLS correspond, respectively, to the functional GLS and functional iGLS methods (described in the Methodology section for AR(1) dependent errors).
Table 1 shows an average number of selected FPC components between 3 and 4, with a slight tendency toward lower values as the snr grows. The average number of B-spline basis elements was between 6 and 7, although in this case we have no theoretical quantity to compare with. There seem to be no trends with respect to the ϕ values. Table 2 clearly shows the advantage of the PC estimator over the B-splines, because the estimation error using B-splines typically doubles the error using PCs. In this table, we can also see the improved estimates of the GLS and iGLS methods over LM, especially as ϕ grows. The same pattern appears in Table 3 for the mean square error (MSE) of the ϕ parameter, with better results as the dependence grows. Finally, Table 4 shows the mean square prediction errors (MSPE) for different lags, with a clear improvement of the GLS procedures, especially for large ϕ and shorter lags. With respect to the predictive ability of PC versus B-splines, the results show that both methods are almost equivalent, with minor differences along the table.
Table 5 summarizes the results of Model (a), but replacing the AR(1) with an AR(2) error process and using the FPC estimation (the results with BSP are similar). In all these models, the minimum mean square prediction error is achieved with the iGLS-AR(2) model, in which an AR(2) is estimated in each iteration of the algorithm. It is followed very closely by the iGLS-AR(p) model, with an automatic choice of p at each iteration.
The first AR(2) process (ϕ_1 = 0.5, ϕ_2 = 0.45) is roughly like an AR(1) process with ϕ ≈ 0.95, which can explain why the results of the iGLS-AR(1) model are so close to the optimum estimated by the iGLS-AR(2). The second AR(2) process (ϕ_1 = 1.4, ϕ_2 = −0.45) was selected to assess the misspecification error. Although the use of an AR(1) process in the GLS and iGLS models improves on the LM model, these results are far from the best ones obtained with an AR(2) specification. The autocorrelation function of the AR(2) process shows a periodicity pattern that cannot be approximated by an AR(1) process. Finally, the third AR(2) (ϕ_1 = 1.5, ϕ_2 = −0.75) shows the effect of the misspecification at a later horizon, h = 5, where the results for an AR(1) specification are even worse than those of the LM model. Again, this is caused by the periodicity pattern of the AR(2) due to the negative sign of ϕ_2. In all cases, the iGLS-AR(p) specification is rather close to the optimum, with the important advantage that it avoids fixing the form of the dependence structure in advance. Finally, the GLS-AR(2) scenario was not considered in this table because the gls function of the nlme package does not allow the estimation of any AR(2) parameter greater than 1 in absolute value. This is an empirical rule in the package intended to avoid non-stationary processes; however, the three AR(2) specifications here are clearly stationary, yet only the first one can be estimated using the gls function.
Galicia is a region of 29,574 km² located in northwest Spain with a population of 2.8 million people. We analyzed the weekly incidence of reported cases of influenza in Galicia between 2001 and 2011 for each of the 53 Galician counties: Rate_{n,s} = log(cases_{n,s} × 100000 / pop_{n,s}) for county s and week n. The population (pop) was obtained from the Statistical Institute of Galicia (IGE, http://www.ige.eu) and the number of influenza cases (cases) from the Health Service of Galicia (www.sergas.es).
The influenza season in Galicia usually begins in week 40 and ends in week 20 of the following year. The goal is to predict the incidence of influenza for the following two weeks (n + 1 and n + 2) for each of the s regions with the available information:

- Rate_{n,s}(w): weekly influenza rate for the last 13 weeks, w ∈ [n − 12, n].
- Temp_{n,s}(t): daily temperature in degrees Celsius (°C) for the last 14 days, t ∈ [n − i/7, n], for i = 14, …, 1.
- Temp.th_{n,s}(t): Dushoff et al. defined cold as the number of degrees below a threshold temperature: Temp.th_{n,s} = min(Temp_{n,s} − thres, 0) with thres = 10°C. The functional variable is defined on t ∈ [n − i/7, n], for i = 14, …, 1.
- SR_{n,s}(t): daily solar radiation (W/m²) for the last 14 days, t ∈ [n − i/7, n], for i = 14, …, 1.
- Hum_{n,s}(t): relative humidity for the last 14 days, t ∈ [n − i/7, n], for i = 14, …, 1.
To represent the above functional covariates, a B-spline basis of five components was used in all cases (based on the authors' previous experience with this type of data). The prediction for the overall influenza rate is constructed by appropriately aggregating the predictions of the s regions, which are made independently, i.e. β and ϕ are estimated only with the data of each county. Fig 1 shows the overall influenza rate, which normally grows in late autumn and reaches a peak at the beginning of the calendar year. These plots clearly show the large difference between reported influenza cases in winter and summer. The influenza rate of each county shows a similar pattern but with small differences in the peak epidemic period. We downloaded meteorological data from the regional Weather Service of Galicia (http://www.meteogalicia.es/). S1 Appendix describes the supplementary material (functions, libraries, source data and code) and S1 File contains the code and dataset used in this study.
Distance correlation R is a measure of dependence between random vectors introduced by Székely et al. . The distance correlation satisfies 0 ≤ R(X,Y) ≤ 1 and its interpretation is similar to that of the squared Pearson correlation. However, the advantages of the distance correlation over the Pearson correlation are that R(X,Y) is defined for arbitrary finite dimensions of X and Y and that R characterizes independence, i.e. R(X,Y) = 0 ⇔ X, Y are independent. Recently, Lyons provided conditions for the application of the distance correlation in functional spaces. This measure therefore seems to be a good indicator of the correlations between functional and multivariate variables, useful for designing a functional linear model (for instance, avoiding covariates with high collinearity). The empirical distance correlation R_n(X,Y) can be easily computed as

R_n²(X,Y) = V_n²(X,Y) / √(V_n²(X) V_n²(Y)),

where V_n(X,Y) is the empirical distance covariance defined by

V_n²(X,Y) = (1/n²) Σ_{k,l=1}^{n} A_{kl} B_{kl},

with A_{kl} = a_{kl} − ā_{k·} − ā_{·l} + ā_{··} and B_{kl} = b_{kl} − b̄_{k·} − b̄_{·l} + b̄_{··}, where a_{kl} = ‖X_k − X_l‖, b_{kl} = ‖Y_k − Y_l‖, k, l = 1, …, n, and the dot subscript denotes that the mean is computed over the index it replaces. Similarly, V_n(X) is the non-negative number defined by V_n²(X) = V_n²(X,X) = (1/n²) Σ_{k,l=1}^{n} A_{kl}².
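A compact sketch of the empirical distance correlation for univariate samples (an assumed implementation for illustration; in practice dedicated packages such as energy in R or dcor in Python would be used):

```python
import numpy as np

def _A(x):
    """Double-centered pairwise distances: A_kl = a_kl - a_k. - a_.l + a_.."""
    d = np.abs(x[:, None] - x[None, :])
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def dist_corr(x, y):
    """Empirical distance correlation R_n of Szekely et al. (univariate case)."""
    A, B = _A(x), _A(y)
    v2_xy = (A * B).mean()                        # V_n^2(X, Y)
    v2_x, v2_y = (A * A).mean(), (B * B).mean()   # V_n^2(X), V_n^2(Y)
    return np.sqrt(v2_xy / np.sqrt(v2_x * v2_y))
```

For a perfectly linear relation the statistic equals 1, while independence pushes it toward 0.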
The distance correlation R was used to select the information relevant to the prediction of the influenza rate, not only with respect to the response but also among the possible covariates, to avoid collinearities. The results are shown in Table 6. Relative humidity, Hum_{n,s}(t), has the lowest correlation with the influenza rate {Rate_{n+1,s}, Rate_{n+2,s}}, so its contribution to the response seems negligible (a model with Hum_{n,s}(t) never improves on one without it). Besides, the distance correlation values are useful for designing models that avoid closely related covariates (for instance, Temp_{n,s}(t) and Temp.th_{n,s}(t) share the same information). With these considerations, the number of different models to be tested is considerably reduced.
A rolling analysis was employed to compare the models in a predictive scenario. Initially, a series of length n = 150 weeks in s = 53 counties is used to predict the influenza rate in the next two weeks, n + 1 and n + 2. The rolling is then performed along the epidemic periods (J = 28 weeks, from week 40 to week 15 of the next year) by computing the mean square prediction error:

MSPE = (1/J) Σ_{j=n+1}^{n+J} Σ_{r=1}^{s} w_r (Rate_{j,r} − R̂ate_{j,r})²,

where w_r is the weight (in terms of pop) of county r. For simplicity, the GLS setting is only considered with an AR(1) specification of the dependence structure, whereas iGLS is combined with AR(1), AR(2) and AR(p).
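The weighted MSPE above can be computed directly (a sketch with hypothetical array shapes; weeks in rows, counties in columns):

```python
import numpy as np

def weighted_mspe(actual, predicted, w):
    """MSPE = (1/J) * sum_j sum_r w_r * (Rate_jr - hat(Rate)_jr)^2.
    actual, predicted: J x s arrays; w: length-s vector of county weights."""
    J = actual.shape[0]
    return float(np.sum(w[None, :] * (actual - predicted) ** 2) / J)

# Toy check: unit errors everywhere and weights summing to 1 give MSPE = 1
err1 = weighted_mspe(np.ones((2, 3)), np.zeros((2, 3)), np.array([0.2, 0.3, 0.5]))
```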
Table 7 summarises the MSPE for the influenza season. The best result for each set of covariates is shaded in light gray and the overall winner for each horizon is in bold font. In the models with the predictor Rate_{n,s}(w) (rows (a), (e), (f) and (g)), the gain in MSPE of the functional GLS models (GLS-AR(1), iGLS-AR(1), iGLS-AR(2) and iGLS-AR(p)) over the functional LM models is relatively small, because Rate_{n,s}(w) partly accounts for the temporal dependence. In some sense, including the predictor Rate_{n,s}(w) in the model is akin to estimating the dependence structure. The models without the influenza rate (rows (b), (c), (d) and (h)) start from a worse result in the LM setting, but become competitive (or even the best ones) once the serial dependence is included. The difference between the GLS and iGLS settings is that the latter allows more flexibility, not only in defining a different dependence structure for each county, but also in the estimation of that dependence. This is particularly useful as the forecast horizon increases. The GLS setting must fix the order of the AR in advance and, when the number of regions is high, it is a strong assumption to consider the order of the serial dependence model fixed for all of them. For n + 1, the best models are (b) and (c) with the GLS-AR(1) and iGLS-AR(1) specifications, using the temperature curve of the last 14 days as the predictor and a simple AR(1) structure for the adjustment of the residuals. The best autoregressive model estimated by the iGLS-AR(p) procedure has been, in most cases, of order 1. For n + 2, an AR(1) or AR(2) model may be insufficient in some regions; the best result is achieved with the iGLS-AR(p) procedure, which offers greater flexibility by estimating a different order p for each county.
Models (b) and (c), in the GLS setting, present slight differences. Of course, it seems better to use the temperature itself than only its threshold with respect to a level. Yet the small differences between these two models suggest that the evolution of temperatures when it is cold is crucial for explaining the influenza rate. Model (h) does not improve on the results of models (b) and (c) in terms of MSPE; in fact, it worsens them, probably due to collinearity between Temp_{n,s}(t) and SR_{n,s}(t). Among models (b), (c) and (d), the first two are preferable because they are easier to apply and interpret. Besides, in model (d) the measures of solar radiation usually depend on specialised devices, whereas the covariates related to temperature are readily available using standard (and cheaper) equipment. Finally, for short horizons, it seems unnecessary to specify high-order autoregressive models, even though the improvement can be about 5% for larger lags.
Indeed, it is possible to interpret the β̂ parameter associated with the models. To this end, we computed, for models (a) and (b), the quantities v_i = ⟨X_i, β̂⟩, which are the contributions of each curve to the influenza rate. If we classify the curves into groups according to these values and average them, we can see the pattern of the curves that have the most (or least) influence on the incidence rate. This is done in Fig 2, which shows the pattern of curves that most contribute to increasing (red scale) and decreasing (blue scale) the influenza rate. In particular, we split the data with respect to the quartiles of v_i and assigned (from bottom to top) the colors blue, sky blue, red and dark red. This allows evaluating the contribution of these curves to the response. As expected, the pattern of an intense increase of the influenza rate in the last weeks is plotted in dark red (see left panel of Fig 2), which leads to predicting high influenza rates. On the other hand, a decreasing pattern is plotted in dark blue, meaning that this type of pattern corresponds to low influenza rates. The same reasoning applies to model (b) (see right panel of Fig 2). Curves of temperature below 7°C are plotted in dark red, meaning that this pattern yields high predicted rates, whereas the curves around 19°C (plotted in dark blue) lead to almost zero influenza rates. The dark red line corresponds to the pattern of the curves that most contribute to increasing the estimated incidence rate: at week w = 1 it starts at v_{q4} ≈ 3.3 which, undoing the logarithmic transformation, represents an incipient incidence of 27.1 cases per 100,000 population, and rises monotonically until the last register (w = 13), where it takes the value v_{q4} ≈ 4.6, implying an increase to 99.5 cases per 100,000 population.
Shape of rate curves (left) and temperature threshold curves (right) categorized by their projection value v_X = ⟨X, β̂⟩. The groups are constructed as a function of the quantile of v_X (q(v_X)): q(v_X) ∈ [0, 0.25] (dark blue line), q(v_X) ∈ (0.25, 0.50] (blue line), q(v_X) ∈ (0.50, 0.75] (red line) and q(v_X) ∈ (0.75, 1] (dark red line).
Finally, as an illustration, Fig 3 shows the prediction of the raw rate (cases × 100000/pop) during the 2010–11 flu epidemic season for two counties (Vigo and Santiago), obtained by reversing the log transform of the response in the preceding models. In both counties, the peak is reached at week 2011–5 (first week of February). The two considered horizons (t + 1 and t + 2) are shown by rows. In each case, the raw rate is compared with the prediction obtained one or two weeks before with the models LM, Rate(w); GLS-AR(1), Rate(w); GLS-AR(1), Temp(t) and GLS-AR(p), Temp(t). Focusing on t + 1, the comparison between the two dependence structures (AR(1) and AR(p); green and blue lines, respectively) associated with Temp(t) shows a big difference for Vigo but not for Santiago. This suggests that for Santiago an AR(1) is enough, whereas for Vigo a general AR(p) specification seems more adequate. With respect to the models including Rate(w) (red and gray lines), the model using GLS reacts faster than the LM model, providing better predictions of the peak. Predictions for medium or low intensities (below 125) are quite similar. For t + 2, no clear patterns appear, although the GLS-AR(p) specification seems to do slightly better.
Prediction of the raw rate (cases × 100000/pop) for two counties (Vigo and Santiago) in Galicia using four models: LM, Rate(w); GLS-AR(1), Rate(w); GLS-AR(1), Temp(t) and GLS-AR(p), Temp(t). In each case, the raw rate is compared with the prediction provided one week before (t + 1, first row) and two weeks before (t + 2, second row). The counties are separated by columns.
This paper extends the GLS model from the multivariate to the functional framework, thereby allowing the estimation of functional regression models with temporal or spatial error covariance structures in a simple way. It proposes an iterative version of the GLS estimator that can help to model very complicated dependence structures. This procedure (called iGLS) is much simpler than GLS in terms of the optimization to be performed but, of course, it may take longer due to the iterations. However, iGLS may be the only option when the sample size or the dimension of the parameter grows and the joint optimization performed by GLS is not affordable (in terms of complexity or memory consumption).
A simulation study shows that the GLS estimators improve on the classical approach: they provide better estimates of the parameters associated with the regression model and extremely good results from the predictive point of view, especially for short lags.
The GLS procedures have been applied to the prediction of the influenza rate using readily available functional variables. These kinds of models are extremely useful to health managers for allocating resources in advance of an epidemic outbreak. The estimation of the dependence allows simpler models to achieve good results while maintaining a nice interpretation. In particular, the simple model (b), which only uses the easy-to-measure covariate Temp_{n,s}(t), shows that influenza may increase after a cold wave with daily temperatures around 7°C for two weeks, which is consistent with much of the literature on influenza. The models also show that the estimated temporal dependence of the influenza virus is strong and stable over time.
In our examples, we estimated the error structure with simple AR(p) models (mostly AR(1) or AR(2)), obtaining a good fit for the time dependence. We also tried other ARMA models and obtained similar results. Our method can additionally be used to explore more complex dependence structures, such as heterogeneous covariances by county or even spatio-temporal modelling. The iGLS procedure allows for more simplicity and flexibility in the estimation of the dependence structure at the cost of a slightly heavier computational load. In particular, in the example provided, iGLS allows us to specify a general dependence structure that can be adapted to every county, rather than considering the same model for all counties or designing by hand the best structure for each one.
The prognosis of acute pulmonary embolism (PE) is highly variable [1, 2], and various clinical and radiological parameters have been evaluated to help physicians risk-stratify patients with acute PE [3–5]. Given that computed tomography pulmonary angiography (CTPA) has become the method of choice for diagnosing acute PE, risk stratification based on initial imaging would combine diagnosis and prognostic assessment into a single test. To assess short-term prognosis in patients with PE, several CTPA findings have been evaluated for their predictive value [6–9]. The best-validated scoring system taking into account the embolic burden assessed by CTPA is the computed tomography obstruction index (CTOI) . The CTOI is calculated by adding up the number of occluded segmental arteries after assigning a weighting factor, depending on the degree of obstruction, to each occluded artery . Several studies evaluated the score for its prognostic ability, with conflicting results [11–16]. Most validation studies were limited by a retrospective or single-center design [11–13] or a small sample size [14, 15].
Besides the CTOI, the most commonly used CTPA parameter for assessing short-term prognosis is the right ventricular (RV) to left ventricular (LV) diameter ratio [11, 13, 17, 18]. However, low likelihood ratios (LR) limit its ability to risk-classify patients with PE (positive LR 1.27, negative LR 0.71) .
Even though advanced age is associated with an increase in the incidence and overall mortality of PE [19–22], to our knowledge, no study has specifically examined the association between CTPA findings and mortality in elderly patients. Given that co-morbid conditions are more frequent in elderly patients and that cardiac dimensions alter with increasing age even without underlying diseases , risk stratification by CTPA findings may differ for elderly patients. We therefore aimed to (1) prospectively evaluate the prognostic performance of the CTOI and the RV/LV diameter ratio in a multicenter cohort of elderly patients with acute PE and (2) compare the prognostic accuracy of the CTOI, the RV/LV diameter ratio, and the Pulmonary Embolism Severity Index (PESI), a validated clinical prognostic score for acute PE .
The study was conducted between September 2009 and December 2013 as part of the SWIss venous Thromboembolism COhort (SWITCO65+), a prospective, multicenter cohort study to assess long-term medical outcomes in elderly patients with acute symptomatic venous thromboembolism (VTE) from all five university and four high-volume non-university hospitals in Switzerland . Consecutive patients aged 65 years or older with objectively diagnosed, symptomatic VTE were identified in the in- and outpatient services of all participating study sites. For this study, we only considered patients with objectively diagnosed, acute symptomatic PE, defined as positive CTPA in patients with acute chest pain, new or worsening dyspnea, hemoptysis, or syncope . Patients with PE who did not undergo CTPA were excluded from the present analysis.
Exclusion criteria were inability to provide informed consent (e.g., severe dementia), conditions incompatible with follow-up (e.g., terminal illness, geographic inaccessibility), thrombosis at a site other than the lower limb, catheter-related thrombosis, or previous enrolment in the cohort.
Informed consent was obtained from all participants. The ethics committee at each participating center approved the study. The approving ethics committees were the "Commission cantonale d'éthique de la recherche sur l'être humain Vaud" (site of Lausanne), "Commission cantonale d'éthique de la recherche Genève" (site of Geneva), "Kantonale Ethikkommission Bern" (site of Bern), "Kantonale Ethikkommission Zürich" (site of Zurich), "Ethikkommission Nordwest- und Zentralschweiz" (sites of Basel, Lucerne and Baden), "Ethikkommission des Kantons Thurgau" (site of Frauenfeld) and "Ethikkommission des Kantons St. Gallen" (site of St. Gallen). A detailed description of the study methods has been published previously .
Trained study nurses prospectively collected baseline demographics (age and gender), co-morbid conditions (active cancer, arterial hypertension, diabetes mellitus, acute or chronic heart failure, chronic pulmonary disease, cerebrovascular disease, chronic liver disease, chronic renal failure), history of VTE, type of PE (provoked versus cancer-related versus unprovoked), vital signs (mental status, heart rate, blood pressure, temperature, respiratory rate, and arterial oxygen saturation), routine laboratory findings (hemoglobin and serum creatinine), concomitant antiplatelet therapy and VTE-related treatment using standardized data collection forms.
Follow-up included one telephone interview and two surveillance face-to-face evaluations during the first year of study participation and then semi-annual contacts, alternating between face-to-face evaluations (clinic visits or home visits in house-bound patients) and telephone calls as well as periodic reviews of the patient’s hospital chart.
CTPA was performed in each participating study center, recorded on compact discs, and anonymously sent to Lausanne University Hospital where two certified radiologists evaluated them independently. Disagreement was resolved by consensus. The radiologists were blinded to patients’ baseline characteristics and treatments.
To calculate the CTOI, the arterial tree of each lung was considered to have 10 segmental arteries (three in the upper lobes, two in the middle lobe and in the lingula, and five in the lower lobes). The presence of an embolus in a segmental artery was scored 1 point. Central or paracentral emboli were scored a value equal to the number of segmental arteries arising distally. Depending on the degree of vascular obstruction, a weighting factor was assigned to each value (0, no thrombus; 1, partial occlusion; 2, total occlusion). An isolated subsegmental embolus was considered a partially occluded segmental artery and was assigned a value of 1. Thus, the CTOI could vary from 1 to 40 points per patient. The percentage of vascular obstruction was calculated by dividing the patient's score by the maximal total score and multiplying the result by 100. Based on the percentage of vascular obstruction, patients were then divided into three groups (<15% versus 15–50% versus >50%).
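The scoring rules above can be sketched as follows (an assumed helper for illustration, not the study's software; each clot contributes its number of distal segmental arteries times an occlusion weight):

```python
def ctoi_percent(distal_segments, occlusion_weights):
    """CT obstruction index as a percentage of the maximal score of 40
    (2 x 20 segmental arteries). Weights: 0 none, 1 partial, 2 total."""
    score = sum(v * w for v, w in zip(distal_segments, occlusion_weights))
    return 100.0 * score / 40.0

def obstruction_group(pct):
    """Three groups used in the study: <15%, 15-50%, >50%."""
    if pct < 15:
        return "<15%"
    return "15-50%" if pct <= 50 else ">50%"

# One totally occluded segmental artery: 2/40 = 5% obstruction
p1 = ctoi_percent([1], [2])
# A central clot feeding 5 segmental arteries, totally occluded: 25%
p2 = ctoi_percent([5], [2])
```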
The PESI is a validated prognostic score for patients with acute PE and comprises 11 easily available clinical variables, including patient demographics, comorbid diseases, and vital signs . Based on patient demographics and the first available baseline clinical data obtained by chart review, we determined the presence of the prognostic variables comprising the PESI. Whenever necessary, missing values were assumed to be normal. This strategy is widely used in the clinical application of prognostic models and reflects the methods used in the original derivation of the PESI [4, 27].
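For orientation, the PESI point assignment can be sketched as below. The weights and class cut-offs are the commonly published ones from the original PESI derivation study (Aujesky et al.), not values reported in this cohort, and all names are illustrative:

```python
# Hypothetical sketch of PESI scoring; weights are the commonly
# published ones from the original derivation, used here for illustration.
def pesi_score(age, male, cancer, heart_failure, chronic_lung_disease,
               pulse_ge_110, sbp_lt_100, rr_ge_30, temp_lt_36,
               altered_mental_status, sao2_lt_90):
    score = age  # age in years contributes its own value in points
    score += 10 if male else 0
    score += 30 if cancer else 0
    score += 10 if heart_failure else 0
    score += 10 if chronic_lung_disease else 0
    score += 20 if pulse_ge_110 else 0
    score += 30 if sbp_lt_100 else 0
    score += 20 if rr_ge_30 else 0
    score += 20 if temp_lt_36 else 0
    score += 60 if altered_mental_status else 0
    score += 20 if sao2_lt_90 else 0
    return score

def pesi_class(score):
    # Risk classes I-V as published with the original PESI
    if score <= 65:
        return "I"
    if score <= 85:
        return "II"
    if score <= 105:
        return "III"
    if score <= 125:
        return "IV"
    return "V"
```

Note how the "missing values assumed normal" strategy maps naturally onto this sketch: an unknown vital sign simply contributes 0 points.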
The primary outcome was overall mortality within 90 days of PE diagnosis. We assessed the clinical outcomes using patient or proxy interviews, interview of the patient’s primary care physician, and/or hospital chart review. A committee of three blinded, independent clinical experts adjudicated the cause of death. Death was judged to be a definite fatal PE if it was confirmed by autopsy, or if death followed a clinically severe PE, either initially or after an objectively confirmed recurrent event. Death in a patient who died suddenly or unexpectedly was classified as possible fatal PE. Final classification was made on the basis of the full consensus of this committee.
Secondary outcomes were PE-related mortality within 90 days, the recurrence of an objectively confirmed, symptomatic VTE during the whole follow-up, defined as a fatal or new non-fatal PE or new deep vein thrombosis , and the length of hospital stay (LOS) of patients who were hospitalized for the index PE.
We compared baseline and procedural characteristics of patients by level of the CTOI using the chi-squared test and the non-parametric Kruskal-Wallis rank test as appropriate. We compared the cumulative overall mortality, PE-related mortality, and recurrence of VTE among patients with different levels of the CTOI using Kaplan-Meier curves and the log-rank test.
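The Kaplan-Meier estimator behind these survival curves can be sketched in plain Python. This is a minimal didactic version, not the analysis code used in the study:

```python
# Minimal Kaplan-Meier estimator: at each distinct event time t,
# multiply the running survival probability by (1 - deaths_t / at_risk_t).
def kaplan_meier(times, events):
    """Return [(event_time, survival_probability), ...].

    times:  follow-up time for each patient
    events: 1 if the patient died at that time, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        at_t = [e for tt, e in data if tt == t]
        deaths = sum(at_t)
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(at_t)  # both deaths and censorings leave the risk set
        idx += len(at_t)
    return curve
```

For example, with follow-up times [1, 2, 3, 4] and event indicators [1, 0, 1, 0], survival drops to 0.75 at t = 1 and to 0.375 at t = 3 (the censored patient at t = 2 shrinks the risk set without producing a step).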
We examined associations of the CTOI, the RV/LV diameter ratio, and the PESI with the time to death using Cox-regression with robust standard errors. For VTE recurrence and PE-related death, we used competing risk regression according to Fine and Gray , accounting for non-PE-related death as a competing event. The strength of the association is reflected by the sub-hazard ratio (SHR), which is the ratio of hazards associated with the cumulative incidence function in the presence of a competing risk.
Due to low event numbers, only a minimal adjustment could be performed. Associations of the CTOI and the RV/LV diameter ratio with clinical outcomes were adjusted for provoked PE, the PESI, and anticoagulation treatment as a time-varying covariate. Associations of the PESI with clinical outcomes were adjusted for provoked VTE and anticoagulation treatment as a time-varying covariate.
We compared the discriminative power of the CTOI, the RV/LV diameter ratio, and the PESI to predict mortality and VTE recurrence using Harrell’s C concordance statistic. We assessed the association of the CTOI, the RV/LV diameter ratio, and the PESI with LOS among patients admitted due to the index PE event using a shared-frailty lognormal survival model accounting for variation of LOS between study sites. LOS was censored if a patient died in hospital. Associations were adjusted for age, gender, type of PE (unprovoked versus provoked versus cancer-related), body mass index (BMI), prior VTE, central PE, concomitant deep vein thrombosis (DVT), arterial hypertension, diabetes mellitus, heart failure, chronic pulmonary disease, cerebrovascular disease, chronic liver disease, chronic renal failure, the PESI, and antiplatelet therapy/non-steroidal anti-inflammatory drugs.
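Harrell's C concordance statistic used for these comparisons can be sketched as follows. This is a didactic pure-Python version (function and variable names are illustrative); dedicated survival-analysis libraries implement the same idea more efficiently:

```python
# Harrell's C for right-censored data: among comparable pairs (subject i
# experienced the event before subject j's last follow-up), count the
# fraction in which the earlier event carries the higher risk score;
# tied scores count as 0.5.
def harrells_c(risk_scores, times, events):
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # only an observed event can anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 1.0 means the score perfectly orders events (as for PESI-like good discrimination), 0.5 is chance level, and values below 0.5 (such as the 0.43 and 0.39 reported here) indicate ordering no better than, or opposite to, chance.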
Of the 316 consenting patients with available CTPA, we excluded 24 patients with inadequate CTPA quality and one patient with cancer-related non-thrombotic obstruction, leaving a final study sample of 291 patients with acute PE. Analyzed patients had a median age of 75 years (interquartile range [IQR] 69–81), 138 (47%) were women, 69 (24%) had provoked and 45 (15%) had cancer-related VTE (Table 1). Overall, 29%, 55%, and 16% of patients had a CTOI of <15%, 15–50%, and >50%, respectively. Patients with a higher CTOI were significantly younger (P = 0.01), more often had an unprovoked PE (P = 0.047), and had a higher BMI (P = 0.01), a higher rate of thrombolysis (P <0.001), a longer duration of anticoagulation (P = 0.003) and a higher RV/LV diameter ratio (P <0.001) (Table 1). Median follow-up was 31 months (IQR 24–42 months). Overall, 5% (15/291) of patients died within 90 days (6 from definite or possible PE). During the whole follow-up, 12% (34/291) of patients had a recurrent VTE (16 patients within 12 months). Median LOS was 8 days (IQR 5–12 days).
Abbreviations: CTOI, computed tomography obstruction index; PE, pulmonary embolism; DVT, deep vein thrombosis; BMI, body mass index; VTE, venous thromboembolism; AC, anticoagulation; PESI, Pulmonary Embolism Severity Index; RV, right ventricular; LV, left ventricular.
In the Kaplan-Meier analysis, the cumulative incidence of overall as well as PE-related death did not differ significantly between the different CTOI strata after 90 days (P = 0.46 and P = 0.79, respectively) (Fig 1). However, there was a significant association between continuous CTOI and PE-related 90-day mortality (adjusted sub-hazard ratio [SHR] per 10% CTOI increase 1.36; 95% confidence interval [CI] 1.03–1.81; P = 0.03), but not between continuous CTOI and 90-day overall mortality (adjusted hazard ratio [HR] per 10% CTOI increase 0.92; 95% CI 0.70–1.21; P = 0.54) (Table 2). In contrast to the PESI, the RV/LV diameter ratio was not associated with overall and PE-related 90-day mortality (Table 2).
The CTOI had poor, and the PESI good, predictive accuracy for 90-day overall mortality (C-statistics 0.43 and 0.79, 95% CI 0.28–0.57 and 0.71–0.87, respectively). Further, the CTOI and the PESI had a similar predictive accuracy for PE-related 90-day mortality (C-statistics 0.69 and 0.70, 95% CI 0.52–0.85 and 0.61–0.79, respectively). In contrast, the predictive accuracy of the RV/LV diameter ratio for overall and PE-related 90-day mortality was poor (C-statistics 0.39 and 0.51, 95% CI 0.25–0.52 and 0.36–0.66, respectively).
The cumulative incidence of VTE recurrence after 3 years differed significantly between different CTOI strata (P = 0.046) (Fig 2). Further, there was a significant association between continuous CTOI and VTE recurrence during the whole follow-up (adjusted SHR per 10% CTOI increase 1.27; 95% CI 1.12–1.45; P < 0.001), as well as between the RV/LV diameter ratio and VTE recurrence (adjusted SHR per unit increase 2.74; 95% CI 1.26–5.95; P = 0.01) (Table 3). The discriminative power of the CTOI (C-statistics 0.63; 95%CI 0.55–0.72) and the RV/LV diameter ratio (C-statistics 0.59; 95% CI 0.49–0.68) for VTE recurrence was moderate.
cAdjustment was done for age, gender, type of PE (unprovoked versus provoked versus cancer-related), BMI, prior VTE, central PE, concomitant DVT, hypertension, diabetes, heart failure, chronic lung disease, cerebrovascular disease, chronic liver disease, chronic renal disease, the PESI, and antiplatelet therapy/non-steroidal anti-inflammatory drugs.
dAdjustment was done for age, gender, type of PE (unprovoked versus provoked versus cancer-related), BMI, prior VTE, central PE, concomitant DVT, hypertension, diabetes, heart failure, chronic lung disease, cerebrovascular disease, chronic liver disease, chronic renal disease, and antiplatelet therapy/non-steroidal anti-inflammatory drugs.
The CTOI (adjusted time ratio (TR) per 10% CTOI increase 1.06; 95% CI 1.02–1.11; P = 0.01), the RV/LV diameter ratio (adjusted TR per unit increase 1.36; 95% CI 1.08–1.72; P = 0.01) and the PESI (adjusted TR per 10 points increase 1.11; 95% CI 1.07–1.16; P < 0.001) were significantly associated with LOS for patients admitted to hospital due to index PE.
In our prospective cohort of elderly patients with acute symptomatic PE, the CTOI was not associated with all-cause mortality, but with PE-related mortality at 90 days. Neither all-cause nor PE-related mortality differed significantly across the three pre-specified CTOI strata. The discriminative power of the CTOI for predicting PE-related 90-day mortality was moderate (C-statistics 0.69). In contrast, the RV/LV diameter ratio was neither associated with overall nor PE-related 90-day mortality.
The predictive value of the CTOI for short-term overall mortality was previously assessed in only a few prospective studies and one meta-analysis, all of which showed no association between the CTOI and 30-day mortality or in-hospital clinical deterioration (cardiopulmonary resuscitation, mechanical ventilation, administration of inotropic or thrombolytic agents) [6, 14–16]. None of these studies assessed PE-related death. While our results did not show an association between the CTOI and overall mortality, the CTOI was significantly associated with PE-related mortality. This is not astonishing: because elderly patients with VTE are often multimorbid, other clinical factors may be more influential with respect to survival than the embolic burden alone [29, 30]. Despite its association with PE-related mortality, the CTOI does not appear to offer any advantage over the PESI in terms of mortality prediction.
In contrast to the results of three meta-analyses [7–9], RV/LV diameter ratio assessed by CTPA was not associated with mortality in our prospective cohort. Overall, only 8 of 39 studies included in these meta-analyses were prospective and according to a meta-regression analysis, the risk estimates derived from retrospective studies were significantly higher than the estimates from the prospective studies . Moreover, patients in all prospective studies that demonstrated an association between RV/LV diameter ratio and mortality were younger than the patients enrolled in our cohort (mean age 54–67 versus 75.4 years) [17, 31, 32]. Because co-morbid conditions as well as ageing itself may lead to left ventricular enlargement or right ventricle dilatation, the prognostic accuracy of RV/LV diameter ratio may be lower in elderly multimorbid patients. Indeed, the largest prospective cohort including 848 mostly elderly patients (median age 72 years) did not demonstrate an association between RV/LV diameter ratio and mortality .
In contrast to two previous studies [33, 34], our study is the first to show an association between the CTOI and VTE recurrence. Den Exter et al. focused on the association between thromboembolic resolution assessed by CTPA and VTE recurrence, mentioning that the CTOI was not associated with recurrent VTE (no data shown) . Zhang et al. showed no association between the CTOI and recurrent VTE in patients with mainly provoked VTE (67% versus 39% in our study), which may be a reason for the different findings. Further, their analysis of the association between the CTOI and VTE recurrence lacked adjustment for the time periods in which patients were anticoagulated during follow-up. Only a few studies have shown an association between echocardiography-assessed right ventricular dysfunction and VTE recurrence [35, 36]. To the best of our knowledge, our study is the first to confirm the relationship between CTPA-assessed right ventricular dysfunction and recurrent VTE. However, the clinical value of risk stratification for recurrent VTE by CTPA, if any, would be limited to patients in whom the optimal duration of anticoagulation is unclear (e.g., patients with unprovoked VTE).
All three measures of disease severity, the CTOI, RV/LV diameter ratio, and the PESI were associated with LOS in our study. Given that treating physicians were not blinded to these measures in our study, the greater true or perceived severity of illness may have led to an extended hospital stay.
The strengths of our study include the prospective multicenter design, inclusion of consecutive patients with objectively diagnosed PE, blinded assessment of the CTOI, the outcome assessment by a blinded independent committee using pre-defined criteria, and the focus on elderly patients who are at particular risk of PE-related complications.
Our study has potential limitations. First, due to the low number of deaths (9 and 15 patients after 30 and 90 days, respectively) only a minimal adjustment and no analysis of the association between the CTOI and 30-day mortality could be performed. Second, our study might be underpowered to detect an association between the CTOI or RV/LV diameter ratio and all-cause mortality. Third, we enrolled exclusively patients aged 65 years or older with acute VTE. We thus cannot extrapolate our results to younger patients. Fourth, the fact that patients with a higher CTOI were more likely to receive thrombolytic therapy might have biased the results. However, when we excluded thrombolyzed patients in a sensitivity analysis, our results remained unchanged. Finally, we could not analyze the interobserver agreement for the CTOI assessment, because disagreement was immediately resolved by consensus and only the final CTOI value was available in our database.
In conclusion, our results showed that the CTOI is not associated with overall 90-day mortality in elderly patients with acute PE. However, we showed an association between the CTOI and PE-related 90-day mortality and VTE recurrence. The RV/LV diameter ratio was associated with recurrent VTE but not with overall or PE-related mortality.
Over the last few decades, an energy crisis has threatened the world due to the excessive utilization of depleting oil reserves by an ever-increasing human population. High worldwide demand for energy, unstable and uncertain petroleum sources, and concern over global climate change have led to a resurgence in the development of alternative renewable resources that can displace fossil fuels or petroleum-based polymers. Recently, environmental awareness has grown, and the demand for sustainable plant-based raw materials for an eco-friendly economy will continue to grow into the foreseeable future . In response, many countries have initiated extensive research and development programs in nanocellulose production, a green, bio-based and renewable biomaterial with broad possibilities of use in various fields of innovative materials. In fact, researchers have moved towards the utilization of this fully bio-based nanomaterial as a prominent candidate to replace synthetic reinforcing fillers in biodegradable composites and polymer matrixes, as well as for the production of nanotubes and thin films .
Cellulose is the most abundant and renewable biopolymer available on Earth, and can be obtained from a variety of sources, including woody and non-woody plants, animals and bacteria. It has been estimated that 10^10–10^11 tons of cellulose are synthesized and destroyed globally per annum. Therefore, rational and sustainable utilization of these abundant lignocellulosic biomasses to develop new valuable bio-products would be of great benefit, not only to increase renewable, value-added products but also to diminish adverse environmental/ecological impacts. Cellulose, the principal component of plant cell wall polysaccharides, is a linear polymer built up from linearly connected β-d-anhydroglucopyranose units (AGUs). It is a natural high-molecular-weight macro-polymer covalently linked by β-1,4-glycosidic linkages in a variety of arrangements, and several cellulose polymer chains eventually bundle together to form fibrils or microfibrils due to strong hydrogen bonding . In nature, cellulose can be processed into its nano-dimensional structure, also known as nanocellulose, via various hydrolysis treatments. A 2015 study conducted by Usov and his group established that cellulose chains exhibit different degrees of order, from highly crystalline arrangements to slightly perturbed distributions of chains. They proposed an alternative model to describe the ordered and disordered regions of cellulose nanocrystals (CNC): instead of using amorphous regions to designate the randomly packed arrangements, less-ordered surface chains may be more suitable to describe the cellulose polymer chain arrangement.
In other words, the dissolution process in cellulose fibrils normally happens faster on defective crystalline parts (soft parts, visible as kinks in single fibrils) of the long polymer chains than on the crystalline phases, as there are more active sites and defect regions for chemicals or catalysts to attack. Usually, cellulose nanocrystals (CNC) are isolated mainly by strong acid hydrolysis (e.g., 64 wt % H2SO4) in order to remove the defective crystalline domains, obtaining a typical needle-like morphology with an average diameter of 5–20 nm and fiber lengths up to several hundred nanometers (100–300 nm) . Conversely, cellulose nanofibrils (CNF) are prepared by several methods, such as mechanical treatment, TEMPO-mediated oxidation, or enzymatic hydrolysis, to yield long flexible fiber networks with a fibril diameter larger than that of CNC (5–50 nm) and lengths up to several micrometers, depending on the degree of fibrillation . In contrast to CNC, the produced CNF preserves both the crystalline and less-ordered surface chain states of the cellulose microfibrils, rather than individual microfibrils.
Due to its nano-dimensional structure, nanocellulose possesses numerous excellent physicochemical properties, such as biodegradability, high aspect ratio, light weight, renewability, distinctive mechanical strength with high stiffness and Young's modulus, low density, modifiable surface properties and a low coefficient of thermal expansion compared with natural cellulose . Due to its high availability and remarkable properties, nanocellulose is anticipated to be a cost-effective and prominent candidate to replace conventional petroleum-based polymers in various potential applications, such as bio-nanocomposites, biodegradable composites, and polymer matrixes . Other potential applications of this green material include barrier films, pharmaceutical and medical applications, surface coatings, textiles and fibers, separation membranes, electroactive polymers, supercapacitors, batteries, food packaging, food additives, drug delivery, biosensors and enzyme immobilization . Thus, nanocellulose manufacture is currently an interesting field to study. Notably, CNC particles of greater length and aspect ratio may become entangled, allowing them to reinforce composites by interacting with the polymer matrix.
In recent decades, there has been increased interest in the manufacturing of nanoscale cellulosic particles. The excitement generated is the result of the distinctive properties of nanocellulose. The top-down destruction of cellulosic fibers can be conducted by mechanical disintegration , acid hydrolysis , TEMPO-mediated oxidation and enzyme-assisted processes , all of which are preferential and widely accepted as promising processes for converting cellulosic feedstock to its nano-dimensional structure. Even though the mechanical process is the most direct way to produce nanocellulose, diminishing the cellulosic fibers using mechanical stress along the longitudinal axis of the cellulose structures, the treatment is not cost effective due to high energy consumption, the involvement of complex equipment, and its repetitive nature. In addition, enzymatic hydrolysis is a costly treatment, as the enzymes are hard to recycle and the process requires a long period (2–6 weeks) to achieve a satisfactory conversion (~80%) . Therefore, acid hydrolysis conducted with a strong mineral acid such as sulfuric (H2SO4), hydrochloric (HCl), or phosphoric acid (H3PO4) is recognized as the most effective way to hydrolyze defective cellulose crystalline parts while leaving the highly crystalline nano-sized cellulose segments unaltered . In the case of strong acid treatment, hydronium ions (H3O+) attack and cleave the β-1,4-glycosidic linkages within single cellulose chains as well as break down the extensive network of intra- and intermolecular bonds between the cellulose chains . Unfortunately, the yielded sulfated nanocellulose suffers from low product yield and lower thermal stability than its starting material, due to the introduction of active sulfate ester groups on the fiber surface, which can be detrimental to the thermal stability of the treated product .
Moreover, this process is highly corrosive and hazardous, requires harsh conditions, and involves tedious neutralization of the concentrated acidic effluent, which further limits its industrialization. For HCl hydrolysis, the aqueous nanomaterial suspensions tend to flocculate rather than disperse well, and only a low yield of 20% could be obtained . Even though some studies reported a higher yield of 60.5% via phosphotungstic acid hydrolysis, the hydrolysis efficiency was low and the reaction process is time-consuming , unless mechanical activation (ultrasonication, ball milling or mechanochemical treatment) is used to enhance the efficiency . Diluted or organic acids have been proposed for milder hydrolysis reactions, but such treatments are less effective in hydrolyzing the highly recalcitrant cellulose macromolecule. For these reasons, increasing effort has been made to use other mineral acids for cellulose hydrolysis. Recently, a mixed acid solution (hydrochloric acid and sulfuric acid) has been suggested as a good choice for the production of CNC suspensions, but this process still requires a long reaction time.
According to published papers, typical hydrolysis methods such as TEMPO-mediated oxidation and mechanical disintegration are popular for the preparation of CNF from various cellulosic feedstocks. Briefly, the TEMPO process is mediated by (2,2,6,6-tetramethylpiperidin-1-yl)oxidanyl, and the CNF obtained are more uniform and can be well dispersed in aqueous solution. In contrast, mechanically induced deconstruction strategies using different approaches (such as grinding, microfluidization, high-pressure homogenization, high-speed blending, refining, cryocrushing or ultrasonication) are considered promising methods for isolating CNF. However, large-scale application of these techniques has several restrictions. In fact, mechanical refining can efficiently separate the microfibrils, but the highly crystalline ordered structure might be broken due to the low selectivity of the high shearing force, resulting in a lower crystallinity. Likewise, the TEMPO-oxidation process may turn some of the crystalline cellulose molecules into disordered structures during oxidation, and the resultant CNF product exhibited a lower CrI value, even compared to its starting material . Ultrasonication is another well-known technique for CNF fabrication, in which 20–50 kHz ultrasound is applied to defibrillate cellulose through cell disruption. This is a greener hydrolysis technique, as no chemical is involved during the generation of nanocellulose. The high-frequency sound is converted into mechanical energy, which is transmitted to the cellulose via a metal probe. Unfortunately, the product obtained by this technique is a mixture of nano- and microfibers, since some of the fibrils tend to peel off from the original cellulose fibers while others remain on the fiber surface . In addition, cellulose can be cycled through a high-pressure homogenizer to yield nanofibers.
Typically, smaller and more uniform nanoparticles can be obtained by increasing the number of homogenization cycles, but such high-pressure treatment is more likely to reduce and/or destroy the crystalline domains by separating the cellulose molecular mass, or instead fail to defibrillate the cellulosic pulp sufficiently . It is worth mentioning that the energy demand increases significantly with homogenizing time, and this may be the main drawback for CNF applications. The main issue often experienced with a homogenizer is extensive clogging and flocculation at the nozzle as the CNF tend to aggregate, meaning that extra capital cost is required to overcome this frequent issue. Therefore, some authors have proposed that an additional pre- or post-treatment, such as hydrolysis, grinding, homogenization, or another mechanical treatment, is necessary in order to obtain a nanofiber product.
To accomplish a technical feasibility and high selectivity controllable hydrolysis pathway, transition metal salt catalysts have been used as a potential hydrolyzing agent in the cellulose depolymerization process. Today, metal salts have been discovered to have the following notable advantages over inorganic acids or organic solvents in cellulose hydrolysis: (i) can disrupt the hydrogen bonds more efficiently and induce the degradation of cellulose; (ii) less corrosive and more environment-friendly; and (iii) better contact between molten state metal salts and solid cellulose . According to literature studies, chromium-based metal salt catalysts have been found to be highly effective in direct conversion of cellulosic resources into different chemicals and liquid fuels such as glucose, hydroxymethylfurfural (HMF), xylose and levulinic acid. Binder and Raines reported that CrCl2-catalyzed hydrolysis of cellulose could produce a high yield of HMF (ca. 54%) in the N,N-dimethyl acetamide solvent containing [EMIM]Cl and lithium chloride (LiCl) additives at 140 °C for 2 h. Su et al. investigated the effect of CrCl2 in [EMIM]Cl ionic liquid on the cellulose hydrolysis, and the results revealed that 58% of HMF was successfully achieved under the reaction conditions of 120 °C for 8 h. In recent years, a study performed by Peng et al. found that CrCl3 catalyst was exceptionally effective in the conversion of cellulose to levulinic acid with a maximum yield of 67 mol % at 200 °C for 180 min. During the catalytic hydrolysis, the multivalent metal ions act as a Lewis acid that possess a high ability for hydrolyzing the cellulose matrix by disrupting the bonding system to produce fermentable sugar molecules, and it had better efficiency compared to diluted acid alone . Therefore, it is believed that the intermediate solid crystalline nanocellulose could be produced instead of water-soluble monomer molecules under controlled low severity hydrolysis settings.
In this study, we propose for the first time the use of a Cr(NO3)3 hydrolysis system as an efficient and sustainable technique for producing nanocellulose. Many papers have reported combining metal salts with mineral acids or mechanical processes to hydrolyze cellulosic feedstock into hydrocellulose or nanocellulose, but there is limited research on the use of a metal ion catalyst alone to produce cellulose nanocrystals (CNC) without the addition of any mineral acid or the assistance of mechanical treatment. The main objectives of this study are the following: (i) exploring the feasibility and practicability of Cr(NO3)3 as a high-potential catalyst for converting cellulose into its nanostructured morphology; (ii) optimizing the reaction conditions of the Cr(NO3)3 hydrolysis system in order to produce a high-performance nanocellulose; and (iii) evaluating the physicochemical changes in terms of functional groups (Fourier transform infrared; FTIR), crystalline structure (X-ray diffraction; XRD), morphological properties (field emission scanning electron microscopy; FESEM and transmission electron microscopy; TEM) and thermal stability (thermogravimetric analysis; TGA) during the production of nanocellulose from macro-sized native cellulose. We also attempted to produce CNC from the same starting material by concentrated H2SO4 hydrolysis, in order to perform a systematic, comprehensive comparison and assessment of the resulting nanocelluloses (CNCCr(NO3)3 and CNCH2SO4) using the same analytical instruments. We used microcrystalline cellulose (MCC) as a starting material instead of cellulosic pulp extracted from lignocellulosic biomass to better understand the role of the Cr(NO3)3 hydrolysis system in the conversion of cellulose, avoiding impure bulk materials that might cause irreproducible results.
Reaction temperature plays an important role in initiating the hydrolytic depolymerization of cellulose and eventually enhancing the chemical degradation of the cellulose macromolecule into nano-dimensional structures. The crystallinity index of the yielded nanocellulose as affected by reaction temperature is shown in Figure 1a. The reaction temperature was set at different levels (20, 40, 60, 80 and 100 °C) while the other operating parameters were fixed: Cr(NO3)3 concentration (0.8 M), reaction time (1.5 h) and solid–liquid ratio (1:30). The crystallinity index of the nanocellulose increased significantly (62.1% to 86.7%) with the rise of temperature from 20 to 80 °C. The increase was mainly driven by the increased diffusion rate of H3O+ and Cr3+ ions in aqueous solution and their penetration into the loosely-packed non-crystalline regions of cellulose to cleave the glycosidic bonds. In fact, the hydrolysis reaction is generally kinetically favorable, which means that a higher reaction temperature could significantly enhance the progressive removal of defective crystalline domains in the cellulose matrix, releasing the individual crystallite segments . This led to the breakage of the glycosidic linkages in the long cellulose polymer chains into smaller dimensions. However, the extremely high temperature of 100 °C resulted in the destruction of the crystalline structure of cellulose under excessive heat energy, and eventually promoted the formation of unfavorable side products, such as levulinic acid and HMF . Therefore, the optimal reaction temperature of 80 °C was selected for further experiments, with the advantages of energy saving and the ability to produce a product of higher crystallinity; most importantly, the reaction was more manageable at normal atmospheric pressure.
Surprisingly, the yield of produced CNCCr(NO3)3 gradually decreased with increasing hydrolysis temperature, which decreased to ca. 75% when the reaction temperature rose from 20 to 100 °C. This finding suggested that less-ordered cellulosic surface chains started to disintegrate into water soluble product by the acidic solution, which in turn lowered the final yield of the nanomaterial . It was observed that the yield of nanocellulose dropped sharply from 83.1% to 75.4% when the reaction was heated up from 80 to 100 °C. Therefore, the reaction parameters must be well-controlled in order to compromise between product yield and the crystallinity index of the nanocellulose particles.
Besides the reaction temperature, hydrolysis time also has a significant effect on the crystallinity index of nanocellulose, as illustrated in Figure 1b, where the hydrolytic depolymerization of cellulose was conducted at 80 °C, 0.8 M Cr(NO3)3 concentration and a solid–liquid ratio of 1:30. The nanocellulose yield showed a gradually declining trend, while the crystallinity index of the nanocellulose increased over time. The increase reached a maximum of 86.8%, followed by a decreasing trend when the reaction was further prolonged to 2.5 h. The decrease was due to an unfavorable hydrolysis process occurring in the crystalline cellulose, which damaged the highly crystalline segments. However, insufficient reaction time (0.5 h) resulted in an incomplete hydrolysis process that failed to break down the strong bonding network of the cellulose macromolecules, contributing to a low crystallinity index (79.6%). It is widely accepted that increasing the contact time between the hydrolyzing catalyst and the reactant eventually causes swelling of the cellulose and even enlargement of the intra- and inter-fiber pores of the treated matrix, so that degradation of the cellulose network structure can proceed effectively. This would also allow the catalyst to penetrate the interior of the cellulose pulp and liberate the nanoscale cellulose segments, accelerating the production of nanocellulose. For these reasons, it is very important to choose an appropriate hydrolysis time that maximizes the removal of non-crystalline areas in the cellulose while preserving the crystalline parts. Therefore, a hydrolysis time of 1.5 h was chosen as the optimal preparation condition and used in the subsequent experiments, as extending the time would have a negative effect on the crystallinity of the nanocellulose.
Theoretically, a higher catalyst concentration can enhance the degree of hydrolysis of cellulose. Unfortunately, hydrolysis occurs not only in the defect domains of cellulose but also in its crystalline structure. The defective, less-ordered phase of the cellulose chains is more easily decomposed into water-soluble oligomers in the presence of the metal ion catalyst than the crystalline phase, whose tightly packed structure and strong hydrogen bonding resist attack. Thus, enhancing the hydrolysis of the less-ordered cellulose phase by increasing the metal ion concentration benefits the production of high-crystallinity nanocellulose. However, it is necessary to find an optimal metal ion concentration, as some crystalline nanocellulose segments could gradually be hydrolyzed further into liquid organic molecules as secondary products at extremely high concentrations of the metal salt catalyst. Figure 1c reveals that when the Cr(NO3)3 concentration increased from 0.2 to 1.2 M, the nanocellulose crystallinity progressively increased from 62.5% to 87.1% (reaction time 1.5 h, solid–liquid ratio 1:30, 80 °C). Simultaneously, the product yield decreased almost linearly with increasing metal salt concentration. This implies that the successive degradation of defective crystalline allomorphs during catalytic hydrolysis exposes the highly crystalline segments of the nanocellulose, increasing its crystallinity. However, it is worth noting that the increase in crystallinity index slowed once the metal salt concentration exceeded 0.8 M. Therefore, for better economy, 0.8 M metal salt catalyst was selected as the optimum concentration for hydrolysis.
In this study, Cr(NO3)3 acted as a Lewis acid reacting with cellulose molecules to produce the nanostructured product. The likely mechanism is that Cr3+ ions form a coordination complex with water molecules (H2O) at the initial hydrolysis stage. The H2O molecules are polarized by the central metal ion, which withdraws their electron density, making the H atoms of the hydroxyl groups (O–H) more electropositive. These complex ions are unstable, however, and readily deprotonate by releasing hydronium (H3O+), further enhancing the acidity of the reaction mixture. The detailed reaction steps are as follows:

Cr(NO3)3 + 6H2O → [Cr(H2O)6]3+ + 3NO3−  (1)
[Cr(H2O)6]3+ + H2O → [Cr(H2O)5(OH)]2+ + H3O+  (2)
[Cr(H2O)5(OH)]2+ + H2O → [Cr(H2O)4(OH)2]+ + H3O+  (3)
[Cr(H2O)4(OH)2]+ + H2O → [Cr(H2O)3(OH)3] + H3O+  (4)
Enhancing the acidity of the hydrolytic system weakens the glycosidic bonds between the anhydro-glucopyranose units of the cellulose polymer chains and thus promotes their breakdown into smaller cellulose units; conversely, if the metal salt concentration was low, insufficient hydronium ions were present in the reaction system. In addition, the Cr3+ metal ions can readily interact with the oxygen atoms of the C–O–C glycosidic bonds between the glucose units, forming an oxygen–chromium complex intermediate. Because of the adsorbed metal ions, the bonding energy between the oxygen atoms and the carbon atoms of the pyranose rings in the intermediate is reduced as the bond lengths and bond angles increase. The activation energy of the hydrolysis process is therefore reduced. However, as stated earlier, the operational conditions of the hydrolysis reaction must be optimized to prevent carbonization (char formation).
As shown in Equations (1)–(4), water molecules play a vital role in initiating the Cr3+-catalyzed hydrolysis by generating H3O+ ions. To accelerate the hydrolysis reaction, increasing the water content was expected to produce more hydronium ions by shifting the equilibria toward the product side (Le Chatelier's principle). Therefore, the effect of the solid–liquid ratio on the crystallinity index of nanocellulose was investigated by varying the ratio from 1:10 to 1:50 at a constant metal salt concentration (0.8 M) for 1.5 h at 80 °C. As shown in Figure 1d, the nanocellulose yield increased when more water was added to the hydrolysis system (higher solid–liquid ratio); the effect is similar to decreasing the metal salt concentration, in that the acidity of the reaction mixture was reduced by the water molecules. However, the crystallinity of the nanocellulose reached a maximum (87.2%) at a solid–liquid ratio of 1:30 and gradually decreased when the cellulose matrix was treated in more dilute systems (1:40 and 1:50). This result may reflect that too little H2O reduces the hydrolysis efficiency because the reaction between the cellulosic material and the hydrolyzing catalyst remains incomplete. Indeed, at a solid–liquid ratio of 1:10, some of the cellulosic material failed to immerse completely in the reaction solution, so the hydrolysis could not take place effectively. Conversely, the catalyst concentration fell significantly when a large amount of water was introduced, which eventually decreased the crystallinity of the nanocellulose, as shown in Figure 1c. In practice, it is also important to control water consumption in industrial production. Taking cost and efficiency into consideration, 1:30 was chosen as a suitable loading ratio between the cellulose solid and the catalyst aqueous solution.
From the single-factor experiments, the optimum operational conditions for cellulose hydrolysis were determined as 1.0 g of cellulose material with 0.8 M Cr(NO3)3 catalyst at a solid–liquid ratio of 1:30 and 80 °C for 1.5 h. Under these conditions, the yield of the solid cellulose nanomaterial (CNCCr(NO3)3) was approximately 83.6% ± 0.6%, whereas the CNCH2SO4 obtained via sulfuric acid hydrolysis gave only about 54.7% ± 0.3%. The much lower yield of CNCH2SO4 reflects the harshness of acid hydrolysis, which disintegrates and degrades the fibers, leaving fewer crystalline domains and dramatically decreasing the product yield. Both manufactured CNC products were further characterized.
It is widely recognized that cellulose in its natural state contains both less-ordered and highly arranged crystalline regions in its molecular structure. X-ray diffraction (XRD) analysis was therefore conducted to evaluate the crystallinity of the native cellulose and the yielded nanocellulose specimens. As shown in Figure 2, the XRD diffractograms of all samples exhibited a similar pattern, typical of semi-crystalline materials, with crystalline peaks superimposed on a broad non-crystalline halo. All profiles showed major peaks at around 2θ = 15.1° (1–10), 16.5° (110), 22.5° (200), and 34.6° (004), indicating the cellulose I structure. The XRD patterns of both nanocellulose samples and the commercial MCC were similar, suggesting that the cellulose I crystalline structure of the MCC was well maintained through the hydrolysis processes.
The crystallinity indices of the native cellulose, CNCH2SO4, and CNCCr(NO3)3 were calculated from Equation (5) and found to be 65.7%, 81.4%, and 86.5%, respectively. Both CNCH2SO4 and CNCCr(NO3)3 exhibited higher crystallinity than the starting material because the less-ordered surface cellulose chains of the matrix are successively removed during catalytic hydrolysis, exposing the solid elementary crystalline cellulose phases. Accordingly, the higher peak intensities of CNCH2SO4 and CNCCr(NO3)3 are predominantly due to the successive dissolution of the defective cellulose regions, with a consequent increase in the crystallinity index. Although both H2SO4 and Cr(NO3)3 exhibited strong catalytic effects on cellulose degradation, the maximum crystallinity index was obtained with CNCCr(NO3)3 (86.5% ± 0.3%) compared to CNCH2SO4 (81.4% ± 0.1%). During hydrolysis, H3O+ ions penetrate the non-crystalline regions of the cellulose matrix, promoting hydrolytic cleavage of the glycosidic linkages and eventually releasing the individual crystallites. The higher crystallinity of the nanocellulose product may also arise because nanocellulose tends to realign into a better organized, more crystalline structure via self-assembly, enabling close packing and hydrogen-bond formation. Moreover, there is no remarkable difference between the XRD patterns of CNCH2SO4 and CNCCr(NO3)3, confirming that the proposed Cr(NO3)3 hydrolysis system does not adversely affect the crystalline structure of cellulose.
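Equation (5) is not reproduced in this excerpt; assuming it is the widely used Segal peak-height method, the crystallinity index follows from the intensity of the (200) crystalline peak (near 2θ = 22.5°) and the intensity at the amorphous minimum (near 2θ = 18°). A minimal sketch with illustrative, hypothetical peak heights (not the measured data):

```python
def segal_crystallinity_index(i_200: float, i_am: float) -> float:
    """Segal peak-height crystallinity index (%) from XRD intensities.

    i_200 : height of the (200) crystalline peak (~2θ = 22.5°)
    i_am  : height at the amorphous minimum (~2θ = 18°)
    """
    return (i_200 - i_am) / i_200 * 100.0

# Hypothetical intensities chosen to reproduce the CrI values reported above:
print(round(segal_crystallinity_index(1000.0, 343.0), 1))  # native cellulose -> 65.7
print(round(segal_crystallinity_index(1000.0, 135.0), 1))  # CNCCr(NO3)3      -> 86.5
```

Note that peak-height methods only approximate the crystalline fraction; peak-deconvolution approaches generally give lower absolute values.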
In this study, the lower crystallinity index of CNCH2SO4 relative to CNCCr(NO3)3 can reasonably be explained by the strong H2SO4 attacking not only the randomly ordered structure of cellulose but also the crystalline one during hydrolysis. For industrial uses, adding nanocellulose to a polymer matrix is an effective way to reduce the oxygen transmission rate in packaging applications; however, the slightly lower crystallinity of CNCH2SO4 might have a negative effect on its gas barrier property.
FTIR spectroscopy is an appropriate technique for evaluating changes in the chemical composition of the native cellulose and the yielded nanocellulose (CNCH2SO4 and CNCCr(NO3)3). As presented in Figure 3, the FTIR spectra of all samples exhibited two major absorbance regions, at high (3500–2800 cm−1) and low (1700–500 cm−1) wavenumbers, consistent with previous studies. A broad, dominant absorbance peak in the region of 3600 to 3200 cm−1 was observed in all spectra, corresponding to hydrogen-bonded O–H stretching vibrations. C–H stretching gave rise to the band at 2900 cm−1. A small peak at 1640 cm−1 was associated with the –OH bending vibration of water absorbed by the fibers, as cellulose is hygroscopic in nature, whereas the peak at 1430 cm−1 is related to intermolecular hydrogen attraction at the C6 group. Although all samples were properly dried prior to FTIR analysis, complete elimination of moisture is very difficult owing to the strong cellulose–water interaction. The stretching vibration of the C–O–C pyranose ring within the cellulose molecules was observed as a sharp peak at 1054 cm−1. The fingerprint region of the FTIR spectra, within 600 to 1100 cm−1, is a significant signal for tracking the β-glycosidic linkages, contributed by the wagging, deformation, and twisting modes of the anhydro-glucopyranose units. Furthermore, the absence of NO3− peaks at 1384 cm−1 implied complete removal of the Cr3+ metal salt during washing by centrifugation and dialysis.
Based on the FTIR spectra of the native cellulose and yielded nanocellulose, there were no significant changes in the functional groups after hydrolysis by the H2SO4 and Cr(NO3)3 catalysts, apart from peak intensities. This suggests that the chemical structure of the yielded nanocellulose was not altered by the hydrolysis process: the typical structure of the parent cellulose remained well preserved, differing only in morphology and crystallinity. This observation is in good agreement with other works in the literature, which report similar FTIR patterns for nanocellulose and its corresponding cellulosic materials. Notably, the extracted nanocellulose exhibited absorbance signals at 1428, 1160, 1110, and 898 cm−1, indicating that it was primarily in the cellulose I structure.
Figure 4a illustrates that native cellulose primarily comprises aggregated cellulosic fibrils with an irregular shape. In nature, each cellulose fiber is made up of several to hundreds of microfibers that tend to assemble together, forming the compact structure of cellulose. This aggregation arises from the strong hydrogen bonding between the individual cellulose chains. Each elementary cellulose fiber appeared long, with a rough surface and a low aspect ratio. The aggregation of native cellulose is due to the strong hydrogen-bonding system between individual cellulose microfibrils in the cellulose macrostructure, which is consistent with the literature.
The long cellulosic fibrils were broken down to a great extent after the hydrolysis treatments catalyzed by Cr(NO3)3 and H2SO4, as clearly observed in the FESEM micrographs in Figure 4b,c. Smaller, more individualized fragments were disintegrated and/or separated from the bundles of micro-sized cellulose fibers, further reducing their diameter. These findings are in excellent accordance with the reported literature. The intermittent breakdown of the fibrillar structure of the CNC in this study can be correlated with the hydrolysis treatment initiated by the Cr(NO3)3 metal salt and H2SO4, which dissolves the less-ordered defective crystalline regions and hydrolytically cleaves the β-1,4-glycosidic linkages. Highly ordered crystalline arrangements appear in the nanocellulose samples owing to the formation of intra- and intermolecular hydrogen bonds between the hydroxyl groups.
In the present study, finer and shorter nanoscale fibrils were expected for CNCH2SO4 and CNCCr(NO3)3; however, this was not obvious from the FESEM micrographs in Figure 4b,c. This was attributed to the strong intermolecular hydrogen bonding within the cellulose chains: the fibrils tended to agglomerate during the freeze-drying process owing to the large number of OH groups on their surfaces. A similar morphology has been reported for nanocellulose derived from flax and from microcrystalline cellulose. To confirm the successful production of nanocellulose, the particle size and dimensions of CNCH2SO4 and CNCCr(NO3)3 in suspension were examined further by TEM analysis.
The results of the EDX analyses of CNCH2SO4 and CNCCr(NO3)3 are given in Table 1. The chemical analysis of both nanocellulose products revealed carbon (C) and oxygen (O) as the major elements. However, EDX also identified a small amount of sulfur (S) in the CNC specimen treated with H2SO4, probably contributed by sulfate groups originating from the H2SO4. Indeed, for sulfuric-acid-hydrolyzed CNC, sulfur is always present, since the sulfate half-esters introduced during the hydrolysis step play a role in stabilizing the CNC aqueous suspensions. The presence of sulfur indicates a chemical interaction of sulfuric acid with the cellulose fiber surface and, most importantly, further evidences that the cellulose fiber was hydrolyzed by H2SO4 during the treatment. The sulfation of nanocellulose is believed to result from an esterification occurring during acid hydrolysis, as follows: Cellulose–OH + HOSO3H → Cellulose–OSO3H + H2O
Most importantly, elemental chromium was not detected in CNCCr(NO3)3, indicating that the Cr3+ was completely removed during the centrifugation steps. It can thus be concluded that the yielded CNCCr(NO3)3 had been washed cleanly after the hydrolysis process and the metal ion was completely removed from the treated samples. This finding is in good agreement with the FTIR result.
Because the manufactured nanocellulose products appeared matted rather than individualized in the FESEM images, the particle size and morphology of CNCH2SO4 and CNCCr(NO3)3 were confirmed by transmission electron microscopy (TEM), considered one of the most accurate methods for direct measurement and observation at the nanoscopic scale. Under the controlled hydrolysis conditions, the non-crystalline regions of cellulose were cleaved transversely by the H2SO4 and Cr(NO3)3 hydrolyzing catalysts under heat treatment. The average diameter of the nanocellulose fibers was calculated with ImageJ software from at least 200 measurements. The results showed that the macro-sized native cellulose fibers were diminished to the nanometer scale after the hydrolysis reaction. However, the morphologies of the nanocellulose obtained by the two hydrolysis procedures were noticeably different: CNCH2SO4 displayed a rice-shaped structure (Figure 5a), whereas CNCCr(NO3)3 revealed a spider-web-network-like structure (Figure 5b), confirming the successful extraction of nanocellulose from MCC. Part of each sample was observed in a somewhat agglomerated form, as bundles of particles with some dispersed individual fibrils. Indeed, it is challenging to obtain separated nanocellulose fibrils, because the strong hydrogen bonding and high surface area of the fibrils foster agglomeration, overlapping, and assembly. This occurs mainly during evaporation of the dispersing medium when drying the particles for TEM imaging; similar observations have been reported previously.
Besides the morphological aspects, the TEM images allowed measurement of the average particle width (d) and length (L), and thus determination of the aspect ratio (L/d) of the yielded nanocellulose products. Significant differences were noticed in the average particle length: although the average diameter (d) of CNCCr(NO3)3 was of the same order as that of CNCH2SO4, the length (L) of the former was much greater, giving rise to a considerably higher aspect ratio (L/d). Hydrolysis of the MCC starting material in the H2SO4 system led to CNCH2SO4 with an average fiber width of approximately 9.9 ± 3.2 nm and an average length of 34.8 ± 14.2 nm, giving an aspect ratio of 3.5. Meanwhile, the inorganic-salt-treated CNCCr(NO3)3 displayed a mean width of 29.1 ± 7.8 nm and a length of 455.7 ± 38.1 nm, for an aspect ratio of 15.7. Owing to its higher aspect ratio, CNCCr(NO3)3 should provide a stronger reinforcing effect (i.e., higher tensile strength and modulus) than CNCH2SO4, making it an ideal reinforcing agent for polymer matrices. In addition, CNCCr(NO3)3, with its higher aspect ratio, might exhibit higher optical transmittance when used to prepare nano-paper or as reinforcement for polymer composites.
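As a quick arithmetic check, the aspect ratios quoted above follow directly from the reported mean TEM dimensions:

```python
def aspect_ratio(length_nm: float, width_nm: float) -> float:
    """Aspect ratio L/d of a nanocellulose particle from its mean dimensions (nm)."""
    return length_nm / width_nm

# Mean TEM dimensions reported in the text (length, width in nm)
print(round(aspect_ratio(34.8, 9.9), 1))    # CNCH2SO4    -> 3.5
print(round(aspect_ratio(455.7, 29.1), 1))  # CNCCr(NO3)3 -> 15.7
```

Note that because the width and length standard deviations are large (e.g. 455.7 ± 38.1 nm), the aspect ratio of individual particles varies considerably around these mean-based values.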
In this study, the aspect ratio of CNCCr(NO3)3 (~15.7) is well above 10, which is considered the minimum required for good stress transfer from the matrix to the fiber when a nano-biocomposite is targeted. Generally, nanocellulose structures with a higher aspect ratio have a better reinforcing capability in the final polymer products, improving their thermal and mechanical properties. Besides the outstanding mechanical properties, another advantage of Cr(NO3)3-catalyzed hydrolysis over H2SO4 hydrolysis is that no sulfate groups are introduced onto the surface of the cellulose fibrils during the process.
A pyrolytic study was used to investigate the thermal stability of the native cellulose, CNCH2SO4, and CNCCr(NO3)3. The thermal properties of reinforcing materials are an important parameter for evaluating the applicability of these cellulose nanomaterials in biocomposites, which are typically processed at high temperatures. Both TG and DTG curves are plotted in Figure 6 to track the differences in thermal stability between the native cellulose and the yielded nanocellulose samples (CNCH2SO4 and CNCCr(NO3)3). The DTG curves provide a more precise assessment and comparison, especially of the maximum decomposition temperature (Tmax).
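A DTG curve is simply the temperature derivative of the TG weight signal, and Tmax is the temperature at which the weight-loss rate peaks. A minimal sketch of this computation, using a synthetic sigmoidal weight-loss step (illustrative only, not the measured data):

```python
import numpy as np

def dtg_tmax(temps_c: np.ndarray, weight_pct: np.ndarray) -> float:
    """Return the temperature of maximum decomposition rate (Tmax, °C).

    The DTG curve is the derivative of weight with respect to temperature;
    Tmax is where that derivative is most negative (fastest weight loss).
    """
    dtg = np.gradient(weight_pct, temps_c)
    return float(temps_c[np.argmin(dtg)])

# Synthetic TG curve: one sigmoidal weight-loss step centered at 344 °C,
# loosely mimicking the main decomposition event of CNCCr(NO3)3.
temps = np.linspace(30.0, 600.0, 571)  # 1 °C steps
weight = 100.0 - 70.0 / (1.0 + np.exp(-(temps - 344.0) / 15.0))

print(dtg_tmax(temps, weight))  # -> 344.0
```

With real, noisy TG data the derivative would normally be smoothed (e.g. with a Savitzky–Golay filter) before locating the peak.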
An initial weight loss was observed for all cellulosic samples below 100 °C, mainly corresponding to the evaporation of moisture or of other volatile low-molecular-weight compounds loosely bound to the surface or interior of the materials. The presence of absorbed water was also observed in the FTIR spectra as the 1640 cm−1 peak, representing the bending vibration of intermolecularly hydrogen-bonded water molecules. The DTG curves reveal a major difference in decomposition temperature between the native cellulose and CNCH2SO4, with CNCH2SO4 starting to degrade much earlier (at 210 °C) than either the cellulose or CNCCr(NO3)3. This phenomenon can be attributed to several possible causes:
(i) The replacement of hydroxyl groups (O–H) by active sulfate groups (O–SO3H), via either esterification or direct catalysis, initiates a dehydration reaction on the sulfated nanocellulose fiber, forming water that further catalyzes the decomposition of the remaining cellulose fiber into smaller particles; unsulfated crystals, by contrast, tend to collapse at a higher decomposition temperature;
(ii) The concentrated H2SO4 is believed not only to remove the loosely packed defective parts but also to damage the crystalline domains, making the molecules more susceptible to degradation as the temperature increases;
(iii) The rapid reduction of the molecular weight of the nanocellulose during hydrolysis, relative to its starting material, contributes to its earlier decomposition on heating;
(iv) The short nanocellulose chains provide a high specific surface area and a large number of free chain ends at the surface, which tend to decompose at a lower temperature; this high surface area plays a significant role in diminishing the thermal stability of nanocellulose through increased exposure to the heat source.
Thermal degradation of the native cellulose started at approximately 225.4 °C in an N2 atmosphere, while degradation of the CNCCr(NO3)3 product began at ca. 238.2 °C. The increased thermal stability of CNCCr(NO3)3 may be attributed to the removal of defective crystalline cellulose, as evidenced by the XRD results above. In contrast, CNCH2SO4 showed lower thermal stability (201.2 °C) than the untreated cellulosic material, presumably because the sulfate groups attached to the fiber surface during the hydrolysis step reduce the activation energy for degradation of the nanocellulose product, making it less resistant to pyrolysis. Additionally, the lower degradation temperature of CNCH2SO4 could be due to its smaller fiber dimensions, which expose more surface area to the heat source, and to the partial disruption of its crystal structure compared with the cellulose precursor. These findings are in good accordance with Kallel's study, in which nanocellulose isolated from garlic straw by H2SO4 hydrolysis began to degrade at a lower temperature (200 °C) than its starting material (220 °C). In summary, the introduction of sulfate groups and the massive decrease in molecular weight significantly reduced the thermal stability of the CNC.
Compared with CNCH2SO4 prepared by the H2SO4 hydrolysis procedure (Tmax = 273 °C), the CNCCr(NO3)3 produced with the Cr(NO3)3 hydrolysis system showed much higher thermal stability (Tmax = 344 °C), exceeding even its raw material (Tmax = 295 °C); this can be explained by the milder hydrolysis conditions doing less damage to the crystalline regions of the cellulose matrix. The additional thermal stability also reflects the crystalline structure of the cellulose, which increased as a result of the Cr(NO3)3 hydrolysis treatment. Some studies have reported that high-crystallinity nanocellulose can show higher thermal stability, yet the thermal stability of highly crystalline sulfated CNC is reduced. Similar results were reported by Mohamed et al., where nanocellulose derived from newspaper pulp showed a lower Tmax (187.4 °C) than its starting material (346.4 °C) despite its higher crystallinity (90.15% versus 82% for the untreated pulp). The authors attributed the thermal destruction of the cellulose primarily to the presence of sulfate groups from the H2SO4, with the lower thermal stability correlated with the sulfur impurity content as well as with the removal of thermostable minerals during acid hydrolysis of the MCC.