Columns: text (string), predicted_class (string), confidence (float16)
Figure 2 shows the DW signal attenuation for each shell in the genu dataset, with stars in the legend indicating which shells were left out for testing. In this plot, a small number of data points appear as ‘outliers’ (two such points are indicated by arrows in the bottom‐left subplot of Figure 2). Specifically, we counted about 10 of them among the more than 4812 measurements, most of them lying in the b=300 s/mm2 shell. Since these outliers appear to be specific to the b=300 s/mm2 shell and are absent from other shells with similar b values, we attribute them to momentary twitching of the subject rather than to more systematic effects, such as perfusion.
study
100.0
Diffusion‐weighted signal from the genu ROI, averaged over the six voxels. Across each column and row, the signal pertains to one of the gradient strengths or pulse times δ used; in each subplot, the six shells shown in different colours are Δ‐specific, increasing in value (22, 40, 60, 80, 100, 120 ms) from top to bottom. Inside the legend, the b value is in s/mm2 units; the HARDI shells kept for testing are those marked with a star, and the remaining shells comprise the training data. On the x‐axis is the cosine of the angle between the applied diffusion gradient vector G and the fibre direction n. Some models in this study omit data outliers; two such data points are shown in the bottom‐left subplot with vertical arrows (each model has its own criteria for determining outliers).
study
100.0
Diffusion‐weighted signal from the fornix ROI, averaged over the six voxels. The legend's b value is in s/mm2 units. Testing shells are marked with a star. On the x‐axis is the cosine of the angle between the applied diffusion gradient vector G and the fibre direction n
other
60.5
Models were evaluated and ranked based on their ability to predict the unseen DW signal accurately. Specifically, the metric used was the sum of squared differences between the hidden signal and the predicted signal, corrected for Rician noise:57

$$\mathrm{SSE} = \frac{1}{N}\sum_{i=1}^{N}\frac{\left(\tilde{S}_i-\sqrt{S_i^{2}+\sigma^{2}}\right)^{2}}{\sigma^{2}} \tag{1}$$

where $N$ is the number of measurements, $\tilde{S}_i$ is the $i$th measured signal, $S_i$ its prediction from the model and $\sigma$ the noise standard deviation.
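As an illustration only (this is not the challenge's evaluation code), a minimal NumPy sketch of Equation (1) could look as follows; the function name and argument layout are hypothetical.

```python
import numpy as np

def rician_corrected_sse(measured, predicted, sigma):
    """Sum of squared differences between measured and model-predicted DW signals,
    with the prediction inflated by the Rician noise floor, as in Equation (1)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = measured.size
    residual = measured - np.sqrt(predicted ** 2 + sigma ** 2)
    return np.sum(residual ** 2) / (n * sigma ** 2)
```

Lower values indicate a better prediction of the hidden shells; sigma is the noise standard deviation defined above.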
study
99.94
Here we give a short summary of the competing models. Additionally, Table 2 provides a summary of their key characteristics. More details are included in the Appendix.
Ramirez‐Manzanares: a dictionary‐based technique that accounts for multiple fibre bundles and models the distribution of tissue properties (axon radius, parallel diffusivity) and the orientation dispersion of fibres.
Nilsson: a multi‐compartment model that models isotropic, hindered and restricted diffusion and accounts for varying (T 1, T 2) relaxation times for each compartment.58
Scherrer: a multi‐compartment model in which each compartment is modelled by a statistical distribution of 3‐D tensors.16
Ferizi1 and Ferizi2: two three‐compartment models that account for varying T 2 relaxation times for each compartment. As regards the intracellular compartment, Ferizi1 models the orientation dispersion by using dispersed sticks as one compartment; Ferizi2 uses a single‐radius cylinder instead.42
Poot: a three‐compartment model comprising an isotropic diffusion compartment, a tensor compartment and a model‐free compartment in which an Apparent Diffusion Coefficient (ADC) is estimated for each direction independently. T 2 relaxation times are also estimated for each compartment.59
Rokem: a combination of the sparse fascicle model43 with restriction spectrum imaging60 that describes the signal arising from a multi‐compartment model in a densely sampled spherical grid, using L1 regularization to enforce sparsity.
Eufracio: an extension of the Diffusion Basis Function (DBF) model that accounts for multiple b‐value shells.
Loya‐Olivas1 and Loya‐Olivas2: two models based on the Linear Acceleration of Sparse and Adaptive Diffusion Dictionary (LASADD) technique. Loya‐Olivas1 uses the DBF signal model, while Loya‐Olivas2 uses a three‐compartment tissue model. The optimization uses linearized signal models to speed up computation and sparseness constraints to regularize.
Alipoor: a model of fourth‐order tensors, corrected for T 2 relaxation across different shells. A robust LS fitting was applied to mitigate the influence of outliers.
Sakaie: a two‐compartment model of restricted and hindered diffusion with angular variation. A simple exclusion scheme based on the b=0 signal intensity was applied to remove outliers.
Fick: a spatio‐temporal signal model that represents the 3‐D diffusion signal simultaneously over varying diffusion times. Laplacian regularization was applied during the fitting.61
Rivera: a regularized linear regression model of diffusion encoding variables. This is intentionally built as a simplistic model to be used as a baseline for model comparison.
review
99.8
Summary of the various diffusion models evaluated. Tissue models are models that include an explicit description of the underlying tissue microstructure with a multi‐compartment approach. In contrast, signal models focus on describing the DW signal attenuation without explicitly describing the underlying tissue and instead correspond to a ‘signal processing’ approach
review
99.9
Figure 4 shows the averaged prediction error in each ROI (top subplot is for the genu, bottom subplot is for the fornix) and the corresponding overall ranking of the participating models in the challenge. The first six models in the genu ranking performed similarly, each higher ranked model marginally improving on the prediction error. The prediction error clearly increased at a higher rate for the subsequent models. In the fornix dataset, the prediction error was higher than in the genu. For both datasets, the first six models were the same, albeit permuted. Most of the models performed similarly in terms of ranking in both genu and fornix cases, i.e. Nilsson (second in genu/first in fornix), Scherrer (third/second) and Ferizi2 (fourth/fourth). Others performed significantly better in one of the cases, with Ramirez‐Manzanares (first/sixth) being the most notable.
study
100.0
Figure 4 also details the prediction error for different ranges of b values in the unseen dataset. Models inevitably vary in their prediction capabilities; some models perform better within a given b‐value range but are penalized more in another. Across the models, as the figure shows, the ranking between models was dominated by the signal prediction accuracy for b values between 750 and 1400 s/mm2; specifically, the shell that has the largest weight on this error is the b=1100 s/mm2 one. The top‐ranking models, nevertheless, were better at predicting the signal for higher b‐value images as well. The prediction performance of lower b‐value images (<750 s/mm2) in the genu was less consistent across ranks. For example, the models of Rokem and Sakaie outperformed most of the higher ranking models in this low b‐value range. The fornix is a more complex region than the genu, hence the performance across the shells is less consistent. In the fornix, the prediction errors were generally larger than in the genu across all b values for all models, except Rivera's, which showed the opposite effect. The prediction errors of the b=0 images were also larger than in the genu, especially for the highly ranked models of Poot and Ferizi. The prediction errors in other b‐value shells followed the overall ranking of the models more closely.
study
100.0
Figure 5 shows the prediction error for each voxel independently. In the genu plot, the best performing models showed consistently low prediction errors across all individual voxels. These were followed by models with consistently larger prediction errors in all voxels. Most of the lowest ranking models not only had the largest prediction errors but also showed large variations in prediction performance. For example, while the model of Loya‐Olivas2 was competitive in voxel 5, it ranked low due to large prediction errors in voxels 4 and 6. The results in the fornix show a lower consistency of prediction errors between the voxels than in the genu. Specifically, two voxels (3 and 4) showed substantially larger prediction errors and were likely responsible for much of the overall ranking.
study
100.0
Genu signal for the group consisting of the best seven from 14 models. We show only four (of twelve) representative shells; these are shown by blue stars, while red circles denote the model‐predicted data. The best models are listed first. The x‐axis is the cosine of the angle between G and n
other
99.75
Fornix signal for the group consisting of the best 7 from 14 models. We show only four (of twelve) representative shells; these are shown by blue stars, while red circles denote model‐predicted data. The best models are listed first. The x‐axis is the cosine of the angle between G and n
other
99.6
The challenge set out to compare the ability of various kinds of models to predict the diffusion MR signal from WM over a very wide range of measurement parameters – exploring the boundaries of possible future quantitative diffusion MR techniques. The 14 challenge entries were a good representation of the many available models that are proposed in the literature. The acquired data aimed to cover the broadest spectrum of experimental parameters possible. The participating models use a variety of fitting routines and modelling assumptions, providing additional insight into the effects of algorithmic and modelling choices during parameter estimation. Although the set of methods tested is not sufficient to make a full comparison of each independent feature (diffusion model, noise model, fitting routine, etc.) and the number of combinations prohibits an exhaustive comparison, the results of the challenge do reveal some important trends.
study
99.9
In contrast with earlier model comparisons,18, 43, 44 the results provide new insight into which broad classes of model explain the signal best and what features of the estimation procedure are important. This information is very timely, as recent model‐based diffusion MRI techniques, such as NODDI,15 SMT,17, 40 DIAMOND,16 DKI62 and LEMONADE,63 are starting to become widely adopted in clinical studies and trials. Despite their success, intense debate continues in the field about applicability of different models and fitting routines.64, 65 The insights from this challenge provide key pointers to the important features of the next‐generation of front‐line imaging techniques of this type. Moreover, the data and evaluation routines remain available to form the basis of an expanding ranking of models and fitting routines and a benchmark for future model development.
review
99.8
The first insight is on the type of model used. Signal models do not necessarily outrank tissue models; indeed, using our dataset, models of the signal (Alipoor, Sakaie, Fick, Rivera) ranked on average lower than models of the tissues, despite their theoretical ability to offer more flexibility in describing the raw signal. This is quite surprising, as the current perception within the field is that, generally, we can capture the signal variation much better through a functional description of the signal (signal models) rather than via a biophysical model of the tissue (tissue models). The former generally consist of bases of arbitrary complexity, whereas the latter are generally very parsimonious models that rely on extremely crude descriptions of tissue (e.g. white matter as parallel impermeable cylinders). The results suggest that the flexibility of signal models can rapidly lead to overfitting. However, the tissue models can explain the signal relatively well even with just a few parameters (compare the quality‐of‐fit plots of the Rivera model in Figure 7 with the signal prediction of the top models in Figure 6: the higher the b value, the worse the prediction of the linear signal model). Certain underlying assumptions may cause the signal models to perform less well than expected. For example, they are often designed to work with data with a single diffusion time and do not generalize naturally to incorporate the additional dimension (although see Fick et al.61 for some steps towards generalization). Many of the tissue models, on the other hand, naturally account for finite δ, varying diffusion times and gradient strength (e.g. the Ramirez‐Manzanares, Nilsson and Ferizi models in our collection). We cannot draw any conclusion about the benefits of an adjustable number of parameters in a model, because of the limited number of models in our study that do this and because the models differ in a range of other aspects.
study
99.94
The second insight concerns the choice of noise modelling. Despite the fact that the SNR at b=0 and TE=152 ms falls to about 6, use of the Rician noise model does not appear to provide a significant benefit in predicting the unseen signal; here, however, we do not investigate the effect on estimated model parameters, which may still benefit from the more accurate noise model. In this challenge, most participants used non‐linear least‐squares or maximum‐likelihood optimization. Additional regularization of the objective function (Eufracio & Rivera/Lasso, Rokem/Elastic Net, Fick/Laplacian) appeared to have little benefit over non‐regularized optimization.
study
100.0
The third observation is about removing signal outliers. Five of the eleven models preprocessed the training data by clearing out outliers, including the top two models. We tried this procedure with two good models that did not use such a procedure, Ferizi1 and Ferizi2, and observed that it did not affect the ranking, though it did improve the prediction error marginally. This is understandable, considering the relatively little weight these apparent outliers have on the total number of measurements (10 points from a 4812‐strong dataset). Additionally, specific strategies for predicting the signal, e.g. bootstrapping or cross‐validation, as used by the top two models of Ramirez‐Manzanares and Nilsson, may also help the model ranking.
study
100.0
Although this challenge provides several new insights into the choice of model and fitting procedure for diffusion‐based quantitative imaging tools, it has a number of limitations that future challenges might be designed to address. One limitation of the study is that we use a very rich acquisition protocol that is not representative of common or clinical acquisition protocols. In particular, we cover a very wide range of b values, and the acquisition protocol we use includes many TEs, unlike many other multi‐shell diffusion datasets that use a fixed TE. As stated in the Introduction, our intention is to sample the measurement space as widely as possible to support the most informative models possible. Varying the TE makes it possible to probe compartment‐specific T 2 (the decay of which Ferizi et al.42 finds to be monoexponential at the voxel level), an investigation that would be impossible with a single TE. However, the good performance of DIAMOND also shows that a model with fixed δ and Δ can still capture the signal variation in multi‐TE datasets and that, while the majority of the full data was ignored in each of the reconstructions, its prediction error compared favourably with other techniques.
study
100.0
We use the unique human Connectome scanner51 to acquire a dataset with gradients of up to 300mT/m, which is not readily available in most current MR machines. However, previous preclinical work by Dyrby et al.13 suggests that high diffusion gradients enrich the signal, which helps model fitting and comparison. Future challenges might be designed that focus on explaining the signal and estimating parameters from data more typical of clinical acquisitions.
study
99.94
Assessing the prediction performance on unseen data as in this challenge is different from assessing the fitting error: it implicitly penalizes models that overfit the data. However, since most of the missing shells lie in between other shells (in terms of b values, TEs, etc.), the quality of signal extrapolation was not assessed. We get a glimpse of this from Figure 4, where the SSE is unevenly distributed between the b values. Here, the shell that bore the largest error is the b=1100 s/mm2 one; see also Figures 6 and 7. Of all ‘unseen’ shells, this shell combines the lowest Δ and highest |G|, placing it on the edge of the range of the measurement space sampled. Such a b‐value shell combines high signal magnitude with high sensitivity, i.e. the gradient of signal against b‐value is highest in this range, which makes it hard to predict. (We stress that this observation is in the context of the wider multi‐shell acquisition, and is not to be seen in isolation for its potential impact on single‐shell acquisition methods.) On the other hand, the variability of prediction errors in the b<750 s/mm2 range could arise from the varying sensitivity of different models to the free water component, which is challenging to estimate as it can easily be confounded with hindered water, or physiological effects, which are mostly observable in this low b‐value range. Future work can take this further, by selecting unseen shells outside the min–max range of experimental parameters. This is likely to penalize more complex models that overfit the data even more strongly.
study
100.0
We did not take into account the computational demand of each model, and this might limit the generalization of the results. Models that use bootstrapping generally have a higher computational burden and may not be feasible for large datasets, e.g. whole brain coverage.
study
72.8
The dataset used in this challenge is specific to one subject who underwent a long‐duration acquisition, which adds to the question of generalizability. The subsequent preprocessing of the data is also a factor to bear in mind: the registration of two 4 h datasets, across such a broad range of echo times, poses its own challenges for certain non‐homogeneous regions of the brain, such as the fornix (compared with, for example, the relatively large genu). Thus the results may be somewhat subject‐specific and may be affected by residual alignment errors.
study
99.94
Another limitation is that we only look at isolated voxels inside the corpus callosum and the fornix. Questions still remain about which models are viable even in the most coherent areas of the brain with the simplest geometry, so we believe our focused challenge on well‐defined areas is an informative first step necessary before extending the idea to the whole of the white matter, which would make for an extremely complex challenge. We note, however, recent work by Ghosh et al.66 that illustrates such an approach with Human Connectome Project (HCP) data.
study
99.94
We focused here on comparing models based on their ability to predict unseen data. Although models that reflect true underlying tissue structure should explain the data well, we cannot infer in general that models that predict unseen data better are mechanistically closer to the tissue than those that do not. As we discuss in the Introduction, the main power of evaluating models in terms of prediction error is to reject models that cannot explain the data. Thus, while the identification of parsimonious models that explain the data certainly has great benefit, further validation is necessary through comparison of the parameters that they estimate with independent measurements, e.g. obtained through microscopy (our challenge makes no attempt to assess the integrity of parameter estimates themselves, but future challenges might use such performance criteria). Models can be evaluated to some extent by sanity checking the realism of their fitted parameter values, as in for example Jelescu et al.64 or Burcaw et al.67 However, obtaining accurate ground‐truth values for quantitative evaluation remains a hard and yet unsolved problem for diffusion MRI in general. In particular, histology can only roughly approximate the in vivo ground truth and introduces its own set of challenges in sample preparation, acquisition and biophysical interpretation.12, 13, 65, 68, 69, 70, 71 This challenge highlights the need for improved model comparison and validation methods.
study
99.9
Challenges such as this have great value in bringing the community together and provide an unbiased comparison of wide‐ranging solutions to key data‐processing problems. They raise new insights and ideas, motivating more directed future studies. The data are publicly available for others to use, with more details of the dataset given on the Challenge website at http://cmic.cs.ucl.ac.uk/wmmchallenge/. On this website, an up‐to‐date ranking of the models will be available, where additional models can be added after the publication of the article and where the community will be able to evaluate further the impact of noise correction, compartment‐specific T 2 estimation, inter‐class model assumptions, e.g. tissue versus signal models, or indeed intra‐class model assumptions, e.g. whether cylinders or sticks are optimal models for the given dataset.42 This will provide an important benchmark for future models and parameter estimation routines.
other
99.6
Diagnosis in life of Alzheimer's disease (AD) and frontotemporal dementia (FTD) is made on clinical grounds, but the currently used criteria are burdened with considerable subjective judgments1,2 and yield an overall accuracy of 81% to 88% in AD cases.3 Given the high prevalence of both diseases and the increasing treatment options,4 simple and sensitive quantitative indicators of both forms of dementia in their early stages would represent valuable clinical tools. Measures of hippocampal atrophy have proven the most sensitive way of differentiating mild to moderate Alzheimer's disease from non-demented elderly. Of these measures, the width of the temporal horn yields the highest sensitivity, predicting the disease in 73% of cases with 95% specificity.5
review
99.9
Cerebral atrophy occurs in almost all types of dementia and is characterized by a loss of global cerebral volume that can be indirectly observed as ventricular and cerebral sulcal enlargement.12 Sensitive imaging-based linear and volumetric measures of atrophy rates have been proposed to track this decline.13-17 Generally, these measures are larger in patients with dementia than in healthy elderly.18
review
99.2
In this study, we aimed to better understand the relationship between the severity of cerebral subcortical atrophy and the type of dementia (FTD and AD), as well as to explore the relationships of age, duration and aggravation of dementia, educational level, daily living activities and cognition with cerebral subcortical atrophy. Finally, we tested the usefulness of a linear measure of atrophy in differentiating AD from FTD.
study
99.94
A total of 44 participants diagnosed with dementia were recruited from the Clinicas Hospital at the Federal University of Goiás Medical School (FM-UFG), Brazil. There were no gender or ethnic restrictions. The study involved 22 men and 22 women, aged 33 to 89 years, with mean age (±SD) of 68.52±12.08 years, with schooling ranging from 1 to 20 years, with mean (±SD) of 7.35±5.54 years and disease duration with a mean (±SD) of 3.66±3.44 years.
study
100.0
The clinical diagnoses were reached by an experienced psychiatrist/neurologist (LC) based on patient history, neuroimaging results and neuropsychological tests. The diagnosis of dementia was based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV).20
other
99.9
Etiology of dementia included patients with Alzheimer's disease (n=21) and frontotemporal dementia (n=23). Diagnosis of FTD was based on the Neary et al. criteria,21 while the diagnosis of probable AD was based on the National Institute of Neurological and Communicative Disorders and Stroke–Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria.22
study
66.56
Magnetic resonance imaging was performed on a 1.5-T MRI unit with a quadrature head coil. T1-weighted sequences were analyzed for this study. On an axial slice of the structural MRI, the bifrontal index (BFI) was measured on a plane parallel to the temporal lobe plane, at the level of the maximum width between the tips of the frontal horns of the lateral ventricles, and defined as the ratio of this width to the diameter of the inner skull table at the same level. The resulting ratio was then multiplied by 100 and expressed as a percentage (Figure 1).15,16,23,24 A graded caliper with a 0.1 mm scale was used for this linear measurement on film copy.
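As a worked illustration of this definition only (the study itself measured the index with a caliper on film copies), the calculation can be written as a small helper; the function name and example values below are hypothetical.

```python
def bifrontal_index(frontal_horn_width_mm, inner_table_diameter_mm):
    """Bifrontal index (BFI): maximum width between the tips of the frontal horns
    divided by the inner-skull-table diameter at the same level, times 100."""
    return 100.0 * frontal_horn_width_mm / inner_table_diameter_mm

# Hypothetical example: a 38 mm inter-horn width over a 120 mm inner table gives a BFI of ~31.7%
print(round(bifrontal_index(38.0, 120.0), 1))
```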
study
100.0
Clinical Dementia Rating (CDR) – Dementia severity was determined by the total score on the Clinical Dementia Rating Scale. The CDR assesses cognitive function in six domains: memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care. Based on the six domain scores, a global CDR score is assigned, in which CDR 0 indicates no dementia, CDR 0.5 very mild dementia, CDR 1 mild dementia, CDR 2 moderate dementia, and CDR 3 severe dementia.19
study
99.7
All 44 individuals were assessed using the BFI, CDR, MMSE and FAQ. In addition, duration of dementia and educational level (in years) were also examined and served as inputs for the survival analysis. The neurological examination was performed during the same period as the clinical imaging. The patients were divided into two groups: one with FTD (n=23) and the other with AD (n=21). The BFI was compared between the two groups for all analyzed variables.
study
100.0
We conducted all statistical analyses using SPSS 12.0 software for Windows. The Mann-Whitney U test was performed to compare the variables between the two patient groups. The analyzed variables were: age, duration of dementia, MMSE score, Pfeffer Functional Activities Questionnaire score, educational level in years, Clinical Dementia Rating Scale score and BFI. We set the confidence level for the statistical tests at 95%.
study
99.94
The Spearman coefficient (rs) was used to assess the correlations between brain atrophy (measured by the BFI) and all other variables. Spearman's coefficient was chosen as the non-parametric alternative because the data were not Gaussian and the relationships were not necessarily linear.
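The analyses described above were run in SPSS; purely as an illustrative sketch, the same two tests could be reproduced with SciPy on hypothetical group data (all variable names and values below are invented for illustration, not the study's data).

```python
import numpy as np
from scipy import stats

# Hypothetical BFI and MMSE values (illustrative only)
bfi_ad  = np.array([31.2, 34.5, 29.8, 33.1, 36.0, 30.4])
bfi_ftd = np.array([32.0, 35.2, 28.9, 34.8, 31.7, 33.5])
mmse_ad = np.array([18, 14, 22, 16, 12, 20])

# Mann-Whitney U test comparing BFI between the AD and FTD groups
u_stat, p_u = stats.mannwhitneyu(bfi_ad, bfi_ftd, alternative="two-sided")

# Spearman correlation between BFI and MMSE within one group
rs, p_rs = stats.spearmanr(bfi_ad, mmse_ad)

print(f"Mann-Whitney U={u_stat:.1f}, p={p_u:.3f}")
print(f"Spearman rs={rs:.2f}, p={p_rs:.3f}")
```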
study
99.94
Table 1 shows the means (including standard deviation and confidence interval) of all the clinical features along with BFI for AD and FTD groups. Both patient groups were closely matched for age, duration of dementia, MMSE scores, Pfeffer Functional Activities Questionnaire (FAQ) scores and educational level. There was no significant difference in BFI between groups.
study
100.0
No significant difference between groups (p>0.05). MMSE, Mini-Mental State Examination; BFI, Bifrontal Index; EPSs, Extrapyramidal Signs; FAQ, Pfeffer Functional Activities Questionnaire; M, Mean; SD, Standard Deviation; CI, Confidence Interval; Z, standard normal deviation.
other
95.94
In the FTD group, only the MMSE score showed a strong correlation with BFI (Table 2). The AD group also showed a significant correlation between MMSE score and BFI, but weaker than that observed in the FTD group. There was a significant correlation (p<0.05) between BFI and Pfeffer Functional Activities Questionnaire (FAQ) scores in the AD group only.
study
100.0
Indirect measures of subcortical atrophy, such as the BFI, Bicaudate Index and Ventricle-Brain Ratio, have been used by many researchers to evaluate structural brain damage in patients with dementia. Both linear and volumetric measurements are probably more reliable than those made postmortem, when ventricles are usually smaller than the same ventricles before death.13-16,29,30
study
99.25
AD and FTD can be difficult to differentiate clinically because of overlapping symptoms. Distinguishing the two dementias based on volumetric measurements of brain atrophy with MRI has been only partially successful.9 Our study did not demonstrate BFI differences between AD and FTD groups.
study
100.0
Age was not correlated with rates of BFI in either group across all analyses performed. This finding is consistent with the results reported by Brinkman et al.33 in the study of quantitative indexes of computed tomography in 28 patients with Alzheimer’s dementia and 30 elderly persons. Nevertheless, other authors34 have shown that age-related increases in BFI most probably reflect losses in adjacent brain structures including the caudate nuclei in normal aging.
study
99.94
Concerning the analysis of cognitive performance, measured by the MMSE, there was a negative correlation with BFI in both patient groups, particularly in the FTD group (p<0.001). This finding is in line with previous reports in the literature showing that distinct types of cerebral changes predict impaired performance on specific cognitive tests.35-37 Soderlund et al.35 also observed that subcortical atrophy, estimated by means of ventricular enlargement, was associated with cognitive deficits. Nevertheless, the measures used in the cited study were the BFI, the Caudate Ventricular Index and the Occipital Ventricular Index, and the average of the three indexes was used to calculate a global score. Furthermore, the 1254 participants had MMSE scores above 24 and were non-demented individuals.
study
100.0
A small number of studies have focused on the relationship between activities of daily living and linear brain measures in dementia patients, but only in vascular dementia and normal aging.35,38 In our study, performance in activities of daily living decreased with increased subcortical atrophy only in the AD group. One possible explanation is that FTD patients present a reduced capacity to perform daily tasks from the early stages of the disease (unlike in AD),39 when BFI values are still low.
study
99.94
We found no correlation between duration of symptoms and the linear measurement of subcortical atrophy. This may be expected, because the duration of dementia is only an estimate. To our knowledge, no previous study has reported an association between duration of dementia and subcortical atrophy measured by the BFI.
study
100.0
We have also demonstrated that subcortical atrophy is not correlated with educational level. This could possibly be explained by the fact that participants had a large discrepancy in terms of years of education. Clinical pathological studies are necessary to clarify the association between subcortical atrophy and progression of dementia.
study
99.94
Studies including only one brain variable can be misleading, because the putative association may be due to a correlated brain change; moreover, cerebral atrophy is only an indirect measure of pathological processes occurring at the cellular level. In addition, the BFI is a non-specific finding that can result from brain injury or degeneration and that also occurs in normal ageing, although many disease processes result in distinctive patterns of atrophy due to differential involvement of specific areas of the brain.
review
99.25
In conclusion, a linear measurement of subcortical atrophy such as BFI probably is not useful for providing a differential diagnosis between AD and FTD. Furthermore, cognitive function (in both FTD and AD groups) and capacity for independent living (only in AD group) decreased with increased subcortical atrophy. Our findings also revealed that age, duration of dementia and educational level do not significantly correlate with degree of cerebral atrophy.
study
100.0
Graphene is a two-dimensional honeycomb lattice of carbon atoms which was first systematically isolated by mechanical exfoliation from bulk graphite in 2004, and is a material that presents many interesting properties. For example, the envelope function equation of graphene formally coincides with the relativistic wave equation of massless spin-1/2 particles (the Dirac–Weyl equation). This has made it possible to experimentally observe relativistic phenomena such as Klein tunneling, Zitterbewegung, and the anomalous quantum Hall effect, at the low energies that are typical of condensed matter physics. Moreover, graphene is a one-atom-thick material that is light and flexible but with a large mechanical strength; it has very high electrical and thermal mobilities, and is transparent and impermeable. Many applications have been proposed for it, some of which are rapidly finding their way to the market.
other
99.2
From the point of view of electronic devices, the thinness, two-dimensional geometry, and high mobility of graphene have suggested its application as the conduction channel of scaled transistors. However, the absence of an energy gap in two-dimensional monolayer graphene has hampered the fabrication of graphene-based MOS (metal-oxide-semiconductor) transistors with an ION/IOFF ratio sufficiently high to allow their use in digital circuits (the ION/IOFF ratio is the ratio between the current flowing through the device in its ON state and in its OFF state). The ambipolar behavior characteristic of pristine graphene represents a further drawback, since the possibility of using complementary devices would be desirable. To date, these limitations have restricted the possible applications of graphene transistors to analog electronics, particularly in the radiofrequency field. Meanwhile, for the realization of graphene-based digital devices, alternative operating principles such as that of the tunnel field-effect transistor have been proposed. However, many efforts have been focused on the improvement of the switching properties of graphene-based transistors. Even though the possibility of surpassing or even approaching the commutation properties of traditional transistors currently seems out of reach, the availability of well-operating digital graphene field-effect transistors (FETs) would allow us to benefit from the excellent properties of this material. For example, it would make it possible to fabricate flexible, transparent, and low-cost all-graphene electronic circuits, high-speed devices, or devices in which the mechanical, thermal, or optical properties of graphene could also be exploited. Several methods have been proposed to induce an energy gap in graphene, such as transversal confinement, doping, strain, the introduction of an antidot lattice, functionalization, or the application of an electric field orthogonally to bilayer graphene.
review
99.9
A common substitutional dopant for graphene (and, more generally, carbon materials) is boron, an element of the III chemical group (and thus with only one valence electron less than carbon) that has a size comparable to carbon and presents a quite low ionization energy if substituted into the graphene lattice. In particular, in Reference we have shown that the introduction of boron atoms in substitutional positions breaks the ambipolar behavior of graphene, thus giving rise to asymmetric device trans-characteristics, opens a mobility gap (i.e., an energy region with very low conductance), and increases the ION/IOFF ratio. While these effects become more pronounced when increasing the doping concentration, the values of the ION/IOFF ratio obtained in the devices examined in Reference are still unsatisfactorily low. Here, starting from those results, we examine the possibility of improving the ION/IOFF ratio by increasing the length of the device channel. The analysis is performed by means of an atomistic simulation code in which the Poisson equation for the device electrostatics and a tight-binding-based non-equilibrium Green’s function (NEGF) transport calculation are self-consistently solved. The substitutional boron doping is mimicked through a proper distribution of fixed charges. To ensure the successful completion of the simulations, we adopt enhanced convergence schemes with variable convergence thresholds and a recursive scanning of the simulation domain. Our results show that the ION/IOFF ratio, the width of the mobility gap, and the asymmetry of the transport characteristics increase when longer ribbons are considered, due to the cumulative effect of the boron atoms distributed along the ribbon. We demonstrate that this improvement of the switching behavior is intimately related to the different transport regime in the ON and OFF states of the device. We conclude that the adoption of longer ribbons in graphene-based transistors should have a positive effect from this point of view, thus representing a further advancement in the direction of functional devices.
study
100.0
The effect of the boron impurities on transport is strictly related to the details of the (short-range) atomistic potential around the boron dopants. Therefore, an envelope-function approach such as that of References would not be accurate enough, and more computationally demanding atomistic models are needed. On the other hand, a complete ab-initio simulation of the considered device, which includes thousands of atoms, would not be numerically feasible. Therefore, for our simulations we adopted a tight-binding model, which is a simpler approach but preserves atomistic details and is able to correctly reproduce ab-initio results. In our tight-binding scheme we have included only the 2pz atomic orbitals of the carbon and boron atoms, which give rise to the delocalized π molecular orbitals and thus mainly determine the transport properties. In particular, we have used the NanoTCAD ViDES simulation code, which self-consistently solves the transport and electrostatic equations in the device.
study
100.0
Regarding the transport side of the simulation, the Schrödinger equation with open boundary conditions is solved within the Green's function formalism. In more detail, the Green's function is obtained as

$$G(E) = \left[E I - H - \Sigma_S - \Sigma_D\right]^{-1} \tag{1}$$

where $E$ is the energy of the charge carriers, $I$ the identity matrix, and $H$ is the nearest-neighbor tight-binding Hamiltonian matrix including only the 2pz orbitals. $\Sigma_S$ and $\Sigma_D$ are the self-energy matrices of the source and drain contacts, which enforce boundary conditions on the Schrödinger equation corresponding to the presence of Schottky contacts at the two ends. For the self-energies, the method described in Reference is adopted, which mimics the effect of real metal contacts.
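A schematic NumPy illustration of Equations (1) and (4) for a toy Hamiltonian is given below; it is not the NanoTCAD ViDES implementation, and the two-site Hamiltonian and contact self-energies are hypothetical stand-ins.

```python
import numpy as np

def transmission(E, H, sigma_s, sigma_d, eta=1e-6):
    """Transmission at energy E from the retarded Green's function
    G = [(E + i*eta) I - H - Sigma_S - Sigma_D]^(-1)   (Equation 1)
    and T = -Tr[(Sigma_S - Sigma_S^+) G (Sigma_D - Sigma_D^+) G^+]   (Equation 4)."""
    n = H.shape[0]
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_s - sigma_d)
    gamma_s_like = sigma_s - sigma_s.conj().T   # equals -i * Gamma_S
    gamma_d_like = sigma_d - sigma_d.conj().T   # equals -i * Gamma_D
    return float(np.real(-np.trace(gamma_s_like @ G @ gamma_d_like @ G.conj().T)))

# Toy two-site chain with purely imaginary (wide-band-like) contact self-energies
H = np.array([[0.0, -2.7], [-2.7, 0.0]], dtype=complex)
sigma_s = np.diag([-0.5j, 0.0])
sigma_d = np.diag([0.0, -0.5j])
print(transmission(0.0, H, sigma_s, sigma_d))
```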
study
100.0
In order to determine the device electrostatics, the Poisson equation is solved (using the Newton–Raphson numerical technique) on a tridimensional domain including the device. We consider Neumann conditions (corresponding to constant—and in particular zero—electric field) at the boundaries of the solution domain, and Dirichlet conditions (constant potential) at the gates. In particular, the Poisson equation

$$\nabla\cdot\left[\epsilon(\vec{r})\,\nabla\phi(\vec{r})\right] = -e\left[p(\vec{r}) - n(\vec{r})\right] - \rho_{\mathrm{fix}}(\vec{r}) \tag{2}$$

is solved over a rectilinear grid defined over the tridimensional domain. In this equation, $\epsilon$ is the dielectric constant, $\phi$ is the electrostatic potential, $e$ is the elementary charge, $p$ and $n$ are the hole and electron concentrations, $\rho_{\mathrm{fix}}$ is the density of fixed charge, and $\vec{r}$ is the position vector.
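The actual solver works on a 3-D rectilinear grid with Newton–Raphson iterations; purely as a simplified illustration of the discretized form of Equation (2), a 1-D linear finite-difference version could look as follows (the grid, charge distribution, and boundary values are hypothetical, and rho collects the free and fixed charge terms on the right-hand side).

```python
import numpy as np

def solve_poisson_1d(eps, rho, phi_left, phi_right, dx):
    """Finite-difference solve of d/dx[eps(x) dphi/dx] = -rho(x) on a 1-D grid
    with Dirichlet boundary values (a 1-D stand-in for Equation 2)."""
    n = rho.size
    A = np.zeros((n, n))
    b = -rho * dx**2
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet rows: phi fixed at the two ends
    b[0], b[-1] = phi_left, phi_right
    for i in range(1, n - 1):
        eps_m = 0.5 * (eps[i - 1] + eps[i])    # interface permittivities
        eps_p = 0.5 * (eps[i] + eps[i + 1])
        A[i, i - 1], A[i, i], A[i, i + 1] = eps_m, -(eps_m + eps_p), eps_p
    return np.linalg.solve(A, b)

# Hypothetical example: uniform permittivity, a sheet of negative charge in the middle
x = np.linspace(0.0, 1.0, 101)
eps = np.ones_like(x)
rho = np.zeros_like(x); rho[50] = -1.0
phi = solve_poisson_1d(eps, rho, 0.0, 0.0, dx=x[1] - x[0])
```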
study
99.94
The transport and electrostatic equations are mutually connected. The Schrödinger equation depends on the potential energy ϕ(r→) (i.e., on the solution of the Poisson equation), while the Poisson equation depends on the electron and hole concentrations, which can be obtained from the Green’s function of the system (i.e., from the solution of the transport calculation). To perform the simulation, we start from an initial guess potential, we perform the NEGF calculation, and then we solve the Poisson equation with the electron and hole concentrations obtained from the NEGF procedure. The new potential obtained from the Poisson equation is then passed to the NEGF module, and so on recursively until the norm of the difference between the potentials obtained in two successive cycles is less than a given threshold. In our approach, the free charge around each atom is uniformly spread in the cell of the rectilinear grid that contains the atom.
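The self-consistent loop described above can be sketched in schematic Python; this is an outline only, not the actual ViDES solver, and `solve_negf` and `solve_poisson` are placeholders for the transport and electrostatic modules.

```python
import numpy as np

def self_consistent_solution(phi_guess, solve_negf, solve_poisson, tol=1e-3, max_iter=200):
    """Alternate NEGF transport and Poisson electrostatics until the potential converges.

    solve_negf(phi)     -> (n, p): electron/hole concentrations for a given potential
    solve_poisson(n, p) -> phi:    potential for the given free-charge concentrations
    """
    phi = phi_guess
    for _ in range(max_iter):
        n, p = solve_negf(phi)            # transport step: charge from the Green's function
        phi_new = solve_poisson(n, p)     # electrostatic step: Newton-Raphson Poisson solve
        if np.linalg.norm(phi_new - phi) < tol:
            return phi_new
        phi = phi_new
    raise RuntimeError("self-consistent loop did not converge")
```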
study
100.0
The current flowing through the device can be computed from the converged Green's function through the Landauer equation:

$$I = \frac{2e}{h}\int_{-\infty}^{+\infty} T(E)\left[f(E-E_{FS}) - f(E-E_{FD})\right]dE \tag{3}$$

Here, $h$ is Planck's constant, $f$ is the Fermi–Dirac occupation function, $E_{FS}$ and $E_{FD}$ are the Fermi energies of the source and drain, respectively, and the transmission coefficient $T(E)$ is given by

$$T(E) = -\mathrm{Trace}\left[\left(\Sigma_S-\Sigma_S^{\dagger}\right)G\left(\Sigma_D-\Sigma_D^{\dagger}\right)G^{\dagger}\right] \tag{4}$$
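Given a transmission spectrum T(E), Equation (3) can be evaluated numerically; the sketch below uses a simple trapezoidal quadrature, and the energy grid, temperature, and Fermi levels in the example are hypothetical.

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # C
H_PLANCK = 6.62607015e-34   # J s
K_B_T = 0.0259              # eV, room temperature

def fermi(E, mu):
    """Fermi-Dirac occupation for energies and chemical potential in eV."""
    return 1.0 / (1.0 + np.exp((E - mu) / K_B_T))

def landauer_current(energies_eV, T_of_E, mu_source_eV, mu_drain_eV):
    """Landauer current, Equation (3): I = (2e/h) * integral of T(E) [f_S - f_D] dE."""
    integrand = T_of_E * (fermi(energies_eV, mu_source_eV) - fermi(energies_eV, mu_drain_eV))
    integral_eV = np.trapz(integrand, energies_eV)                  # integral in eV
    return 2.0 * E_CHARGE / H_PLANCK * integral_eV * E_CHARGE       # eV -> J; result in A

# Toy example: constant transmission T=1 over a 0.1 V bias window (about 7.7 microamperes)
energies = np.linspace(-1.0, 1.0, 2001)
print(landauer_current(energies, np.ones_like(energies), 0.05, -0.05))
```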
study
99.94
The tight-binding parameters and the fixed charge distribution of our model have been calibrated on ab-initio simulations of a simplified structure consisting of a graphene ribbon with a single substitutional boron atom at different positions along the ribbon transverse section. In Reference, we reported results obtained from ab-initio density functional theory (DFT) simulations, performed with the SIESTA code with a local density approximation and a double-ζ basis set. From these simulations, it emerged that the quasi-bound states localized around the boron atom introduce resonant backscattering—and thus a conduction reduction—in the hole branch of the transmission spectrum. In the devices we simulated, this effect dominates over the electrostatic effect of the impurity. The energy of the conductance dip depends on the position of the boron impurity with respect to the ribbon edges. From a purely electrostatic point of view, the boron atom, being an electron acceptor, introduces a repulsive potential for electrons; this potential profile was also extracted from the ab-initio simulation. We compared the transmission spectra and the potential profiles obtained from the DFT calculations with those obtained from the previously described self-consistent tight-binding calculation performed with the ViDES code. A very good agreement was obtained by using the tight-binding parameters typical of undoped graphene and a particular choice of fixed charges. In detail, all the onsite energies were assumed equal to zero. The transfer integral between nearest-neighbor atoms was always taken equal to tp = −2.7 eV, except between atoms belonging to the edge dimers of armchair ribbons. In this last case, the transfer integral was set to 1.12 tp, in order to account for the reduced interatomic distance at the edges and to correctly reproduce the energy gap of the ribbon. Regarding the distribution of fixed charges $\rho_{\mathrm{fix}}(\vec{r})$, the best fit was obtained by considering a fixed charge equal to zero at each carbon atom and equal to −e at each boron atom. This can be physically explained by observing that the sum of the charge of the nucleus and of all the electrons that are not in the 2pz orbitals is equal to +e for carbon atoms and to 0 for boron atoms. The total charge is the sum of this charge and of the charge of the electrons in the π orbitals (which derive from the 2pz atomic orbitals, i.e., the only orbitals considered in the tight-binding calculation). Let us call LDOS the local density of π states per unit area and energy, $E_F$ the Fermi energy and $E_i$ the midgap energy. Then, the total charge contained in an area $S$ of the ribbon (containing $N_C$ carbon atoms and $N_B$ boron atoms) is given by

$$+eN_C - e\int_S dS\int_{-\infty}^{+\infty} dE\,\mathrm{LDOS}(E)\,f(E-E_F) = +eN_C - e\int_S dS\int_{-\infty}^{E_i} dE\,\mathrm{LDOS}(E)\,f(E-E_F) - e\int_S dS\int_{E_i}^{+\infty} dE\,\mathrm{LDOS}(E)\,f(E-E_F). \tag{5}$$

We can sum and subtract from Equation (5) the charge of the electrons that would fill all the states in the π valence bands; that is,

$$-e\int_S dS\int_{-\infty}^{E_i} dE\,\mathrm{LDOS}(E). \tag{6}$$

The total number of states in the π valence bands is half the total number of π states, which is $2(N_C+N_B)$ because each atom contributes two 2pz states with different spin. Therefore, the value of Equation (6) is $-e(N_C+N_B)$. Summing and subtracting this charge from Equation (5), we obtain that the total charge in $S$ is given by

$$+eN_C - e(N_C+N_B) + e\int_S dS\int_{-\infty}^{E_i} dE\,\mathrm{LDOS}(E)\left[1-f(E-E_F)\right] - e\int_S dS\int_{E_i}^{+\infty} dE\,\mathrm{LDOS}(E)\,f(E-E_F) = -eN_B + e\int_S dS\,(p-n). \tag{7}$$

Therefore, the fixed charge to consider in Equation (2) is given by a charge equal to zero at each carbon atom and equal to −e at each boron atom.
study
100.0
In particular, we have simulated a double gate FET (sketched in Figure 1). Its channel is an armchair graphene ribbon with width W= 3.81 nm (corresponding to 32 dimer lines) and length L variable between 10 nm and 70 nm. The channel is substitutionally doped with randomly located boron atoms, and connects the source and drain contacts. Two 1-nm-thick silicon oxide layers, located above and under the ribbon, separate it from the top gate and the bottom gate. The two gates are kept at the same potential and bias the transport channel. In our simulations, performed at room temperature, we applied a voltage VDS= 0.1 V between the source and the drain, and we computed the current ID flowing through the channel as a function of the voltage VGS applied between the gates and the source. In this way, we obtained the transfer characteristic ID(VGS) of the device.
study
100.0
Let us first consider the case of the undoped ribbon. When the gate voltage coincides with the average between the contact voltages (i.e., VGS=VDS/2), the energy of the charge carriers falls in the transition energy range between the valence and the conduction bands, where the density of states is minimum, and therefore the current is minimum. Instead, when increasing (decreasing) the gate voltage, the graphene band structure shifts towards lower (higher) energies. As a consequence, the energy of the charge carriers falls in the range of the conduction (valence) bands, where the electron (hole) concentration is large, and thus the current ID increases. This gives rise to a typical ambipolar transport behavior. In particular, in the case of the undoped ribbon, the transfer characteristic is symmetric with respect to VGS=VDS/2, as an effect of the symmetry of the dispersion relations with respect to the midgap energy.
study
100.0
In the presence of doping, the cumulative backscattering effect of the boron atoms breaks this symmetry and a unipolar behavior appears, together with a mobility gap. In our numerical simulations, we have considered several realizations of the random dopant spatial distribution. These have been used as a statistical ensemble for the extraction of average transistor characteristics.
study
97.44
Figure 2 shows the results reported in Reference for a device based on a 20-nm-long graphene ribbon with a boron concentration equal to 0.3% and 0.6%, compared with those achieved for the undoped graphene ribbon. The boron concentration was defined as the ratio between the number of substitutional boron atoms and the total number of atoms within the channel. Numerical convergence problems precluded the simulation of ribbons with boron percentages larger than 0.6%. For each boron concentration, the dashed thin lines represent the characteristics obtained for the different random distributions of dopants, while the thick solid lines correspond to their average. When increasing the boron concentration, the hole and electron branches of the ID(VGS) characteristic become more asymmetric, the voltage range for which we have low conduction through the device (and thus the mobility gap) widens, and the ION/IOFF ratio increases.
study
100.0
However, the values of the ION/IOFF ratio obtained with the 20-nm-long ribbon are still largely insufficient for a well-operating digital transistor. Indeed, a device with good switching properties should be characterized by a current flow in the OFF state much smaller than in the ON state. From this point of view, the adoption of longer doped graphene ribbons could improve the device behavior, due to the combined effect of a larger number of dopants. Therefore, our present analysis has focused on the dependence of the transport behavior on the length L of the boron-doped ribbon.
study
100.0
From the numerical point of view, the convergence of the self-consistent calculation is quite a difficult task, due to the presence of localized charges. This is particularly challenging for longer ribbons, which contain a larger number of dopants. To solve this problem, we adopted an enhanced convergence scheme with respect to Reference, based on a progressive approach to the final characteristic. In detail, we chose the starting potential profile for the self-consistent calculations in the following way. For the gate potential at which convergence was found to be easiest (in our simulations, VGS=0.5 V), we used the flat-band potential as the initial guess. Then, the potential profile obtained at the end of that self-consistent simulation was used as the guess potential profile in the calculation performed for the next gate potential value considered. We repeated this procedure, scanning the entire range of considered gate potentials a few times, alternately moving towards increasing and decreasing values of VGS. At the same time, the error threshold adopted for the termination of the self-consistent calculations was progressively reduced to increase the accuracy of the solution. Finally, we considered that convergence was achieved once the difference between the ID(VGS) transfer characteristics obtained from successive scans of the domain became negligible.
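An illustrative outline of such a progressive sweep is sketched below; it is simplified with respect to the scheme above (for example, it does not explicitly start from VGS=0.5 V or test the agreement between successive scans), and `solve_device` is a placeholder for the full self-consistent simulation at one bias point.

```python
def sweep_transfer_characteristic(vgs_values, solve_device, flat_band_phi,
                                  thresholds=(1e-2, 1e-3, 1e-4)):
    """Scan the gate-voltage range back and forth, reusing each converged potential
    as the initial guess for the next bias point, while tightening the threshold."""
    currents = {}
    phi = flat_band_phi                       # flat-band guess for the first bias point
    for scan_index, tol in enumerate(thresholds):
        # alternate the sweep direction at each pass over the VGS range
        ordered = list(vgs_values) if scan_index % 2 == 0 else list(vgs_values)[::-1]
        for vgs in ordered:
            phi, i_d = solve_device(vgs, phi_guess=phi, tol=tol)   # self-consistent solve
            currents[vgs] = i_d
        # in practice, one would also stop early once successive scans agree
    return [currents[v] for v in vgs_values]
```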
study
100.0
We investigated 3.81-nm-wide ribbons with a 0.6% boron concentration (i.e., the highest percentage studied in the case of the 20-nm-long ribbon). We considered ribbons with lengths 10, 20, 30, 40, 50, 60, and 70 nm. For each length, we analyzed several different realizations of the boron dopant spatial distribution (20, 23, 24, 19, 18, 9, and 9 distribution realizations, respectively, for the seven considered lengths). In Figure 3, for each of the seven ribbon lengths, we report the sets of characteristics obtained for the different doping realizations (the dispersion of these curves is due to the low number of dopants). With a thicker line, we also plot the mean behavior achieved averaging, for each VGS, the current obtained for the different boron distributions. These average behaviors are directly compared in Figure 4. In Figure 5, the black dots represent the values of the ION/IOFF ratio (obtained from these seven curves dividing the values of ID in the regions of high and low conduction) as a function of the ribbon length.
study
100.0
We observe that the increase of the graphene channel length has the positive effect of enhancing the ION/IOFF ratio. Moreover, it widens the OFF region of the transistor (i.e., the range of gate voltages for which the current flowing through the device is low). This can be explained by the fact that, for a given concentration of boron atoms, longer channels contain a larger number of dopants. The combined scattering action of the dopants on the charge carriers flowing through the device gives rise to the observed positive effect. At the same time, we observe an overall decrease of the current profile. However, this detrimental effect could be compensated by operating several transistors in parallel.
study
100.0
In order to better analyze our numerical results, Figure 6a reports the behavior of the current as a function of the length L for the four values of the voltage VGS identified with vertical lines in Figure 4. In Figure 6b, we show the corresponding values of the channel resistance R=VDS/ID as a function of the length L. When the device is in its OFF state, the resistance R increases exponentially with the length L, thus suggesting that the device is working in a strong localization transport regime. Instead, when moving towards the ON regime, the resistance dependence on the channel length becomes linear. The transport in the ON state is therefore diffusive. Only for considerably larger channel lengths is the transport regime in the ON state expected to become localized, with an exponential suppression of the current on a length scale much larger than that in the OFF state. In further detail, let us take the behaviors for VGS=−0.075 V and VGS=0.75 V (where, approximately, the minimum and the maximum current regimes are reached in all cases) as representative of the OFF state and of the ON state, respectively. Then, a good fit of the dependence of the device resistance on the channel length is given by the relations

$$R_{\mathrm{OFF}} = 11.5\,\mathrm{k\Omega}\times\exp\!\left(\frac{L}{15.4\,\mathrm{nm}}\right) \tag{8}$$

in the OFF regime, and

$$R_{\mathrm{ON}} = 7.8\,\mathrm{k\Omega} + 0.615\,\frac{\mathrm{k\Omega}}{\mathrm{nm}}\,L \tag{9}$$

in the ON regime. Since R=VDS/ID and VDS is the same in the two regimes, the ION/IOFF ratio should be reasonably well fitted by the relation

$$\frac{I_{\mathrm{ON}}}{I_{\mathrm{OFF}}} = \frac{R_{\mathrm{OFF}}}{R_{\mathrm{ON}}} = \frac{1.474\times\exp\!\left(\dfrac{L}{15.4\,\mathrm{nm}}\right)}{1+\dfrac{L}{12.683\,\mathrm{nm}}} \tag{10}$$
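As a sketch of how fits of the form of Equations (8)-(10) could be obtained from simulated (L, R) points, one could use SciPy curve fitting; the data arrays below are placeholders generated from the fitted relations themselves, standing in for the actual simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder (L, R) samples in nm and kOhm, stand-ins for the simulated values
L_nm = np.array([10., 20., 30., 40., 50., 60., 70.])
R_off_kohm = 11.5 * np.exp(L_nm / 15.4)      # OFF state: exponential (localized transport)
R_on_kohm = 7.8 + 0.615 * L_nm               # ON state: linear (diffusive transport)

r_off_model = lambda L, a, xi: a * np.exp(L / xi)
r_on_model = lambda L, r0, rho: r0 + rho * L

(a, xi), _ = curve_fit(r_off_model, L_nm, R_off_kohm, p0=(10.0, 15.0))
(r0, rho), _ = curve_fit(r_on_model, L_nm, R_on_kohm)

def ion_ioff_ratio(L):
    """Equation (10): R_OFF / R_ON with the fitted parameters."""
    return (a / r0) * np.exp(L / xi) / (1.0 + (rho / r0) * L)

print(ion_ioff_ratio(100.0))   # extrapolation to a hypothetical 100-nm channel
```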
study
100.0
Equation (10) allows us to extrapolate some considerations for longer ribbons, for which a complete numerical treatment would be too computationally challenging. Indeed, following Equation (10), further increasing the channel length L should lead to a nearly-exponential enhancement of the ION/IOFF ratio. Therefore, the adoption of longer graphene ribbons with a sufficiently high boron doping concentration could be usefully exploited in order to achieve graphene-based devices with an improved switching behavior and asymmetric electron–hole transport characteristics.
study
99.94
We numerically analyzed the transport characteristics, as a function of the channel length, of MOS transistors based on graphene ribbons randomly doped with substitutional boron atoms. The study was carried out through self-consistent quantum simulations based on a tight-binding atomistic model, validated through comparison with ab-initio DFT calculations. The numerical complexity of the calculations, increasing with the length of ribbons, required the adoption of enhanced convergence schemes. From the results of our simulations, performed for ribbon lengths ranging from 10 nm to 70 nm, we were able to observe a strongly localized transport in the OFF operating region and a diffusive transport in the ON operating region. This made it possible to extrapolate an analytical, nearly exponential relationship between the ION/IOFF ratio and the channel length. In addition, by increasing the length of the channel we observed an enhanced electron–hole asymmetry in the ID(VGS) characteristics and a widening of the voltage range corresponding to the mobility gap. The results of our simulations suggest that graphene-based transistors should strongly benefit—from the point of view of both the ION/IOFF ratio and the device unipolarity—from an increase of the length of the boron-doped channel. Therefore, this solution could be exploited to improve the switching performances of graphene-based devices.
study
100.0
It is known that hypertrophy of adipose tissue, fibrosis in the skin and subcutaneous tissue, and degeneration of the lymphatic vessels are present in lymphedema.1-5 After lymphadenectomy or radiotherapy, lymphatic flow is dammed up and accumulation of lymphatic fluid in the distal lymphatic vessels occurs.6 In turn, this causes an increase in lymphatic inner pressure.7 However, it is still unknown how injury of the lymphatic vessels progresses.
study
99.7
In this case report, we present a case wherein we observed a disruption of lymphatic vessels in indocyanine green (ICG) lymphography and found a suddenly shrunken lymphatic vessel intraoperatively. This may be helpful when considering the mechanism of injury progression in the lymphatic vessels or when determining the operation site of lymphatic venous anastomosis (LVA).
clinical case
99.94
A 77-year-old woman underwent right mastectomy and axillary lymph node dissection accompanied by radiotherapy and chemotherapy for right breast cancer. She noticed swelling in the right upper limb 22 years after the surgery and consulted our hospital (Fig 1).
clinical case
100.0
We performed ICG lymphography.8,9 Briefly, we injected 0.1 mL of ICG (0.5% Diagnogreen; Daiichi Pharmaceutical, Tokyo, Japan) subcutaneously into the second interdigital space of the right hand and the palmar side of the wrist, and observed the limb with an infrared camera system (Photodynamic Eye; Hamamatsu Photonics, Hamamatsu, Japan). We observed dermal backflow, which indicated lymphatic stasis in the whole right upper limb, and diagnosed our patient with lymphedema.10 Lymphoscintigraphy showed dermal backflow in the same area as ICG lymphography. She started wearing an elastic sleeve (20 mm Hg), and improvement in lymphedema symptoms was observed. As there was still stiffness in the right upper limb, which meant that there was still lymphatic stasis, we decided to perform LVA 5 months after the first consultation.
clinical case
99.94
In the preoperative ICG lymphography performed to mark the location of the lymphatic vessels (preoperative day 1), we observed a linear pattern, indicating good lymphatic function, on the medial side of the right forearm, which suddenly stopped in the middle of the forearm (Video). Although we massaged the arm, hoping the ICG would move ahead, the disrupted line never advanced (Fig 2). The finding was unchanged on the next day, more than 17 hours after the injection. We decided to make a skin incision for LVA at the distal part of the ICG disruption.
clinical case
99.94
Findings of preoperative ICG lymphography. Although a normal linear pattern could be seen on the radial side of the forearm at first, it suddenly stopped at the middle of the forearm. We observed progressive accumulation of ICG in the lymphatic vessel and in the collateral lymphatics. ICG indicates indocyanine green.
clinical case
99.8
At that site, we intraoperatively observed dilated lymphatic vessels that appeared to have good lymphatic function. However, when we dissected the lymphatic vessel proximally, we found that it was suddenly shrunken (Fig 3). Although we waited for a while, considering the possibility of lymphatic spasm, it remained in the same condition. We concluded that this suddenly shrunken part of the lymphatic vessel was the cause of the ICG disruption and that lymphatic flow was blocked at this point; hence, we decided to perform LVA at the distal part of this lymphatic vessel. We also performed 4 additional LVAs using 2 surgical microscopes with 2 surgeons (H.H. and M.M.). The conditions of the other lymphatic vessels were as follows: normal type for 1 lymphatic vessel; ectasis type for 2; contraction type for 1; and sclerosis type for 1. The operation time was 2 hours 10 minutes, and the amount of bleeding was minimal.
clinical case
99.94
There were no intra- or postoperative problems. The patient resumed wearing a compression sleeve 1 week after the operation, as she had preoperatively. She did not undergo manual lymph drainage. The patient's right upper limb became softer, and she was satisfied with the result 3 months after the operation (Fig 4). The average circumference change at the 5 measurement points in the right upper limb was −1.26 cm (range, −2.3 to −0.3 cm), and the decrease was statistically significant by the Student t test (P = .042); an illustrative calculation with this kind of paired data is sketched below.
clinical case
99.94
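As an illustration of the paired comparison reported in the excerpt above, the following minimal sketch runs a two-sided paired Student t test with SciPy on circumference values at the 5 measurement points; the numbers are hypothetical placeholders, not the patient's data.

```python
# Minimal sketch of a paired pre/post circumference comparison; values are hypothetical.
from scipy import stats

pre  = [28.5, 27.0, 25.4, 23.8, 20.1]   # preoperative circumferences (cm) at 5 points
post = [27.2, 25.0, 24.3, 22.5, 19.8]   # circumferences 3 months after LVA (cm)

diff = [b - a for a, b in zip(pre, post)]
mean_change = sum(diff) / len(diff)

t_stat, p_value = stats.ttest_rel(post, pre)   # two-sided paired Student t test
print(f"mean change = {mean_change:.2f} cm, t = {t_stat:.2f}, P = {p_value:.3f}")
```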
Recently, degeneration of the lymphatic vessels in the extremities with lymphedema has been reported.2,3 Ogata et al11 reported transgenesis of smooth muscle cells. In extremities with lymphedema, the lymphatic vessels are dilated by abnormally increased inner pressure in the early phase. Subsequently, the tunica media thickens owing to proliferation of smooth muscle cells and an increase in collagenous fibers, finally leading to complete occlusion.3 These changes do not occur in all of the lymphatic vessels simultaneously. The condition of the lymphatic vessels differs between regions, and sometimes different types of lymphatic vessels can be seen in 1 incision site. It is unclear whether lymphatic degeneration occurs gradually or suddenly. In addition, the reason why different types of lymphatic vessels appear in 1 region remains unknown. LVA is said to be the most effective when a lymphatic vessel of the ectasis type is anastomosed.8 Clarifying the mechanism of lymphatic degeneration and the location of lymphatic vessels of the ectasis type therefore seems essential for improving the efficacy of LVA.
study
68.8
In the current case, we chose the incision site according to the findings of ICG lymphography, considering that there might be some occlusive mechanism at the point of ICG disruption. As expected, we found a suddenly shrunken proximal part and a dilated distal part of the lymphatic vessel. This finding suggests that performing LVA on the distal side of the disruption seen on ICG lymphography may lead to good results.
clinical case
99.8
Some hypothesis is needed to explain the mechanism of lymphatic failure in the present case. In usual cases, ICG flowing in the lymphatic vessels can move ahead with the support of manual lymph drainage even if the lymphatic vessel has pump failure. In the present case, the ICG did not proceed despite our manual lymph drainage. The lymphatic vessel found at that point intraoperatively, on the day after ICG lymphography, was sclerotic, without an apparent lumen, and the vessel was dilated on the distal side. Therefore, we believe that there must have been some local occlusive mechanism in the lymphatic vessel. This mechanism lasted for at least about 12 hours, from the time of ICG lymphography to the operation, so it is unlikely to have been a temporary lymphatic spasm.
clinical case
99.75
This is only a case report, and accumulation of similar cases is needed. In addition, we did not perform intraoperative ICG lymphography in the present case; it is hoped that intraoperative ICG lymphography findings will help elucidate the mechanism of lymphatic injury in the future.
clinical case
99.94
Aspergillus fumigatus is currently the most important airborne fungal pathogen and the leading cause of invasive aspergillosis (~85% of cases). Despite the introduction of new effective antifungal drugs, the mortality rate of invasive aspergillosis still exceeds 40%. Furthermore, the emergence of drug resistance adds more complications to the treatment process. Hence, new antifungal agents are urgently needed.
review
97.1
While the classical cell-based screening method has been used successfully in the drug discovery process, it is an expensive, time-consuming and labor-intensive approach. Recent advances in the application of computational methods to genome, proteome, and metabolome mining, coupled with the complete genome sequences of pathogenic organisms, have opened an alternative path toward identifying new drug targets. For instance, several reports have confirmed the usefulness of comparative genome analysis in the identification of pathogen-specific drug and vaccine candidates.
review
99.8
The genome of A. fumigatus contains ~9900 genes, but few antifungal targets have been developed so far. In the present study, we compared the whole proteome of A. fumigatus with that of Saccharomyces cerevisiae, a non-pathogenic fungal organism, to identify Aspergillus-specific proteins. Among these proteins, those with close human homologues were removed, and the final targets were selected. One novel target, the Aspergillus fumigatus RNA-binding protein (AfuRBP), was selected and found to be a peptidylprolyl isomerase (PPI). For further functional studies, AfuRbp knockout and knockdown mutant strains were generated. MIC assays with the known inhibitors phenylglyoxal and juglone, and with a synthetic derivative of juglone, confirmed the higher sensitivity of the mutants to these compounds compared with the wild-type strain.
study
100.0
The identification of novel drug targets against A. fumigatus, as a human pathogen, was based on alignment of its proteome with the proteome of S. cerevisiae. The proteomes of these two fungi were downloaded from NCBI (https://www.ncbi.nlm.nih.gov/genome/browse/). Three different approaches were used to identify and confirm sequences that are present in A. fumigatus but absent from, or significantly divergent from, those of S. cerevisiae. LAST (http://last.cbrc.jp) was used as a fast sequence alignment tool to identify sequences unique to A. fumigatus; LAST uses variable-length (spaced) seeds realized by a suffix array. An expected value of 10 was chosen (-e84) to obtain higher sensitivity (-m1000000), and the other settings and parameters were left at their defaults. In the next step, the unique sequences from A. fumigatus were compared using suffix tree analysis with MUMmer (version 3.0) (http://mummer.sourceforge.net). The minimum maximal match length was set to seven to restrict the output proteins, and the other settings were left at their defaults. Finally, the proteins that were unique to fungi were identified by BlastP (NCBI BLAST v. 2.2.1); a simplified sketch of this filtering step is given below. Modeling was carried out with the I-TASSER server (University of Michigan, USA). Docking of the compounds was performed with HEX 6.3 software when required.
study
100.0
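The sketch below is not the authors' pipeline; it only illustrates the uniqueness-filtering step described above, under the assumption that the A. fumigatus vs S. cerevisiae protein comparison has been exported in the standard 12-column BLAST tabular format (e.g. -outfmt 6 in BLAST+). The file names and the E-value cutoff are hypothetical.

```python
# Minimal sketch: flag A. fumigatus proteins with no significant S. cerevisiae hit,
# starting from a BLAST tabular report. File names and cutoff are assumptions.
EVALUE_CUTOFF = 1e-5

def fasta_ids(path):
    """Collect sequence IDs from a FASTA file (first token after '>')."""
    with open(path) as fh:
        return {line[1:].split()[0] for line in fh if line.startswith(">")}

def queries_with_hits(path, cutoff):
    """Collect query IDs having at least one hit with E-value <= cutoff."""
    hits = set()
    with open(path) as fh:
        for line in fh:
            if not line.strip():
                continue
            fields = line.rstrip("\n").split("\t")
            query, evalue = fields[0], float(fields[10])  # columns 1 and 11 of the tabular format
            if evalue <= cutoff:
                hits.add(query)
    return hits

all_proteins = fasta_ids("A_fumigatus_proteome.faa")                      # hypothetical file
conserved = queries_with_hits("afu_vs_scerevisiae.blast.tab", EVALUE_CUTOFF)  # hypothetical file
unique_to_afu = sorted(all_proteins - conserved)
print(f"{len(unique_to_afu)} proteins without a significant S. cerevisiae hit")
```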
A. fumigatus akuAKU80 and its pyrG- derivative (akuAKU80 pyrG-), which have a highly efficient homologous recombination background, were used for isolation of the AfuRbp gene homologue and for the gene disruption and gene silencing experiments. E. coli Top10 cells (Invitrogen, USA) and the pGEM-T Easy cloning system (Promega, USA) were used in all recombinant DNA procedures. Plasmid pGEM-GlaA, comprising the Aspergillus niger glucoamylase A (glaA) promoter and the glaA termination signal, was used for preparation of the RNAi plasmid. All molecular methods, including PCR and RT-PCR, were performed according to established protocols. In the RT-PCR experiments, 1 µg of total RNA was used in the cDNA synthesis reactions, and the A. fumigatus actin gene (AFUA_6G04740) was used as a loading control. Fungal strains were grown and maintained on SAB agar or SAB agar medium supplemented with uridine and uracil. Modified Vogel's medium was used for isolation of fungal transformants.
study
100.0
For construction of the AfuRbp deletion cassette, a sequential cloning strategy was applied as described before. Briefly, 1.5 kb of the 5' and 1.2 kb of the 3' flanking regions of the gene were amplified separately, using the RBP_KO1/RBP_KO2 (containing SphI/NcoI restriction sites) and RBP_KO3/RBP_KO4 (containing EcoRI/SalI restriction sites) primer sets, respectively (Table 1). These fragments were then cloned together into the pGEM-T Easy vector. In the final step, the A. fumigatus pyrG gene, used as a selection marker, together with its own promoter and terminator and flanked by EcoRI sites at both ends, was cloned into the EcoRI site between the 5' and 3' flanking regions, resulting in pRBP-KO (Fig. 1A).
study
100.0
Gene constructs and molecular validations. Schematic representation of the constructed plasmids used in the gene deletion (pRBP-KO) (A) and RNAi (pRBP_RNAi) (B) experiments. (C) PCR analysis of the wild-type (lane 1) and deletant (lane 2) genomes using primers RBP-F1 and RBP-R1. The amplification of a 4.6-kb product confirmed the replacement of native AfuRbp with the disrupted fragment. (D) RT-PCR analysis of the RNAi transformant grown in maltodextrin or glucose medium for 24 h. Lane 1, actin fragment amplified from the genome (560 bp); lane 2, actin fragment amplified from cDNA derived from glucose cultures; lane 3, from maltodextrin cultures (485 bp); lane 4, AfuRbp expression level in the RNAi transformant grown in glucose medium; lane 5, in maltodextrin medium. The AfuRBP-specific RT-PCR primers, RBP_rt1/rt2, amplified a 300-bp product. M, DNA size marker
study
100.0
To generate the AfuRbp silencing cassette, an inducible RNAi transcriptional unit was designed based on a previously described protocol[19, 20]. This unit was driven by the glaA promoter and comprises an approximately 500-bp Rbp fragment (sense, nucleotides 342-841), followed by a 100-bp EGFP buffer fragment and the same ~500-bp Rbp fragment in the opposite direction (antisense). The sense and antisense fragments were amplified by PCR using primers RBP_SENSE_F and RBP_SENSE_R carrying appropriate restriction sites (Table 1) and cloned into the pGEM-GlaA vector. A 100-bp EGFP fragment was then amplified by PCR using specific primers (Table 1) and cloned between the two fragments to generate the final silencing cassette (pRBP_RNAi) (Fig. 1B).
study
100.0
The prepared constructs were used to transform the A. fumigatus KU80ΔpyrG strain as described before. Positive transformants were selected on Vogel's minimal medium lacking uracil/uridine supplements. To identify knockout strains, transformants were screened by PCR using primers RBP-F and RBP-R (Table 1). As the silencing construct pRBP_RNAi did not contain a fungal selection marker, the pRG3-AMA1 vector carrying the pyrG selection marker was used as a second plasmid in the co-transformation reaction. RNAi transformants were also selected based on PCR using gfp- and AfuRbp-specific primers.
study
100.0
MICs were determined based on the CLSI broth microdilution method (document M38-A2) with some modifications. Briefly, fungal spores (10^4/well) were inoculated into 96-well microtiter plates containing RPMI 1640 medium enriched with 2% glucose (or maltodextrin), and susceptibility was assessed over a final compound concentration range of 0-200 µg/ml after 48 h; an illustrative readout of such a plate is sketched below. All MIC assays were performed in triplicate. Based on the orthology and EC number (EC 5.2.1.8) of AFUA_2G07650 (AfuRBP), two organic inhibitors of PPI activity, i.e. phenylglyoxal and juglone (5-hydroxy-1,4-naphthoquinone), were found by searching the BRENDA enzyme database (www.brenda-enzymes.info) and tested (Fig. 2). A synthetic derivative of juglone, 5-O-acetoxy-1,4-naphthoquinone (juglone acetate, JA), was also prepared using a previously published method and assessed in the MIC tests. This derivative was prepared to make the compound more lipophilic.
study
100.0
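The following minimal sketch only illustrates how an MIC endpoint could be read from a two-fold dilution series such as the one described above; the CLSI endpoint for moulds is determined visually, and the concentrations, OD values, blank, and threshold used here are hypothetical placeholders.

```python
# Minimal sketch of reading an MIC from a microdilution series; all values are hypothetical.
concentrations = [200, 100, 50, 25, 12.5, 6.25, 3.125, 0]           # µg/ml, two-fold series
od_600        = [0.04, 0.05, 0.06, 0.09, 0.35, 0.62, 0.80, 0.85]    # growth readout after 48 h

def mic(concs, ods, blank=0.05, threshold=0.10):
    """Return the lowest concentration whose blank-corrected OD stays below the growth threshold."""
    inhibited = [c for c, od in zip(concs, ods) if c > 0 and od - blank < threshold]
    return min(inhibited) if inhibited else None   # None -> MIC above the tested range

print(f"MIC = {mic(concentrations, od_600)} µg/ml")
```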
For bioinformatics assessment of the chemical inhibitors used in the MIC assays, the 3D structure of AfuRBP was predicted by I-TASSER, and the predicted binding sites were subsequently used for docking of the inhibitors. HEX 6.3, ArgusLab (v.4.0.1, http://www.arguslab.com/arguslab.com/Publications.html), and WebLab Viewer Lite (v. 4.2) were used for docking the inhibitors and manipulating the protein structure.
study
99.94
Based on the LAST alignment of the A. fumigatus proteome (9630 proteins) against that of S. cerevisiae, 474 proteins were recognized as unique to A. fumigatus, as homologues of these proteins were not present in S. cerevisiae. All 474 proteins were further examined in the KEGG, NCBI, ExPASy, and EBI databases. The functions of 161 of these proteins had already been described and their accession numbers are available; the remaining 313 proteins were found to be hypothetical (data not shown). Finally, 50 of the 161 protein candidates were chosen because they were fungal-specific and absent from S. cerevisiae (Table 2). Among them, the RBP was selected as a potential drug target for further evaluation. This protein is encoded by an 1158-bp gene located on chromosome 2 and has been annotated as a PPI.
study
100.0
The SphI/SalI pRBP-KO fragment was used for transformation of A. fumigatus (akuAKU80 pyrG-) protoplasts. Through PCR screening of the transformants, a deletion strain was isolated and examined further. In this transformant, the RBP-F1 and RBP-R1 primers amplified a fragment of ~4.6 kb (KO) instead of the 3.8-kb wild-type fragment, confirming the replacement of the native gene with the disrupted one (Fig. 1C). RT-PCR analysis of the deletant revealed that AfuRbp is not expressed in this mutant (data not shown). The growth phenotype of this mutant in various media containing different carbon and nitrogen sources was similar to that of the parental strain, indicating that AfuRbp is not essential in A. fumigatus. In PCR screening of the RNAi transformants, one positive integrant was selected. This transformant was grown in glaA-inducing medium containing 1% maltodextrin as the sole carbon source. Semi-quantitative RT-PCR analysis using primers RBP-rt1 and RBP-rt2 showed down-regulation of AfuRbp expression in the presence of maltodextrin (Fig. 1D). Phenotypic analysis of this transformant in the inducing medium re-confirmed the non-essential role of AfuRbp in fungal growth.
study
100.0
To test whether the deletion or down-regulation of AfuRbp has any effect on the drug susceptibility of the fungus, MIC levels were determined for phenylglyoxal, juglone, and JA. Table 3 shows the MIC values for these compounds. The results demonstrated a higher sensitivity of all strains to juglone compared with phenylglyoxal and JA. Although phenylglyoxal did not appear to be a strong antifungal inhibitor, the mutant strains showed greater sensitivity to this chemical. MIC assays for the RNAi transformant were performed in maltodextrin medium. The higher sensitivity of the RNAi transformant in maltodextrin medium compared with glucose medium indicated that the down-regulation of AfuRbp can affect the strain's susceptibility to PPI inhibitors.
study
100.0
As we could not find any template protein with at least 35% similarity to our target protein, the threading algorithm of the I-TASSER server was used for RBP modeling. Five models were generated; according to the I-TASSER validation criteria, model number 1 had the lowest estimated error and fitted best within the X-ray/NMR 3D population criteria (Fig. 3). In this study, the five top models proposed by I-TASSER were used as receptors in the docking experiments, with phenylglyoxal, juglone, and JA as ligands. The docking results showed that the interaction of model number 4 with JA produced the most favorable energy (-186.3), the highest absolute value among the docking results. The other docking outcomes, sorted by energy, are shown in Table 4; a minimal ranking sketch is given below.
study
100.0
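As a small illustration of how docking results like those in Table 4 can be ranked by interaction energy, the sketch below sorts (receptor, ligand, energy) tuples. Only the model-4/JA energy (-186.3) comes from the text; the remaining values are hypothetical placeholders.

```python
# Minimal sketch: rank receptor-ligand docking results by interaction energy (most negative first).
results = [
    ("model 4", "JA",            -186.3),   # value quoted in the text
    ("model 1", "juglone",       -160.0),   # hypothetical
    ("model 2", "phenylglyoxal", -120.0),   # hypothetical
]

ranked = sorted(results, key=lambda r: r[2])   # ascending energy = descending absolute value
for receptor, ligand, energy in ranked:
    print(f"{receptor:8s} + {ligand:14s} E = {energy:7.1f}")
```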
The synthetic compound (JA), juglone, and phenylglyoxal were used as ligands, and the five predicted models (1-5) were used as receptors. The position of the docked ligand, the binding sites, and the corresponding energy values are indicated in the table. Docking was performed with HEX 6.3 software.
study
99.9
Because of the common eukaryotic origin of fungi and mammals, the development of new antifungal drugs has been challenging. This may be the reason why only five classes of antifungal agents have been introduced, namely fluorinated pyrimidine analogs, polyenes, allylamines, azoles, and echinocandins. Here, we have tried to find novel drug targets against A. fumigatus using bioinformatics. Our alignment identified a group of proteins/genes (Table 2) that are fungal-specific and may be essential to A. fumigatus. Our approach differs from other comparative-alignment studies, such as that of Abadio and colleagues, in that we searched for novel targets that exist only in A. fumigatus. In this study, we found an RNA-binding protein (AfuRBP) that has been annotated as a member of the PPI enzyme family. The PPI family comprises three subfamilies: cyclophilins, FKBPs, and parvulins. Cyclophilins are characterized by an eight-stranded β-barrel that forms a hydrophobic pocket. FKBPs, instead, contain an amphipathic, five-stranded β-sheet that wraps around a single, short α-helix[25, 26]. Members of the parvulin family contain a PPI domain consisting of a half β-barrel whose four antiparallel strands are surrounded by four α-helices. The results of I-TASSER modeling and docking demonstrated that AfuRBP is structurally closest to the parvulin subfamily of the PPI enzyme family. To elucidate the function of this protein, AfuRbp-deficient strains were generated using gene deletion and RNAi strategies. The normal phenotype of these strains indicated a non-essential role for this protein in the growth physiology of Aspergillus fumigatus. Some studies have demonstrated that the inactivation of a drug target gene can result in increased sensitivity of the organism to the drug or to other compounds that inhibit the same pathway[27, 28]. In this sense, the sensitivity of the AfuRbp-disrupted strains to PPI inhibitors was assessed.
study
99.94
Juglone showed the strongest inhibitory effect on both the mutant and wild-type strains. In addition, we used JA as an inhibitor to investigate the effect of modifying the hydroxyl group of juglone in order to increase its hydrophobicity. The decision to produce JA was partly based on the predicted activity of this derivative in the docking scores, which showed a higher absolute interaction energy than juglone. Overall, the MIC tests of JA gave different results for the mutant and wild-type strains. Although docking suggested a high potential for JA, the MIC tests indicated a slight decrease in activity, which might be due to lipophilic binding of JA to other cell components.
study
100.0