The exact factorization formalism is an overarching strategy in quantum mechanics that is closely related to probability theory. It allows one to recast a given Schrödinger equation into two coupled equations by writing the wavefunction as the product of a marginal and a conditional amplitude. Initially proposed as a way to tackle the full electron-nuclear problem beyond the Born-Oppenheimer approximation in a static setting, the exact factorization was later extended to the time-dependent case, proving considerably successful as a trajectory-based method for ab initio molecular dynamics.
The application of this formalism in a purely electronic framework, sometimes referred to as exact electron factorization to distinguish it from its electro-nuclear counterpart, sits well within the scope of density-functional theory (DFT), both in its pure Hohenberg-Kohn version and in its more popular Kohn-Sham (KS) one. In this context, the exact factorization has been adopted to gain physical insight into the effective potentials that map the many-body (electronic) problem onto a one-body problem. There are stark features of the Kohn-Sham potential, as well as of the effective potential in orbital-free DFT, in the form of peaks and steps, that are extremely hard to model but have been shown to be paramount for the proper description of phenomena such as molecular dissociation or Mott-Hubbard transitions. Such features have been extensively studied in simple systems for which accurate or exact solutions can be calculated. They have been attributed to specific terms of the total potential, named the kinetic potential, v_kin, and the N−1 potential, v_{N−1}, that were identified using the exact factorization and are responsible for the peak and step features, respectively. However, these local potentials have been defined and studied only for real wavefunctions. This restriction can be imposed with no loss of generality for ground states, but it is limiting in the study of excited states, where complex wavefunctions may occur. For complex wavefunctions, extra terms, in addition to the well-studied local potentials, are expected to appear, notably a vector potential.
Recently, there has been considerable interest in excited states and in the different flavours of DFT that can describe them. While time-dependent DFT (TDDFT) is still a very popular method, most calculations are performed using an adiabatic approximation, in which the instantaneous density is fed into a ground-state functional. Adiabatic functionals are incapable of reproducing the dynamical peaks and steps of the exact time-dependent KS potential, which in turn are essential to describe atoms and molecules driven far from their ground state. Time-independent density-functional theories have also been developed to deal with excited states. One prominent approach is ensemble density-functional theory (EDFT), based on an ensemble of states giving an averaged density rather than a pure-state density. While EDFT has experienced substantial recent developments crucial for its progress, it suffers from drawbacks. Notably, when addressing a high-lying excited state, it typically necessitates the inclusion of all lower-lying states in the ensemble. Additionally, modeling the weight dependence of the exchange-correlation functional poses challenges. More recently, orbital-optimized DFT, a time-independent framework in which the state of interest is selected by converging the KS self-consistent procedure to saddle points, has demonstrated relative success in computing certain kinds of excitations, including those that are typically challenging for TDDFT. In principle, functional approximations specific to the state of interest should be used; however, such functionals have not been developed yet (to the best of our knowledge), and therefore most applications of DFT to excited states are performed using ground-state functionals. On this basis, the development of state-specific functionals appears to be an area of DFT where progress would be particularly useful.
The exact electron factorization formalism can be applied to ground and excited states alike and naturally distinguishes components of the effective potential that depend on slowly-varying features of the quantum state, which are typically easier to model, from those that are instead responsible for peaks and steps, which are important to generate a correlated density. Therefore, it seems a privileged standpoint from which to devise approximations that can model such stark features for ground as well as for excited states. Nonetheless, for the latter, it is important to consider not only the scalar potentials generating peaks and steps but also the role of the aforementioned vector potential. This work offers a first study in this direction by showcasing a simple example of the electronic vector potential associated with a complex current-carrying state.
In the time-dependent electro-nuclear formulation of the exact factorization, approximations to the "effective potentials" of the theory, usually referred to as the time-dependent potential energy surface and the time-dependent vector potential, were developed for molecules after assessing their behavior on prototypical case studies for which the exact numerical solution of the problem was accessible. Following the successful strategy of the electro-nuclear formulation, in this paper we first reconstruct the time-independent exact electron factorization formalism in a general case manifesting a non-vanishing electronic vector potential, and we then analyze such a vector potential on a prototypical case study. Adopting an exactly-solvable two-electron non-interacting problem, we aim to characterize the most relevant features of the electronic vector potential, in particular demonstrating its relation to the complex-valuedness of the electronic wavefunction and showing that it cannot be gauged away (and thus must be accounted for in an approximate treatment of the problem).
The paper is organized as follows. Section II introduces the definitions of the local scalar and vector potentials stemming from the exact factorization in the case of a complex wavefunction and discusses how the vector potential is related to the current density. Section III illustrates the conditional amplitude and the electronic vector potential in the adopted model, while Section IV discusses the related geometric phase. Finally, Section V gives some conclusions.
On the right-hand side (RHS) of Eqs. ( ) and (4), the subscript 1 on the coordinates of the reference electron is dropped, i.e. σ_1 = σ and r_1 = r, a notation that will be kept for the rest of the paper. The functions χ_m and Φ_m in Eq. ( ) are called the marginal and conditional amplitude, respectively, and are constrained by the so-called partial normalization condition (PNC):
where the Dirac brackets ⟨. . . | . . .⟩_r stand for ∫ dσ d2 ⋯ dN, i.e. integration over all the coordinates of the system but r. The PNC ensures that the probability of finding the N−1 electrons anywhere in space with spin either up or down (including the spin of the reference one) is 1, regardless of where the reference electron is placed.
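In formulas, the PNC described here can be sketched as follows, using the bracket notation defined above:

```latex
% Partial normalization condition: for every position r of the reference
% electron, the conditional amplitude is normalized over all other coordinates.
\langle \Phi_m | \Phi_m \rangle_{\mathbf{r}}
  = \sum_{\sigma} \int \mathrm{d}2 \cdots \mathrm{d}N \,
    \bigl| \Phi_m(\sigma, 2, \ldots, N; \mathbf{r}) \bigr|^2
  = 1 \qquad \forall \, \mathbf{r} .
```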
The PNC together with Eq. ( ) determines only the modulus squared of the marginal amplitude, i.e. |χ_m|² = ρ_m/N, but leaves an r-dependent phase-factor freedom in χ_m(r) = √(ρ_m(r)/N) e^{iS(r)} and Φ_m = Ψ e^{−iS(r)}/√(ρ_m/N).
Making a choice for S(r) is equivalent to setting the gauge of the electronic vector potential that will be introduced later. The gauge choice of Eq. ( ) is not only in line with past literature but, most importantly, an extremely convenient one, as will be evident in the following sections.
To obtain an effective equation for the marginal amplitude, χ, let us apply Eqs. ( ) and (4) to Eq. ( ), then left-multiply by Φ*_m and integrate over all variables but r. The nabla operator acting on the factorized wavefunction of Eq. ( ) makes extra terms appear compared to the usual (Schrödinger-equation-like) expression. Note that, from the PNC, it follows that
Under the assumption that the conditional amplitude collapses to the corresponding ion when the distance of the reference electron from the bulk of the system is large enough, i.e. Φ(σ, 2, . . . , N; r) → Ψ_{N−1}(2, . . . , N) for |r| → ∞, shifting ṽ_{N−1} by the energy of the corresponding ion, E_{N−1}, gauges it to vanish asymptotically. However, note that the validity of such an assumption is observed to break down in the ground-state case in the presence of nodal planes, which in turn are expected to occur more frequently for higher-energy states. Definitions ( )-( ) of v_kin, v_cond, v_{N−1} correspond to the usual definitions of these potentials when the conditional amplitude is purely real, Φ(σ, 2, . . . , N; r) = R(σ, 2, . . . , N; r), and are extended in this paper to the complex case, for which θ(1, . . . , N) ≠ 0, 2π, . . ..
This form is perhaps more familiar and leads one to identify A(r) as a magnetic-like vector potential. Indeed, the equation for the marginal amplitude is formally equivalent to a Schrödinger equation for a one-body quantum system (such as the hydrogen atom) subject to an external magnetic field. However, A(r), along with the scalar potential in parentheses on the RHS of Eq. ( ), encodes the effect of the conditional amplitude and, thus, of the remaining N−1 electrons on the reference one.
To dig deeper into the meaning of the gauge choice in the exact electron factorization, we take a few steps back and derive the equation for the marginal amplitude before imposing any gauge: the phase factor in the marginal amplitude and in the conditional amplitude can be anything, not just zero. We call it S′(r). To distinguish these different marginal and conditional amplitudes from the χ and Φ that we have considered so far, we will label them χ′ and Φ′. Because the electronic vector potential depends on the conditional amplitude, we shall also consider a primed electronic vector potential A′. In formulas, we are considering the following relations
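These relations can be sketched as follows; this is a reconstruction consistent with the factorization Ψ = χΦ = χ′Φ′ and with A(r) = ⟨Φ| −i∇_r Φ⟩_r, where the sign convention for S′ is our assumption:

```latex
% Gauge freedom: a position-dependent phase S'(r) can be moved between the
% marginal and the conditional amplitude, shifting the vector potential by
% the gradient of S'.
\chi'(\mathbf{r}) = \chi(\mathbf{r}) \, e^{i S'(\mathbf{r})} , \qquad
\Phi' = \Phi \, e^{-i S'(\mathbf{r})} , \qquad
\mathbf{A}'(\mathbf{r}) = \mathbf{A}(\mathbf{r}) - \nabla S'(\mathbf{r}) .
```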
In the above equation, v_G is gauge-invariant by construction, v is the given external potential of the original Schrödinger equation, while v_cond is gauge-invariant because its definition involves a multiplicative potential, which is therefore blind to any phase factor, as is visible already in Eq. ( ). Finally, the potential v_{N−1} defined as
For some applications, it may be useful to work with the gauge-invariant potential v_G. However, this does not remove the gauge dependence of the N−1 potential or of the electronic vector potential. Thus, if one's interest is to design approximations for these local scalar and vector potentials, making a sensible choice for their gauge and adhering to it across different approximations is of utmost importance. Since it is undeniably convenient to work with a marginal which is simply the square root of the electron density, we believe that our definitions, Eqs. ( )-( ), are bona fide definitions in which the choice of a particular gauge does not restrict their applicability but rather enhances it. Further advantages of the chosen gauge are discussed in the next section.
In a DFT setting, the theoretical framework which considers the electronic current density as a basic variable was introduced in the '80s and is now known as current-density-functional theory. In general, an external vector potential A_ext induces a current density and allows one to define the mechanical momentum operator π_r = −i∇_r + A_ext(r). The induced current density, j_ψ(r), carried by an eigenstate ψ of the Hamiltonian under study is
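A standard expression consistent with this construction, sketched here for a one-body state ψ (the many-body case integrates over the remaining coordinates), is:

```latex
% Current density of a state psi in the presence of an external vector
% potential, written with the mechanical momentum pi_r = -i grad + A_ext;
% the two terms are the paramagnetic and diamagnetic contributions.
\mathbf{j}_\psi(\mathbf{r})
  = \mathrm{Re}\!\left[ \psi^*(\mathbf{r}) \, \boldsymbol{\pi}_{\mathbf{r}} \, \psi(\mathbf{r}) \right]
  = \underbrace{\mathrm{Im}\!\left[ \psi^*(\mathbf{r}) \nabla \psi(\mathbf{r}) \right]}_{\text{paramagnetic}}
  + \underbrace{|\psi(\mathbf{r})|^2 \, \mathbf{A}_{\mathrm{ext}}(\mathbf{r})}_{\text{diamagnetic}} .
```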
This straightforward relation is a remarkable advantage of the chosen gauge, as it allows one to directly relate the effective vector potential of the "reduced" system, described by the marginal amplitude, to a physical property of the full (many-body) system. Furthermore, by substituting Eq. ( ) into Eq. ( ), one obtains
(Equation ( ) is also a consequence of the chosen gauge.) On the other hand, focusing now on the equation for the marginal wavefunction [Eqs. ( ) or ( )], containing the effective vector potential A, the paramagnetic term is zero because of the choice of gauge, which makes χ real, while the diamagnetic term is non-zero and equal to
An additional observation on the vector potential that follows from the above discussion concerns its dependence on the gauge. As we have discussed, the electronic vector potential can be transformed as in Eq. ( ) and is, thus, not an observable quantity (as in classical electromagnetism). However, its curl is a robust, gauge-independent physical observable: the curl of such an effective vector potential is related to the curl of the (physical) paramagnetic current of the system. Therefore, in Sec. III, we find it interesting to analyze the curl of the vector potential.
We have seen in Sec. II A that the original many-body Schrödinger equation ( ) can be reduced to a one-body equation involving effective vector and scalar potentials [Eq. ( )], whose solution is the marginal amplitude, χ. However, the scalar and vector potentials appearing in Eq. ( ) are functionals of the conditional amplitude, Φ. It seems then appropriate to look at the equation satisfied by the conditional amplitude. By simply substituting Eqs. ( ) and (4) into Eq. ( ), after some rearrangements, one gets:
on the RHS depends on the parameter r and is sometimes referred to as the effective potential, v_eff. Clearly, Eq. ( ) can be shifted by any constant, and usually v_eff is defined so as to vanish at large |r| (if possible) by adding the constant shift −E_{N−1}. The effective potential in this case equals the sum of the local scalar potentials in Eq. ( ), i.e.
Equations ( ) and ( ) are interesting in that they bring to light a local relationship between the modulus and the phase of the conditional amplitude. In particular, in the equation for the modulus [Eq. ( )], it is interesting to notice that the square of the gradient of the phase of the conditional amplitude acts as an extra scalar-potential term in the N-electron Hamiltonian, while in the equation for the phase [Eq. ( )] there is a compensation between the Laplacian of the phase and the "coupling" between the latter and the modulus of the conditional wavefunction. This can also be seen directly from the total wavefunction, expressed in polar representation with its modulus R_Ψ and phase θ, as done in Eq. ( ) for the conditional amplitude, i.e.:
We consider a triplet state whose total spatial wavefunction is written as an antisymmetrized product of two hydrogenic functions ψ_nlm: a real one with n = 1, l = 0 and m = 0 (the 1s orbital) and a complex one with n = 2, l = 1 and m = 1:
The modulus squared (times r₂² sin ϑ₂) of the conditional amplitude [Eq. (37b)] is plotted in Fig. as a function of the coordinates of the remaining electron, r₂ and ϑ₂, with ϕ₂ fixed at π rad. The reference electron is placed at polar angle ϑ = π/2 and azimuthal angle ϕ = π, while the radial distance is varied from r = 0.1 bohr in the left panel to 20.0 bohr in the right panel, passing through r = 2.5 bohr in the center panel. The modulus squared of the conditional amplitude represents the probability density of finding the remaining electron in a certain region of space, depending on where the reference electron is placed.
As visible in Fig. , when the reference electron is in the region dominated by ψ_100 (r = 0.1 bohr), the probability density of finding the remaining electron essentially mirrors the shape of the ψ_211 function (left panel); vice versa, when the reference electron is in the region dominated by ψ_211, the conditional probability amplitude resembles the ψ_100 function (right panel). For intermediate distances, the conditional amplitude is a mixture of the two functions (center panel). In other words, the behaviour of the (modulus squared of the) conditional amplitude reflects the common understanding that the remaining electron "occupies" one or the other orbital according to where the reference electron is placed, and clearly exemplifies the so-called Fermi correlation. This is the type of correlation that stems from the indistinguishability of electrons, and it manifests itself in the spatial part of the wavefunction for systems with parallel spins.
Having the basic quantities in analytical form, it is possible to calculate the electronic vector potential explicitly. Because the full electronic wavefunction and, therefore, the conditional amplitude are complex only through their dependence on the azimuthal angle ϕ, the radial and polar components of the vector potential, A_r and A_ϑ, are exactly zero and the only non-zero component is the azimuthal one:
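A plausible reconstruction of this component, under the assumption (which holds in our gauge) that A equals the paramagnetic current density divided by the density, is:

```latex
% Azimuthal component of the electronic vector potential for the
% 1s x 2p_{+1} triplet state: the current stems entirely from the
% e^{i phi} factor of psi_{211}.
A_\varphi(r, \vartheta)
  = \frac{j_\varphi(\mathbf{r})}{\rho(\mathbf{r})}
  = \frac{1}{r \sin\vartheta} \,
    \frac{|\psi_{211}(\mathbf{r})|^2}{|\psi_{100}(\mathbf{r})|^2 + |\psi_{211}(\mathbf{r})|^2} ,
```

which indeed depends only on r and ϑ, as discussed below.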
In Fig. , we schematically represent the vector potential in Cartesian coordinates as a vector field that "rotates" around the z-axis. The magnitude of the vector potential appears to increase with the distance from the nucleus (a proton, in Fig. , since Z = 1), which is in agreement with the interpretation of A as the current density divided by the density in our gauge. Moreover, since A_ϕ depends only on the radial distance r and the polar angle ϑ, the left-hand side of Eq. ( ) is itself zero, implying that, in this case, not only is the divergence of the current density zero, but the electronic vector potential (or the related current density) is orthogonal to the gradient of the density.
Since under a gauge transformation the vector potential transforms by the gradient of a scalar function, the curl of the vector potential is a gauge-independent quantity. We have shown here, for the particular case of two noninteracting electrons in an external Coulomb potential, that the vector potential has non-vanishing curl. Therefore, it cannot be gauged away in this situation. Whether this property holds in more realistic cases, for instance in the presence of electron-electron interaction, will be investigated in future work.
The formal analogy between the reduced Schrödinger equation for the square root of the electron density [Eqs. ( ) or ( )] and that used in the electro-nuclear formulation of the exact factorization, together with the appearance of a non-irrotational (i.e. not curl-free) vector potential, suggests the possibility of observing a geometric phase in this purely electronic system.
A local gauge potential such as A(r) and the associated curl, or gauge field, are identified as a connection and a curvature, similarly, but not exactly equivalently, to the Berry connection and the Berry curvature introduced in Ref. for an adiabatic evolution. Therefore, a geometric phase can be defined in relation to our electronic vector potential as the line integral of the connection along a path C, which we consider closed in this case, namely
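In formulas, with ξ denoting the variable(s) on which the connection depends, this line integral reads:

```latex
% Geometric phase as the line integral of the connection A along a
% closed path C in the space of the variable(s) xi.
\gamma(C) = \oint_C \mathbf{A}(\boldsymbol{\xi}) \cdot \mathrm{d}\boldsymbol{\xi} .
```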
where ξ identifies the variable(s) on which the connection depends. If γ(C) depends on the geometry of the path, it is called a geometric phase. Usually, however, the concept of a geometric phase has been discussed in relation to the adiabatic approximation (not that of time-dependent DFT), namely when considering that the Hamiltonian of the system depends on some parameter that varies slowly, i.e. adiabatically. In this case, when an eigenstate of the Hamiltonian is transported along a closed path in the parameter space, the eigenstate can pick up a geometric phase factor in addition to the standard dynamical phase factor that comes from the Schrödinger equation.
Beyond the adiabatic approximation, geometric phases have been discussed as well in the context of the exact factorization, using both a time-independent and a time-dependent formalism, but only in the electro-nuclear formulation. We therefore find it interesting to point out here a property of the exact electron factorization of a complex wavefunction: that of giving rise to a non-vanishing geometric phase in relation to the vector potential of the theory.
Similarly to the time-dependent electro-nuclear case, such a phase is not quantized, and thus it is not topological (at least in the portion of "parameter" space that we analyzed), but it is related to an observable property of the system, namely the curl of the electronic paramagnetic current density (see Section II B on the relation between the vector potential and the current density).
The geometric phase γ_Z(r_cyl, z) in Eq. ( ) is path-dependent: when the radius or the nuclear charge goes to zero, γ vanishes. In turn, at large distances, z → ∞ and/or r_cyl → ∞, or for large nuclear charge, Z → ∞, γ approaches 2π [see Figs. ]. By virtue of Stokes' theorem, the value of γ_Z(r_cyl, z) corresponds to the flux of the effective magnetic field B = ∇ × A [Eq. ( ) and Fig. ], i.e. the curvature, through the surface bounded by C. Thus, it reaches an asymptotic value at large distances because then all the flux is enclosed. Similarly, increasing the nuclear charge corresponds to concentrating all the flux at the origin, leading to a path-independent value. The limit Z → ∞ is vaguely reminiscent of the M → ∞ limit of the nuclear mass(es) in the electro-nuclear problem when invoking the Born-Oppenheimer adiabatic approximation, where the geometric phase is typically π.
The value of 2π that we obtain here is mathematically a consequence of A_ϕ being independent of ϕ. Physically, this means that the quantum system does not acquire a phase shift upon transportation around the z-axis, indicating that the path does not enclose non-analyticities or abrupt changes in the conditional wavefunction and that the limiting behaviour can be reached smoothly. In the case of the Z-dependence, this should not be surprising, since our Hamiltonian is non-interacting and can therefore be trivially rescaled as Ĥ(Z, r₁, r₂) = Z² Ĥ(Z = 1, Z r₁, Z r₂).
In the case of an interacting Hamiltonian with a Z-dependence, such as the He isoelectronic series, increasing the nuclear charge corresponds to gradually turning off the interaction. It seems plausible that approaching the Z → ∞ limit there would inherently change the nature of the quantum system and lead to non-analyticities, in analogy to the M → ∞ limit in the electro-nuclear case. This conjecture needs to be tested and will be the subject of future study.
To conclude, in this work we have discussed the exact factorization formalism for the static Schrödinger equation of a purely electronic system and for a general eigenstate, introducing generalized local potentials v_kin and v_{N−1} which could in principle serve the description of excited states within density-functional theories. Our definitions presume the gauge choice of a real, positive marginal wavefunction, which however has the remarkable advantage that the electronic vector potential A, stemming from the fact that the wavefunction is complex [Eq. ( )], is related in a straightforward way to the current density [Eq. ( )].
Previous work on conditional amplitudes and on the exact electron factorization already discussed the features of various contributions of the total effective scalar potential that are important to retain in the design of approximations in relation to orbital-free or Kohn-Sham density-functional theory. In this work, we focused on the vector potential of the theory, which is essential when describing current-carrying states. Since, so far, not much work has been devoted to the electronic vector potential, we found it interesting to analyze it in a model case of non-interacting electrons, highlighting its behavior and that of its curl (related to an effective magnetic field generated by the conditional amplitude on the marginal wavefunction).
The discovery, design, and bringing to market of a novel small-molecule drug is a very challenging process, and a very expensive one in terms of money, time, and effort. Computer-Assisted Drug Design (CADD) methods can help to improve and refine the identification of hits in the first steps of drug development, thus having a huge positive impact on the costs of the whole process. Traditionally, interactions between ligands and targets have been predicted in CADD through a Quantitative Structure-Activity Relationship (QSAR) approach. In QSAR, a target is fixed and only information from compounds is used for modeling and predicting binding to said target. However, the compartmentalized nature of QSAR does not allow for discovering new cross-interactions between ligands and targets for which no training data is available. Proteochemometric modeling (PCM) is an extension of QSAR which overcomes this drawback by combining information from both ligand and protein descriptors in a supervised prediction model. PCM allows for the integration of different sources of data in one model and for the general prediction of which ligands will bind to which targets. Both PCM and QSAR usually apply machine learning (ML) techniques such as random forests, support vector machines, logistic regression, or partial least squares. Following the trends in other fields and the growing availability of data, deep learning (DL) has also been increasingly and successfully applied to bioactivity prediction, especially in QSAR modeling. The application of DL to PCM followed, taking advantage of public databases and improving the descriptor representation. However, an important issue for PCM and QSAR DL models is the amount and quality of data when compared to other fields of application, since increasing the number of data samples in drug discovery is expensive and thus often infeasible. This poses a problem, since neural networks require a large quantity of training data in order to actually learn.
While in other fields this problem is alleviated through data augmentation, i.e. an artificial increase of the number of observations in the training set to help the model generalize, this regularization technique is not yet commonly used in CADD. Some studies have considered different variants of the SMILES of each molecule as a form of data augmentation but, despite its proven benefits, its use is not widespread yet. This is partly due to the lack of consensus on input representations, where alternatives to SMILES are often used.
Another factor highly affecting QSAR and PCM models is data imbalance, since the class definitions based on bioactivity data can result in highly skewed labels. In this regard, Zakharov et al. explored how data balancing affected self-consistent regression QSAR models using highly imbalanced PubChem bioassays. The study proposed a method including cost-sensitive learning and under-sampling approaches to obtain more accurate predictions. Using the same data, Korkmaz explored how data balancing affected DL-based QSAR models. The study concluded that imbalance indeed has a negative impact on the performance of the models, but that this impact can be alleviated by applying oversampling methods like SMOTE (Synthetic Minority Oversampling Technique) to the fingerprint representations of the molecules. Besides, oversampling methods can also serve the purpose of augmenting the original dataset.
Recently, it has been shown that for the validation of PCM models it is important to control the chemical series bias through clustering techniques in order to obtain more reliable performance estimates. This adds a layer of complexity to the handling of imbalance, since clustering can affect the data balance in PCM. Since Korkmaz and Zakharov et al. did not consider the potential similarity between different compounds when validating their results, its impact on data balancing is yet to be tested.
We evaluated the different balancing models on the benchmark dataset used in DeepAffinity. The original dataset contains binding data from BindingDB, merged with the amino acid sequence information from UniRef and the SMILES representation of compounds from STITCH. The original dataset consisted of IC50, K_i, or K_d values from 829,033 compound-protein pairs. We classified the dataset proteins into the main protein families according to release 2018_09 of UniProt and restricted our study to proteins of the kinase and G protein-coupled receptor families (separately). Binding activities were in logarithmic form, so a threshold of 6 was applied in order to obtain binary labels for classification (active/inactive).
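The thresholding step can be sketched as follows; this is a minimal illustration, assuming the activities are on a negative log scale (e.g. pIC50) and that values at or above the threshold count as active. The function name and the toy values are ours, not from the paper.

```python
def binarize_activity(log_activities, threshold=6.0):
    """Map log-scale bioactivity values to binary labels: 1 = active, 0 = inactive."""
    return [1 if value >= threshold else 0 for value in log_activities]

# Toy pIC50-like values for four compound-protein pairs.
labels = binarize_activity([4.2, 6.0, 7.5, 5.9])  # -> [0, 1, 1, 0]
```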
Each amino acid was represented by a binary vector of length 26. Protein sequences were then normalized to the maximum length of 1499: those sequences shorter than 1499 were zero-padded. Following the recommendation of our previous work, we tuned the padding type and obtained the best results with pre-padding (adding zeros at the beginning of the sequence).
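A minimal sketch of this encoding, assuming a 26-letter alphabet in A-Z order (the actual alphabet ordering is an assumption):

```python
import string

ALPHABET = string.ascii_uppercase              # 26 letters; assumed ordering
AA_INDEX = {aa: i for i, aa in enumerate(ALPHABET)}
MAX_LEN = 1499

def encode_sequence(seq, max_len=MAX_LEN):
    """One-hot encode a protein sequence and pre-pad with zero vectors to max_len."""
    one_hot = []
    for aa in seq[:max_len]:
        vec = [0] * len(ALPHABET)
        vec[AA_INDEX[aa]] = 1
        one_hot.append(vec)
    padding = [[0] * len(ALPHABET) for _ in range(max_len - len(one_hot))]
    return padding + one_hot                   # pre-padding: zeros come first

encoded = encode_sequence("MKV")               # 1496 zero rows, then M, K, V
```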
A splitting strategy based on compound clustering (of both actives and inactives) was applied to the bioactivity data, omitting target information. Clustering-based validation strategies have been used to avoid the compound series bias, making sure that no similar molecules appear across the training, validation, and test sets. We followed the implementation of our previous study on cross-validation strategies in PCM, where K-means clustering with k = 100 was applied to the fingerprint description of the compounds. Data was divided into training, validation, and test sets with a proportion of 80/10/10%. This splitting was randomly performed 10 times (folds) in order to test the consistency of the results, thus training and testing each model on 10 different data partitions. As further explained in the next subsection, for some balancing strategies the clustering was applied before the resampling, and for others it was applied afterwards.
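The cluster-aware split can be sketched as follows, assuming cluster labels have already been computed (e.g. by K-means with k = 100 on the fingerprints); whole clusters are assigned greedily to the least-filled set, so similar compounds never straddle the sets. The greedy assignment rule is our illustrative choice, not necessarily the paper's exact procedure.

```python
import random
from collections import defaultdict

def cluster_split(cluster_labels, fractions=(0.8, 0.1, 0.1), seed=0):
    """Assign whole clusters of sample indices to train/validation/test sets."""
    clusters = defaultdict(list)
    for idx, lab in enumerate(cluster_labels):
        clusters[lab].append(idx)
    order = list(clusters)
    random.Random(seed).shuffle(order)
    n = len(cluster_labels)
    targets = [f * n for f in fractions]       # desired sizes: 80/10/10%
    splits = ([], [], [])
    for lab in order:
        # put the whole cluster into the split furthest below its target size
        dest = min(range(3), key=lambda s: len(splits[s]) / targets[s])
        splits[dest].extend(clusters[lab])
    return splits

train, val, test = cluster_split([0, 0, 1, 1, 2, 2, 3, 4, 5, 6])
```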
We chose an oversampling method to balance the data, since oversampling was shown to improve performance in the Korkmaz study of data imbalance in DL-based QSAR and in a systematic study of data imbalance with CNNs. Oversampling methods increase the number of samples in the minority class to create a balanced dataset. Specifically, we used the SMOTE oversampling technique, which creates synthetic data points of the minority class similar to those available. Resampling with SMOTE was done on a per-protein basis, so that each protein would be balanced. Some proteins had to be discarded in certain strategies, since either there were only active or only inactive ligands, or the number of samples in the minority class was smaller than the number of neighbors used for constructing the synthetic samples (k = 5), so SMOTE was not applicable.
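The SMOTE idea can be illustrated with a minimal, pure-Python sketch (not the imbalanced-learn implementation used in the paper): each synthetic point is a random interpolation between a minority sample and one of its k nearest minority neighbours, which also shows why the minority class must contain more than k samples.

```python
import random

def smote_like(minority, n_new, k=5, seed=0):
    """Create n_new synthetic points by interpolating between minority samples
    and their k nearest minority neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5), (0.2, 0.8)]
new_points = smote_like(minority, n_new=4)
```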
Unlike Korkmaz, who applied data balancing methods to each training set, we tested four different combinations of balancing, data clustering, and splitting (see Figure ): no_resampling, in which the bioactivity data for each protein was taken as it was, and clustering was applied in order to perform the splitting; resampling_after_clustering, in which, after clustering the data and splitting it into training, validation, and test sets, each protein's activity data in each set was resampled to attain a 50% actives/inactives proportion; resampling_before_clustering, in which, opposite to the previous strategy, resampling was applied prior to clustering and splitting, so that while the global protein-wise proportion of actives/inactives was 50%, it did not have to be 50% within each split; and semi_resampling, in which the splitting performed in the no_resampling strategy was reused, the test set was kept without resampling, but the training+validation set was resampled, re-clustered, and re-split into training and validation.
A random baseline was computed according to the actives/inactives ratio of the training set for each strategy and each fold. Let f be the fraction of actives in the training samples involving a protein, and n the number of samples to be predicted in the test set for that protein. The random baseline is obtained by first sampling ⌊f·n + 0.5⌋ values (the "actives") from a uniform distribution on [0.5, 1] and the remaining n − ⌊f·n + 0.5⌋ values (the "inactives") from a uniform distribution on [0, 0.5], then concatenating both and shuffling. This procedure keeps the active/inactive balance by design while producing random activity predictions.
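A sketch of this baseline generator, reading ⌊f·n + 0.5⌋ as rounding f·n to the nearest integer:

```python
import random

def random_baseline(f, n, seed=0):
    """Random predicted probabilities that preserve the training actives
    fraction f for a protein with n test samples."""
    rng = random.Random(seed)
    n_act = int(f * n + 0.5)                                     # round f*n
    preds = [rng.uniform(0.5, 1.0) for _ in range(n_act)]        # "actives"
    preds += [rng.uniform(0.0, 0.5) for _ in range(n - n_act)]   # "inactives"
    rng.shuffle(preds)
    return preds
```

By construction, thresholding these probabilities at 0.5 recovers exactly the training-set proportion of actives.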
We studied the impact of data balancing strategies on a DL model. We followed the Korkmaz strategy of selecting a simple, well-established architecture whose complexity would not confound the factor under study. We refrained from using Long Short-Term Memory networks, since they have convergence issues when trained on sequences longer than 1000 elements. Model hyperparameters were tuned using the validation set, choosing the simplest working architecture. As in our previous work, the DL PCM model consisted of two analysis blocks. The amino acid sequence analysis block was a 1D convolutional neural network. The fingerprint analysis block consisted of a feed-forward neural network.
Dropout was used in both branches to prevent overfitting. The representations built by the compound and target analysis blocks were then merged and the information was passed through a softmax activation unit, which quantified the ligand-target pair activity probability. A schematic representation of the DL-based PCM model can be found in Figure of the Supporting Information, along with further details on the optimised hyperparameters.
Training was run for 100 epochs with a batch size of 128, both for training and validation. Models were implemented in Python 3.6.9 (Keras 2.3.1 using TensorFlow 2.1.0 as backend) and run on two NVIDIA GeForce GTX 1070 GPUs. SMOTE data balancing was applied using the imbalanced-learn Python package. The statistical processing of results was performed in R (3.6.3).
Data balance(protein) = Proportion of actives(protein) = n_active_compounds / n_total_compounds.

Thus, a comprehensive analysis of data balance was carried out to better understand and interpret the performance results. For each of the balancing strategies, the original distribution of active ratios per protein was characterized. We also compared the original imbalance of the training and test sets for each strategy to explore possible trends, and studied the effect that other covariates (the protein length and the number of interactions of each protein in its corresponding set and fold) might have on the original test set imbalance.
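This per-protein balance definition is a one-liner; the protein identifiers and binary labels below are hypothetical:

```python
def data_balance(activities):
    """Per-protein data balance: fraction of active compounds.
    `activities` maps protein -> list of binary labels (1 = active)."""
    return {prot: sum(labels) / len(labels) for prot, labels in activities.items()}
```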
In the training process, the weights of the selected model were those from the epoch with the maximum accuracy (proportion of correct predictions) on the validation set. This process was run for each strategy and fold. Then, each selected model was used to predict on its corresponding test set. After the binarization of the test set predictions (probability threshold of 0.5), the proportion of predicted actives was computed by protein and also compared to the ratios of the original test and training sets.
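The per-protein proportion of predicted actives can be sketched as follows; whether a probability of exactly 0.5 counts as active is our assumption, as the tie-breaking rule is not stated in the text:

```python
def predicted_actives_ratio(probs, threshold=0.5):
    """Binarize predicted probabilities at `threshold` and return the
    fraction predicted active for one protein."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(preds) / len(preds)
```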
The resampling strategies were assessed with various performance metrics for binary classifiers and prioritisers. The selection was based on those used by Korkmaz: balanced accuracy, F1 score, Matthews correlation coefficient (MCC) and area under the ROC curve (AUROC). All of these are commonly regarded as robust to class imbalance, a property we assess empirically below. In the case of the F1-score, we used the macro-average, which is computed by averaging the F1-scores for the active and inactive labels. Further details on the definition of these metrics can be found in the Supporting Information.
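Assuming the standard confusion-matrix definitions (the paper defers the exact formulas to the Supporting Information), the binarized-prediction metrics can be sketched in plain Python:

```python
import math

def confusion(y_true, y_pred):
    """Confusion-matrix counts (tp, tn, fp, fn) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def balanced_accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def mcc(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def f1_macro(y_true, y_pred):
    """Macro-averaged F1: mean of the F1-scores of the two classes."""
    def f1(pos):
        tp = sum(t == pos and p == pos for t, p in zip(y_true, y_pred))
        fp = sum(t != pos and p == pos for t, p in zip(y_true, y_pred))
        fn = sum(t == pos and p != pos for t, p in zip(y_true, y_pred))
        return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return 0.5 * (f1(1) + f1(0))
```

AUROC is computed from the raw probabilities rather than binarized predictions, so it is not included in this sketch.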
The performance metrics were computed on the predictions of each selected model in its corresponding test set. AUROC was computed from the raw predicted probabilities, while the F1-score, balanced accuracy and MCC were derived from the binarized predictions. We tested the significance of the differences between strategies by means of the nonparametric two-sided Wilcoxon signed-rank test for paired samples.
Performance metrics and predicted ratios were further described through linear models built upon the different combinations of variables considered in this analysis. Our prior work in similar settings had found them insightful, since they allow for a statistical analysis of the contribution of each factor under study. Each of the data points used for fitting an explanatory linear model corresponded to a different protein. Simpler claims were investigated with Pearson's r for linear correlation, using confidence intervals (CI) and p-values for significance.
Specifically, the main variables of interest in this model were the actual ratios in the training (r_training) and in the test (r_test) sets, both numeric between 0 and 1. As additional covariates, the number of interactions (n_int) and the sequence length (n_seq), both numerical, were also included.
However, before evaluating the DL model, the performance metrics of the baseline were characterised: the strategy variable was tested with a type 3 analysis of variance (ANOVA) in order to pinpoint the imbalance-sensitive and insensitive metrics. Metrics were called imbalance-sensitive if the imbalance-aware random baseline exhibited different performances between resampling strategies.
The effect that the number of interactions for each protein in its corresponding set and fold, and the protein length (i.e. number of amino acids), had on the test set imbalance was investigated (Figures S7-S8 and Tables S4-S5 of the Supporting Information). Proteins with the greatest imbalance tended to be among those with the fewest interactions (Table of the Supporting Information).
Figure represents the ratio of predicted actives by protein, and Table of the Supporting Information summarizes the percentage of proteins with all actives or all inactives (extreme cases). They show that the no_resampling strategy was inclined to predict everything as positive (71.6% of the time, compared to 3.5% for predicting everything as negative). Resampling_before_clustering and semi_resampling alleviated the imbalance in the predictions, but still retained a spike of proteins where all the compounds were predicted as positives (23.4% and 29.1%) or as negatives (5.5% and 4%). Resampling_after_clustering kept a wide and symmetric distribution of predicted actives, with only 1.2% of proteins predicted as all actives and 0% as all inactives. In particular: (1) for no_resampling, the predicted ratio was positively related to the training ratio, but since the training and the test ratio are also positively correlated (Figure ), the latter could be the one driving the predicted ratio of positives;
(2) resampling_after_clustering had a constant training ratio, meaning that the predicted ratio was not explainable by differences in training ratios; (3) resampling_before_clustering showed instead a negative relation between the training and the predicted ratio (Pearson's r 95% CI: [-0.130, -0.058], p = 3.77 × 10^-7), but since the former and the test ratio also anticorrelated (Figure ), the simplest explanation was that the test ratio drove the predicted test ratio; (4) semi_resampling showed no apparent correlation between the predicted ratio and the training ratio (Pearson's r 95% CI: [-0.029, 0.045], p = 0.68).
The models in Equation 1 that describe the predicted ratio of actives for each balancing strategy are summarized in Tables S8-S9 of the Supporting Information. For semi_resampling and resampling_before_clustering (Table ), the original actives ratio in the test set had a positive, significant effect on the predicted actives ratio (β = 0.945 and 0.784, both p < 10^-16). However, the original actives ratio of the training set showed no evidence of affecting the predicted ratio (β = 0.197 and -0.446, p = 0.73 and 0.31). Conversely, for the no_resampling strategy (Table ), both the original training (β = 8.312, p < 10^-16) and test ratios (β = 1.102, p = 2.6 × 10^-9) had positive, significant effects on the predicted actives ratio. In the three models, the number of interactions per protein had a significant, negative effect (β = -0.391, -0.396 and -1.24, all p < 10^-16), and some of the folds entailed significant variations of the predicted ratio.
Figure shows a fold-averaged picture of the metrics by protein and by model type (DL or input-naïve baseline). Visual inspection suggested that the F1-score, accuracy, and possibly balanced accuracy were affected by the baseline data imbalance. To quantify this finding, the model in Equation was fitted to the baseline performance metrics. According to Table of the Supporting Information, the strategy term was significant (type 3 ANOVA, p < 10^-16, p < 10^-16 and 5.61 × 10^-11) for those three metrics, and non-significant for AUROC and MCC (p = 0.91 and 0.82). Based on this, metrics were divided into two types: (1) imbalance-sensitive, if the baseline was different between strategies, and (2) imbalance-insensitive, if the baseline was constant.
Figure displays an overview of fold-averaged performances, where strategies are paired with their baselines. Undefined metrics in edge cases were excluded. This mainly affected AUROC, where the number of proteins with defined metrics dropped by around 25% for semi_resampling, resampling_before_clustering and no_resampling (Table of the Supporting Information). The figure highlighted the dilemma of directly comparing strategies with imbalance-sensitive metrics, which was especially apparent for the F1-score and its high baseline in no_resampling (quartiles: Q1 = 0.428, median = 0.611, Q3 = 0.756, Table of the Supporting Information).
The strategy term always explained variance (type 3 ANOVA, p-values ranging between 2.89 × 10^-15 and p < 10^-16, see Table in the Supporting Information). The models showed different behaviour for imbalance-sensitive and insensitive metrics (Table of the Supporting Information). Pairwise comparisons of the strategy term coefficients using Tukey's method pointed to two apparently conflicting scenarios (Figure of the Supporting Information), further confirmed when prioritizing the strategies according to their expected performance through the linear models (Figure and Table of the Supporting Information).

Baseline-adjusted performance

A descriptive plot of the adjusted metrics (Figure ) pointed to a different scenario than that of the unadjusted ones (Figure ).
Again, the strategy term was always significant (type 3 ANOVA, p-values ranging between 2.78 × 10^-9 and p < 10^-16, Table of the Supporting Information). Baseline adjustment brought a unified behaviour across the models (Table of the Supporting Information), further confirmed in the pairwise coefficient comparison (Tukey's method, Figure of the Supporting Information) and in their expected performance (Figure and Table of the Supporting Information): resampling_before_clustering and resampling_after_clustering had the highest performance estimates (expected improvements over baseline ranging from 0.149 to 0.263 and from 0.143 to 0.315 across all metrics), followed by semi_resampling (0.086 to 0.146) and finally by no_resampling (0.057 to 0.127).
We repeated all the previous analyses on the GPCR family to confirm whether the claims obtained for the kinase protein family could be generalized to other families. While their active proportion distributions were not too different, GPCR proteins were more imbalanced towards the actives than kinases (Figure of the Supporting Information).
Kinases and GPCRs essentially agreed on the predicted actives proportion analyses, the only exception being the n_interactions coefficient, non-significant in the semi_resampling strategy (β = -0.011, p = 0.3, Table of the Supporting Information). Regarding performance, the explanatory linear models on GPCRs led to conclusions equivalent to those for kinases in baseline metrics, in absolute and in baseline-adjusted performance. Regarding adjusted performances, semi_resampling significantly outperformed no_resampling in 3 metrics instead of 4 (Table of the Supporting Information), which still made it preferable. Supplement 3 gathers in detail all the results obtained in the analysis of the GPCR family.
This study is focused on the characterization of the data imbalance present in bioactivity datasets, as well as how to address it. Bioactivity data also poses the problem of chemical series, i.e. sets of similar molecules with similar activities, that result in inflated performance metrics when split between training and test sets. We addressed those via a clustering prior to the splitting, ensuring that similar molecules would belong to the same set.
The first observation was that clustering modified data imbalance in a strategy-dependent way. When the starting set was perfectly balanced (strategy resampling_before_clustering), clustering and splitting induced a degree of imbalance, particularly visible in the heavier tails of the active ratios distributions in the test set. Compared to training, the lower sample sizes in the test set may also cause extreme imbalances more often. On the other end, this effect was only moderate in no_resampling, where the distribution of actives ratio was similar in train and test, but that of test had more extreme proteins with either all actives or all inactives.
Besides the overall changes in data imbalance, strategies differed in how the imbalance of a certain protein in the training set would translate to the test set. The positive trend in no_resampling suggests that existing data imbalances tended to persist after the clustering and splitting. The negative trend in resampling_before_clustering hints that, in the absence of imbalance, clustering will induce it. The flat trend in semi_resampling supports that the imbalance induced with the clustering in the training set, which was balanced with SMOTE beforehand, is independent from the original imbalance in the dataset (present in the test set).
The original distribution of actives ratio in each of the balancing strategies affected the predicted actives ratio by the models. Due to the lack of correlation between training and test ratios (Figure ), the semi_resampling strategy was the ideal scenario to disentangle their effect on the predicted ratio of actives (see model in table of the Supporting Information).
Its additive model suggested that the original ratio of actives in the test set, rather than the training ratio, explained the predicted proportions. We also found that the number of interactions per protein was a relevant factor: the more interactions, the lower the predicted proportion of actives, suggesting that the extreme cases with everything predicted as active tended to be proteins with few interactions.
The explanatory model for the no_resampling strategy (Table of the Supporting Information) suffered from the positive correlation between training and test ratios, which could be confounded. Both original training and test ratios showed a positive effect on the predicted fraction of actives. Although the estimate was larger and more significant for the training ratio coefficient, the confounding effect and the very skewed distribution of the predicted ratios rendered this model inconclusive.
The prediction task studied here posed a particular challenge: data imbalance happened on a protein basis, and the imbalance of certain proteins could be extreme (very low or high), moving away from the global actives ratio. Each resampling strategy would lead to different protein-wise imbalance patterns. The baseline performance of some metrics (accuracy, F1 score and balanced accuracy) was different between strategies, while it was constant for others (AUROC and MCC). The data-driven division into imbalance-sensitive and insensitive metrics was an important step to understand the opposite conclusions reached within each metric type after direct performance comparison between strategies (Figure ).
The direct comparison of resampling strategies with imbalance-sensitive metrics would be confounded by the imbalance-induced bias in the metrics and the protein-wise imbalance differences between strategies. We found that adjusting by the baseline metrics (see Equation ) brought an agreement in the conclusions obtained by both imbalance-sensitive and insensitive metrics. In turn, the same conclusions were obtainable by direct comparison of imbalance-insensitive metrics. Because of this, our recommendation is to include imbalance-aware baselines and to adjust imbalance-sensitive metrics when used for model selection.
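The baseline adjustment we recommend reduces to a per-protein subtraction; the sketch below is our condensed reading, since the paper's exact Equation is not reproduced in this excerpt:

```python
def baseline_adjust(metric_by_protein, baseline_by_protein):
    """Adjusted metric = model metric minus the imbalance-aware random
    baseline for the same protein (and, implicitly, strategy and fold)."""
    return {
        prot: metric_by_protein[prot] - baseline_by_protein[prot]
        for prot in metric_by_protein
    }
```

An adjusted value near zero then means the model performs no better than imbalance-aware chance, regardless of the per-protein imbalance.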
Our results showed that the largest impact on performance estimates came from applying data augmentation to the test set: resampling_before_clustering and resampling_after_clustering tended to outperform semi_resampling and no_resampling. However, an augmented test set may no longer faithfully reflect new data and could artificially inflate the performance estimates: models may specialize in discriminating between original and resampled data points instead of between actives and inactives.
On the other hand, semi_resampling outperformed no_resampling in four out of five metrics (Tukey's method, p < 0.05, Figure of the Supporting Information), which supported the usefulness of data augmentation even if the data balance in the test set differed from that of the training set. This was consistent with the observation that the main influence on the predicted actives ratio in the test set was the actual ratio in the test set, rather than the original ratio in the training set. Combined with the less skewed distributions of predicted active ratios of semi_resampling compared to no_resampling (Figure ), we recommend semi_resampling for future studies.
The results obtained on the kinases and on the GPCR proteins, the latter used as an external validation set for the model fitting and evaluation, point to the same general picture with aligned conclusions. The differences found (the effects of sequence length on protein imbalance and of n_interactions on the predicted actives proportion differ for GPCRs) could be due to the GPCRs being more imbalanced towards the actives. Nevertheless, these results lead us to think that the guidelines for proteochemometric models in this study provide sensible defaults for more protein families.
In this paper we have confirmed that data balance has an impact on DL proteochemometric target-compound activity models. Zakharov et al. and Korkmaz arrived at a similar conclusion in a QSAR setting, the latter also using DNN models for classification. More specifically, Korkmaz stated that the higher the imbalance for a protein, the worse the model performance (measured by F1-score and MCC).
These studies obtained their best performances by controlling data balance by means of undersampling techniques (in the case of Zakharov) and oversampling techniques (in the case of Korkmaz). We chose SMOTE for data balancing, an oversampling technique, since the settings of the Korkmaz study were more aligned with ours and because DL models require a large quantity of training data. Specifically, in four out of five metrics, proteins with more interactions were better predicted (Table S17 of the Supporting Information), which was also found in the Korkmaz paper.
Technical differences existed in the descriptors used in the three studies. Zakharov et al. used Quantitative Neighborhood of Atoms and biological descriptors, whereas Korkmaz used the PaDEL software. We, on the other hand, used the fingerprints from PubChem. The fact that the overall messages are consistent suggests a degree of independence from the input encoding.
More importantly, the Zakharov and Korkmaz studies did not take into account the control of compound series bias. This step is necessary for obtaining realistic performance estimates in a real-world setting. Not only did we account for it, but we also investigated whether the stage at which the compound series control was introduced, in combination with the data augmentation (before or after applying SMOTE), had an impact on the outcome.
Indeed, the order had an impact on model performance and needed careful consideration. Resampling_before_clustering solved the global imbalance of the dataset, but clustering after oversampling led again to a protein-wise imbalance. Analogously, semi_resampling resampled the training and validation sets, but imbalance returned after their clustering. On the contrary, resampling_after_clustering first corrected the problem of similar compounds, and then augmented the data to reach a protein-wise balance.
This study continues our incremental work on recommendations for DL models regarding input encoding and control of chemical series. While this study was limited to one architecture and two protein families, it provides a foundation to understand the basic behaviour of PCM models, insights on how to adjust performance metrics for a protein-wise analysis, and a first step towards exploring more general questions. Those could include architecture-centric analyses to confirm whether the same trends are observed when changing the layers or the model structure, or using other protein families with a different distribution of actives ratios, which may be flat or skewed towards the inactives.
Although the effect of data balance and resampling techniques had been analysed for QSAR models, it had not yet been studied in the context of proteochemometric models, even though the bioactivity datasets used in this setting are usually imbalanced. In this paper, we have tested four different combinations of data oversampling (through SMOTE) and clustering for controlling compound similarity. While the clustering avoids overly optimistic performance estimates, it can introduce additional data imbalance (in the form of splits containing proteins with mostly active or mostly inactive compounds). Despite this potential conflict between the resampling and the clustering, we found that resampling was useful to improve the model behaviour and performance.
Some common performance metrics were affected by the data imbalance and yielded misleading trends. We included an imbalance-aware random baseline and defined baseline-adjusted metrics to overcome this issue, especially in F1-score and accuracy. After baseline adjustment, the metrics provided a unified picture: the largest impact on performance estimates came from the application of data augmentation to the test set (resampling_before_clustering and resampling_after_clustering outperformed semi_resampling and no_resampling). However, augmenting the test set may not reflect a realistic scenario.
On the other hand, semi_resampling outperformed no_resampling in four out of five adjusted metrics and provided a more equalized distribution of predicted actives ratios. This confirmed the usefulness of data augmentation even if the data balance in the test set differed from that of the training set. It was consistent with the finding that the predicted proportion of positives of the proteochemometrics model was explained by the actual data balance in the test set, rather than that of the training set. We also found that proteins with more interactions were better predicted. While we cannot extrapolate these results to all proteins and imbalance distributions, this sets a sensible starting point for improving proteochemometrics modelling and remains consistent with the corresponding data imbalance studies on QSAR models.

Figure caption: Resampling_before_clustering, where resampling per protein is applied prior to clustering and splitting; resampling_after_clustering, where data is first clustered and split and then each protein's activity data in each set is resampled; semi_resampling, in which the splitting is performed, the test set is kept without resampling, but the training+validation set is resampled and clustered; and no_resampling, in which the imbalance of the original data is kept and clustering is applied prior to splitting.
Spin labelling enables the study of biomolecules using electron paramagnetic resonance (EPR) spectroscopy. Here, we describe the synthesis of a cysteine-reactive spin label based on a spirocyclic pyrrolidinyl nitroxide containing an iodoacetamide moiety. The spin label was shown to be highly persistent under reducing conditions while maintaining excellent EPR relaxation parameters up to a temperature of 180 K. After successful double spin labelling of a calmodulin variant, interspin distances were measured at 120 K by the double electron-electron resonance (DEER) EPR experiment.
Electron paramagnetic resonance (EPR) spectroscopy has emerged as an important technique for examining the structures and dynamics of biomolecules. The considerable applicability of this tool is attributed to its ability to function under biological conditions, independently of the size of the biomolecules, with high accuracy and sensitivity in detecting even small changes. EPR techniques typically require the presence of paramagnetic centres, which are not commonly found in the predominantly diamagnetic biopolymers. Therefore, paramagnetic probes can be incorporated into specific regions of the target molecule of interest, a strategy termed site-directed spin labelling (SDSL). Nitroxide-based probes have gained a unique position in this field, among other factors due to their small size and minimal impact on the biomolecular structure.

In order for a nitroxide to be suitable and widely applicable as a spin label, particularly in an intracellular environment, it must possess sufficient aqueous solubility, efficient labelling reactivity (usually with the sulfhydryl group of a cysteine amino acid), be minimally perturbing to the protein structure, be resilient to reducing conditions, and have sufficiently long relaxation times (particularly for transverse relaxation, i.e. the phase memory time Tm). These properties are influenced by structural factors such as the nitroxide ring size and saturation, the nature of the α-substituents shielding the paramagnetic centre, or electronic effects of functional groups. Saturated five-membered pyrrolidinyl nitroxides exhibit greater kinetic stability towards reducing agents than their unsaturated pyrrolinyl or six-membered piperidinyl counterparts. Increasing the bulk of the α-substituents, meanwhile, can be beneficial to reduction stability, but also detrimental to the relaxation parameters.
For example, tetraethyl substituents around the nitroxide moiety can provide exceptional stability compared to the most widely used tetramethyl-substituted scaffold; however, as rotation of the CH3 of the ethyl groups leads to poor relaxation times at temperatures above 70 K, measurements require costly cryogenic conditions using liquid helium. Optimal relaxation parameters are exhibited by conformationally rigid nitroxides featuring spirocycles at the α-positions, which notably enable measurements at elevated temperatures. Our previous research revealed that the conformations adopted by these spirocyclic scaffolds is another important factor influencing nitroxide stability. Recently, we have also developed nitroxides based on a novel exo-methylene substituted spirocyclic scaffold and conducted extensive analyses of their physical properties. In this work, we build on these prior discoveries to develop a novel spin label based on an exo-methylene substituted spirocyclic nitroxide, and implement it in the examination of proteins through double electron-electron resonance (DEER; also known as pulsed electron double resonance, PELDOR) experiments. The synthesis of the new spin label began by the introduction of the spirocyclohexyl substituents by ketone exchange with 1,2,2,6,6-pentamethylpiperidin-4-one 1 and cyclohexanone, resulting in the spirocyclic piperidone 2 (Scheme 1). Bromination led to the quantitative formation of hydrobromide salt 3, which was subsequently subjected to a Favorskii rearrangement, resulting in the formation of spirocyclic pyrroline ester 4 in good yield. Oxidation of secondary amine 4 with m-chloroperbenzoic acid (m-CPBA) gave the nitroxide 5. As the further transformations required harsh conditions, the nitroxide moiety was at this point protected as the methoxyamine 6 using previously reported Fenton chemistry. The ester group in 6 was reduced with diisobutylaluminium hydride (DIBAL-H) to the alcohol 7 in very good yield. 
This allylic alcohol was then subjected to a two-step Overman rearrangement as the key step in our synthetic strategy. Alcohol 7 was first treated with trichloroacetonitrile in the presence of a base, yielding a trichloroacetimidate. This labile intermediate was immediately subjected to a base-catalysed, microwave-assisted [3,3]-sigmatropic rearrangement to form the exo-methylene substituted trichloroacetamide 8 in excellent yield over two steps. Deprotection of the methoxyamine by treatment with m-CPBA resulted in a Cope-type elimination to the nitroxide 9, albeit in moderate yield due to competing epoxidation of the exocyclic double bond. The crystalline trichloroacetamide was analysed using X-ray crystallography, which revealed that the nitroxide adopts a semi-open conformation of the spirocyclohexyl moieties in the solid state (Scheme 1). Previous reports suggest that such conformations may limit the accessibility of potential reducing agents to the nitroxide radical centre, thus enhancing its kinetic stability. Base-promoted hydrolysis of the trichloroacetamide moiety provided primary amine 10 in excellent yield, which was thereafter transformed to chloroacetamide 11 by acylation. Finally, a Finkelstein reaction provided the exo-methylene substituted spirocyclic pyrrolidinyl iodoacetamide 12, which was further used for protein spin labelling.
Calmodulin (CaM) is a relatively small protein abundant in eukaryotic cells that plays a pivotal role in sensing and responding to modulation of calcium levels (Figure ). As such, CaM regulates the activity of a large number of important protein targets, and therefore plays a significant role in various aspects of human health. As a biologically important, thoroughly studied and structurally well-understood protein, we identified CaM as a suitable protein to benchmark the use of iodoacetamide 12 for spin labelling. More specifically, we chose to employ the complex formed by Ca 2+ -bound holo-CaM and the synthetic peptide M13 via a GG linker (CaM/M13), which mimics the CaM-binding domain of rabbit skeletal muscle myosin light chain kinase, as our model protein (Figure ). As the native CaM/M13 amino acid sequence lacks cysteine residues suitable for labelling with iodoacetamide spin labels, a double mutant with cysteines at residues 34 and 146 (highlighted yellow residues in Figure ) was produced by mutagenesis (see the Supplementary Information for details).

Spin labelling of the double cysteine variant of CaM/M13 was performed using 50 µM protein and 500 µM nitroxide 12 in HEPES buffer (40 mM, pH 7.5) and NaCl solution (150 mM). Due to the light sensitive nature of iodoacetamides, the mixture was incubated in darkness at 4 °C for 16 h. This protocol yielded a 3:10 ratio of singly- and doubly-labelled protein, with a notable 77% efficiency in double labelling according to analysis by ESI-MS (see the Supplementary Information, Figure ). The doubly-labelled protein was subsequently isolated after functionalising any unreacted free cysteines with biotin and purification using streptavidin-coated beads (see the Supplementary Information, Figure ).
Spirocyclic pyrrolinyl nitroxide spin labels have been synthesised previously, but to the best of our knowledge, this is the first example of a five-membered nitroxide with spirocyclohexyl moieties being used in protein spin labelling. To assess the stability of the spin label in a reducing environment, we recorded X-band continuous wave (CW) EPR spectra of the doubly-labelled protein (250 µM in HEPES buffer, pH 7.5) in the presence of a 32-fold (16-fold per label) excess of the reducing agent sodium ascorbate (Figure ). The data show a 50% decrease in the intensity of the CW spectrum over approximately 20 minutes, and a 66% decrease after 40 minutes. These measurements confirm that the EPR signal of spin label 12, when attached to the protein, remains sufficiently strong for long enough to permit spectroscopic measurements, possibly even in challenging reducing media such as the intracellular environment.
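The reported decay figures can be put into a simple kinetic frame. A minimal sketch, assuming pseudo-first-order reduction (an idealisation; the true kinetics of nitroxide reduction by ascorbate may be more complex):

```python
import math

# Pseudo-first-order estimate of nitroxide reduction by ascorbate,
# taking the reported 50% CW-EPR signal loss at ~20 min as the
# half-life (assumption: simple exponential decay).
t_half = 20.0                       # min, from the CW EPR data
k = math.log(2) / t_half            # apparent rate constant, min^-1
remaining_40 = math.exp(-k * 40)    # predicted surviving fraction at 40 min
print(f"k ~ {k:.4f} min^-1, predicted remaining at 40 min: {remaining_40:.0%}")
```

A strictly mono-exponential decay would leave only 25% of the signal at 40 min (a 75% decrease), whereas 34% remains experimentally (a 66% decrease), hinting that the reduction slows at longer times rather than following simple first-order kinetics throughout.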
Next, the relaxation behaviour of the new spin label was investigated by determining the phase memory time Tm by pulsed EPR spectroscopy, for spin label 12 in a protonated medium and for the doubly-labelled protein in a solution containing deuterated solvents (Figure ). Tm reflects the decay of the electron spin echo intensity during pulsed EPR experiments, and contributes significantly to the resolution and accuracy of DEER distance measurements. Spin echo decay curves were recorded at Q-band at various temperatures (Figure ), and numerical values for Tm were extracted (see the Supplementary Information). Spin label 12 has a Tm of 4.26 µs at 50 K, which remains stable up to 160 K (Tm,160K = 3.28 µs, Figure ). Tm at 180 K (just above the glass melting temperature) is 2.35 µs, corresponding to a 45% decay, in agreement with our previous results on a similar nitroxide scaffold. The same pattern is evident for the labelled protein, but with phase memory times approximately twice as long (8.56 µs at 50 K, Figure ), as the protein was measured in a deuterated solvent mixture. A comparison of the logarithm of the phase memory decay rates (i.e. the reciprocal of Tm) across temperatures is shown in Figure . It is evident that the absence of rotatable methyl groups and the relative rigidity of the scaffold of nitroxide 12 result in excellent relaxation behaviour both for the free label and when attached to the protein. These findings validate the utility of spin label 12 for the investigation of proteins at elevated temperatures.
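Phase memory times are commonly extracted by fitting the echo decay to a stretched exponential, V(t) = V0 exp(-(t/Tm)^x). The sketch below illustrates such a fit on synthetic data generated with the 50 K value quoted above; the actual fitting protocol and model used in the paper may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential model often used for two-pulse echo decays.
def echo_decay(t, v0, tm, x):
    return v0 * np.exp(-(t / tm) ** x)

# Synthetic decay with Tm = 4.26 us (the 50 K value for label 12)
# plus a little noise, standing in for a measured echo trace.
t = np.linspace(0.1, 20, 200)                        # delay 2*tau, us
v = echo_decay(t, 1.0, 4.26, 1.5)
v_noisy = v + np.random.default_rng(0).normal(0, 0.01, t.size)

popt, _ = curve_fit(echo_decay, t, v_noisy, p0=(1.0, 3.0, 1.0))
print(f"fitted Tm ~ {popt[1]:.2f} us")
```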
DEER experiments were conducted to determine the distance between the spin labels in the doubly-labelled CaM/M13 complex. The data presented were acquired at 120 K, with distance distributions obtained through analysis with DEERNet (Figure ). Additional DEER experiments performed at 50 K and even 180 K, together with distances obtained with DeerAnalysis, are available in the Supplementary Information. The bimodal distribution in Figure has maxima at 36 Å and 42 Å. This bimodality can be explained as a consequence of a complex landscape of spin label positions with respect to the protein backbone. The more intense 36 Å peak coincides well with measurements of this protein using MTSL, a commonly used cysteine-selective, five-membered, tetramethyl-substituted spin label. The 42 Å distance coincides well with a distance simulated (chiLife) for this protein complex carrying the two cysteine mutations and labelled with MTSL. The overlay of the distance distribution from Figure with the simulated and experimental MTSL distances is shown in the Supplementary Information (Figure ).
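For orientation, the two distance maxima can be converted to dipolar coupling frequencies using the standard point-dipole relation for a pair of nitroxides, nu_dd ≈ 52.04 MHz / (r/nm)^3 (a textbook relation, not a calculation from the paper):

```python
# Dipolar coupling frequencies for the two DEER distance maxima,
# via the point-dipole approximation for nitroxide pairs.
for r_nm in (3.6, 4.2):             # the 36 A and 42 A maxima
    nu = 52.04 / r_nm ** 3          # dipolar frequency, MHz
    print(f"r = {r_nm} nm -> nu_dd ~ {nu:.2f} MHz")
```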
In conclusion, we have synthesised an exo-methylene-substituted pyrrolidinyl nitroxide iodoacetamide spin label with spirocyclohexyl substituents surrounding the paramagnetic centre. This label was successfully attached to a double mutant of calmodulin bound to the M13 peptide with high labelling efficiency. The protein-appended nitroxide showed high stability in the presence of excess ascorbate as a model system for a reducing environment. The lack of methyl group rotation resulted in long relaxation times up to 180 K, which enabled DEER measurements of interspin distance distributions even at elevated temperatures. These results provide a promising basis for the future application of spirocyclic pyrrolidinyl nitroxide spin labels in intracellular EPR measurements.
Colloidal solutions of metals have been utilized in energy, water treatment, catalysis, sensing, biomedical, and many other applications. In most of these cases, emphasis has been placed on processes that increase the composition/concentration of metallic nanoparticles in the colloid. There are limited studies on the process of dilution (to 1 ppm or ultra-dilution beyond) and its effect on different physical and chemical properties. Since 1796, highly diluted colloidal solutions have been used as medication in homeopathy. The homeopathic literature suggests that colloidal solutions are more effective at higher potentization/dilution (composed of serial dilutions and vigorous triturations/succussions beyond Avogadro's number, ~10^23). Multiple studies on homeopathic potencies and nanoparticles (NPs) have been published . The study by Chikramane et al. , showing the presence of starting material in the form of nanoparticles, has been the most pivotal. Following this study, further advances have been made to understand the complex interactions of homeopathic medicines. Basu et al. reported physico-chemical changes in nanoparticles due to homeopathic trituration and succussion. Wassenhoven et al. characterized nanoparticles in homeopathic medicines and showed nanoparticle-induced hormetic activation . Bhattacharyya et al. demonstrated that the nanoparticles enhance cellular uptake and increase bioactivity. Many groups are still working to explain the origin of the nanoparticles during processing. The change in size and morphology of the nanoparticles across different potencies is not explained, and the reported increase in the effectiveness of the medicine at higher potency remains unclear.
In the current work, we have processed serial dilutions of Aurum Met (a gold nanoparticle colloid) by the conventional method. We chose different tools (spectroscopy and microscopy) to investigate the effect of serial dilution and succussion. Spectroscopy measurements show that the solution contains Au nanoparticles with different size distributions, and electron microscopy was used to confirm the presence of Au NPs of different morphologies and sizes.
Gold of 99.9% purity was used for the preparation of Aurum Met. The sample was prepared in a glass vial as well as in a plastic vial following the standard protocol of the Homoeopathic Pharmacopeia, India . The sample preparation and procedure were performed in the HAPCO Pharmaceutical laboratory. The detailed processing steps are described in the supporting information, and a schematic representation of the process is shown in Figure .
Characterizations: The structural analysis of the powder samples (3X and 6X) was carried out using a PANalytical X'Pert X-ray diffractometer (XRD) with Cu-Kα incident radiation (λ = 0.15406 nm) over a scan range of 10° to 90° at a scan rate of 1° per min. UV-visible absorption spectra were recorded with a double-beam spectrophotometer. Raman spectroscopy measurements were performed with a WITec UHTS Raman spectrometer (WITec, UHTS 300 VIS, Germany) at an excitation wavelength of 532 nm. Transmission electron microscopy (TEM, JEOL JEM 2200FS) was carried out to determine the particle size in the diluted samples (24C and 200C). Zeta potential and particle size were measured using a zeta analyzer and particle size analyzer (Model No: Horiba Scientific SZ-100).
To characterize the phases present, the particle size, and the strain in the gold (Au) and sugar of milk mixture during mechanical grinding, XRD analysis was performed. Figure shows the XRD patterns of the powder samples with different concentrations of Au and sugar of milk. The diffraction peaks confirm the presence of Au NPs alongside lactose (sugar of milk) peaks. The peak at 38° corresponds to the (111) plane of Au nanoparticles and matches the standard JCPDS card . Along with the Au NP peak, we also observe other peaks corresponding to lactose; no impurity peaks are observed.
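Crystallite size is conventionally estimated from such a pattern with the Scherrer equation, D = Kλ/(β·cosθ). A minimal sketch using the (111) reflection at 2θ = 38° and the stated Cu-Kα wavelength; the FWHM value below is a placeholder for illustration, not a value taken from the measured pattern:

```python
import math

# Scherrer estimate of Au crystallite size from the (111) reflection.
# lambda and 2-theta are from the text; the FWHM (beta) is an
# assumed placeholder, since the paper does not quote one here.
K = 0.9                        # shape factor (dimensionless)
lam = 0.15406                  # Cu K-alpha wavelength, nm
two_theta = 38.0               # peak position, degrees
beta = math.radians(0.5)       # assumed FWHM of 0.5 deg, in radians

theta = math.radians(two_theta / 2)
D = K * lam / (beta * math.cos(theta))   # crystallite size, nm
print(f"crystallite size ~ {D:.1f} nm")
```

With these inputs the estimate lands in the tens-of-nanometres range; a sharper (smaller β) peak would indicate larger crystallites, and instrumental broadening should be subtracted in a careful analysis.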